Patent 11863338

DETAILED DESCRIPTION

Techniques described herein support generation of one or more communication channels associated with a data object within a data management platform. A user of a database system (or a tenant in the case of a multi-tenant database system) may store information and data for users, customers, organizations, etc. in a database and manage the stored information with a program such as a data management platform. For example, a user may manage and store data and metadata for exchanges, opportunities, deals, assets, customer information, and the like. The data management platform may support multiple data objects (e.g., data records), and a group of users may be linked to each data object. For example, one or more users may follow or otherwise be granted access to review and/or edit one or more data objects or records. An organization or tenant may use the communication channel generation described herein to schedule and manage communications between the users of the organization or tenant. A group-based communication platform may support a quantity of channels configured for group-based communications. Each channel may be accessible by a specific set of users based on permissions that are set for that channel, and respective users may post messages to a channel. In some examples, administrative users or employees associated with the organization or a tenant of the organization (e.g., a marketing team) may communicate on a communication channel of a communication platform (e.g., a group-based communication platform). For example, the communication platform may support communication channels that are organized by topic, and team members may use these communication channels (e.g., chat threads) to discuss those topics.
Conventionally, the communication platform may be separate from the data management platform (e.g., different servers, different programs or applications, etc.), and data associated with the data management platform may be confined to computing systems that support the data management platform. In other words, there may not be a way to establish one or many channels associated with a data object (e.g., data record) in a separate communication platform directly from a page in the data management platform, which may limit some aspects of working within the data management platform. Additionally, users of the data management platform may not be able to set a privacy level for a communication channel, which may limit capabilities associated with the channel. Techniques described herein support generation of a communication channel in a communication platform from a data management platform (e.g., directly from a page displaying a data record within the data management platform). Thus, techniques described herein provide communications between a data management platform and a communication platform. For example, the techniques described herein may enable users of the data management platform to create a communication channel associated with various data objects directly from the data management platform. Likewise, the techniques described herein may enable users to interact with the data management platform from within the communication platform. The described techniques may support improved workflow efficiency, reduced communication resource overhead, and higher user satisfaction, among other benefits. Aspects of the present disclosure may provide for improved cross-platform functionality between a data management platform and a communication platform. In particular, techniques of the present disclosure provide for an automatic system to create a communication channel from a record page of a data management platform, thus improving the user experience. 
In some examples, a user of the data management platform may generate a communication channel using an “action” button located on a record page. In some examples, the data management platform may receive, via a user interface of the data management platform storing a set of data objects, a user input to generate a communication channel of a group-based communication platform that is separate from the data management platform. In some examples, the communication channel may be for a data object of the set of data objects stored in the data management platform. In some examples, in response to receiving the user input, the data management platform may retrieve a group of users that are linked to the data object within the data management platform. A user may select the “action” button on the record page to open a form with input fields. The data management platform may then display (in response to receiving a selection of the “action” button) a list of options for generating the communication channel. The list of options may include the group of users (e.g., identifiers of members to include in the conversation) for including in the communication channel, a privacy level for the communication channel (e.g., an indication of whether the discussion is public or private), and an identifier of the communication channel (e.g., an initial conversation name). The user may then submit the form, and once the form is submitted, a communication channel is created with the members provided in the form and the visibility set to public or private. For instance, the data management platform may generate an executable packet of instructions for ingesting into the group-based communication platform based on an input to the list of options displayed to the user. The data management platform may then transmit the executable packet of instructions for ingesting into the group-based communication platform.
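As an illustrative sketch only (the field and function names are hypothetical, not taken from the patent), the form submission described above might be validated and packaged for the communication platform as follows:

```python
from dataclasses import dataclass, field

@dataclass
class ChannelRequest:
    """Hypothetical payload built from the channel-creation form."""
    channel_name: str           # identifier of the communication channel
    is_private: bool            # privacy level (public or private)
    member_ids: list = field(default_factory=list)  # group of users to include

def build_channel_request(form):
    """Validate the submitted form and package it for ingestion."""
    name = form.get("channel_name", "").strip()
    if not name:
        raise ValueError("a channel name is required")
    members = form.get("member_ids", [])
    if not members:
        raise ValueError("at least one member must be selected")
    return ChannelRequest(
        channel_name=name,
        is_private=form.get("privacy", "private") == "private",
        member_ids=list(dict.fromkeys(members)),  # drop duplicate member ids, keep order
    )
```

A submission such as `build_channel_request({"channel_name": "deal-123", "privacy": "public", "member_ids": ["u1", "u2"]})` would then yield a request the platform can turn into a channel with those members and public visibility.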
Thus, aspects of the present disclosure provide an association between a data record and a communication channel. Aspects of the present disclosure may be implemented to realize one or more of the following advantages. The described techniques may provide improved cross-platform compatibility between a data management platform and a communication platform. For example, the techniques described herein may enable users of the communication platform to interact with (e.g., affect, change, update, modify) data records stored at or otherwise controlled by the data management platform. Similarly, the described techniques may enable the data management platform to display information related to a data record within the communication platform. By supporting bidirectional communications between the data management platform and the communication platform, the described techniques may enable users to communicate with greater efficiency, lower communication resource overhead, reduced latency, and higher user satisfaction, among other benefits. Aspects of the disclosure are initially described in the context of an environment supporting an on-demand database service. Aspects of the disclosure are illustrated by and described with reference to data processing systems, user interfaces, and process flows. Aspects of the disclosure are further illustrated by and described with reference to apparatus diagrams, system diagrams, and flowcharts that relate to methods to generate communication channel for data objects. FIG. 1 illustrates an example of a system 100 for cloud computing that supports methods to generate communication channel for data objects in accordance with various aspects of the present disclosure. The system 100 includes cloud clients 105, contacts 110, cloud platform 115, and data center 120. Cloud platform 115 may be an example of a public or private cloud network. A cloud client 105 may access cloud platform 115 over network connection 135.
The network may implement transfer control protocol and internet protocol (TCP/IP), such as the Internet, or may implement other network protocols. A cloud client 105 may be an example of a user device, such as a server (e.g., cloud client 105-a), a smartphone (e.g., cloud client 105-b), or a laptop (e.g., cloud client 105-c). In other examples, a cloud client 105 may be a desktop computer, a tablet, a sensor, or another computing device or system capable of generating, analyzing, transmitting, or receiving communications. In some examples, a cloud client 105 may be operated by a user that is part of a business, an enterprise, a non-profit, a startup, or any other organization type. A cloud client 105 may interact with multiple contacts 110. The interactions 130 may include communications, opportunities, purchases, sales, or any other interaction between a cloud client 105 and a contact 110. Data may be associated with the interactions 130. A cloud client 105 may access cloud platform 115 to store, manage, and process the data associated with the interactions 130. In some cases, the cloud client 105 may have an associated security or permission level. A cloud client 105 may have access to certain applications, data, and database information within cloud platform 115 based on the associated security or permission level, and may not have access to others. Contacts 110 may interact with the cloud client 105 in person or via phone, email, web, text messages, mail, or any other appropriate form of interaction (e.g., interactions 130-a, 130-b, 130-c, and 130-d). The interaction 130 may be a business-to-business (B2B) interaction or a business-to-consumer (B2C) interaction. A contact 110 may also be referred to as a customer, a potential customer, a lead, a client, or some other suitable terminology. In some cases, the contact 110 may be an example of a user device, such as a server (e.g., contact 110-a), a laptop (e.g., contact 110-b), a smartphone (e.g., contact 110-c), or a sensor (e.g., contact 110-d).
In other cases, the contact 110 may be another computing system. In some cases, the contact 110 may be operated by a user or group of users. The user or group of users may be associated with a business, a manufacturer, or any other appropriate organization. Cloud platform 115 may offer an on-demand database service to the cloud client 105. In some cases, cloud platform 115 may be an example of a multi-tenant database system. In this case, cloud platform 115 may serve multiple cloud clients 105 with a single instance of software. However, other types of systems may be implemented, including, but not limited to, client-server systems, mobile device systems, and mobile network systems. In some cases, cloud platform 115 may support CRM solutions. This may include support for sales, service, marketing, community, analytics, applications, and the Internet of Things. Cloud platform 115 may receive data associated with contact interactions 130 from the cloud client 105 over network connection 135, and may store and analyze the data. In some cases, cloud platform 115 may receive data directly from an interaction 130 between a contact 110 and the cloud client 105. In some cases, the cloud client 105 may develop applications to run on cloud platform 115. Cloud platform 115 may be implemented using remote servers. In some cases, the remote servers may be located at one or more data centers 120. Data center 120 may include multiple servers. The multiple servers may be used for data storage, management, and processing. Data center 120 may receive data from cloud platform 115 via connection 140, or directly from the cloud client 105 or an interaction 130 between a contact 110 and the cloud client 105. Data center 120 may utilize multiple redundancies for security purposes. In some cases, the data stored at data center 120 may be backed up by copies of the data at a different data center (not pictured). Subsystem 125 may include cloud clients 105, cloud platform 115, and data center 120.
In some cases, data processing may occur at any of the components of subsystem 125, or at a combination of these components. In some cases, servers may perform the data processing. The servers may be a cloud client 105 or located at data center 120. A cloud platform 115 may include one or more application servers that support a communication platform, a data management platform, a service that manages communications between the data management platform and the communication platform, a global endpoint, or a combination thereof. In some examples, these platforms, services, and endpoints may be supported by the same application server. In other examples, these platforms, services, and endpoints may be supported by separate application servers. As described herein, a data management platform may include a communication service that may enable users to develop and manage communications between a tenant of a multi-tenant system (e.g., a cloud client 105) and a set of users (e.g., customers) corresponding to the tenant (e.g., contacts 110). In some cases, the communication service may enable communications between multiple users of a cloud-based data management platform (e.g., a cloud client 105). In some cases, these users may communicate within a communication channel of a communication platform. However, some systems may not support generation of a communication channel at the communication platform from the cloud-based data management platform. In other words, users may be unable to interact with (e.g., generate, update, change, modify) a communication channel from within the data management platform. Additionally or alternatively, the users may be unable to receive updates or data objects associated with a communication channel within the communication platform for viewing, discussion, and subsequent interaction.
Aspects of the present disclosure support generation of a communication channel associated with a group-based communication platform. In particular, aspects of the present disclosure provide for communication channel generation from a cloud-based data management platform, which may enable users to develop and manage communication channels (e.g., add people to a communication channel, set the privacy level of the communication channel) from within the cloud-based data management platform. For example, an authenticated user of the cloud-based data management platform may update a configuration of a communication channel by interacting with a user interface associated with the cloud-based data management platform. Moreover, the techniques described herein may enable users to interact with communication channels on different devices (e.g., smartphones, tablets, mobile devices). In some examples, a user may perform an authentication procedure to connect an account associated with a communication platform with an account associated with a data management platform. Once authenticated, the user may use the data management platform to interact with communication channels stored at or otherwise controlled by the communication platform. The data management platform may store the account information of the user, and may use this information to authenticate subsequent requests from the user. As an example, a user may submit a request to generate a communication channel by interacting with a user interface associated with a data management platform. For example, a user interacting with a data record stored in the data management platform may use an “action” button to initiate generation of a communication channel associated with the data record. The user may further indicate an initial conversation name, an indication of whether the discussion is public or private, and identifiers of members to include in the conversation. The user may then submit the request.
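The account-linking step above (store the user's communication-platform account information, then use it to authenticate later requests) could be sketched as follows; this is a minimal in-memory illustration with hypothetical names, not the patent's implementation:

```python
import time

class AccountLinkStore:
    """Hypothetical store linking a data-management-platform user to a
    communication-platform account, used to authenticate subsequent requests."""

    def __init__(self):
        self._links = {}  # user_id -> (comm_account_id, token, expiry)

    def link(self, user_id, comm_account_id, token, ttl_seconds=3600):
        """Record a completed authentication procedure for the user."""
        self._links[user_id] = (comm_account_id, token, time.time() + ttl_seconds)

    def authenticate(self, user_id):
        """Return the stored token if the link is still valid, else None."""
        entry = self._links.get(user_id)
        if entry is None:
            return None
        comm_account_id, token, expiry = entry
        if time.time() >= expiry:
            del self._links[user_id]  # expired link requires re-authentication
            return None
        return token
```

A request from a user whose link is missing or expired would then be rejected, forcing the one-time authentication procedure to run again.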
Once submitted, the request may be sent to an intermediary service that manages communications between the communication platform and the data management platform. The intermediary service may perform an authentication procedure to verify the identity of the user. Once authenticated, the request may be routed to the communication platform. In some examples, the communication platform may generate a communication channel in accordance with the request. In some instances, the user may add more participants to the communication channel (after generation). Additionally or alternatively, the user may associate the generated communication channel with an existing communication channel. Thus, the techniques described herein may enable users to interact with the communication platform from the data management platform (e.g., via an intermediary service supporting an application programming interface (API)), and may also enable the data management platform to post or display information within the communication platform. Thus, the described techniques may support generation of a communication channel from the cloud-based data management platform. It should be appreciated by a person skilled in the art that one or more aspects of the disclosure may be implemented in a system 100 to additionally or alternatively solve other problems than those described above. Furthermore, aspects of the disclosure may provide technical improvements to “conventional” systems or processes as described herein. However, the description and appended drawings only include example technical improvements resulting from implementing aspects of the disclosure, and accordingly do not represent all of the technical improvements provided within the scope of the claims. FIG. 2 illustrates an example of a block diagram 200 that supports methods to generate communication channel for data objects in accordance with aspects of the present disclosure.
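The intermediary flow above (authenticate the requester, then route the request to the communication platform) might look like the following sketch; the class and method names are illustrative assumptions, and the auth store and platform are stand-ins for whatever services actually back them:

```python
class IntermediaryService:
    """Hypothetical intermediary: verify the user's identity, then route the
    channel-creation request to the group-based communication platform."""

    def __init__(self, auth_store, communication_platform):
        self._auth = auth_store                  # verifies user identity
        self._platform = communication_platform  # target communication platform

    def handle(self, user_id, request):
        token = self._auth.authenticate(user_id)
        if token is None:
            raise PermissionError("user is not authenticated with the communication platform")
        # Route the authenticated request on to the communication platform.
        return self._platform.create_channel(token=token, **request)
```

Keeping authentication in the intermediary means the data management platform never has to talk to the communication platform directly, which matches the cross-platform separation described above.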
The devices of block diagram 200 may implement or be implemented by aspects of the data processing system 100 as described in FIG. 1. The block diagram 200 includes a server 205, which may be implemented by aspects of a cloud platform 115 or a subsystem 125 described with reference to FIG. 1. The server 205 executes a communication generation service 210. The block diagram 200 further includes a user device 215 (depicted as a cluster of devices), and a communication platform 250, which may be implemented by aspects of a cloud platform 115 or a subsystem 125 described with reference to FIG. 1. Example user devices 215 may include computing devices, smart devices, virtual assistants, etc., and the user device 215 may include servers supporting such systems (e.g., search servers, application servers, etc.). The user device 215 may transmit a set of user input to the server 205. The set of user input may be used to generate a communication channel using a user interface supported by the server 205. In some examples, the systems or servers supporting the server 205 (e.g., data management platform) may include computing systems that are logically or physically separated from systems or servers supporting the communication platform 250. As described herein, the block diagram 200 may support creation, configuration, and implementation of various communication channels that provide communications between a set of users (e.g., a set of users associated with a tenant). For example, the associated users may use the communication generation service 210 to perform actions that include processor-executable instructions for generation of communication channels. For example, a user may input an instruction that, when executed by a processor, selects users (e.g., customers) to be included in a communication channel. The communication platform 250 may support a chat or instant messaging service used for various business functionalities.
For example, teams associated with a tenant (of a multi-tenant system supported by the block diagram 200) may use the communication platform 250 to manage communication channels supported by the data management platform (e.g., server 205). Users may use the communication platform 250 to discuss aspects of the data management platform. For example, users of the communication platform 250 may decide to reconfigure or interact with a data record included in the server 205 (e.g., data management platform). However, because the server 205 and the communication platform 250 may be implemented in separate computing systems and/or executed by separate applications, some features of the server 205 may be incompatible with the communication platform 250. Thus, if a user wishes to communicate with another user within the communication platform 250, the user may be unable to generate the communication channel from the data management platform (hosted by server 205). Further, a user may be unable to post or otherwise display data associated with the data record into a communication channel of the communication platform 250 without manually inputting the data into a chat window of the communication platform 250. Techniques described herein may support improved cross-platform compatibility between the server 205 hosting the data management platform and the communication platform 250. An instruction interface of the server 205 may include an input identification component 230. The server 205 may receive a user input 220 via the instruction interface. The user input 220 may include a user input to generate a communication channel of a group-based communication platform (e.g., communication platform 250) that is separate from the cloud-based data management platform (hosted by server 205). In some examples, the communication channel may be for a data object of a set of data objects stored in the cloud-based data management platform.
For example, a user may use an “action” button displayed on a user interface to initiate generation of a communication channel. Upon initiation of the generation of the communication channel, the instruction interface of the server 205 may display a list of options for generating the communication channel. In some examples, the list of options may include a group of users for including in the communication channel, a privacy level for the communication channel, and an identifier of the communication channel. In response to displaying the list of options, the input identification component 230 may receive an input to the list of options for generating the communication channel. An option identifying component 240 may identify a name of the communication channel based on the input to the list of options. Additionally or alternatively, the option identifying component 240 may identify whether the communication channel is public or private based on the input to the list of options. In some examples, the option identifying component 240 may identify a selection of a subset of the group of users for including in the communication channel based on the input to the list of options. In some examples, the option identifying component 240 may identify the subset of the group of users using metadata 245 for different clients (e.g., cloud clients 105 of FIG. 1). The option identifying component 240 may generate an executable packet of instructions for ingesting into the group-based communication platform based on the input to the list of options. For example, the option identifying component 240 may generate the executable packet of instructions based on identifying the name of the communication channel, whether the communication channel is public or private, and the selection of the subset of the group of users. In some examples, the user may initiate generation of a communication channel (using an “action” button) associated with a data object.
The interface associated with the server 205 may display a list of users associated with the data record. In some examples, the group of users may include a first group of users that have access to the data object, a second group of users that follow the data object, or a combination thereof. The user may select a subset of users to be included as participants in the communication channel. Additionally or alternatively, the user may indicate whether the communication channel is to be public, private (e.g., only participants may have access to the communication channel), or partially private (e.g., a subset of users in addition to the participants may have access to the communication channel). The instruction generation component 235 may transmit the executable packet of instructions for ingesting into the communication platform 250. Upon ingestion, the communication platform 250 may generate a communication channel with the subset of users as participants. In some examples, an association component 260 may determine an existing communication channel in the communication platform 250 for the data object of the set of data objects. For instance, after generation of a communication channel, the user may choose to associate the channel with an existing communication channel. Additionally or alternatively, the association component 260 may associate the generated communication channel with an existing communication channel using stored associations 265. In some examples, the interface at the server 205 may display a list of communication channels for the user. For example, the user may be a member of a set of communication channels included in the list of communication channels.
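The retrieval step described above (the group of users linked to a data object is drawn from users with access to it, users that follow it, or both) can be sketched as a simple set union; the index structures here are hypothetical stand-ins for however the platform actually stores access grants and followers:

```python
def linked_users(data_object, access_index, follower_index):
    """Hypothetical sketch: the group of users linked to a data object is the
    union of users that have access to it and users that follow it.
    Both indexes map a record id to a set of user ids."""
    with_access = access_index.get(data_object, set())
    followers = follower_index.get(data_object, set())
    return sorted(with_access | followers)
```

The interface would then present this combined group, from which the user selects the subset to include as channel participants.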
The user may select an option to display all communication channels that the user is a part of. Additionally or alternatively, the input identification component 230 may receive a second user input to display a list of communication channels of the communication platform 250 associated with the data object of the set of data objects. The communication generation service 210 may display a list of communication channels associated with the data object in response to receiving the second user input. In some examples, the input identification component 230 may receive a second input to the list of options for generating a second communication channel. The instruction generation component 235 may generate a second executable packet of instructions for ingesting into the communication platform 250 based on the second input to the list of options. In some examples, the communication channel and the second communication channel may both be for a common data object. That is, components of the block diagram 200 may support generation of multiple communication channels associated with a common data object. Additionally or alternatively, the components of the block diagram 200 may support generation of multiple communication channels associated with different data objects. In some examples, different communication channels may include the same or different groups of users and may have the same or different privacy levels. In some examples, the association component 260 may associate a generated communication channel with a second data object of the set of data objects. FIG. 3 illustrates an example of a user interface 300 that supports methods to generate communication channel for data objects in accordance with aspects of the present disclosure. The user interface 300 may correspond to a desktop or mobile or other user interface type. In some cases, additional user interface types may be supported for generating communication channels associated with a data object (e.g., data record).
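Because a data object may have multiple channels and a channel may later be associated with a second data object, the stored associations form a many-to-many mapping. A minimal sketch (class and field names are assumptions, not the patent's):

```python
from collections import defaultdict

class AssociationStore:
    """Hypothetical sketch of stored associations: a data object may be linked
    to multiple channels, and a channel may be associated with several objects."""

    def __init__(self):
        self._by_object = defaultdict(set)   # record id -> channel ids
        self._by_channel = defaultdict(set)  # channel id -> record ids

    def associate(self, record_id, channel_id):
        self._by_object[record_id].add(channel_id)
        self._by_channel[channel_id].add(record_id)

    def channels_for(self, record_id):
        """List channels for a data object (the 'second user input' lookup)."""
        return sorted(self._by_object[record_id])
```

Displaying the list of channels associated with a record then reduces to a single lookup in this store.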
The user interface 300 may support a view of communication channel generation from a data management platform. As depicted herein, a user of a device may be associated with a tenant of a multi-tenant database, which may use the cloud platform for data management. The user interface 300 may be a part of a cloud platform that supports multiple data records. The user interface 300 may display a view of a data object supported by an application server. As depicted in the example of FIG. 3, the user interface 300 may support a set of data records configured for use in an application. As depicted in the example of FIG. 3, the set of data records may include a data record “updated user story” included in the pane 312. The user interface 300 may support generation of a communication channel associated with a relevant data record. In the example depicted herein, a user device may generate a communication channel associated with the data record “updated user story.” The user interface 300 may include a section 304 indicating details related to the data record. Additionally or alternatively, the user interface 300 may include a section 306 identifying followers associated with the data record. As depicted in the example of FIG. 3, in addition to the list of followers displayed on the user interface 300, the user may be able to search for additional followers. A user may select the “discuss in communication channel” option 308 to generate a communication channel. Upon receiving the user input, the cloud-based data management platform including the user interface 300 may determine that the user input is to generate a communication channel of a group-based communication platform that is separate from the cloud-based data management platform. The user interface 300 may display the list of options 310 upon receiving a selection of option 308 from the user.
The list of options 310 may include an option to create a new communication channel and an option to associate a communication channel with an existing communication channel. Additionally, the list of options 310 may include an identifier of the communication channel, a group of users for including in the communication channel, and a privacy level for the communication channel. As depicted in the example of FIG. 3, the list of options 310 may include an input box for the user to enter a channel name. The list of options 310 may further include an option for the user to add participants. In some examples, the data management platform may retrieve a group of users associated with the data record. The group of users that are linked to the data object may include at least one of a first group of users that have access to the data object, a second group of users that follow the data object, or a combination thereof. Additionally or alternatively, the list of options 310 may include an option to make the communication channel public and an option to make the communication channel private. The data management platform may then generate an executable packet of instructions for ingesting into the group-based communication platform based on an input to the list of options 310. The executable packet of instructions may include at least one of a communication channel identifier, a user identifier for a creator of the executable packet of instructions, a user identifier for a modifier of the executable packet of instructions, a user identifier for a member of the data object, a user identifier for an owner of the data object, a privacy indicator, a record identifier, or a combination thereof. The data management platform may then transmit the executable packet of instructions for ingesting into the group-based communication platform. Thus, by implementing the techniques for generating a communication channel, the user interface 300 may improve user experience.
FIG. 4 illustrates an example of an instruction generation method 400 that supports methods to generate communication channel for data objects in accordance with aspects of the present disclosure. The instruction generation method 400 may be used to generate a communication channel on a communication platform. A user of a cloud-based data management platform may use one or more options displayed on a user interface of the cloud-based data management platform to generate a communication channel on a group-based communication platform. A user may provide an input to a list of options to create a new communication channel. In the example of FIG. 4, the data management platform may generate an executable packet of instructions for ingesting into the group-based communication platform based on an input to the list of options. The executable packet of instructions may include record assignment 415. The data management platform may use parameters for a record in a table 405 and parameters for a user object in a table 410 to generate record assignment 415. The record assignment 415 may include at least one of a communication channel identifier, a user identifier for a creator of the executable packet of instructions (created by), a user identifier for a modifier of the executable packet of instructions, a user identifier for a member of the data object, a user identifier for an owner of the data object, a privacy indicator, a record identifier (record ID and record assignment name), or a combination thereof. The record identifier in the record assignment 415 may be received from the parameters for the record in the table 405. Additionally or alternatively, the owner of the record assignment 415 may be received from the parameters for the user object in the table 410. The data management platform then ingests the record assignment 415 to a communication platform through a cloud platform 460.
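The assembly of record assignment 415 from the record table (405) and the user-object table (410) could be sketched as follows; the dictionary keys are illustrative names for the fields listed above, not field names taken from the patent figures:

```python
def build_record_assignment(record_params, user_params):
    """Hypothetical sketch of record assignment 415: combine parameters from the
    record table (405) and the user-object table (410) into one packet."""
    return {
        "channel_id": record_params.get("channel_id"),
        "record_id": record_params["record_id"],          # from table 405
        "record_assignment_name": record_params["name"],  # from table 405
        "created_by": user_params["user_id"],             # creator of the packet
        "modified_by": user_params["user_id"],            # same user at creation time
        "owner": user_params["owner_id"],                 # from table 410
        "is_private": record_params.get("is_private", True),  # privacy indicator
    }
```

Ingesting such a packet, the communication platform would have everything it needs (channel identifier, creator, owner, privacy indicator, record identifier) to generate the channel 420 described next.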
The communication platform may generate a communication channel 420 upon reception of the set of executable instructions (the record assignment 415). In some examples, the communication channel 420 may include a channel name, an indicator of the creator of the channel, an indicator of the modifier of the channel, an indicator of the owner of the channel, and a channel identifier. 

FIG. 5 illustrates an example of a process flow 500 that supports methods to generate a communication channel for data objects in accordance with aspects of the present disclosure. The process flow 500 may implement aspects of the system 100 of FIG. 1 and includes a user device 505 (e.g., a set of user devices), which may be examples of devices associated with the cloud client 105 of FIG. 1. Example user devices 505 include computing devices, smart devices, virtual assistants, etc., and the user device 505 may include servers supporting such systems (e.g., search servers, application servers, etc.). The process flow 500 further includes an application server 510, which may be an example of aspects of the cloud platform 115 of FIG. 1 and may be an example of aspects of the application server 205 of FIG. 2 (e.g., a database system, application server, etc.), and may support a cloud-based data management platform. The process flow 500 further includes a communication server 512, which may be an example of aspects of the communication platform 250 of FIG. 2 (e.g., a database system, application server, etc.), and may support a group-based communication platform. The application server 510 may load, in a user interface of a cloud-based data management platform, an option to generate a communication channel. At 515, the user device 505 may transmit a selection of a source element and an event associated with the source element. 
In some examples, the user device 505 may transmit, via a user interface of a cloud-based data management platform storing a set of data objects, a user input to generate a communication channel of a group-based communication platform that is separate from the cloud-based data management platform. In some examples, the communication channel may be for a data object of the set of data objects. At 520, the application server 510 may retrieve, in response to receiving the user input, a group of users that are linked to the data object within the cloud-based data management platform. The application server 510 may then display, via the user interface, a list of options for generating the communication channel, the list of options including the group of users for inclusion in the communication channel, a privacy level for the communication channel, and an identifier of the communication channel. At 525, the user device 505 may transmit an input to the list of options. The application server 510 may receive the input to the list of options for generating the communication channel. At 530, the application server 510 may identify a name of the communication channel based on the input to the list of options. The application server 510 may identify whether the communication channel is public or private based at least in part on the input to the list of options. Additionally or alternatively, the application server 510 may identify a selection of a subset of the group of users for inclusion in the communication channel based on the input to the list of options. At 535, the application server 510 may generate an executable packet of instructions for ingesting into the group-based communication platform based on the input to the list of options. 
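The server-side portion of this flow (steps 520 through 535) can be sketched as follows. The dictionary keys and helper names are illustrative assumptions rather than the platform's actual API.

```python
# Hypothetical sketch of the application server's handling of a channel
# request: retrieve the users linked to the record (step 520), derive the
# name and privacy from the option input (steps 525-530), and fold them
# into an executable packet of instructions (step 535).
def handle_channel_request(record_id, users_by_record, option_input):
    linked_users = users_by_record.get(record_id, [])        # step 520
    packet = {                                               # step 535
        "channel_id": option_input["name"],                  # step 530
        "privacy": option_input.get("privacy", "public"),
        "members": option_input.get("members", linked_users),
        "record_id": record_id,
    }
    return packet
```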
In some examples, the executable packet of instructions may include at least one of a communication channel identifier, a user identifier for a creator of the executable packet of instructions, a user identifier for a modifier of the executable packet of instructions, a user identifier for a member of the data object, a user identifier for an owner of the data object, a privacy indicator, a record identifier, or a combination thereof. At 540, the application server 510 may transmit the executable packet of instructions for ingesting into the group-based communication platform. At 545, the communication server 512 may generate a communication channel based on the executable packet of instructions transmitted by the application server 510. 

FIG. 6 shows a block diagram 600 of a device 605 that supports methods to generate a communication channel for data objects in accordance with aspects of the present disclosure. The device 605 may include an input module 610, an output module 615, and a communication channel component 620. The device 605 may also include a processor. Each of these components may be in communication with one another (e.g., via one or more buses). The input module 610 may manage input signals for the device 605. For example, the input module 610 may identify input signals based on an interaction with a modem, a keyboard, a mouse, a touchscreen, or a similar device. These input signals may be associated with user input or processing at other components or devices. In some cases, the input module 610 may utilize an operating system such as iOS®, ANDROID®, MS-DOS®, MS-WINDOWS®, OS/2®, UNIX®, LINUX®, or another known operating system to handle input signals. The input module 610 may send aspects of these input signals to other components of the device 605 for processing. For example, the input module 610 may transmit input signals to the communication channel component 620 to support methods to generate a communication channel for data objects. 
In some cases, the input module 610 may be a component of an I/O controller 810 as described with reference to FIG. 8. The output module 615 may manage output signals for the device 605. For example, the output module 615 may receive signals from other components of the device 605, such as the communication channel component 620, and may transmit these signals to other components or devices. In some examples, the output module 615 may transmit output signals for display in a user interface, for storage in a database or data store, for further processing at a server or server cluster, or for any other processes at any number of devices or systems. In some cases, the output module 615 may be a component of an I/O controller 810 as described with reference to FIG. 8. For example, the communication channel component 620 may include an input component 625, a user information component 630, a display component 635, an instruction generation component 640, an instruction transmission component 645, or any combination thereof. In some examples, the communication channel component 620, or various components thereof, may be configured to perform various operations (e.g., receiving, monitoring, transmitting) using or otherwise in cooperation with the input module 610, the output module 615, or both. For example, the communication channel component 620 may receive information from the input module 610, send information to the output module 615, or be integrated in combination with the input module 610, the output module 615, or both to receive information, transmit information, or perform various other operations as described herein. The communication channel component 620 may support communication channel creation in accordance with examples as disclosed herein. 
The input component 625 may be configured as or otherwise support a means for receiving, via a user interface of a cloud-based data management platform storing a plurality of data objects, a user input to generate a communication channel of a group-based communication platform that is separate from the cloud-based data management platform, wherein the communication channel is for a data object of the plurality of data objects. The user information component 630 may be configured as or otherwise support a means for retrieving, in response to receiving the user input, a group of users that are linked to the data object within the cloud-based data management platform. The display component 635 may be configured as or otherwise support a means for displaying, via the user interface, a list of options for generating the communication channel, the list of options comprising the group of users for inclusion in the communication channel, a privacy level for the communication channel, and an identifier of the communication channel. The instruction generation component 640 may be configured as or otherwise support a means for generating an executable packet of instructions for ingesting into the group-based communication platform based at least in part on an input to the list of options. The instruction transmission component 645 may be configured as or otherwise support a means for transmitting the executable packet of instructions for ingesting into the group-based communication platform. 

FIG. 7 shows a block diagram 700 of a communication channel component 720 that supports methods to generate a communication channel for data objects in accordance with aspects of the present disclosure. The communication channel component 720 may be an example of aspects of a communication channel component or a communication channel component 620, or both, as described herein. 
The communication channel component 720, or various components thereof, may be an example of means for performing various aspects of methods to generate a communication channel for data objects as described herein. For example, the communication channel component 720 may include an input component 725, a user information component 730, a display component 735, an instruction generation component 740, an instruction transmission component 745, a communication channel component 750, an association component 755, or any combination thereof. Each of these components may communicate, directly or indirectly, with one another (e.g., via one or more buses). The communication channel component 720 may support communication channel creation in accordance with examples as disclosed herein. The input component 725 may be configured as or otherwise support a means for receiving, via a user interface of a cloud-based data management platform storing a plurality of data objects, a user input to generate a communication channel of a group-based communication platform that is separate from the cloud-based data management platform, wherein the communication channel is for a data object of the plurality of data objects. The user information component 730 may be configured as or otherwise support a means for retrieving, in response to receiving the user input, a group of users that are linked to the data object within the cloud-based data management platform. The display component 735 may be configured as or otherwise support a means for displaying, via the user interface, a list of options for generating the communication channel, the list of options comprising the group of users for inclusion in the communication channel, a privacy level for the communication channel, and an identifier of the communication channel. 
The instruction generation component 740 may be configured as or otherwise support a means for generating an executable packet of instructions for ingesting into the group-based communication platform based at least in part on an input to the list of options. The instruction transmission component 745 may be configured as or otherwise support a means for transmitting the executable packet of instructions for ingesting into the group-based communication platform. In some examples, the input component 725 may be configured as or otherwise support a means for receiving, via the user interface, the input to the list of options for generating the communication channel, wherein generating the executable packet of instructions is based at least in part on the received input. In some examples, the communication channel component 750 may be configured as or otherwise support a means for identifying a name of the communication channel based at least in part on the input to the list of options. In some examples, the communication channel component 750 may be configured as or otherwise support a means for identifying whether the communication channel is public or private based at least in part on the input to the list of options. In some examples, the communication channel component 750 may be configured as or otherwise support a means for identifying a selection of a subset of the group of users for inclusion in the communication channel based at least in part on the input to the list of options, wherein generating the executable packet of instructions is based at least in part on identifying the name of the communication channel, whether the communication channel is public or private, and the selection of the subset of the group of users. In some examples, the communication channel component 750 may be configured as or otherwise support a means for determining an existing communication channel in the group-based communication platform for the data object of the plurality of data objects. 
In some examples, the communication channel component 750 may be configured as or otherwise support a means for associating the communication channel with the existing communication channel. In some examples, the user information component 730 may be configured as or otherwise support a means for identifying a user associated with the user input to generate the communication channel of the group-based communication platform. In some examples, the display component 735 may be configured as or otherwise support a means for displaying, via the user interface, a list of communication channels for the user, wherein the user is a member of a plurality of communication channels included in the list of communication channels. In some examples, the input component 725 may be configured as or otherwise support a means for receiving, via the user interface, the input to the list of options for generating the communication channel. In some examples, the input component 725 may be configured as or otherwise support a means for receiving, via the user interface, a second input to the list of options for generating a second communication channel. In some examples, the instruction generation component 740 may be configured as or otherwise support a means for generating a second executable packet of instructions for ingesting into the group-based communication platform based at least in part on the second input to the list of options, wherein the communication channel and the second communication channel are both for the data object of the plurality of data objects. In some examples, the communication channel includes a first group of users and the second communication channel includes a second group of users. In some examples, the first group of users and the second group of users are the same or different. In some examples, the communication channel and the second communication channel have the same privacy levels or different privacy levels. 
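The relationship described above, in which a single data object may have several channels with independent member sets and privacy levels, can be sketched as follows. The data structures are illustrative assumptions.

```python
# Hypothetical sketch: one data object (record) may be associated with
# several channels, each with its own member set and privacy level.
from collections import defaultdict

channels_by_record = defaultdict(list)

def add_channel(record_id: str, name: str, members: list, private: bool) -> None:
    """Register a channel against a record without disturbing existing ones."""
    channels_by_record[record_id].append(
        {"name": name, "members": set(members), "private": private}
    )

# a public channel and a private channel for the same record
add_channel("r1", "deal-team", ["u1", "u2", "u3"], private=False)
add_channel("r1", "deal-execs", ["u1"], private=True)
```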
In some examples, the association component 755 may be configured as or otherwise support a means for associating the generated communication channel with a second data object of the plurality of data objects. In some examples, the input component 725 may be configured as or otherwise support a means for receiving a second user input to display a list of communication channels of the group-based communication platform associated with the data object of the plurality of data objects. In some examples, the display component 735 may be configured as or otherwise support a means for displaying, via the user interface, the list of communication channels in response to receiving the second user input. In some examples, the input component 725 may be configured as or otherwise support a means for receiving an indication indicating that the communication channel is generated at the group-based communication platform. In some examples, the group of users that are linked to the data object comprise at least one of a first group of users that have access to the data object, a second group of users that follow the data object, or a combination thereof. In some examples, the executable packet of instructions comprises at least one of a communication channel identifier, a user identifier for a creator of the executable packet of instructions, a user identifier for a modifier of the executable packet of instructions, a user identifier for a member of the data object, a user identifier for an owner of the data object, a privacy indicator, a record identifier, or a combination thereof. 

FIG. 8 shows a diagram of a system 800 including a device 805 that supports methods to generate a communication channel for data objects in accordance with aspects of the present disclosure. The device 805 may be an example of or include the components of a device 605 as described herein. 
The device 805 may include components for bi-directional data communications including components for transmitting and receiving communications, such as a communication channel component 820, an I/O controller 810, a database controller 815, a memory 825, a processor 830, and a database 835. These components may be in electronic communication or otherwise coupled (e.g., operatively, communicatively, functionally, electronically, electrically) via one or more buses (e.g., a bus 840). The I/O controller 810 may manage input signals 845 and output signals 850 for the device 805. The I/O controller 810 may also manage peripherals not integrated into the device 805. In some cases, the I/O controller 810 may represent a physical connection or port to an external peripheral. In some cases, the I/O controller 810 may utilize an operating system such as iOS®, ANDROID®, MS-DOS®, MS-WINDOWS®, OS/2®, UNIX®, LINUX®, or another known operating system. In other cases, the I/O controller 810 may represent or interact with a modem, a keyboard, a mouse, a touchscreen, or a similar device. In some cases, the I/O controller 810 may be implemented as part of a processor 830. In some examples, a user may interact with the device 805 via the I/O controller 810 or via hardware components controlled by the I/O controller 810. The database controller 815 may manage data storage and processing in a database 835. In some cases, a user may interact with the database controller 815. In other cases, the database controller 815 may operate automatically without user interaction. The database 835 may be an example of a single database, a distributed database, multiple distributed databases, a data store, a data lake, or an emergency backup database. Memory 825 may include random-access memory (RAM) and ROM. The memory 825 may store computer-readable, computer-executable software including instructions that, when executed, cause the processor 830 to perform various functions described herein. 
In some cases, the memory 825 may contain, among other things, a BIOS which may control basic hardware or software operation such as the interaction with peripheral components or devices. The processor 830 may include an intelligent hardware device (e.g., a general-purpose processor, a DSP, a CPU, a microcontroller, an ASIC, an FPGA, a programmable logic device, a discrete gate or transistor logic component, a discrete hardware component, or any combination thereof). In some cases, the processor 830 may be configured to operate a memory array using a memory controller. In other cases, a memory controller may be integrated into the processor 830. The processor 830 may be configured to execute computer-readable instructions stored in a memory 825 to perform various functions (e.g., functions or tasks supporting methods to generate a communication channel for data objects). The communication channel component 820 may support communication channel creation in accordance with examples as disclosed herein. For example, the communication channel component 820 may be configured as or otherwise support a means for receiving, via a user interface of a cloud-based data management platform storing a plurality of data objects, a user input to generate a communication channel of a group-based communication platform that is separate from the cloud-based data management platform, wherein the communication channel is for a data object of the plurality of data objects. The communication channel component 820 may be configured as or otherwise support a means for retrieving, in response to receiving the user input, a group of users that are linked to the data object within the cloud-based data management platform. 
The communication channel component 820 may be configured as or otherwise support a means for displaying, via the user interface, a list of options for generating the communication channel, the list of options comprising the group of users for inclusion in the communication channel, a privacy level for the communication channel, and an identifier of the communication channel. The communication channel component 820 may be configured as or otherwise support a means for generating an executable packet of instructions for ingesting into the group-based communication platform based at least in part on an input to the list of options. The communication channel component 820 may be configured as or otherwise support a means for transmitting the executable packet of instructions for ingesting into the group-based communication platform. By including or configuring the communication channel component 820 in accordance with examples as described herein, the device 805 may support techniques for generating communication channels. For example, the techniques described herein may enable users of a data management platform to interact with (e.g., affect, change, update, modify) communication channels stored at or otherwise controlled by a group-based communication platform. By supporting generating communication channels from the data management platform, the device 805 may enable users to update and manage communication channels with greater efficiency, lower communication resource overhead, reduced latency, and higher user satisfaction, among other benefits. 

FIG. 9 shows a flowchart illustrating a method 900 that supports methods to generate a communication channel for data objects in accordance with aspects of the present disclosure. The operations of the method 900 may be implemented by an application server or its components as described herein. For example, the operations of the method 900 may be performed by an application server as described with reference to FIGS. 1 through 8. 
In some examples, an application server may execute a set of instructions to control the functional elements of the application server to perform the described functions. Additionally or alternatively, the application server may perform aspects of the described functions using special-purpose hardware. At 905, the method may include receiving, via a user interface of a cloud-based data management platform storing a plurality of data objects, a user input to generate a communication channel of a group-based communication platform that is separate from the cloud-based data management platform, wherein the communication channel is for a data object of the plurality of data objects. The operations of 905 may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of 905 may be performed by an input component 725 as described with reference to FIG. 7. At 910, the method may include retrieving, in response to receiving the user input, a group of users that are linked to the data object within the cloud-based data management platform. The operations of 910 may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of 910 may be performed by a user information component 730 as described with reference to FIG. 7. At 915, the method may include displaying, via the user interface, a list of options for generating the communication channel, the list of options comprising the group of users for inclusion in the communication channel, a privacy level for the communication channel, and an identifier of the communication channel. The operations of 915 may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of 915 may be performed by a display component 735 as described with reference to FIG. 7. 
At 920, the method may include generating an executable packet of instructions for ingesting into the group-based communication platform based at least in part on an input to the list of options. The operations of 920 may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of 920 may be performed by an instruction generation component 740 as described with reference to FIG. 7. At 925, the method may include transmitting the executable packet of instructions for ingesting into the group-based communication platform. The operations of 925 may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of 925 may be performed by an instruction transmission component 745 as described with reference to FIG. 7. 

FIG. 10 shows a flowchart illustrating a method 1000 that supports methods to generate a communication channel for data objects in accordance with aspects of the present disclosure. The operations of the method 1000 may be implemented by an application server or its components as described herein. For example, the operations of the method 1000 may be performed by an application server as described with reference to FIGS. 1 through 8. In some examples, an application server may execute a set of instructions to control the functional elements of the application server to perform the described functions. Additionally or alternatively, the application server may perform aspects of the described functions using special-purpose hardware. At 1005, the method may include receiving, via a user interface of a cloud-based data management platform storing a plurality of data objects, a user input to generate a communication channel of a group-based communication platform that is separate from the cloud-based data management platform, wherein the communication channel is for a data object of the plurality of data objects. The operations of 1005 may be performed in accordance with examples as disclosed herein. 
In some examples, aspects of the operations of 1005 may be performed by an input component 725 as described with reference to FIG. 7. At 1010, the method may include retrieving, in response to receiving the user input, a group of users that are linked to the data object within the cloud-based data management platform. The operations of 1010 may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of 1010 may be performed by a user information component 730 as described with reference to FIG. 7. At 1015, the method may include displaying, via the user interface, a list of options for generating the communication channel, the list of options comprising the group of users for inclusion in the communication channel, a privacy level for the communication channel, and an identifier of the communication channel. The operations of 1015 may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of 1015 may be performed by a display component 735 as described with reference to FIG. 7. At 1020, the method may include receiving, via the user interface, the input to the list of options for generating the communication channel, wherein generating the executable packet of instructions is based at least in part on the received input. The operations of 1020 may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of 1020 may be performed by an input component 725 as described with reference to FIG. 7. At 1025, the method may include identifying a name of the communication channel based at least in part on the input to the list of options. The operations of 1025 may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of 1025 may be performed by a communication channel component 750 as described with reference to FIG. 7. 
At 1030, the method may include identifying whether the communication channel is public or private based at least in part on the input to the list of options. The operations of 1030 may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of 1030 may be performed by a communication channel component 750 as described with reference to FIG. 7. At 1035, the method may include identifying a selection of a subset of the group of users for inclusion in the communication channel based at least in part on the input to the list of options. The operations of 1035 may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of 1035 may be performed by a communication channel component 750 as described with reference to FIG. 7. At 1040, the method may include generating an executable packet of instructions for ingesting into the group-based communication platform based at least in part on an input to the list of options. In some examples, generating the executable packet of instructions is based at least in part on identifying the name of the communication channel, whether the communication channel is public or private, and the selection of the subset of the group of users. The operations of 1040 may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of 1040 may be performed by an instruction generation component 740 as described with reference to FIG. 7. At 1045, the method may include transmitting the executable packet of instructions for ingesting into the group-based communication platform. The operations of 1045 may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of 1045 may be performed by an instruction transmission component 745 as described with reference to FIG. 7. 
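The option-parsing portion of this method (the identification steps at 1025 through 1035 feeding the generation step at 1040) can be sketched as below; the key names are illustrative assumptions.

```python
# Sketch of steps 1025-1040: derive the channel name, the privacy flag,
# and the member subset from the option input, then fold them into the
# values used to generate the executable packet of instructions.
def parse_options(option_input: dict, linked_users: list) -> dict:
    name = option_input["name"]                            # step 1025
    private = option_input.get("privacy") == "private"     # step 1030
    selected = option_input.get("selected", linked_users)  # step 1035
    subset = [u for u in linked_users if u in selected]
    return {"channel_id": name, "is_private": private, "members": subset}  # step 1040
```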
FIG. 11 shows a flowchart illustrating a method 1100 that supports methods to generate a communication channel for data objects in accordance with aspects of the present disclosure. The operations of the method 1100 may be implemented by an application server or its components as described herein. For example, the operations of the method 1100 may be performed by an application server as described with reference to FIGS. 1 through 8. In some examples, an application server may execute a set of instructions to control the functional elements of the application server to perform the described functions. Additionally or alternatively, the application server may perform aspects of the described functions using special-purpose hardware. At 1105, the method may include receiving, via a user interface of a cloud-based data management platform storing a plurality of data objects, a user input to generate a communication channel of a group-based communication platform that is separate from the cloud-based data management platform, wherein the communication channel is for a data object of the plurality of data objects. The operations of 1105 may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of 1105 may be performed by an input component 725 as described with reference to FIG. 7. At 1110, the method may include retrieving, in response to receiving the user input, a group of users that are linked to the data object within the cloud-based data management platform. The operations of 1110 may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of 1110 may be performed by a user information component 730 as described with reference to FIG. 7. 
At 1115, the method may include displaying, via the user interface, a list of options for generating the communication channel, the list of options comprising the group of users for inclusion in the communication channel, a privacy level for the communication channel, and an identifier of the communication channel. The operations of 1115 may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of 1115 may be performed by a display component 735 as described with reference to FIG. 7. At 1120, the method may include generating an executable packet of instructions for ingesting into the group-based communication platform based at least in part on an input to the list of options. The operations of 1120 may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of 1120 may be performed by an instruction generation component 740 as described with reference to FIG. 7. At 1125, the method may include transmitting the executable packet of instructions for ingesting into the group-based communication platform. The operations of 1125 may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of 1125 may be performed by an instruction transmission component 745 as described with reference to FIG. 7. At 1130, the method may include determining an existing communication channel in the group-based communication platform for the data object of the plurality of data objects. The operations of 1130 may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of 1130 may be performed by a communication channel component 750 as described with reference to FIG. 7. At 1135, the method may include associating the communication channel with the existing communication channel. The operations of 1135 may be performed in accordance with examples as disclosed herein. 
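The association steps at 1130 and 1135 can be sketched as follows; the lookup table and the pair returned as the association are assumptions for illustration.

```python
# Sketch of steps 1130-1135: if the record already has a channel in the
# group-based communication platform, associate the newly generated
# channel with it; otherwise there is nothing to associate.
def associate_with_existing(existing_by_record: dict, record_id: str, new_channel: str):
    existing = existing_by_record.get(record_id)   # step 1130
    if existing is not None:
        return (new_channel, existing)             # step 1135: association pair
    return None
```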
In some examples, aspects of the operations of 1135 may be performed by a communication channel component 750 as described with reference to FIG. 7. FIG. 12 shows a flowchart illustrating a method 1200 that supports methods to generate communication channels for data objects in accordance with aspects of the present disclosure. The operations of the method 1200 may be implemented by an application server or its components as described herein. For example, the operations of the method 1200 may be performed by an application server as described with reference to FIGS. 1 through 8. In some examples, an application server may execute a set of instructions to control the functional elements of the application server to perform the described functions. Additionally or alternatively, the application server may perform aspects of the described functions using special-purpose hardware. At 1205, the method may include receiving, via a user interface of a cloud-based data management platform storing a plurality of data objects, a user input to generate a communication channel of a group-based communication platform that is separate from the cloud-based data management platform, wherein the communication channel is for a data object of the plurality of data objects. The operations of 1205 may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of 1205 may be performed by an input component 725 as described with reference to FIG. 7. At 1210, the method may include retrieving, in response to receiving the user input, a group of users that are linked to the data object within the cloud-based data management platform. The operations of 1210 may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of 1210 may be performed by a user information component 730 as described with reference to FIG. 7.
At 1215, the method may include displaying, via the user interface, a list of options for generating the communication channel, the list of options comprising the group of users for including in the communication channel, a privacy level for the communication channel, and an identifier of the communication channel. The operations of 1215 may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of 1215 may be performed by a display component 735 as described with reference to FIG. 7. At 1220, the method may include generating an executable packet of instructions for ingesting into the group-based communication platform based at least in part on an input to the list of options. The operations of 1220 may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of 1220 may be performed by an instruction generation component 740 as described with reference to FIG. 7. At 1225, the method may include transmitting the executable packet of instructions for ingesting into the group-based communication platform. The operations of 1225 may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of 1225 may be performed by an instruction transmission component 745 as described with reference to FIG. 7. At 1230, the method may include identifying a user associated with the user input to generate the communication channel of the group-based communication platform. The operations of 1230 may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of 1230 may be performed by a user information component 730 as described with reference to FIG. 7. At 1235, the method may include displaying, via the user interface, a list of communication channels for the user, wherein the user is a member of a plurality of communication channels included in the list of communication channels.
The operations of 1235 may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of 1235 may be performed by a display component 735 as described with reference to FIG. 7. A method for communication channel creation is described. The method may include receiving, via a user interface of a cloud-based data management platform storing a plurality of data objects, a user input to generate a communication channel of a group-based communication platform that is separate from the cloud-based data management platform, wherein the communication channel is for a data object of the plurality of data objects, retrieving, in response to receiving the user input, a group of users that are linked to the data object within the cloud-based data management platform, displaying, via the user interface, a list of options for generating the communication channel, the list of options comprising the group of users for including in the communication channel, a privacy level for the communication channel, and an identifier of the communication channel, generating an executable packet of instructions for ingesting into the group-based communication platform based at least in part on an input to the list of options, and transmitting the executable packet of instructions for ingesting into the group-based communication platform. An apparatus for communication channel creation is described. The apparatus may include a processor, memory coupled with the processor, and instructions stored in the memory.
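The five steps summarized above (receiving the user input, retrieving the linked users, offering the list of options, generating the executable packet, and transmitting it) can be sketched in Python. This is only an illustrative outline: every class, function, and field name here is an assumption, not the actual implementation described by the disclosure.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class DataObject:
    """A record in the data management platform (names are hypothetical)."""
    record_id: str
    linked_users: List[str]  # users that follow or have access to the record

@dataclass
class ChannelOptions:
    """The user's input to the list of options."""
    name: str       # identifier of the communication channel
    privacy: str    # privacy level, e.g. "public" or "private"
    members: List[str]

def retrieve_linked_users(obj: DataObject) -> List[str]:
    # Step 1110/1210: retrieve the group of users linked to the data object
    return list(obj.linked_users)

def build_instruction_packet(obj: DataObject, opts: ChannelOptions) -> dict:
    # Step 1120/1220: generate an executable packet of instructions
    return {
        "channel_id": opts.name,
        "privacy": opts.privacy,
        "members": opts.members,
        "record_id": obj.record_id,
    }

def transmit(packet: dict) -> dict:
    # Step 1125/1225: stand-in for sending the packet to the
    # group-based communication platform for ingestion
    return {"status": "ingested", **packet}
```

In this sketch the packet is a plain dictionary; any real system would define its own wire format and delivery mechanism.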
The instructions may be executable by the processor to cause the apparatus to receive, via a user interface of a cloud-based data management platform storing a plurality of data objects, a user input to generate a communication channel of a group-based communication platform that is separate from the cloud-based data management platform, wherein the communication channel is for a data object of the plurality of data objects, retrieve, in response to receiving the user input, a group of users that are linked to the data object within the cloud-based data management platform, display, via the user interface, a list of options for generating the communication channel, the list of options comprising the group of users for including in the communication channel, a privacy level for the communication channel, and an identifier of the communication channel, generate an executable packet of instructions for ingesting into the group-based communication platform based at least in part on an input to the list of options, and transmit the executable packet of instructions for ingesting into the group-based communication platform. Another apparatus for communication channel creation is described.
The apparatus may include means for receiving, via a user interface of a cloud-based data management platform storing a plurality of data objects, a user input to generate a communication channel of a group-based communication platform that is separate from the cloud-based data management platform, wherein the communication channel is for a data object of the plurality of data objects, means for retrieving, in response to receiving the user input, a group of users that are linked to the data object within the cloud-based data management platform, means for displaying, via the user interface, a list of options for generating the communication channel, the list of options comprising the group of users for including in the communication channel, a privacy level for the communication channel, and an identifier of the communication channel, means for generating an executable packet of instructions for ingesting into the group-based communication platform based at least in part on an input to the list of options, and means for transmitting the executable packet of instructions for ingesting into the group-based communication platform. A non-transitory computer-readable medium storing code for communication channel creation is described.
The code may include instructions executable by a processor to receive, via a user interface of a cloud-based data management platform storing a plurality of data objects, a user input to generate a communication channel of a group-based communication platform that is separate from the cloud-based data management platform, wherein the communication channel is for a data object of the plurality of data objects, retrieve, in response to receiving the user input, a group of users that are linked to the data object within the cloud-based data management platform, display, via the user interface, a list of options for generating the communication channel, the list of options comprising the group of users for including in the communication channel, a privacy level for the communication channel, and an identifier of the communication channel, generate an executable packet of instructions for ingesting into the group-based communication platform based at least in part on an input to the list of options, and transmit the executable packet of instructions for ingesting into the group-based communication platform. Some examples of the method, apparatuses, and non-transitory computer-readable medium described herein may further include operations, features, means, or instructions for receiving, via the user interface, the input to the list of options for generating the communication channel, wherein generating the executable packet of instructions may be based at least in part on the received input.
Some examples of the method, apparatuses, and non-transitory computer-readable medium described herein may further include operations, features, means, or instructions for identifying a name of the communication channel based at least in part on the input to the list of options, identifying whether the communication channel may be public or private based at least in part on the input to the list of options, and identifying a selection of a subset of the group of users for including in the communication channel based at least in part on the input to the list of options, wherein generating the executable packet of instructions may be based at least in part on identifying the name of the communication channel, whether the communication channel may be public or private, and the selection of the subset of the group of users. Some examples of the method, apparatuses, and non-transitory computer-readable medium described herein may further include operations, features, means, or instructions for determining an existing communication channel in the group-based communication platform for the data object of the plurality of data objects and associating the communication channel with the existing communication channel. Some examples of the method, apparatuses, and non-transitory computer-readable medium described herein may further include operations, features, means, or instructions for identifying a user associated with the user input to generate the communication channel of the group-based communication platform and displaying, via the user interface, a list of communication channels for the user, wherein the user may be a member of a plurality of communication channels included in the list of communication channels.
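The identification step described above (a channel name, a public-or-private flag, and a subset of the linked users) can be sketched as a small parsing function. The dictionary-shaped input and the field names (`name`, `private`, `members`) are hypothetical, chosen only for illustration.

```python
from typing import List

def parse_channel_options(option_input: dict, linked_users: List[str]) -> dict:
    """Identify the channel name, privacy, and selected user subset
    from an input to the list of options (all field names assumed)."""
    name = option_input["name"]                 # identified channel name
    is_private = bool(option_input["private"])  # public or private channel
    # Only users already linked to the data object may be selected,
    # so the selection is intersected with the retrieved group of users.
    subset = [u for u in option_input["members"] if u in linked_users]
    return {"name": name, "private": is_private, "members": subset}
```

A selection naming a user who is not linked to the data object is simply dropped in this sketch; a real implementation might instead surface a validation error.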
Some examples of the method, apparatuses, and non-transitory computer-readable medium described herein may further include operations, features, means, or instructions for receiving, via the user interface, the input to the list of options for generating the communication channel, receiving, via the user interface, a second input to the list of options for generating a second communication channel, and generating a second executable packet of instructions for ingesting into the group-based communication platform based at least in part on the second input to the list of options, wherein the communication channel and the second communication channel may both be for the data object of the plurality of data objects. In some examples of the method, apparatuses, and non-transitory computer-readable medium described herein, the communication channel includes a first group of users and the second communication channel includes a second group of users. In some examples of the method, apparatuses, and non-transitory computer-readable medium described herein, the first group of users and the second group of users may be the same or different. In some examples of the method, apparatuses, and non-transitory computer-readable medium described herein, the communication channel and the second communication channel may have the same privacy levels or different privacy levels. Some examples of the method, apparatuses, and non-transitory computer-readable medium described herein may further include operations, features, means, or instructions for associating the generated communication channel with a second data object of the plurality of data objects.
Some examples of the method, apparatuses, and non-transitory computer-readable medium described herein may further include operations, features, means, or instructions for receiving a second user input to display a list of communication channels of the group-based communication platform associated with the data object of the plurality of data objects and displaying, via the user interface, the list of communication channels in response to receiving the second user input. Some examples of the method, apparatuses, and non-transitory computer-readable medium described herein may further include operations, features, means, or instructions for receiving an indication indicating that the communication channel may be generated at the group-based communication platform. In some examples of the method, apparatuses, and non-transitory computer-readable medium described herein, the group of users that may be linked to the data object comprises at least one of a first group of users that may have access to the data object, a second group of users that follow the data object, or a combination thereof. In some examples of the method, apparatuses, and non-transitory computer-readable medium described herein, the executable packet of instructions comprises at least one of a communication channel identifier, a user identifier for a creator of the executable packet of instructions, a user identifier for a modifier of the executable packet of instructions, a user identifier for a member of the data object, a user identifier for an owner of the data object, a privacy indicator, a record identifier, or a combination thereof. It should be noted that the methods described above describe possible implementations, and that the operations and the steps may be rearranged or otherwise modified and that other implementations are possible. Furthermore, aspects from two or more of the methods may be combined.
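The fields enumerated above for the executable packet of instructions can be modeled as a simple data structure. This is a sketch under assumed names; the disclosure does not prescribe this representation or any particular serialization.

```python
from dataclasses import dataclass, asdict
from typing import List, Optional

@dataclass
class InstructionPacket:
    """One executable packet of instructions (field names assumed)."""
    channel_identifier: str
    creator_id: str                 # user who created the packet
    modifier_id: Optional[str]      # user who last modified the packet, if any
    member_ids: List[str]           # user identifiers for members of the data object
    owner_id: str                   # user identifier for the owner of the data object
    privacy_indicator: str          # e.g. "public" or "private"
    record_identifier: str          # the data object (record) the channel is for

def serialize(packet: InstructionPacket) -> dict:
    # A dictionary form that could be ingested by the communication platform
    return asdict(packet)
```

Per the passage above, a packet need not carry every field; in a real format most of these would be optional.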
The description set forth herein, in connection with the appended drawings, describes example configurations and does not represent all the examples that may be implemented or that are within the scope of the claims. The term “exemplary” used herein means “serving as an example, instance, or illustration,” and not “preferred” or “advantageous over other examples.” The detailed description includes specific details for the purpose of providing an understanding of the described techniques. These techniques, however, may be practiced without these specific details. In some instances, well-known structures and devices are shown in block diagram form in order to avoid obscuring the concepts of the described examples. In the appended figures, similar components or features may have the same reference label. Further, various components of the same type may be distinguished by following the reference label by a dash and a second label that distinguishes among the similar components. If just the first reference label is used in the specification, the description is applicable to any one of the similar components having the same first reference label irrespective of the second reference label. Information and signals described herein may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof. The various illustrative blocks and modules described in connection with the disclosure herein may be implemented or performed with a general-purpose processor, a DSP, an ASIC, an FPGA or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. 
A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices (e.g., a combination of a DSP and a microprocessor, multiple microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration). The functions described herein may be implemented in hardware, software executed by a processor, firmware, or any combination thereof. If implemented in software executed by a processor, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Other examples and implementations are within the scope of the disclosure and appended claims. For example, due to the nature of software, functions described above can be implemented using software executed by a processor, hardware, firmware, hardwiring, or combinations of any of these. Features implementing functions may also be physically located at various positions, including being distributed such that portions of functions are implemented at different physical locations. Also, as used herein, including in the claims, “or” as used in a list of items (for example, a list of items prefaced by a phrase such as “at least one of” or “one or more of”) indicates an inclusive list such that, for example, a list of at least one of A, B, or C means A or B or C or AB or AC or BC or ABC (i.e., A and B and C). Also, as used herein, the phrase “based on” shall not be construed as a reference to a closed set of conditions. For example, an exemplary step that is described as “based on condition A” may be based on both a condition A and a condition B without departing from the scope of the present disclosure. 
In other words, as used herein, the phrase “based on” shall be construed in the same manner as the phrase “based at least in part on.” Computer-readable media includes both non-transitory computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A non-transitory storage medium may be any available medium that can be accessed by a general purpose or special purpose computer. By way of example, and not limitation, non-transitory computer-readable media can comprise RAM, ROM, electrically erasable programmable ROM (EEPROM), compact disk (CD) ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other non-transitory medium that can be used to carry or store desired program code means in the form of instructions or data structures and that can be accessed by a general-purpose or special-purpose computer, or a general-purpose or special-purpose processor. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include CD, laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above are also included within the scope of computer-readable media. The description herein is provided to enable a person skilled in the art to make or use the disclosure. 
Various modifications to the disclosure will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the scope of the disclosure. Thus, the disclosure is not limited to the examples and designs described herein, but is to be accorded the broadest scope consistent with the principles and novel features disclosed herein.
11863339

DETAILED DESCRIPTION

The principles and operation of an apparatus or a method according to the present invention may be understood with reference to the figures and the accompanying description wherein identical or similar components (either hardware or software) appearing in different figures are denoted by identical reference numerals. The drawings and descriptions are conceptual only. In actual practice, a single component can implement one or more functions; alternatively or in addition, each function can be implemented by a plurality of components and devices. In the figures and descriptions, identical reference numerals indicate those components that are common to different embodiments or configurations. Identical numerical references (in some cases, even in the case of using different suffixes, such as 5, 5a, 5b and 5c) refer to functions or actual devices that are either identical, substantially similar, similar, or having similar functionality. It will be readily understood that the components of the present invention, as generally described and illustrated in the figures herein, could be arranged and designed in a wide variety of different configurations. Thus, the following more detailed description of the embodiments of the apparatus, system, and method of the present invention, as represented in the figures herein, is not intended to limit the scope of the invention, as claimed, but is merely representative of embodiments of the invention. It is to be understood that the singular forms “a,” “an,” and “the” herein include plural referents unless the context clearly dictates otherwise. Thus, for example, reference to “a component surface” includes reference to one or more of such surfaces.
By the term “substantially” it is meant that the recited characteristic, parameter, or value need not be achieved exactly, but that deviations or variations, including, for example, tolerances, measurement error, measurement accuracy limitations and other factors known to those of skill in the art, may occur in amounts that do not preclude the effect the characteristic was intended to provide. Each of the devices herein may consist of, include, be part of, or be based on, a part of, or the whole of, the computer 11 or the system 10 shown in FIG. 1. Each of the servers herein may consist of, may include, or may be based on, a part or a whole of the functionalities or structure (such as software) of any server described in the '604 patent, such as the web server, the proxy server, or the acceleration server. Each of the clients or devices herein may consist of, may include, or may be based on, a part or a whole of the functionalities or structure (such as software) of any client or device described in the '604 patent, such as the peer, client, or agent devices. Each of the servers herein may consist of, may include, or may be based on, a part or a whole of the functionalities or structure (such as software) of any server described in the '044 patent, such as the web server, the proxy server, or the acceleration server. Each of the clients or devices herein may consist of, may include, or may be based on, a part or a whole of the functionalities or structure (such as software) of any client or device described in the '044 patent, such as the peer, client, or agent devices. Each of the tunnel devices herein may consist of, may include, or may be based on, a part or a whole of the functionalities or structure (such as software) of any tunnel device described in the '044 patent, such as the peer, client, or agent devices.
Any of the steps or the flow charts described herein may be included as a Software Development Kit (SDK) that is provided as a non-transitory computer readable medium containing computer instructions. The SDK may be installed in a respective device, either a client or a server, to be executed by a processor in that device. An example of an arrangement 70 for retrieving content by the requesting client device 31a from the web server 22b is shown in FIG. 7. Multiple Internet-connected devices may serve as tunnel devices, such as a tunnel #1 laptop device 33a, a tunnel #2 smartphone device 33b, a tunnel #3 laptop device 33c, a tunnel #4 desktop device 33d, and a tunnel #5 ‘Smart TV’ device 33e. The content fetching may be handled, managed, and aided by using a Super-Proxy (SP) server 72 and a Tunnel Bank (TB) server 71. The TB server 71 is used for storing a list of the available tunnel devices, such as their IP addresses together with attribute values that correspond to one or more attribute types. The available tunnels list is stored in a memory 73 that is part of, integrated with, connected to, or in communication with, the TB server 71. The SP server 72 receives the content request from the requesting client 31a, and manages the content fetching using the TB server 71. The TB server 71 and the SP server 72 may be separate devices located at different geographic locations, as shown in the arrangement 70, may be located in a single location, or may be integrated into a single device or server that combines the functionalities of both servers. Any device that is available for communicating over the Internet 113 may serve as a tunnel device. A tunnel device may consist of, include, be part of, or be based on, a part of, or the whole of, the computer 11 or the system 10 shown in FIG. 1. Any tunnel device may be any computer system, either stationary (such as the desktop 33d) or portable (such as the laptop 33c).
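The registry role of the TB server 71 described above, which stores available tunnel devices by IP address together with their attribute values, can be sketched as an in-memory structure. The class and method names here are illustrative assumptions, not the actual server design.

```python
from typing import Dict, List

class TunnelBank:
    """Sketch of a Tunnel Bank registry of available tunnel devices."""

    def __init__(self) -> None:
        # Maps a tunnel's IP address to its {attribute type: attribute value} set
        self._tunnels: Dict[str, dict] = {}

    def register(self, ip: str, **attributes) -> None:
        # A tunnel device announces itself as available, with its attributes
        self._tunnels[ip] = attributes

    def unregister(self, ip: str) -> None:
        # A tunnel device leaves the pool of available tunnels
        self._tunnels.pop(ip, None)

    def lookup(self, **criteria) -> List[str]:
        # Return the IPs of tunnels whose attributes match all given criteria
        return [
            ip for ip, attrs in self._tunnels.items()
            if all(attrs.get(k) == v for k, v in criteria.items())
        ]
```

A production registry would also persist the list (the memory 73 in the arrangement) and expire entries for tunnels that stop responding; the sketch omits both.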
Further, any tunnel device may be a smartphone (such as the smartphone 33b), or may be an appliance, such as the television set 33e. Further, any tunnel device herein may comprise, consist of, or include a Personal Computer (PC), a desktop computer, a mobile computer, a laptop computer, a notebook computer, a tablet computer, a server computer, a handheld computer, a handheld device, a Personal Digital Assistant (PDA) device, a cellular handset, a handheld PDA device, an on-board device, an off-board device, a hybrid device, a vehicular device, a non-vehicular device, a mobile or portable device, or a non-mobile or non-portable device. Further, any device or network element herein may comprise, consist of, or include a major appliance (white goods), and may be an air conditioner, dishwasher, clothes dryer, drying cabinet, freezer, refrigerator, kitchen stove, water heater, washing machine, trash compactor, microwave oven, or induction cooker. The appliance may similarly be a ‘small’ appliance such as a TV set, CD or DVD player, camcorder, still camera, clock, alarm clock, video game console, HiFi or home cinema system, telephone, or answering machine. Furthermore, a tunnel device may be integrated with an appliance. The appliance primary function may be associated with food storage, handling, or preparation, such as a microwave oven, an electric mixer, a stove, an oven, or an induction cooker for heating food, or the appliance may be a refrigerator, a freezer, a food processor, a dishwasher, a food blender, a beverage maker, a coffeemaker, or an iced-tea maker. Further, the appliance primary function may be associated with environmental control such as temperature control, and the appliance may consist of, or may be part of, an HVAC system, an air conditioner, or a heater. Furthermore, the appliance primary function may be associated with cleaning, such as a washing machine, a clothes dryer for cleaning clothes, or a vacuum cleaner.
The appliance primary function may be associated with water control or water heating. The appliance may be an answering machine, a telephone set, a home cinema system, a HiFi system, a CD or DVD player, an electric furnace, a trash compactor, a smoke detector, a light fixture, or a dehumidifier. The appliance may be a handheld computing device or a battery-operated portable electronic device, such as a notebook or laptop computer, a media player, a cellular phone, a Personal Digital Assistant (PDA), an image processing device, a digital camera, or a video recorder. The integration with the appliance may involve sharing a component such as housing in the same enclosure, sharing the same connector such as sharing a power connector for connecting to a power source, where the integration involves sharing the same connector for being powered from the same power source. The integration with the appliance may involve sharing the same power supply, sharing the same processor, or mounting onto the same surface. While 5 tunnel devices are shown in the example of the arrangement 70, any number of tunnels may be equally used. Preferably, the number of tunnel devices that are used may be above 5,000, 10,000, 20,000, 50,000, 100,000, 200,000, 500,000, 1,000,000, 2,000,000, 5,000,000, or 10,000,000. A tunnel device may connect to the Internet 113 directly, such as the tunnel #1 33a and the tunnel #2 33b shown to directly connect to the Internet 113 as part of the arrangement 70 shown in FIG. 7. Direct connection herein refers to the ability of any Internet-connected device or server, such as the TB server 71 and the SP server 72, to communicate, or to initiate a communication session, with the Internet-connected device. Alternatively, a tunnel device may be connected to the Internet via a filtering device, such as a router, gateway, or a firewall.
For example, the tunnel #3 33c is shown connected to the Internet 113 via a router device (or functionality) 74, and the tunnel #4 33d is shown connected to the Internet 113 via a firewall device (or functionality) 75. Such filtering devices are typically used for data security, and may filter communication to, or from, the Internet relating to a connected device. In one example, only pre-approved IP addresses may initiate a communication session over the Internet with a device connected via such a filtering mechanism. For example, the TB server 71 or the SP server 72 may not initiate a communication with the tunnel #3 33c or with the tunnel #4 33d, since such communication may be blocked by the respective router device 74 or firewall device 75. In one example, the two servers cooperatively used for assisting in the content fetching, namely the SP server 72 and the TB server 71, are owned, operated, managed, or controlled by a same entity 76, as shown in an arrangement 70a shown in FIG. 7a. In such a case, the entity 76 may provide the service of fetching content from the web server 22b via the various tunnels as a service, which may be a paid service. Any content herein may consist of, or may comprise, data such as files, text, numbers, audio, voice, multimedia, video, images, music, computer programs, or any other sequence of instructions, as well as any other form of information represented as a string of bits, bytes, or characters. In one example, the content may include, be a part of, or a whole of, a URL or a website page. Each tunnel device may be associated with one or more attribute values corresponding to one or more attribute types. A table 100 shown in FIG. 10 describes an example of various attribute types and values for various (available for use) tunnel devices. A top row 101 names the attribute type or other tunnel-related information, and each of the other rows may correspond to a single tunnel device.
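Because the TB server 71 or the SP server 72 cannot initiate a session through the router 74 or firewall 75, a filtered tunnel generally has to open an outbound connection itself, over which the server can then hand it pending requests. The sketch below illustrates that server-side hand-off with a simple queue; the class and method names are assumptions, not the disclosed protocol.

```python
import queue
from typing import Optional

class TunnelSession:
    """Server-side handle for one tunnel-initiated outbound connection."""

    def __init__(self, tunnel_id: str) -> None:
        self.tunnel_id = tunnel_id
        self.pending: "queue.Queue[str]" = queue.Queue()  # requests awaiting this tunnel

    def push_request(self, url: str) -> None:
        # Server side: enqueue a fetch request for the tunnel to pick up,
        # since the server cannot dial the tunnel through its firewall
        self.pending.put(url)

    def poll(self) -> Optional[str]:
        # Tunnel side: retrieve the next request over its own connection
        try:
            return self.pending.get_nowait()
        except queue.Empty:
            return None
```

In practice the tunnel would keep the connection alive (for example with periodic keep-alives) so the server always has a path to it; the queue here only models the direction of initiation.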
For example, a first content row 101a may correspond to the tunnel #1 33a, a second row 101b may correspond to the tunnel #2 33b, a third row 101c may correspond to the tunnel #3 33c, a fourth row 101d may correspond to the tunnel #4 33d, a fifth row 101e may correspond to the tunnel #5 33e, a sixth row 101f may correspond to a sixth tunnel, and a seventh row 101g may correspond to a seventh tunnel. An attribute type may relate to a timing of an operation or activity by a tunnel device. A first column 102a, named ‘Date-Time’, may correspond to the timing of an event relating to the respective tunnel operation, such as the last time when the tunnel device connected to the Internet, or when the tunnel device connected to a specific entity, such as to the TB server 71 or the SP server 72. In the examples shown in the table 100, the timing information relating to the first tunnel corresponding to the first row 101a is shown as a date of March 5 and a time of 19:35, the timing information relating to the second tunnel corresponding to the second row 101b is shown as a date of March 5 and a time of 19:38, the timing information relating to the third tunnel corresponding to the third row 101c is shown as a date of May 5 and a time of 00:05, the timing information relating to the fourth tunnel corresponding to the fourth row 101d is shown as a date of November 5 and a time of 00:07, the timing information relating to the fifth tunnel corresponding to the fifth row 101e is shown as a date of December 5 and a time of 00:15, the timing information relating to the sixth tunnel corresponding to the sixth row 101f is shown as a date of December 5 and a time of 05:38, and the timing information relating to the seventh tunnel corresponding to the seventh row 101g is shown as a date of December 5 and a time of 22:13. Alternatively or in addition, the attribute type may be associated with the communication link involving the connecting of a tunnel device to the Internet 113.
For example, the type of connection of the device may be used as an attribute type, such as being a wired or a wireless connection. Further, the related attribute type may include the protocol or technology used for connecting the respective tunnel to the Internet 113, as exemplified in a 'Connection Type' column 102e in the table 100. In the examples shown in the table 100, the connection technology is: Very-high-bit-rate Digital Subscriber Line (VDSL) for the first tunnel (row 101a); Third Generation (3G) cellular for the second tunnel (row 101b); Data Over Cable Service Interface Specification (DOCSIS) for the third tunnel (row 101c); Asymmetric Digital Subscriber Line (ADSL) for the fourth tunnel (row 101d); WiFi for the fifth tunnel (row 101e); 4G LTE for the sixth tunnel (row 101f); and ADSL for the seventh tunnel (row 101g). Alternatively or in addition, the attribute type may be associated with the communication link involved in the communication of a tunnel device with another entity over the Internet 113, such as communication with the TB server 71, the SP server 72, or the web server 22b. For example, the bandwidth (BW) or the Round-Trip Time (RTT) of such communication may be used as an attribute type, as exemplified in the 'BW' column 102g and the 'RTT' column 102h in the table 100. 
In the examples shown in the table 100, the communication metrics are: a BW of 1000 Kb/s and an RTT of 30 ms for the first tunnel (row 101a); a BW of 350 Kb/s and an RTT of 70 ms for the second tunnel (row 101b); a BW of 2500 Kb/s and an RTT of 540 ms for the third tunnel (row 101c); a BW of 1400 Kb/s and an RTT of 170 ms for the fourth tunnel (row 101d); a BW of 1200 Kb/s and an RTT of 120 ms for the fifth tunnel (row 101e); a BW of 2100 Kb/s and an RTT of 230 ms for the sixth tunnel (row 101f); and a BW of 800 Kb/s and an RTT of 310 ms for the seventh tunnel (row 101g). Alternatively or in addition, the attribute type may be associated with the tunnel connection scheme to the Internet, such as an identification of the ISP, or of the Autonomous System Number (ASN) relating to the ISP, to the tunnel device, or to the Internet connection scheme. 
In the examples shown in the table100, a column named ‘ASN’102dmay be used, a value of the ASN corresponding to the first row101ais shown as 3215 (corresponding to Orange France), a value of the ASN corresponding to the second row101bis shown as 3209 (corresponding to Vodafone Germany), a value of the ASN corresponding to the third row101cis shown as 12079 (corresponding to Verizon Wireless USA), a value of the ASN corresponding to the fourth row101dis shown as 16345 (corresponding to Beeline Russia), a value of the ASN corresponding to the fifth row101eis shown as 30148 (corresponding to Zain Saudi-Arabia), a value of the ASN corresponding to the sixth row101fis shown as 9498 (corresponding to Bharti Airtel India), and a value of the ASN corresponding to the seventh row101gis shown as 11419 (corresponding to Telefonica Brazil). Alternatively or in addition, the attribute type may be associated with the tunnel device itself, such as its location. The location may be based on an actual physical geographical location or an IP geolocation. In the examples shown in the table100, a column named ‘Geographical Location’102cmay be used. A value of the location corresponding to the first row101ais shown as ‘Paris, France’, a value of the location corresponding to the second row101bis shown as ‘Munich, Germany’, a value of the location corresponding to the third row101cis shown as ‘Boston, MA, USA’, a value of the location corresponding to the fourth row101dis shown as ‘Moskow, Russia’, a value of the location corresponding to the fifth row101eis shown as ‘Riad, Saudi-Arabia’, a value of the location corresponding to the sixth row101fis shown as ‘Mumbai, India’, and a value of the location corresponding to the seventh row101gis shown as ‘San-Paulo, Brazil’. Alternatively or in addition, the attribute type may be associated with the tunnel device itself, such as its structure, functionalities, or features. 
The attribute type may relate to hardware, software, or any combination thereof. For example, the type of the tunnel device may be used, such as being stationary or portable. Further, the processing power or the processor type may be used. For example, the type, make, and version of any software may be used, such as the operating system, as exemplified in an 'Operating System' column 102f in the table 100. In the examples shown in the table 100, the operating system is: 'Chrome 2.0' for the first tunnel (row 101a); 'iOS 3.0' for the second tunnel (row 101b); 'Windows 10' for the third tunnel (row 101c); 'Windows 7' for the fourth tunnel (row 101d); 'Android 2.0' for the fifth tunnel (row 101e); 'iOS 4.0' for the sixth tunnel (row 101f); and 'Chrome 3.0' for the seventh tunnel (row 101g). The tunnel devices may primarily be identified by their corresponding IP addresses, as exemplified in a 'Tunnel IP Address' column 102b in the table 100. 
In the examples shown in the table 100, the IP addresses are: 80.12.105.150 for the first tunnel (row 101a); 176.94.1.17 for the second tunnel (row 101b); 162.115.192.24 for the third tunnel (row 101c); 83.220.232.67 for the fourth tunnel (row 101d); 185.93.228.98 for the fifth tunnel (row 101e); 59.144.192.23 for the sixth tunnel (row 101f); and 200.196.224.89 for the seventh tunnel (row 101g). The general flow of the system operation for fetching content (such as a URL) to the requesting client 31a from the web server 22b using tunnels, based on the arrangement 70 shown in FIG. 7, is described in a flow chart 80 in FIG. 8. A "Registration and Connection" step 81 is continuously executed, in which devices that are available to serve as tunnels initiate communication with the TB server 71. During this initial communication session, the tunnel device registers with the TB server 71, and provides one or more attribute values associated with various attribute types. Alternatively or in addition, the attribute values are estimated, calculated, or otherwise obtained based on the communication link with the tunnel device. As part of the registration process, a record that includes the IP address of the registering tunnel device is added to the tunnels list 73 stored with the TB server 71. In one example, the records are stored as the table 100 shown in FIG. 10, where a row represents a record of a single tunnel device. In addition to registration by adding a record to the tunnels list 73, the tunnel device opens a lasting connection via the Internet with the TB server 71. 
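The tunnels list 73 of the table 100 can be pictured as a small in-memory collection of per-device records. The sketch below mirrors the example rows 101a through 101g; the field names are illustrative assumptions, not part of the described system.

```python
# Hypothetical in-memory form of the tunnels list 73 (table 100 of FIG. 10).
# One dict per row; keys are illustrative names for the columns 102a-102h.
TUNNELS = [
    {"row": "101a", "ip": "80.12.105.150",  "location": "Paris, France",
     "asn": 3215,  "connection": "VDSL",   "os": "Chrome 2.0",  "bw_kbps": 1000, "rtt_ms": 30},
    {"row": "101b", "ip": "176.94.1.17",    "location": "Munich, Germany",
     "asn": 3209,  "connection": "3G",     "os": "iOS 3.0",     "bw_kbps": 350,  "rtt_ms": 70},
    {"row": "101c", "ip": "162.115.192.24", "location": "Boston, MA, USA",
     "asn": 12079, "connection": "DOCSIS", "os": "Windows 10",  "bw_kbps": 2500, "rtt_ms": 540},
    {"row": "101d", "ip": "83.220.232.67",  "location": "Moscow, Russia",
     "asn": 16345, "connection": "ADSL",   "os": "Windows 7",   "bw_kbps": 1400, "rtt_ms": 170},
    {"row": "101e", "ip": "185.93.228.98",  "location": "Riyadh, Saudi-Arabia",
     "asn": 30148, "connection": "WiFi",   "os": "Android 2.0", "bw_kbps": 1200, "rtt_ms": 120},
    {"row": "101f", "ip": "59.144.192.23",  "location": "Mumbai, India",
     "asn": 9498,  "connection": "4G LTE", "os": "iOS 4.0",     "bw_kbps": 2100, "rtt_ms": 230},
    {"row": "101g", "ip": "200.196.224.89", "location": "Sao Paulo, Brazil",
     "asn": 11419, "connection": "ADSL",   "os": "Chrome 3.0",  "bw_kbps": 800,  "rtt_ms": 310},
]
```

Registration then amounts to appending such a record, and deregistration to removing the record whose 'Tunnel IP Address' matches the disconnected device.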
Such a connection preferably allows the TB server 71 to initiate communication with the registering tunnel device even after the registration phase is over, for as long as the connection is sustained, such as by using the TCP keepalive mechanism. The open connection, preferably a TCP connection, allows the TB server 71 to initiate communication with the connected tunnel device even through any intermediary blocking or filtering apparatus, such as the router 74 or the firewall device 75. The connection may be terminated upon the tunnel device closing the connection, such as when powering off or disconnecting from the Internet. Upon disconnecting from a tunnel device, the respective record in the tunnels list 73 in the TB server 71 is erased, signifying that this tunnel device is no longer available to be used as a tunnel device. The connection process may involve establishing a connection (directly or via a server) between the registering tunnel device and the TB server 71. The handshaking between the two devices involves forming the connection by exchanging communication-related information. The formed connection may be used later to efficiently exchange data between the devices. In one example, the communication between the devices uses TCP, and the pre-connection is used for establishing a connection by forming a 'passive open', involving exchanging SYN, SYN-ACK, and ACK messages. In another example, a VPN is formed between the devices, and the tunneling or the VPN establishment is performed as part of the pre-connection phase. The tunnel endpoints are authenticated before secure VPN tunnels can be established. User-created remote-access VPNs may use passwords, biometrics, two-factor authentication, or other cryptographic methods. Network-to-network tunnels often use passwords or digital certificates, and permanently store the key in order to allow a tunnel to be established automatically, without intervention from a user. 
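A minimal sketch of how a tunnel device might configure such a lasting TCP connection with OS-level keepalive, using standard socket options; the helper name and the timing values are assumptions for illustration, not the described system's actual settings.

```python
import socket

def configure_keepalive(sock):
    """Enable TCP keepalive on a socket, so a silently vanished peer is
    eventually detected and its record can be removed from the tunnels list."""
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    # Platform-specific tuning knobs; guarded since they are not portable.
    if hasattr(socket, "TCP_KEEPIDLE"):
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)   # idle seconds before first probe
    if hasattr(socket, "TCP_KEEPINTVL"):
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 10)  # seconds between probes
    if hasattr(socket, "TCP_KEEPCNT"):
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 5)     # failed probes before drop
    return sock
```

A tunnel device would then connect the configured socket to the TB server, e.g. `configure_keepalive(socket.socket()).connect((tb_host, tb_port))`, with `tb_host` and `tb_port` being placeholders.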
In one example, the number of tunnel devices that have been registered with the TB server 71 (or the number of IP addresses) and are available to be used as tunnel devices is above 10,000, 20,000, 50,000, 100,000, 200,000, 500,000, 1,000,000, 2,000,000, 5,000,000, or 10,000,000. The content fetching scheme starts in a "Content Request" step 82, where the requesting client sends a request message to the SP server 72. The request message preferably identifies the requested content, such as by a URL (and/or an identification of the web server 22b). The client device 31a may also include (as part of, or appended to, the request message) criteria for selecting the tunnel devices to be used for fetching the requested content from the web server 22b, as part of a "Tunnel Selection" step 83. For example, the request message may include an identification of an attribute type, and associated values, for tunnel selection. The client device 31a may use a single value, so that only tunnel devices associated with this single value will be used. Alternatively or in addition, the client device 31a may use multiple values, so that only tunnel devices associated with one of these values will be used. Alternatively or in addition, the client device 31a may use a range of values, so that only tunnel devices associated with one of the values in the range will be used. For example, the client device 31a may define a minimum value (selecting only tunnel devices associated with values at or above the minimum value), may define a maximum value (selecting only tunnel devices associated with values at or below the maximum value), or may define both minimum and maximum values (selecting only tunnel devices associated with values at or above the minimum value and at or below the maximum value). For example, in a case where the attribute value is a location, the request message may define a location of Munich, Germany. 
Assuming that the available tunnel devices are those detailed in the table 100 in FIG. 10, only the tunnel device (such as the tunnel #2 33b) associated with the second row 101b may be selected. Alternatively or in addition, the request message may define a location of Europe. In such a case, the tunnel device (such as the tunnel #2 33b) associated with the second row 101b, or the tunnel device (such as the tunnel #1 33a) associated with the first row 101a, may be selected, since both location values are in Europe. While the location values are exemplified in the table 100 as cities, any location may be used as an IP geolocation or a physical geographical location, such as a country, state or province, city, street address, or ZIP code. In one example, a tunnel device location may be obtained using its built-in Global Positioning System (GPS), and may include the latitude, longitude, and timezone of the device location. Similarly, in a case where the attribute value is an RTT, the request message may define an RTT over 300 ms (a 300 ms minimum), so that either the tunnel device (such as the tunnel #3 33c) associated with the third row 101c (having 540 ms), or the tunnel device associated with the seventh row 101g (having 310 ms), may be selected. Similarly, the request message may define an RTT below 80 ms (a maximum), so that either the tunnel device (such as the tunnel #1 33a) associated with the first row 101a (having 30 ms), or the tunnel device (such as the tunnel #2 33b) associated with the second row 101b (having 70 ms), may be selected. Similarly, in a case where the attribute value is a BW, the request message may define a BW below 2200 Kb/s and above 2000 Kb/s, so that the tunnel device associated with the sixth row 101f (having 2100 Kb/s) may be selected. 
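The single-value, multiple-value, and minimum/maximum criteria described above can be sketched as a simple filter over the tunnel records. The criterion encodings below (a bare value, a collection of allowed values, or a (minimum, maximum) tuple where either bound may be None) are assumptions for illustration.

```python
def match(value, criterion):
    """Check one attribute value against a criterion: a (min, max) range,
    a collection of allowed values, or a single required value."""
    if isinstance(criterion, tuple):
        lo, hi = criterion
        return (lo is None or value >= lo) and (hi is None or value <= hi)
    if isinstance(criterion, (set, frozenset, list)):
        return value in criterion
    return value == criterion

def select_candidates(tunnels, criteria):
    """Return the tunnel records satisfying every (attribute, criterion) pair."""
    return [t for t in tunnels
            if all(match(t[attr], c) for attr, c in criteria.items())]
```

For the examples above: `{"rtt_ms": (300, None)}` keeps the 540 ms and 310 ms rows, `{"rtt_ms": (None, 80)}` keeps the 30 ms and 70 ms rows, and a single value such as `{"location": "Munich, Germany"}` keeps only the matching row.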
In the “Tunnel Selection” step83, the TB server71selects a tunnel device for use from the tunnel list stored in the storage73, according to the criteria received from the requesting client as part of the “Content Request” step82. It is noted that some requests may not include any criteria, and in such a case any available tunnel device may be selected by the TB server71. Once a tunnel device is selected by the TB server71, the request for content is routed, by the TB server71, the SP server72, or any cooperation thereof, to the selected tunnel device. In turn, the tunnel device forwards the request for content, using tunneling or proxy scheme, to the web server22b, as part of a “Using Tunnel” step84. It is noted that such tunneling provides anonymity and untraceability, where the web server22bis only aware of the request from the selected tunnel device, and is ignorant to the identity of the origin of the request, namely the requesting client31a, which is not exposed to the web server22b. For example, in case where the requesting client31ais in a location A, and the selected tunnel device that is used is in a location B, the web server22bmay only be aware (such as by using IP geolocation) to the request arrival from the location B. The requested content is then sent to the selected tunnel device, which in turn submits the fetched content to the requesting client31aas part of a “Content Fetching” step85, thus completing the cycle of request-response from the point-of-view of the client device31a, and ending in an “END” step86. Hence, the ‘Content Fetch’ cycle, that may be a ‘URL Fetch’ flow-chart87in the case where the content is a single URL, may be defined, starting from the requesting client device31aissuing a content request to the SP server72, until the fetched content is received by the requesting client device31aas part of the “Content Fetching” step85. 
The fetched content may be stored in the client device in any volatile or non-volatile memory, or may be stored in a local cache as described in U.S. Pat. No. 8,135,912 to Shribman et al. entitled: "System and Method of Increasing Cache Size", which is incorporated in its entirety for all purposes as if fully set forth herein. The content is stored with its related metadata or any other identifiers, so that it can easily be detected and fetched when later required. While retrieving a single URL (or other content) is exemplified in the flow chart 80, any number of URLs may equally be retrieved by the requesting client 31a. Each URL fetching may be according to, or based on, the flow chart 87 shown as part of the flow chart 80 in FIG. 8. For example, the requesting client 31a may request multiple web pages of the same web site. Assuming the fetching of N web pages (or any other N URLs), the first URL may be fetched by executing a "URL #1 Fetch" flow chart 87a, the second URL may be fetched by executing a "URL #2 Fetch" flow chart 87b, the third URL may be fetched by executing a "URL #3 Fetch" flow chart 87c, and so on, until the N-th URL may be fetched by executing a "URL #N Fetch" flow chart 87n, where each of the URL fetching schemes may be according to, or based on, the flow chart 87 shown as part of the flow chart 80 in FIG. 8. The various fetching schemes may be executed in parallel, starting in a "START" step 91 and ending in an "END" step 92, as shown in the flow chart 90a in FIG. 9a. Alternatively or in addition, the various fetching schemes may be executed in series, starting in the "START" step 91 and ending in the "END" step 92, as shown in the flow chart 90b in FIG. 9b. In one example, the same tunnel device is selected in two, or in all, of the fetching activities named the "URL #1 Fetch" flow chart 87a to the "URL #N Fetch" flow chart 87n. 
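The serial (flow chart 90b) and parallel (flow chart 90a) execution of the N "URL #i Fetch" cycles can be sketched as follows, with `fetch` standing in for one complete request-response cycle through the SP server, the TB server, and a tunnel; the helper names are assumptions for illustration.

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_urls_serial(urls, fetch):
    """Flow chart 90b: run each "URL #i Fetch" cycle one after another."""
    return [fetch(u) for u in urls]

def fetch_urls_parallel(urls, fetch, workers=8):
    """Flow chart 90a: run the N fetch cycles concurrently; results are
    returned in the original URL order."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fetch, urls))
```

Either scheme may reuse one tunnel for all URLs or pick a fresh tunnel per URL inside `fetch`; the latter matches the anonymity-preferred variant described next.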
Alternatively or in addition, a different tunnel device is selected for each of the fetching activities named the "URL #1 Fetch" flow chart 87a to the "URL #N Fetch" flow chart 87n, which is preferred from an anonymity point of view. A schematic messaging flow diagram 110 describing the registration phase as part of the "Registration and Connection" phase 81 is shown in FIG. 11. Each of the tunnel devices initiates a communication session with the TB server 71, notifying its availability to serve as a tunnel device. As part of the communication, each of the tunnel devices may transmit one or more attribute values pertaining to one or more attribute types. As part of the registration phase 81, the TB server 71 adds a record (row) for each available tunnel device to the tunnels list or table in the memory 73, such as adding a row for each new available tunnel device to the table 100 shown in FIG. 10. In the example of the arrangement 70, the tunnel #1 33a connects via a data path 111a, the tunnel #2 33b connects via a data path 111b, the tunnel #3 33c connects via a data path 111c, the tunnel #4 33d connects via a data path 111d, and the tunnel #5 33e connects via a data path 111e. As part of the "Registration and Connection" phase 81, a sustained connection is established between the registered tunnel devices and the TB server 71, such as by using the TCP keepalive mechanism. As shown pictorially in an arrangement 110a shown in FIG. 11a relating to the example of the arrangement 70, the tunnel #1 33a connection is shown as a dashed line 112a, the tunnel #2 33b connection is shown as a dashed line 112b, the tunnel #3 33c connection is shown as a dashed line 112c, the tunnel #4 33d connection is shown as a dashed line 112d, and the tunnel #5 33e connection is shown as a dashed line 112e. 
Such a sustained connection (such as by using the TCP keepalive mechanism) allows the TB server 71 to initiate a connection with any of the registered and available tunnel devices, even in the case where a filtering apparatus, such as a router (for example the router 74) or a firewall (for example the firewall device 75), is connected between a tunnel device and the Internet 113. The connection process involves establishing a connection (directly or via a server), where the handshaking between the TB server 71 and each of the tunnel devices involves forming the connection by exchanging communication-related information. The formed connection may be used later to efficiently exchange data between the devices. In one example, the communication between the devices uses TCP, and the pre-connection is used for establishing a connection by forming a 'passive open', involving exchanging SYN, SYN-ACK, and ACK messages. In another example, a VPN is formed between the devices, and the tunneling or the VPN establishment is performed as part of the pre-connection phase. The tunnel endpoints are authenticated before secure VPN tunnels can be established. User-created remote-access VPNs may use passwords, biometrics, two-factor authentication, or other cryptographic methods. Network-to-network tunnels often use passwords or digital certificates, and permanently store the key in order to allow a tunnel to be established automatically, without intervention from a user. The process of fetching content, corresponding to the "Content Request" step 82 that is part of the 'URL Fetch' flow chart 87, starts with the requesting client 31a sending a request for content to the SP server 72, as shown in a message path 121a as part of a messaging chart 120 shown in FIG. 12. In one example, such a request comprises only an identification (such as a URL) of the requested content. Preferably, the request includes guidance regarding the selection of a tunnel device that will be used for fetching the requested content. 
In one example, the request includes, either as an integral part of the request, as an appended message, or as a separate message, the attribute type and an attribute value to be used for selecting the tunnel device. In another example, multiple values, or a range of values, are defined for the attribute type that serves as a criterion. Further, multiple attribute types may be used, each associated with a value or with multiple values. The content request message, as well as the attribute types and values information, may be sent over the message path 121a using a proprietary protocol agreed upon between the two communicating nodes. Preferably, the SOCKS, WebSocket (ws), which may be WebSocket Secure (wss), or HTTP Proxy protocol may be used, where the client device 31a executes a client-side protocol, and the SP server 72 executes a server-side protocol. In response to receiving the content request over the message path 121a, the SP server 72 forwards the content request, along with the tunnel selection criteria, to the TB server 71, shown as a message path 131a in the messaging chart 120a shown in FIG. 12a. The message sent over the message path 131a may use a proprietary protocol agreed upon between the two communicating nodes. Preferably, the HTTP, HTTPS, Socket Secure (SOCKS), WebSocket (ws), which may be WebSocket Secure (wss), or HTTP Proxy protocol may be used, where the SP server 72 executes a client-side protocol, and the TB server 71 executes a server-side protocol. Alternatively or in addition, the SP server 72 may execute the server-side protocol, and the TB server 71 may execute the client-side protocol. 
As part of the “Tunnel Selection” phase83, according to a pre-set of criteria, according to the attributes type and values that were received from the client device31aas part of the message path121a, or according to any combination thereof, the TB server71uses the tunnels list stored in the memory73, which may include the table100, for selecting a tunnel device to be used. In one example, the attribute type is location and the value is Moskow, Russia, hence the tunnel #433d, which record is included in the fourth row101dof the table100, is suitable to be selected, and is selected by the TB server71to serve the specific content request from the client device31a. In one example, the tunnel device to be used may be randomly selected, allowing, for example, for load balancing. In one example, by randomly selecting different tunnel devices for multiple content pieces of content (such as multiple web pages of the same web site) from the same content source, the web server22bsenses a distributed requesting schemes, and further cannot attribute the requests to the client device31a, further providing anonymity and untraceability. Randomness is commonly implemented by using random numbers, defined as a sequence of numbers or symbols that lack any pattern and thus appear random, are often generated by a random number generator. Randomness is described, for example, in IETF RFC 1750 “Randomness Recommendations for Security” (December 1994), which is incorporated in its entirety for all purposes as if fully set forth herein. A random number generator (having either analog or digital output) can be hardware based, using a physical process such as thermal noise, shot noise, nuclear decaying radiation, photoelectric effect or other quantum phenomena. Alternatively, or in addition, the generation of the random numbers can be software based, using a processor executing an algorithm for generating pseudo-random numbers which approximates the properties of random numbers. 
In a case where no selection criteria are directed by the requesting client 31a, the TB server 71 may randomly select a tunnel device from the group or list of all currently available tunnel devices. Similarly, in a case where there are multiple tunnel devices that are available and all of them satisfy the criteria set (such as all of them being associated with a defined value, or being within the range of defined values, relating to a specific attribute type), the TB server 71 may randomly select a tunnel device from the group or list of all currently available tunnel devices that also satisfy the defined criteria. Upon completing the selection of the tunnel #4 33d, the TB server 71 forwards the requested content identification to the selected tunnel #4 33d, shown as a message path 131b in the messaging chart 120b shown in FIG. 12b. Such communication uses the established connection 111d (such as the TCP connection) that was established during the "Registration and Connection" phase 81, allowing for communication via the firewall 75. The message sent over the message path 131b may use a proprietary protocol agreed upon between the two communicating nodes. Preferably, the HTTP, HTTPS, Socket Secure (SOCKS), WebSocket (ws), which may be WebSocket Secure (wss), or HTTP Proxy protocol may be used, where the TB server 71 executes a server-side protocol, and the tunnel #4 33d executes a client-side protocol. Alternatively or in addition, the TB server 71 may execute a client-side protocol, and the tunnel #4 33d may execute a server-side protocol. In response to the request message 131b, the selected tunnel #4 33d sends a request for the identified content to the appropriate server that stores the required content, exemplified to be the web server 22b, shown as a message path 131c in a messaging chart 120b in FIG. 12b. Thus, the "Using Tunnel" phase 84 is completed where the request arrives at the content source, namely the web server 22b. 
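The random pick, whether from all available devices (no criteria) or from the criteria-satisfying subset, can be sketched as below; the function name and signature are assumptions for illustration.

```python
import random

def select_tunnel(tunnels, candidates=None, rng=random):
    """Sketch of the "Tunnel Selection" step 83 with random tie-breaking:
    choose uniformly from `candidates` (the records satisfying the client's
    criteria) when given, otherwise from all available tunnels. Uniform
    choice also spreads load across equivalent devices."""
    pool = candidates if candidates is not None else tunnels
    if not pool:
        return None   # no device available / none satisfies the criteria
    return rng.choice(pool)
```

Passing an explicit `rng` (e.g. `random.Random(seed)`) keeps the sketch testable; a hardware or cryptographic generator could be substituted where stronger randomness is required.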
The message sent over the message path 131c may use a proprietary protocol agreed upon between the two communicating nodes. Preferably, the HTTP or HTTPS protocol may be used, where the web server 22b executes a server-side protocol, and the tunnel #4 33d executes a client-side protocol. Further, any tunneling protocol or mechanism may be used where the selected tunnel, which is the tunnel #4 33d in the example herein, serves as a tunnel between the TB server 71 and the web server 22b. The requested content is then fetched from the web server 22b to the requesting client 31a, as part of the "Content Fetching" phase 85, along the 'opposite' route of the request flow. As shown in a messaging chart 130 shown in FIG. 13, the content is first sent from the web server 22b to the selected tunnel #4 33d along a message path 131d, which in turn sends it to the TB server 71 along a message path 131e, which in turn sends it to the SP server 72 along a message path 131f, arriving at the requesting client 31a along a message path 131g, completing the request/response cycle from the client device 31a point of view. The same protocol or protocols used for forwarding the request from the client device 31a to the web server 22b may equally be used for any portion of the 'return' path of the requested content from the web server 22b to the client device 31a. Alternatively or in addition, the return path may use a different protocol or protocols than the ones used in the requesting path. The TB server 71 generally executes a flowchart 140 shown in FIG. 14, executing in parallel at least a "Connection Handler" flow chart 140a and a "Request Handler" flow chart 140b. The "Connection Handler" flow chart 140a involves identifying a device that is available to serve as a tunnel device. 
For each such device, a record of the device and its associated attribute values is formed, stored, and maintained, together with establishing a continuous connection with the tunnel device, corresponding to the "Registration and Connection" phase 81 and the messaging charts 110 and 110a respectively shown in FIGS. 11 and 11a. The TB server 71 continuously listens and waits for tunnel devices to initiate a communication. Upon receiving a communication request from a potential tunnel device, such as from the tunnel #2 33b shown as the message path 111b in the chart 110, the TB server 71 accepts the communication from the tunnel device, as part of an "Accept and Open Connection" step 141. In addition to the tunnel device IP address, information regarding the connection timing, the tunnel device type, connection functionalities, operating system, processing power, and other values relating to various attribute types are obtained (such as from the tunnel device itself, from the connection, or otherwise), and stored as a record in the tunnels list 73, which may be in the form of a row in the table 100, as part of an "Add to Table" step 142. The tunnel device is then available for being selected for use in a content fetching operation, and the selection may be based on the respective information in the record in the table 100. In order to allow the TB server 71 to initiate communication with this available tunnel device, a continuous connection is established as part of an "Establish Connection" step 143. For example, a TCP connection 112b (using the TCP keepalive mechanism) may be used, as shown in the chart 110a. 
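The "Connection Handler" bookkeeping around the tunnels list can be sketched as two small callbacks; the function names and record fields are illustrative assumptions, not the described server's actual API.

```python
def on_tunnel_connected(tunnels, ip, attributes):
    """ "Accept and Open Connection" (141) and "Add to Table" (142): store a
    record of the device's IP address and attribute values so the device can
    later be selected for a content fetching operation."""
    tunnels.append({"ip": ip, **attributes})

def on_tunnel_disconnected(tunnels, ip):
    """ "Detect Disconnection" (143a) and "Remove from table" (144): a missed
    keepalive reply removes the device so it is no longer offered for use."""
    tunnels[:] = [t for t in tunnels if t["ip"] != ip]
```

One such handler instance would run per connected device, all sharing the same tunnels list.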
Upon sensing that there is no response from this tunnel device as part of a "Detect Disconnection" step 143a, such as by not receiving a keepalive message reply after a set interval, the TB server 71 assumes that this tunnel device is no longer available to be used as a tunnel device for a content fetching operation, and the respective record is deleted from the table 100 as part of a "Remove from table" step 144. The "Connection Handler" flow chart 140a is repeated for every tunnel device, so that a large number of such instances are performed simultaneously and independently. The "Request Handler" flow chart 140b involves selecting a tunnel device from the available ones based on a request from the SP server 72, and using the selected tunnel device for fetching the requested content. The "Request Handler" flow chart 140b is repeated for each content (such as URL) request from the client device 31a conveyed to it from the SP server 72, so that a large number of such instances of this operation are performed simultaneously and independently. First, a content request is received from the SP server 72 as part of a "Receive Request from SP" step 145, corresponding to the message path 131a shown in the messaging chart 120b. In general, the request includes a replica of the content request received from the requesting client 31a. Based on pre-set criteria and criteria that are part of the received request, the TB server 71 selects a tunnel device from the available ones, as part of a "Select Tunnel" step 146, which corresponds to the "Tunnel Selection" phase 83. As part of a "Send Request to Tunnel" step 147, which corresponds to the message path 131b shown in the messaging chart 120b and is performed as part of the "Using Tunnel" phase 84, the identification of the requested content is forwarded to the selected tunnel device, exemplified as the tunnel #4 33d in the example herein. 
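Steps 145 through 149 can be sketched as a single handler with the selection function and the tunnel transport injected as callables; all names here are illustrative assumptions rather than the actual server's API.

```python
def handle_sp_request(request, tunnels, select, send_to_tunnel):
    """Sketch of the "Request Handler" flow chart 140b.
    `request` is a dict with the content identification and optional
    selection criteria; `select` implements "Select Tunnel" (146);
    `send_to_tunnel` covers "Send Request to Tunnel" (147) and
    "Receive Content from Tunnel" (148)."""
    tunnel = select(tunnels, request.get("criteria"))
    if tunnel is None:
        return {"error": "no suitable tunnel available"}
    content = send_to_tunnel(tunnel, request["url"])
    # "Send Content to SP" (149): the return value is the response to the SP server.
    return {"content": content, "via": tunnel["ip"]}
```

One handler instance runs per content request, so many instances may execute simultaneously and independently, each possibly using a different tunnel.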
After the content is fetched by the selected tunnel device #4 33d from the web server 22b, it is forwarded to and received by the TB server 71 as part of a “Receive Content from Tunnel” step 148, which corresponds to the message path 131e shown in the messaging chart 130 and is performed as part of the “Content Fetching” phase 85. The handling of the content request is completed by sending the fetched content as a response to the SP server 72 request as part of a “Send Content to SP” step 149, which corresponds to the message path 131f shown in the messaging chart 130 and is performed as part of the “Content Fetching” phase 85. The SP server 72 generally executes a flowchart 150 shown in FIG. 15 for each piece of information or content (such as a single URL) requested by the client device 31a. The operation starts when a content request is received from the client device 31a as part of a “Receive Request from Client” step 151, which corresponds to the message path 121a shown in the messaging chart 120 and is performed as part of the “Content Request” phase 82. The request is forwarded by the SP server 72 to the TB server 71 as part of a “Send Request to TB” step 152, which corresponds to the message path 131a shown in the messaging chart 120a, and is received by the TB server 71 as part of the “Receive Request from SP” step 145. Upon the content arriving at the TB server 71, it is forwarded by the TB server 71 to the requesting SP server 72 as part of the “Send Content to SP” step 149, and received as part of a “Receive Content from TB” step 153, which corresponds to the message path 131f shown in the messaging chart 130 and is performed as part of the “Content Fetching” phase 85. The received content is then sent to the requesting client 31a as part of a “Send Content to Client” step 154, which corresponds to the message path 131g shown in the messaging chart 130 and is performed as part of the “Content Fetching” phase 85.

SSL Sniffing.
SSL (Secure Sockets Layer) certificates are used to secure online communication and transactions with encryption. The SSL encryption technology creates encrypted connections between a user/web browser and a website/web-server. An SSL certificate makes sure that all communication that gets transmitted through a browser/website/server is encrypted and decrypted in such a manner that only the sender and the recipient are able to see it in the decrypted form. SSL sniffing refers to the intercepting and reading of SSL encrypted traffic using a MITM (Man in the Middle) proxy. SSL sniffing works in different ways. In some SSL implementations, the MITM proxy is used to redirect the end user in a communication to a non-HTTPS website and then sniff the non-encrypted traffic on that site. At the same time, requests would be relayed to and from the HTTPS site via a proxy. The man in the middle can alternatively grab the HTTPS traffic and present a valid HTTPS certificate to the end user. The certificate would need to be trusted on the end user machine, so the end user machine would need to be compromised or a trusted certificate has to be obtained. The man in the middle would then relay traffic to the actual HTTPS site and at the same time look at the unencrypted traffic, sitting in the middle of it all. There is another option too: grabbing the encrypted traffic and recording it, in the hope that in the future, technology would help decrypt the data. An implementation example of SSL Sniffing, which extracts the hostname from SSL by parsing the TLS/SNI record (sni.js), is described in a web-page by ‘Marek's—totally not insane—idea of the day’ (dated Jun. 16, 2012) entitled: “Dissecting SSL handshake”, which is incorporated in its entirety for all purposes as if fully set forth herein. SSL Sniffing is further described in a Netronome Systems, Inc.
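The hostname extraction mentioned above can be sketched as a minimal SNI parser over the extensions block of a TLS ClientHello. This is an illustrative sketch, not the referenced sni.js implementation, and it assumes the caller has already isolated the raw extensions bytes from the handshake record.

```python
# Minimal SNI extraction from a TLS ClientHello extensions block. Each extension
# is a 2-byte type, a 2-byte length, and a body; extension type 0 is server_name.
def extract_sni(extensions: bytes):
    i = 0
    while i + 4 <= len(extensions):
        ext_type = int.from_bytes(extensions[i:i + 2], "big")
        ext_len = int.from_bytes(extensions[i + 2:i + 4], "big")
        body = extensions[i + 4:i + 4 + ext_len]
        if ext_type == 0:
            # body: 2-byte server_name_list length, 1-byte name_type (0 = host_name),
            # 2-byte name length, then the hostname itself
            name_len = int.from_bytes(body[3:5], "big")
            return body[5:5 + name_len].decode("ascii")
        i += 4 + ext_len
    return None  # no SNI extension present

# Hand-built extensions block: one unrelated extension, then SNI "amazon.com".
sni = b"\x00\x00" + b"\x00\x0f" + b"\x00\x0d" + b"\x00" + b"\x00\x0a" + b"amazon.com"
print(extract_sni(b"\x00\x0a\x00\x02\x00\x1d" + sni))  # -> amazon.com
```

A message without the server_name extension yields `None`, which corresponds to the “message does not contain SNI” branch described below.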
white-paper published 2010 (2-10) entitled: “Examining SSL-encrypted Communications”, which is incorporated in its entirety for all purposes as if fully set forth herein. A system, method and computer program product for guaranteeing a data transaction over a network using SSL sniffing are disclosed in U.S. Pat. No. 7,853,795 to Dick et al. entitled: “System, method and computer program product for guaranteeing electronic transactions”, which is incorporated in its entirety for all purposes as if fully set forth herein. When a data transaction between at least a server and a client is detected on a network, data transmitted via the network between the server and client during the data transaction is captured. At least one identifier is associated with the captured data. A timestamp is also generated for the captured data. The timestamp includes information therein identifying at least a portion of the identifier(s). The captured data, the identifier(s) and the timestamp are stored in one or more data stores. The identifier(s) associated with the stored captured data is also mapped to an entry in an index to permit retrieval of the stored data from the data store via the index. In one example, the message received by the SP server 72 from the client device 31a as part of the “Receive Request from Client” step 151 is according to the HTTPS protocol, where part or all of the message is encrypted using TLS or SSL. In such a case, the SP server 72 (or the TB server 71) may use SSL Sniffing for extracting the content identifier (such as the requested URL), for extracting any attribute values included in the message, or for extracting any other information that is included in the message and is required for system operation. The SP server 72 may use SSL Sniffing that includes parsing the SSL handshake, such as parsing the ClientHello and ServerHello parts of the CONNECT request in the TLS handshaking.
In an example where the client device 31a sends an HTTPS request that includes ‘CONNECT amazon.com’, the SP server 72 replies with a message consisting of: ‘HTTP/1.1 200 OK’, and continues to apply pkg/util/tls.js Handshake:extract_sni to all following messages from the client device 31a. If a message contains an SNI and it is amazon.com, or the message does not contain an SNI, the SP server 72 sends the ClientHello to the Amazon web server (which may be the web server 22b), and starts listening for the ServerHello while applying the Handshake:extract_cert_names to all received messages therefrom, until the certificate part is received and parsed. If the received server certificate is for amazon.com and not a different/blocked host, the SP server 72 sends a response back to the client device 31a and begins tunneling data without parsing. For each piece of information or content (such as a single URL) requested, a client device, such as the exampled client device 31a, generally executes a flowchart 160 shown in FIG. 16. It is noted that multiple content fetching operations may be performed in parallel or in series, as described regarding the flow charts 90a and 90b above. Any content fetching operation starts by sending a content request to the SP server 72 as part of a “Send Request to SP” step 161, and the request is received by the SP server 72 as part of the “Receive Request from Client” step 151. This action corresponds to the message path 121a shown in the messaging chart 120 and is performed as part of the “Content Request” phase 82. Upon availability of the requested content at the SP server 72, the content is sent to the client device 31a as part of the “Send Content to Client” step 154, and is received by the client device 31a as part of a “Receive Content from SP” step 162, which corresponds to the message path 131g shown in the messaging chart 130 and is performed as part of the “Content Fetching” phase 85.
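The CONNECT-handling decision described above can be condensed into a small gating function. All names here are illustrative assumptions; the function only mirrors the described branches (SNI match, certificate match, and blocked-host check), not any actual proxy implementation.

```python
def handle_connect(connect_host, sni_hostname, server_cert_names, blocked_hosts=()):
    """Decide how to proceed after an HTTPS CONNECT, per the flow described above.

    Returns "tunnel" when the SNI (if any) and the server certificate both match
    the CONNECT target and the host is not blocked; "reject" otherwise.
    """
    if connect_host in blocked_hosts:
        return "reject"
    if sni_hostname is not None and sni_hostname != connect_host:
        return "reject"
    if connect_host not in server_cert_names:
        return "reject"  # certificate is for a different (possibly blocked) host
    return "tunnel"  # respond to the client and tunnel data without parsing

print(handle_connect("amazon.com", "amazon.com", ["amazon.com"]))   # tunnel
print(handle_connect("amazon.com", None, ["amazon.com"]))           # tunnel
print(handle_connect("amazon.com", "evil.example", ["amazon.com"])) # reject
```

The `None` case corresponds to a message without an SNI, where the decision falls through to the certificate check alone.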
In one example, the client device 31a needs only to know the IP address of the SP server 72, and only to identify the requested content and the criteria (if any) for selecting a tunnel for fetching this content. The request message sent to the SP server 72 may include an identification of the requested content, such as a URL. In one example, the client device 31a does not impose any limitations and does not provide any criteria or limitations for selecting a tunnel device for a specific requested content. In such a case, the tunnel selection by the TB server 71 as part of the “Select Tunnel” step 146 is not limited by the client, and any internal selection rules or mechanisms may be used. Alternatively or in addition, the client device 31a defines specific limitations or criteria for selecting a tunnel device for a specific requested content. Such criteria may involve defining attribute types, and a value or values relating to each attribute type. In such a case, the tunnel selection by the TB server 71 as part of the “Select Tunnel” step 146 is limited by the client, and the client-set limitations will apply in addition to any internal selection rules or mechanisms that may be used. Alternatively or in addition, the client device 31a may define a specific tunnel device, for example identified by a specific IP address, to be used for a specific requested content. For example, the web server 22b may respond differently to a content requesting device, based on past interactions with that device. In such a case, the client device 31a may execute a flow chart 160a shown in FIG. 16a. In such a case, an identification of the tunnel device that was selected and used for fetching the specific content is also sent from the SP server 72 to the client device 31a, in addition to sending the fetched content from the SP server 72 as part of the “Send Content to Client” step 154, received by the client device 31a as part of the “Receive Content from SP” step 162.
The tunnel identification is stored by the client device 31a as part of a “Save Tunnel IP” step 162a. In a next content fetching cycle initiated by the client device 31a, such as when the content is to be fetched from the same web server 22b, the content request as part of the “Send Request to SP” step 161 is appended to further include the specific tunnel device IP address to be used, retrieved after being stored in the prior operation as part of the “Save Tunnel IP” step 162a, as part of a “Send Tunnel IP to SP” step 161a. The request for a specific tunnel device is then forwarded by the SP server 72 to the TB server 71 as part of the message path 131a, and then the TB server 71 selects the requested tunnel device for fetching the content, as part of the “Select Tunnel” step 146. Each of the tunnel devices, such as the tunnel #1 33a, the tunnel #2 33b, the tunnel #3 33c, the tunnel #4 33d, and the tunnel #5 33e, generally executes a flowchart 170 shown in FIG. 17. Upon connecting to the Internet, upon deciding to serve as a tunnel device, or upon having the ability to serve as a tunnel device, the tunnel device initiates a connection to the TB server 71, as part of an “Initiate TB Connection” step 171, respectively corresponding to the message paths 111a, 111b, 111c, 111d, and 111e. The connection initiation as part of the “Initiate TB Connection” step 171 is responded to by the TB server 71 as part of the “Accept and Open Connection” step 141 in the flow chart 140a, and is performed as part of the “Registration and Connection” phase 81.
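The client-side tunnel pinning of the “Save Tunnel IP” and “Send Tunnel IP to SP” steps can be sketched as follows. The dictionary keys and request layout are illustrative assumptions, not a format defined by the specification.

```python
# Hypothetical client-side sketch: remember which tunnel served a given web
# server, and request that same tunnel again in the next fetching cycle.
saved_tunnel_ips = {}

def receive_content_from_sp(web_server, content, tunnel_ip):
    """"Receive Content from SP" + "Save Tunnel IP": store the tunnel used."""
    saved_tunnel_ips[web_server] = tunnel_ip
    return content

def build_next_request(web_server, url):
    """"Send Request to SP": append the saved tunnel IP, if any ("Send Tunnel IP to SP")."""
    request = {"url": url}
    if web_server in saved_tunnel_ips:
        request["tunnel_ip"] = saved_tunnel_ips[web_server]
    return request

receive_content_from_sp("22b", "<html>...</html>", "83.220.232.67")
print(build_next_request("22b", "http://example.com/page"))
```

A request built for a web server with no saved tunnel simply omits the `tunnel_ip` field, leaving the TB server free to apply its internal selection rules.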
In an arrangement where a tunnel selection is based on attribute values, the tunnel device sends the corresponding values, such as the operating system type and version (corresponding to the column 102f in the table 100), and any other value relating to any other attribute type, as part of a “Send Attribute Value” step 172, so the value (associated with the tunnel device IP address, for example) may be added to the tunnel registry as part of the tunnels list memory 73, such as adding a row to the table 100 by the TB server 71 as part of the “Add to Table” step 142. After initializing the communication, the tunnel device and the TB server 71 sustain a connection, such as a TCP connection using the TCP keepalive mechanism, as part of an “Establish Connection” step 173 and the “Establish Connection” step 143, respectively illustrated in the messaging chart 110a as message dashed lines 112a, 112b, 112c, 112d, and 112e. The establishing of the sustained connection between the tunnel device and the TB server 71 completes the “Registration and Connection” phase 81 in the flow chart 80. In a case where a tunnel device is selected by the TB server 71 as part of the “Select Tunnel” step 146, the TB server 71 sends to the selected tunnel device, as part of the “Send Request to Tunnel” step 147, the content request, which is received as part of a “Receive Request from TB” step 174, corresponding to the message path 131b shown in the example of selecting the tunnel #4 33d in the messaging chart 120b. In response, the selected tunnel device forwards the request to the relevant web server, such as the web server 22b, as part of a “Send Request to Web Server” step 175, corresponding to the message path 131c shown in the example of selecting the tunnel #4 33d in the messaging chart 120b, thus completing the “Using Tunnel” phase 84 in the flow chart 80 shown in FIG. 8.
As part of the “Content Fetching” phase 85, the content retrieved from the web server 22b (as a response to the request) is received by the selected tunnel device as part of a “Receive Content from Web Server” step 176 (corresponding to the message path 131d in the messaging chart 130), and is then forwarded (or ‘tunneled’) to the TB server 71 as part of a “Send Content to TB” step 177, to be received by the TB server 71 as part of the “Receive Content from Tunnel” step 148, corresponding to the message path 131e in the messaging chart 130. The operation from the “Receive Request from TB” step 174 to the “Send Content to TB” step 177 may be repeated each time the tunnel is selected. The connection established in the “Establish Connection” step 173 is sustained after each such content tunneling operation, allowing additional tunneling operations to be performed using the same tunnel. The same tunnel may be selected for the same web server 22b, such as for different URLs of the same web page stored in the web server 22b. Alternatively or in addition, the same tunnel may be used for different web servers, such as for retrieving different web pages or web sites associated with different web servers. In one example, one or more of the tunnel devices are used primarily for purposes other than serving as tunnel devices. In such a case, the tunnel functionality or operation, such as executing the flow chart 170 shown in FIG. 17, is executed in the background or when the device is idling from other activities, preferably with the knowledge of the tunnel device owner and user, and preferably with minimum interference or interaction with other processes, operations, or activities of the tunnel device. In one example, a tunnel device may be a dedicated device, primarily installed, used, or operated for serving as a tunnel device, such as primarily (or solely) for executing the tunnel-related flow chart 170 shown in FIG. 17.
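The tunnel-device loop of steps 174 through 177 can be sketched as a single handler. The fetch function here is a placeholder standing in for the actual HTTP exchange over the message paths 131c and 131d; all names are illustrative assumptions.

```python
# Minimal sketch of the tunnel-device loop of flow chart 170 (steps 174-177):
# receive a content request from the TB server, fetch it from the web server,
# and forward ("tunnel") the content back to the TB server.
def fetch_from_web_server(url):
    return f"<content of {url}>"  # placeholder for the real request/response

def handle_tb_request(url, send_content_to_tb):
    # "Receive Request from TB" (174) -> "Send Request to Web Server" (175)
    content = fetch_from_web_server(url)  # "Receive Content from Web Server" (176)
    send_content_to_tb(content)           # "Send Content to TB" (177)

received = []
handle_tb_request("http://example.com/", received.append)
print(received[0])  # -> <content of http://example.com/>
```

Because the connection to the TB server is sustained, the same handler can be invoked repeatedly for subsequent selections of the same tunnel.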
In one example, the tunnel #1 33a is such a dedicated tunnel device, shown used as a tunnel in a messaging chart 180 shown in FIG. 18. In one example, the dedicated tunnel device #1 33a may be owned, operated, or used by an entity 76a which also owns, operates, or uses the TB server 71 and the SP server 72, as pictorially illustrated in the arrangement 180a shown in FIG. 18a. While a single dedicated device is exampled in the arrangement 180, multiple such devices may equally be used, and these dedicated tunnel devices may also be owned, operated, or used by the same entity 76a. The use of dedicated tunnel devices allows more tunnels to be available at any time, and reduces the need to rely on the availability of third-party devices. Further, such dedicated devices may be optimized for their primary tunneling functionality. While the system operation was exampled above where each tunnel device is associated with a single IP address, multiple IP addresses may equally be associated with any tunnel device. In one example, the dedicated tunnel device 33a shown in the arrangement 180 may be addressed using multiple IP addresses, such as by using multihoming. The dedicated tunnel device 33a (or any tunnel device) may execute the tunnel process 170 for each of the IP addresses, either in parallel or sequentially (or a combination thereof), thus allowing the savings resulting from using a single hardware device with a single Internet connection executing multiple tunnel functionalities. Alternatively, multiple Internet connections may be used, where one or more IP addresses are associated with each Internet connection. Dedicated tunnels may be implemented as client devices, or preferably as servers, such as located as part of data centers.
Preferably, the dedicated tunnels, either as client devices or as servers in data centers, are installed in many locations around the world, allowing for better load balancing due to the widespread distribution, as well as providing a large variety of potential locations or IP geolocations that may be selected as location attribute values by client devices. A dedicated tunnel device may be associated with more than 1,000, 2,000, 5,000, 10,000, 20,000, 50,000 or 100,000 distinct IP addresses. Further, tunnel devices may be owned, used, or operated by consumers. In such a case, their availability is only controlled by the user. For example, by turning off the device, such as at night, or by being located at locations with no Internet connection, the tunnel devices become unavailable to be used for tunneling functionality. In contrast, dedicated tunnel devices may be available to be selected and used at any time, all year round (usually spoken “twenty-four seven”), and as such may allow the service provider 76a to provide a stable and consistent tunneling service to client devices. In addition, dedicated tunnel devices that are owned, operated, or controlled by the service provider 76a obviate the need for distributing the tunnel functionality, such as a software code that implements the tunnel flow chart 170, to various devices. In general, the tasks performed by the TB server 71, as part of the operation of the flow chart 140 shown in FIG. 14, may be partitioned into two main objectives: selecting a tunnel device, such as in the “Select Tunnel” step 146, and being in the ‘tunneling’ path of fetching the content, such as in the “Receive Content from Tunnel” step 148 and the “Send Content to SP” step 149. In one exemplary arrangement, the TB server 71 is focused only on the tunnel selecting operation and does not take part in the “Content Fetching” phase 85. A messaging chart arrangement 190 that supports the obviating of the TB server 71 from being part of the content fetching path is shown in FIG. 19.
In response to the tunnel #4 33d exampled as being selected and communicated with by the TB server 71 over the message path 131b described above, the selected tunnel #4 33d initiates a communication with the SP server 72 over a message path 191. Any technique or technology may be used for directing the selected tunnel #4 33d to connect to the SP server 72, preferably a NAT traversal-based technique. Preferably, after the initial communication between the selected tunnel #4 33d and the SP server 72 is made, the connection (shown as a dashed line 192) is sustained, such as by using TCP keepalive as part of a TCP Connect scheme, similar to, or different from, the connection 111d that is established between the tunnel #4 33d and the TB server 71. Once the connection 192 is established and sustained, the SP server 72 may initiate communication with the selected tunnel #4 33d. In one example, the SP server 72 sends the identification of the requested content (such as a URL) to the selected tunnel #4 33d, shown as a message path 193 in a messaging chart 190a in FIG. 19a. Similar to the example shown in FIG. 13 above and the related description, the selected tunnel #4 33d performs the tunneling functionality by forwarding the content request to the web server 22b over the message path 131c, and receiving the requested content over the message path 131d. However, the requested content is then forwarded to the requesting device, namely the SP server 72, over a message path 194 illustrated as part of a messaging chart 190b in FIG. 19b, rather than being forwarded to the TB server 71 over the message path 131e as described above. In turn, the received content from the selected tunnel #4 33d is forwarded by the SP server 72 to the requesting client 31a over the message path 131g as described above.
The mechanism of the “Content Fetching” phase 85 that is described in the messaging chart 190b involves the selected tunnel #4 33d receiving the content from the web server 22b over the message path 131d, forwarding the content from the selected tunnel #4 33d over the message path 194 to the SP server 72, which in turn sends the fetched content as a response to the requesting client 31a over the message path 131g. Such a content path is preferred since the ‘tunneling’ via the TB server 71 using the message paths 131e and 131f is obviated, providing one less hop of carrying information from the web server 22b to the client device 31a, thus providing less latency, higher reliability, and lower costs associated with the additional traffic, hardware, and processing power required for handling the unnecessary tunneling via the TB server 71. Further, such a scheme allows optimizing the structure and functionalities of the TB server 71 for tunnel selection activities. In the alternative arrangement described in FIGS. 19-19b, the TB server 71 generally executes a flowchart 200 shown in FIG. 20, which is based on the flowchart 140 shown in FIG. 14. The TB server 71 generally executes in parallel at least the unchanged “Connection Handler” flow chart 140a and a “Selection Handler” flow chart 201, which may replace the “Request Handler” flow chart 140b, and which is directed to selecting a tunnel device according to criteria. As part of processing a content request from the client device 31a, the TB server 71 receives from the SP server 72, over the message path 131a shown in the messaging chart 190, criteria (or a criterion) for selecting a tunnel device to be used for delivering the requested content, as part of a “Receive Criteria from SP” step 202.
While as part of the “Receive Request from SP” step 145 that is part of the flow chart 140b the TB server 71 was also notified of the identification of the requested content, such identification is not required in this alternative scheme, since the TB server 71 is no longer part of the actual content request and fetching data paths. In one example, the same message, also including the content identification, is sent from the SP server 72 to the TB server 71 over the message path 131a, so that the “Receive Criteria from SP” step 202 may be rendered to be the same as the “Receive Request from SP” step 145 described above. After a tunnel device is selected as part of the “Select Tunnel” step 146, the TB server 71 sends a message to the selected tunnel #4 33d over the message path 131b, directing it to initiate communication (such as by using NAT traversal) with the SP server 72, as part of the “Connect and Direct Tunnel” step 203. In the scheme shown in FIG. 19, the tunnel selection phase 83 is completed, and the involvement of the TB server 71 in the fetching process ends after directing the selected tunnel #4 33d in the “Connect and Direct Tunnel” step 203. In the alternative arrangement described in FIGS. 19-19b, the SP server 72 generally executes a flowchart 210 shown in FIG. 21, which is based on the flowchart 150 shown in FIG. 15. The SP server 72 generally executes the flowchart 210 shown in FIG. 21 for each piece of information or content (such as a single URL) requested by the client device 31a. The operation starts when a content request is received from the client device 31a as part of the “Receive Request from Client” step 151, which corresponds to the message path 121a shown in the messaging chart 120 and is performed as part of the “Content Request” phase 82. A request from the client device 31a may include both an identification of the requested content and criteria for selecting a tunnel device, such as the attribute type to use and the associated attribute value or values.
As part of a “Send Criteria to TB” step 212, the criteria set by the client device 31a for the selection of a tunnel device, as part of the request, are sent to the TB server 71, without the content identification part, over the message path 131a, to be received by the TB server 71 as part of the “Receive Criteria from SP” step 202. Alternatively, the message sent includes the whole content request information, similar to, or identical to, the “Send Request to TB” step 152 in the flow chart 150, which corresponds to the message path 131a shown in the messaging chart 120a, and is received by the TB server 71 as part of the “Receive Request from SP” step 145. As part of an “Accept and Open Connection” step 213, the SP server 72 receives a communication initiated by the selected tunnel #4 33d, shown as a message path 191, and the connection between the SP server 72 and the selected tunnel #4 33d is sustained as part of an “Establish Connection” step 214. The sustained connection is illustrated as a message path 192, and may be based on a TCP connection that uses the TCP keepalive mechanism, similar to the connection 111d between the selected tunnel #4 33d and the TB server 71. The sustained connection allows the SP server 72 to initiate communication with the tunnel #4 33d, even in the presence of a filtering device such as a router or the firewall 75. Using the established connection 192, the SP server 72 forwards the content identification to the selected tunnel #4 33d as part of a “Send Request to Tunnel” step 215, illustrated as the message path 193 in a messaging chart 190a shown in FIG. 19a, and in response the selected tunnel #4 33d provides ‘tunneling’ by forwarding the request to the web server 22b over the message path 131c, as part of the “Using Tunnel” phase 84. The content fetched by the selected tunnel #4 33d is in turn sent to the SP server 72, and received over the message path 194 illustrated in a messaging chart 190b shown in FIG. 19b, as part of a “Receive Content from Tunnel” step 216.
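The SP server flow of flow chart 210 in this direct scheme can be sketched as follows. The callables stand in for the real network messages, and the request layout and step mapping in the comments are illustrative assumptions only.

```python
# Minimal sketch of the SP server flow of flow chart 210 in the direct scheme:
# forward only the selection criteria to the TB server, then serve the content
# request over the connection that the selected tunnel itself initiates.
def sp_handle_request(request, send_criteria_to_tb, wait_for_tunnel_connection):
    # "Receive Request from Client" (151): request holds content id + criteria
    send_criteria_to_tb(request["criteria"])   # "Send Criteria to TB" (212)
    tunnel = wait_for_tunnel_connection()      # "Accept and Open Connection" (213)
    content = tunnel(request["url"])           # steps 215/216 over paths 193/194
    return content                             # "Send Content to Client" (154)

sent = []
fake_tunnel = lambda url: f"<content of {url}>"
result = sp_handle_request(
    {"url": "http://example.com/", "criteria": {"os": "Windows 7"}},
    sent.append,
    lambda: fake_tunnel,
)
print(result)
```

Note that the content identification never reaches the TB server in this sketch, mirroring the point above that the TB server is no longer part of the fetching data path.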
Similar to the flow chart 150 above, the SP server 72 then forwards the fetched content as a response to the client device 31a request over the message path 131g as part of the “Send Content to Client” step 154, completing the “Content Fetching” phase 85. In the alternative arrangement described in FIGS. 19-19b, the selected tunnel device, such as the exampled tunnel device #4 33d, generally executes a flowchart 220 shown in FIG. 22, which is based on the flowchart 170 shown in FIG. 17. The selected tunnel device generally executes the flowchart 220 shown in FIG. 22 each time it is selected as a tunnel device by the TB server 71. Using the established connection 111d, the tunnel #4 33d receives an instruction from the TB server 71 (that is sent as part of the “Connect and Direct Tunnel” step 203 of the flow chart 201) to connect to the SP server 72, as part of a “Receive Direct from TB” step 221 over the message path 131b. In response, as part of an “Initiate SP Connection” step 222, the tunnel device #4 33d connects to the SP server 72, and then a sustained connection, shown as the message path 192, is formed as part of an “Establish Connection” step 223, corresponding to the “Establish Connection” step 214 in the flow chart 210. A content request sent by the SP server 72 as part of the “Send Request to Tunnel” step 215 (in the flow chart 210) is received by the selected tunnel #4 33d as part of a “Receive Request from SP” step 224, illustrated as the message path 193 in the messaging chart 190a shown in FIG. 19a. Similar to the flow chart 170 above, the selected tunnel device forwards the request to the relevant web server, such as the web server 22b, as part of the “Send Request to Web Server” step 175, corresponding to the message path 131c shown in the example of selecting the tunnel #4 33d in the messaging chart 190a, thus completing the “Using Tunnel” phase 84 in the flow chart 80 shown in FIG. 8.
As part of the “Content Fetching” phase 85, the content retrieved from the web server 22b (as a response to the request) is received by the selected tunnel device as part of the “Receive Content from Web Server” step 176 (corresponding to the message path 131d in the messaging chart 130), and is then forwarded (or ‘tunneled’) to the SP server 72 as part of a “Send Content to SP” step 225, and received by the SP server 72 as part of the “Receive Content from Tunnel” step 216, corresponding to the message path 194 in the messaging chart 190b. Any of the steps or the flow charts to be executed by a tunnel device may be included as a Software Development Kit (SDK) that is provided as a non-transitory computer readable medium containing computer instructions. The SDK may be installed in a respective tunnel device, to be executed by a processor in that device, appended to another software program or application installed on the tunnel device. An attribute type is used herein to include any characteristic, feature, aspect, property, or any other piece of information in which one tunnel device is different from another tunnel device. The attribute type may be associated with the tunnel device itself, such as its hardware, software, or any combination thereof, with the tunnel device environment, such as its location, or with a connectivity related feature or capability, such as relating to Internet connectivity. Each available tunnel device may be associated with a value (or multiple values, such as a range) for each attribute type. The attribute values may be stored in the tunnels list memory 73 that is part of, or connected to, the TB server 71, and that may be, for example, in the form of the table 100 shown in FIG. 10. The table 100 examples in the “Geographic Location” column 102c an attribute type relating to the location of tunnel devices, which may be an actual geographical location or may be based on IP Geolocation.
In the example of the “Geographic Location” column 102c, the attribute values are in the form of cities, such as the city of Munich, Germany in the second row 101b that corresponds to a tunnel device having an IP address of 176.94.1.17, and the city of Mumbai, India in the sixth row 101f that corresponds to a tunnel device having an IP address of 59.144.192.23. While a city is exampled as a value, any other physical geographical location or region may be used, such as a country, state or province, city, street address, ZIP code, or any combination thereof. Similarly, an attribute type may correspond to the Internet connection of a tunnel device, as the table 100 examples in the “ASN” column 102d relating to the ASN (or ISP name or any other identification). In the example of the “ASN” column 102d, the attribute values are in the form of digits that represent the ASN (or ISP), such as the ASN 3215 in the first row 101a that corresponds to a tunnel device having an IP address of 80.12.105.150, and the ASN 11419 in the seventh row 101g that corresponds to a tunnel device having an IP address of 200.196.224.89. Any other identification of ASN, ISP, or any other Internet connection related mechanism or identity may be equally used. Another attribute type may correspond to the technology used for interconnecting a tunnel device to the Internet, as the table 100 examples in the “Connection Type” column 102e relating to the technology or connection scheme. Similarly, the attribute type may correspond to a tunnel device hardware or software, type, version, or any combination thereof, such as the table 100 examples in the “Operating System” column 102f. Alternatively or in addition, an attribute type may correspond to estimated or measured communication related features, such as the bandwidth as exampled in the “BW” column 102g or the “RTT” column 102h.
The BW or RTT may relate to the tunnel's estimated or measured communication properties (such as parameters measured in previous transactions) with the web server 22b (such as over the message paths 131c or 131d), with the TB server 71 (such as over the message paths 131b and 131e), or with the SP server 72 (such as over the message paths 191 and 194). In one example, a single attribute type is used for distinguishing between the various available tunnel devices. In this case, the client device 31a, as part of the “Send Request to SP” step 161, sends to the SP server 72 over the message path 121a a value (or multiple values, such as a range) requested for the selected tunnel that is to be used in fetching the requested content. The value (or multiple values, such as a range) is received by the SP server 72 as part of the “Receive Request from Client” step 151, and forwarded to the TB server 71 over the message path 131a as part of the “Send Request to TB” step 152. The value (or multiple values, such as a range) is received by the TB server 71 as part of the “Receive Request from SP” step 145, and is used as a criterion for selecting a tunnel device for this content fetching transaction as part of the “Select Tunnel” step 146. In one example, a single value is requested, and the TB server 71 thus selects a tunnel device having a value that is identical to the requested value from the client device 31a. For example, assuming an attribute type of operating system and a value of “Windows 7”, since there is only a single such tunnel, being the tunnel represented in the fourth row 101d having an IP address of 83.220.232.67, this tunnel is selected. In a case where multiple available tunnel devices in the table 100 are associated with the requested value, one of these available tunnels is selected, such as by using random selection. In another example, a few values are requested.
For example, assuming an attribute type of ‘connection type’ and values of “ADSL or VDSL”, there are three tunnel devices that may be selected, namely the first row 101a (a tunnel device having an IP address of 80.12.105.150), the fourth row 101d (a tunnel device having an IP address of 83.220.232.67), and the seventh row 101g (a tunnel device having an IP address of 200.196.224.89). Any one of these tunnel devices may be selected, such as by using random selection. Similarly, the client device 31a may define a range of values, typically where numerical values are involved, such as in the attribute type relating to the “BW” column 102g or the “RTT” column 102h. For example, the client device 31a may define an “RTT” attribute type having a range between 200 ms (minimum value) and 400 ms (maximum value), directing the selection of the tunnel device represented in the sixth row 101f (a tunnel device having an IP address of 59.144.192.23) or the tunnel device represented in the seventh row 101g (a tunnel device having an IP address of 200.196.224.89), in the example of the table 100. Similarly, the client device 31a may define only a minimum value, or only a maximum value. For example, a maximum RTT value of 100 ms results in the first row 101a and the second row 101b. Alternatively or in addition, the selection of the tunnel device to be used (as part of the “Select Tunnel” step 146), or the priorities assigned to the tunnel devices, may be based on the available communication attributes or their history. For example, based on the costs associated with the usage of a network, a higher-cost network may have a lower priority and be used less than a lower-cost or free network. In another example, a high-quality network, such as one having a higher available bandwidth or throughput, lower communication errors or packet loss, fewer hops to destination, or a lower transfer delay time, has a higher priority than a lower-quality network.
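The attribute-based selection described above can be sketched as follows. This is a minimal illustration: the in-memory table, its field names, and the `select_tunnel` helper are hypothetical stand-ins for the table 100 and the “Select Tunnel” step, not part of the described system.

```python
import random

# Hypothetical in-memory tunnel list in the spirit of the table 100;
# the rows, field names, and values here are illustrative only.
TUNNELS = [
    {"ip": "80.12.105.150",  "conn": "ADSL",  "os": "Windows 10", "rtt": 50},
    {"ip": "176.94.1.17",    "conn": "Cable", "os": "OS X",       "rtt": 100},
    {"ip": "83.220.232.67",  "conn": "VDSL",  "os": "Windows 7",  "rtt": 500},
    {"ip": "59.144.192.23",  "conn": "Fiber", "os": "Android",    "rtt": 250},
    {"ip": "200.196.224.89", "conn": "ADSL",  "os": "iOS",        "rtt": 400},
]

def select_tunnel(attr, values=None, min_val=None, max_val=None):
    """Select one tunnel whose attribute matches the requested value(s)
    or falls within the requested numeric range; when several tunnels
    qualify, one is picked at random."""
    candidates = []
    for tunnel in TUNNELS:
        value = tunnel[attr]
        if values is not None and value not in values:
            continue                      # exact-match filter failed
        if min_val is not None and value < min_val:
            continue                      # below the requested minimum
        if max_val is not None and value > max_val:
            continue                      # above the requested maximum
        candidates.append(tunnel)
    return random.choice(candidates) if candidates else None
```

For instance, requesting an operating-system value of “Windows 7” selects the single matching tunnel, while requesting an RTT range of 200 ms to 400 ms yields a random pick among the qualifying tunnels.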
The system may use Bit Error Rate (BER), Received Signal Strength Indicator (RSSI), Packet Loss Ratio (PLR), Cyclic Redundancy Check (CRC), or other indicators or measures associated with the communication channel associated with a network interface, and may be based on, use, or include the methodology and schemes described in RFC 2544 entitled: “Benchmarking Methodology for Network Interconnect Devices”, and ITU-T Y.1564 entitled: “Ethernet Service Activation Test Methodology”, which are both incorporated in their entirety for all purposes as if fully set forth herein. The network quality grade may be affected by the history of using such a network, for example during a pre-set period before the process of selection of a network interface. In one example, the network interface from which the last proper packet was received may be selected as the interface to be used for the next packet to be transmitted. The system may further use, or be based on, the schemes and technologies described in U.S. Pat. No. 7,027,418 to Gan et al. entitled: “Approach for Selecting Communications Channels Based on Performance”, which is incorporated in its entirety for all purposes as if fully set forth herein. Hence, for any value or range of values defined, a tunnel device to be used may be selected from a set of available tunnel devices, which is a subset of all available tunnel devices that match the requested value or range of values. In one example, the client device 31a may use two attribute types, and a value (or a group of values) associated with each attribute type. In such a case, two subsets are formed, one for each attribute, where each subset includes all available tunnel devices that match the respective requested value (or range of values) for each attribute type. The client device 31a may further define a subset that results from an operation on the two subsets.
For example, the client device 31a may define to select a tunnel from a set that is a union of the two subsets (an ‘or’ operation), where the union (denoted by ∪) of a collection of sets is the set of all elements in the collection; an intersection of the two sets (an ‘and’ operation), where the intersection A∩B of two sets A and B is the set that contains all elements of A that also belong to B (or equivalently, all elements of B that also belong to A), but no other elements; a set difference or complement operation, where the complement of a set A refers to elements not in A; or a symmetric difference operation, where the symmetric difference, also known as the disjunctive union, is the set of elements that are in either of the sets but not in their intersection. For example, in a case of defining a value of BW equal to or above 1500 Kb/s ‘and’ an RTT below 300 ms, the resulting intersection subset includes only the tunnel device represented in the sixth row 101f, while in a case of a value of BW equal to or above 1500 Kb/s ‘or’ an RTT below 300 ms, the resulting union subset includes all rows except the seventh row 101g. Similarly, three or more attribute values may be defined relating to three or more attribute types. In one example, the entity 76 or 76a forms a system that may be used to provide a service to client devices. The service allows a client device (such as the client device 31a) to quickly and anonymously fetch content from a web server, such as the web server 22b. The service level may be measured, or the service may be billed, if applicable, for example, using the following parameters (individually or combined): Content amount. In this example, the amount of data relating to the content fetched from a data server (such as the web server 22b) is measured and logged by the SP server 72 or the TB server 71. Alternatively or in addition, the client device 31a may log or send the amount of content fetched.
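The subset operations described above (union, intersection, complement, and symmetric difference) map directly onto Python's built-in set operators. The IP addresses below are an illustrative sample, not the full contents of the table 100:

```python
# Two hypothetical attribute subsets: tunnels meeting a bandwidth
# criterion and tunnels meeting an RTT criterion (illustrative data).
high_bw = {"80.12.105.150", "176.94.1.17", "59.144.192.23"}   # e.g., BW >= 1500 Kb/s
low_rtt = {"176.94.1.17", "59.144.192.23", "83.220.232.67"}   # e.g., RTT < 300 ms

union        = high_bw | low_rtt   # 'or'  - tunnels in either subset
intersection = high_bw & low_rtt   # 'and' - tunnels in both subsets
difference   = high_bw - low_rtt   # complement - in high_bw but not in low_rtt
sym_diff     = high_bw ^ low_rtt   # disjunctive union - in either, but not both
```

A tunnel device may then be selected, such as randomly, from whichever resulting set the client device requested.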
Number of tunnels: The number of tunnel devices that were available to a client device, or the number of tunnel devices that were actually used, may be used as an indication of the service level. Location: The service level may be measured or billed based on the country in which the data server, from which the content is fetched, is located. Similarly, the service level may be measured or billed based on the country in which the client device, to which the content is fetched, is located. In the messaging chart 190b shown in FIG. 19b, and in the messaging chart 130 shown in FIG. 13, a single TB server 71 is used. However, multiple TB servers may equally be used, such as for load balancing or for performance optimization. In one example, the tunnel list 73, such as in the form of a table 100, is split among multiple databases stored in, or connected to, multiple servers using database sharding. Such an arrangement is shown in a messaging chart 230 shown in FIG. 23, which is based on the corresponding messaging chart 130. In addition to the TB server 71, a TB server 71a and a TB server 71b are connected to the Internet and may be used. While three TB servers are exemplified in FIG. 23, two, four, five, or any other number of TB servers may equally be used. The messaging chart 230 illustrates the SP server 72 selecting the TB server 71a, rather than using the TB server 71 as shown in the messaging chart 130. Similar to the operation described above, the SP server 72 forwards a request to the TB server 71a over a message path 131a1, and the TB server 71a may in turn select the tunnel device #4 33d, and send a message to it over a message path 131b1, followed by establishing of the connection 111d1. Similarly, an arrangement employing multiple TB servers is shown in a messaging chart 230a shown in FIG. 23a, which is based on the corresponding messaging chart 190b, where the TB server 71a is used instead of the TB server 71.
Each of the TB servers may execute the flow chart 140 shown in FIG. 14 or the flow chart 200 shown in FIG. 20, and may store a table including tunnel devices, in the form of the table 100. Preferably, load balancing is achieved where the total available tunnel devices (or IP addresses) are split, such as evenly, between the available TB servers. For example, one third of the available tunnel devices may be associated with the TB server 71, another third with the TB server 71a, and the remaining third with the TB server 71b. Preferably, the allocation of tunnel devices (or IP addresses) between the available TB servers may be based on an attribute type, such as the attribute types described above associated with the different tunnel devices. In one example, a geographical location may be used. The various TB servers may be geographically distributed around the world, and tunnel devices are allocated based on their respective geographical location, either actual location or IP geolocation. For example, tunnel devices may be allocated to respective TB servers based on their continent, country, region or state, or city. For example, one TB server, such as the TB server 71, may be located in Europe, handling all tunnel devices having an actual geographical location, or IP geolocation, within Europe, such as in Germany or France; a second TB server, such as the TB server 71a, may be located in North America, handling all tunnel devices having an actual geographical location, or IP geolocation, within North America, such as in the U.S.A. or Canada; and a third TB server, such as the TB server 71b, may be located in Asia, handling all tunnel devices having an actual geographical location, or IP geolocation, within Asia, such as in China or Russia. In such a case, the SP server 72 may select the appropriate TB server to use based on the attribute value received from the requesting client 31a over the message path 121a, as part of the “Receive Request from Client” step 151.
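The geographic sharding described above can be sketched as a simple lookup. The server identifiers and continent keys below are hypothetical, chosen only to mirror the Europe/North America/Asia example:

```python
# Hypothetical allocation of tunnel devices to TB servers by continent
# (sharding by actual location or IP geolocation); names are illustrative.
TB_BY_CONTINENT = {
    "Europe":        "tb-71",    # e.g., Germany, France
    "North America": "tb-71a",   # e.g., U.S.A., Canada
    "Asia":          "tb-71b",   # e.g., China, Russia
}

def select_tb(location, default="tb-71"):
    """Map a client-supplied geographical attribute value to the TB
    server that handles that region's tunnel devices; fall back to a
    default server for regions not explicitly sharded."""
    return TB_BY_CONTINENT.get(location, default)
```

The SP server would call such a lookup as part of selecting a TB server based on the attribute value received from the requesting client.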
An SP server 72 operation in the case of a multiple-TB-server arrangement is described in a flow chart 240 shown in FIG. 24, which is based on the corresponding flow chart 150 shown in FIG. 15. As part of a “Select TB” step 241, a specific TB server, such as the TB server 71a in the example of the messaging chart 230, is selected, and the operation continues with this selected TB server, such as in a “Send Request to Selected TB” step 242. Similarly, an SP server 72 operation in the case of a multiple-TB-server arrangement is described in a flow chart 240a shown in FIG. 24a, which is based on the corresponding flow chart 210 shown in FIG. 21. As part of the “Select TB” step 241, a specific TB server, such as the TB server 71a in the example of the messaging chart 230, is selected, and the operation continues with this selected TB server, such as in the “Send Request to Selected TB” step 242. The TB server may be randomly selected, as part of the “Select TB” step 241, or may be selected based on an attribute value received from the client device 31a, such as a geographical location. A tunnel device operation, such as that of the selected tunnel device #4 33d, in the case of a multiple-TB-server arrangement is described in a flow chart 240b shown in FIG. 24b, which is based on the corresponding flow chart 170 shown in FIG. 17. As part of a “Select TB” step 241, a specific TB server, such as the TB server 71a in the example of the messaging chart 230, is selected, and the operation continues with this selected TB server, such as in an “Initiate TB Connection” step 171. Similarly, a tunnel device operation in the case of a multiple-TB-server arrangement is described in a flow chart 240c shown in FIG. 24c, which is based on the corresponding flow chart 220 shown in FIG. 22.
As part of the “Select TB” step 241, a specific TB server, such as the TB server 71a in the example of the messaging chart 230, is selected, and the operation continues with this selected TB server, such as in the “Initiate TB Connection” step 171. The TB server may be randomly selected, as part of the “Select TB” step 241, or may be selected based on an attribute value received from the client device 31a, such as a geographical location. In one example, a DNS resolution is required for fetching the content from the web server 22b. In one example, the DNS resolution is performed by the requesting client 31a, as illustrated in a messaging chart 250 shown in FIG. 25. Before requesting the content from the SP server 72, the client device 31a uses a DNS server 251 for a DNS resolution, shown as a message path 252a. Then, the request sent to the SP server 72 over the message path 121a includes the resolution result, so there is no need for any DNS activity afterwards. Any DNS server may be used as the DNS server 251 by the client device 31a. In one example, a specific DNS server 251 is used, which is operated, controlled, or managed by an entity 76b, as illustrated in a messaging chart 250a shown in FIG. 25a, which also operates, controls, or manages the TB server 71 and the SP server 72. This entity 76b may be the same entity as the entity 76a (or 76) described above. The client device 31a operation, including a “DNS Resolution” step 261, is described in a flow chart 260 shown in FIG. 26, which is based on the corresponding flow chart 160 shown in FIG. 16. Alternatively or in addition, the DNS resolution may be performed by the SP server 72, as illustrated in a messaging chart 270 shown in FIG. 27. Before requesting a tunnel device allocation or the content from the TB server 71, the SP server 72 uses a DNS server 251 for a DNS resolution, shown as a message path 252b. Then, the request that is sent to the selected tunnel device includes the resolution result, so there is no need for any DNS activity afterwards.
The SP server 72 operation, including a “DNS Resolution” step 261, is described in a flow chart 280 shown in FIG. 28, which is based on the corresponding flow chart 150 shown in FIG. 15. Alternatively or in addition, the SP server 72 operation, including the “DNS Resolution” step 261, may be as described in a flow chart 280a shown in FIG. 28a, which is based on the corresponding flow chart 240 shown in FIG. 24. Alternatively or in addition, the DNS resolution may be performed by the selected tunnel device, such as the tunnel device #4 33d, as illustrated in a messaging chart 290 shown in FIG. 29. Before requesting the content from the web server 22b, the tunnel device #4 33d uses a DNS server 251 for a DNS resolution, shown as a message path 252c. Then, the request that is sent to the web server 22b includes the resolution result. The tunnel device #4 33d operation, including a “DNS Resolution” step 261, is described in a flow chart 300 shown in FIG. 30, which is based on the corresponding flow chart 170 shown in FIG. 17. Alternatively or in addition, the tunnel device #4 33d operation, including the “DNS Resolution” step 261, may be as described in a flow chart 300a shown in FIG. 30a, which is based on the corresponding flow chart 220 shown in FIG. 22. In the example of the messaging chart 180 shown in FIG. 18 above, the tunnel #1 33a was described as a dedicated device, which is primarily installed and used to serve as a tunnel device, or as concurrent multiple tunnel devices, each associated with a different IP address. However, one or more of the tunnel devices may be non-dedicated ones, where their primary functionality or use is other than serving as a tunnel device. For example, the device may be intended to be owned, controlled, or used by a human operator, for various functionalities. In one example, the main functionality may be to serve as a smartphone, such as for making telephone calls over a cellular network, as exemplified by the tunnel #2 33b.
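In each of the variants above, the “DNS Resolution” step reduces to resolving the web server's hostname once, so the resulting IP address can be carried in the subsequent request and no further DNS activity is needed downstream. A minimal sketch using Python's standard resolver (the function name is illustrative):

```python
import socket

def resolve_hostname(hostname):
    """Resolve a hostname to an IPv4 address once, so the resolution
    result can be embedded in the forwarded request and no downstream
    entity needs to perform any DNS activity."""
    return socket.gethostbyname(hostname)
```

Whichever entity performs this step (the client device, the SP server, or the selected tunnel device) would then include the returned address in the request it sends onward.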
In such a case, the tunnel functionality is associated with a lower priority compared to other tasks or functionalities performed by the device. Furthermore, it is preferred that the tunnel functionality does not affect, in any way, the primary functions of the device, and will not interfere with or degrade any other task or functionality provided by the device. Preferably, the tunnel-related functionality will be operated only when the device is idling, such as when it is not providing any current service or performing any task or interaction with the human user, preferably so that the effect of performing any tunnel functionality is hardly, or not at all, noticed by the human operator. As used herein, the term “idle state” is used to refer to a state in which a device and/or one or more resources of the device are not being used to perform operations considered to be of a sufficiently high priority, or device resources are not being used at a level of intensity, such that the operations should not be interrupted or competed with by, or such resources should not be diverted to any extent to, one or more relatively lower-priority operations. In one example, ‘idle state’ refers to a state where the human user is not interacting with the device, and hence is not aware of any interference with any process or task performed. The term “idle condition” is used in connection with some embodiments to refer to a condition that indicates whether, and/or an extent to which, the device has entered and/or exited such an idle state. Preferably, a tunnel device performs its tunnel-related tasks only when in the idle state, so that the human user or operator is not affected by, or aware of, the tunnel-related activity. An example of a state diagram 310 of a tunnel device, such as the tunnel #2 33b, the tunnel #3 33c, the tunnel #4 33d, or the tunnel #5 33e, is shown in FIG. 31.
Upon powering the device, a POWER-UP state 311 is established, during which the computerized system is initialized, such as by booting the operating system and connecting to the Internet. Upon completing the POWER-UP 311 sequence, when a normal, operative, runtime environment is attained and the device may provide its primary functions or functionalities, the device shifts (shown as a line 315a) to an ‘ACTIVE’ state 312, and stays in this state as long as the primary functions or tasks are used. During the ‘ACTIVE’ state 312, an idle condition is continuously monitored, and when such an idle condition is detected (shown as an ‘IDLE’ Detect line 315b), the device sends a message to the TB server 71 regarding entering an ‘IDLE’ state 313 in the “Notify TB” step 314a, such as by using the established connection 111d, which is followed (shown as a line 315c) by entering the ‘IDLE’ state 313. Preferably, the tunnel device is selected by the TB server 71 (as part of the “Select Tunnel” step 146) during the ‘IDLE’ state 313, allowing for minimum intervention or interference with the primary tasks and functionalities of the tunnel device. In one example, the tunnel device connects to the TB server 71 as part of the “Initiate TB Connection” step 171, sends the attribute value as part of the “Send Attribute Value” step 172, and establishes the TCP connection as part of the “Establish Connection” step 173 immediately after completing the POWER-UP state 311, as part of the shift to the ACTIVE state 312 shown as the shift line 315a. However, in such a case, the tunnel device may not be selected by the TB server 71 as part of the “Select Tunnel” step 146 as long as the tunnel device has not notified the TB server 71, in the “Notify TB” step 314a, that it is in the IDLE state 313. In such a case, the status of the available tunnel devices is stored in the TB server 71, in the form of a table 330 shown in FIG. 33, which is based on the table 100 shown in FIG. 10.
An ‘IDLE’ column 102i is added, denoting by ‘Y’ if the respective tunnel device is in the ‘IDLE’ state 313, and by ‘N’ if the respective tunnel device is not in the ‘IDLE’ state 313, such as when it is in the ‘ACTIVE’ state 312. Upon receiving a message of shifting to the IDLE state 313 via the “Notify TB” step 314a, the TB server 71 changes the respective value in the IDLE column 102i to ‘Y’. Preferably, the TB server 71 selects a tunnel that is in the ‘IDLE’ state 313, as noted by the respective value ‘Y’ in the IDLE column 102i, such as from the tunnel devices associated with the first row 101a, the fourth row 101d, the fifth row 101e, and the seventh row 101g in the example of the modified table 330. During the ‘IDLE’ state 313, the idle condition is continuously monitored, and when such an idle condition is no longer met (shown as an ‘ACTIVE’ Detect line 315d), the device sends a message to the TB server 71 regarding entering the ‘ACTIVE’ state 312 in the “Notify TB” step 314b, such as by using the established connection 111d, which is followed (shown as a line 315e) by re-entering the ‘ACTIVE’ state 312. Upon receiving a message of shifting to the ACTIVE state 312 via the “Notify TB” step 314b, the TB server 71 changes the respective value in the IDLE column 102i to ‘N’. Preferably, the TB server 71 does not select a tunnel that is not in the ‘IDLE’ state 313, as noted by the respective value ‘N’ in the IDLE column 102i, such as the tunnel devices associated with the second row 101b, the third row 101c, and the sixth row 101f in the example of the modified table 330. A flow chart 320 of a tunnel device that may be used only when idling is shown in FIG. 32, corresponding to the flow chart 170 shown in FIG. 17. After establishing a connection as part of the “Establish Connection” step 173, the tunnel device checks, as part of an “IDLE?” step 321, whether it is in the IDLE state 313.
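The bookkeeping of the ‘IDLE’ column 102i and the resulting selection rule can be sketched as follows. The class and method names are illustrative, not part of the described system:

```python
import random

class TunnelBroker:
    """Minimal sketch of the TB server's idle-status table: tunnel
    devices report state changes, and only tunnels currently marked
    idle ('Y') are eligible for selection."""

    def __init__(self):
        self.idle = {}                 # ip -> True ('Y') / False ('N')

    def notify(self, ip, is_idle):
        # "Notify TB" steps 314a/314b: update the IDLE column entry.
        self.idle[ip] = is_idle

    def select_tunnel(self):
        # "Select Tunnel" step 146: consider only idle tunnels, picking
        # one at random; return None when no tunnel is available.
        candidates = [ip for ip, is_idle in self.idle.items() if is_idle]
        return random.choice(candidates) if candidates else None
```

A tunnel device entering the IDLE state would call `notify(ip, True)`, and re-entering the ACTIVE state would call `notify(ip, False)`, mirroring the ‘Y’/‘N’ values of the table 330.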
In a case where the tunnel device is not in the IDLE state 313, such as when it is in the ACTIVE state 312, a message notifying of the unavailability of the tunnel device to serve as a tunnel is sent to the TB server 71 as part of a “Send Status to TB” step 322b, which may correspond to the “Notify TB” step 314b. In a case where the tunnel device is in, or entering, the IDLE state 313, a message notifying of the availability of the tunnel device to serve as a tunnel is sent to the TB server 71 as part of a “Send Status to TB” step 322a, which may correspond to the “Notify TB” step 314a. Upon receiving such a notification, the TB server 71 may select the tunnel device as part of the “Select Tunnel” step 146, and the selected tunnel is contacted as part of the “Receive Request from TB” step 174. Similarly, a flow chart 320a of a tunnel device that may be used only when idling is shown in FIG. 32a, corresponding to the flow chart 220 shown in FIG. 22. Alternatively or in addition, the tunnel device connects to the TB server 71, as part of the “Initiate TB Connection” step 171, when entering the IDLE state 313. For example, the “Notify TB” step 314a may correspond to the “Initiate TB Connection” step 171, so the TB server 71 may be aware of the tunnel device availability only when such a device is in the IDLE state 313. In such a case, upon the sensing of the ‘ACTIVE’ detect 315d, as part of the “Notify TB” step 314b, the established connection 111d with the selected tunnel device is disconnected, such as by stopping the TCP keepalive mechanism, so that the TB server 71 is notified that the selected tunnel device is no longer available to serve as a tunnel device. Idle detection techniques are disclosed in U.S. Pat. No. 9,244,682 to Rowles et al. entitled: “Idle detection”, which is incorporated in its entirety for all purposes as if fully set forth herein. A set of idle conditions that includes one or more conditions not comprising or triggered by an absence of user input is monitored.
The device is determined to be idle based at least in part on results of the monitoring. The device may be determined not to be idle even in the absence of recent user input. Any of the idle detection techniques that are disclosed in U.S. Pat. No. 9,244,682 to Rowles et al. may equally be used herein. Further, in some embodiments, a user- or administrator-configurable set of idle detection conditions applicable to the particular device and/or desired by the user or administrator is used. In one example, the idle condition will be based on, or use, services or tasks provided by the operating system or other software applications that are concurrently executed in the tunnel device together with the tunnel-related flow chart or functionalities. For example, most operating systems will run an idle task, which is a special task loaded by the OS scheduler only when there is nothing for the computer to do. The idle task can be hard-coded into the scheduler, or it can be implemented as a separate task with the lowest possible priority. An advantage of the latter approach is that programs monitoring the system status can see the idle task along with all other tasks; an example is Windows NT's System Idle Process. A screensaver (or screen saver) is a computer program that blanks the screen or fills it with moving images or patterns when the computer is not in use, and is typically a computer program that displays aesthetic patterns or images when the computer is not being used, originally intended to prevent screen burn. While the original purpose of screensavers was to prevent phosphor burn-in on CRT and plasma computer monitors (hence the name), and modern monitors are not susceptible to this issue, screensavers are still used for other purposes. Screensavers are often set up to offer a basic layer of security, by requiring a password to re-access the device.
Some screensavers use the otherwise unused computer resources to do useful work, such as processing for distributed computing projects. The screensaver typically terminates after receiving a message from the operating system that a key has been pressed or the mouse has been moved. In one example, upon executing an idle process or thread (by the operating system or any other software application), or when a screensaver application is operated, the idle condition is considered to be met, and respectively, upon terminating an idle process or the screensaver operation, the idle condition is considered not to be met. In one example, the idle condition is considered not to be met when any application other than a screen saver is running in “full screen” mode (e.g., movies or video games often run in this mode), relating to a display which covers the full screen without the operating system's typical window-framing interface, or a window occupying all the available display surface of a screen. Conversely, a screen may not be powered or may be blanked, suggesting that it is not being viewed by a human user. In one example, upon displaying a full screen by a software application, the idle condition is considered not to be met, since it is assumed that the human user is watching that screen. However, upon a blanked display or a closed (such as non-powered) display, the idle condition is considered to be met, since it is assumed that the human user is not watching the screen. An input device, such as the input device 18 as part of the computer system 10 shown in FIG. 1, is a piece of computer hardware equipment used to provide data and control signals to an information processing system such as a computer or information appliance. Such an input device may be an integrated or a peripheral input device (e.g., a hard/soft keyboard, a mouse, a resistive or capacitive touch display, etc.). Examples of input devices include keyboards, mice, scanners, digital cameras, and joysticks.
Input devices can be categorized based on the modality of input (e.g., mechanical motion, audio, visual, etc.), whether the input is discrete (e.g. pressing of key) or continuous (e.g., a mouse's position, though digitized into a discrete quantity, is fast enough to be considered continuous), the number of degrees of freedom involved (e.g., two-dimensional traditional mice, or three-dimensional navigators designed for CAD applications). Pointing devices (such as ‘computer mouse’), which are input devices used to specify a position in space, can further be classified according to whether the input is direct or indirect. With direct input, the input space coincides with the display space, i.e. pointing is done in the space where visual feedback or the pointer appears. Touchscreens and light pens involve direct input. Examples involving indirect input include the mouse and trackball, and whether the positional information is absolute (e.g., on a touch screen) or relative (e.g., with a mouse that can be lifted and repositioned). Direct input is almost necessarily absolute, but indirect input may be either absolute or relative. For example, digitizing graphics tablets that do not have an embedded screen involve indirect input and sense absolute positions and are often run in an absolute input mode, but they may also be set up to simulate a relative input mode like that of a touchpad, where the stylus or puck can be lifted and repositioned. In one example, the idle detection is based on receiving any input (or change of an input) from an input device. For example, a pre-defined time interval may be used, measured by a dedicated timer or counter or used as a service of the operating system. In case of no input sensed from one or more input devices during the pre-defined time interval, the idle condition is considered to be met. Further, the idle condition is considered not to be met upon receiving any input from one or more of the input devices. 
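The input-timeout scheme described above can be sketched with a monotonic clock. The class name and the default timeout value are assumptions for illustration:

```python
import time

class IdleDetector:
    """The idle condition is met when no input event has been sensed
    from any input device during a pre-defined time interval."""

    def __init__(self, timeout=300.0):     # assumed 5-minute interval
        self.timeout = timeout
        self.last_input = time.monotonic()

    def on_input(self):
        # Called for any input event (keystroke, mouse movement, touch
        # interaction); receiving input means the condition is not met.
        self.last_input = time.monotonic()

    def is_idle(self):
        # No input sensed for the whole interval -> idle condition met.
        return time.monotonic() - self.last_input >= self.timeout
```

A tunnel device using such a detector could transition to the IDLE state when `is_idle()` becomes true, and back to the ACTIVE state as soon as `on_input()` fires.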
Examples include, without limitation, detecting receipt of a user input, e.g., via mouse movement, touch screen interaction, button clicks, or keyboard keystrokes. Such idle-detection methods can detect if a human-interaction device such as a mouse, keyboard, or touch-screen has not been used for a certain amount of time. When portable or handheld devices are involved, the idle condition may be considered to be met when no motion or acceleration (or a motion or an acceleration below a set threshold) is sensed for a pre-defined time interval, using an accelerometer, a motion sensor, or a GPS. The motion sensor may be based on a piezoelectric accelerometer that utilizes the piezoelectric effect of certain materials to measure dynamic changes in mechanical variables (e.g., acceleration, vibration, and mechanical shock). Piezoelectric accelerometers commonly rely on piezoceramics (e.g., lead zirconate titanate) or single crystals (e.g., quartz, tourmaline). Piezoelectric quartz accelerometer is disclosed in U.S. Pat. No. 7,716,985 to Zhang et al. entitled: “Piezoelectric Quartz Accelerometer”, U.S. Pat. No. 5,578,755 to Offenberg entitled: “Accelerometer Sensor of Crystalline Material and Method for Manufacturing the Same” and U.S. Pat. No. 5,962,786 to Le Traon et al. entitled: “Monolithic Accelerometric Transducer”, which are all incorporated in their entirety for all purposes as if fully set forth herein. Alternatively or in addition, the motion sensor may be based on the Micro Electro-Mechanical Systems (MEMS, a.k.a. Micro-mechanical Electrical Systems) technology. A MEMS based motion sensor is disclosed in U.S. Pat. No. 7,617,729 to Axelrod et al. entitled: “Accelerometer”, U.S. Pat. No. 6,670,212 to McNie et al. entitled: “Micro-Machining” and in U.S. Pat. No. 7,892,876 to Mehregany entitled: “Three-axis Accelerometers and Fabrication Methods”, which are all incorporated in their entirety for all purposes as if fully set forth herein. 
An example of MEMS motion sensor is LIS302DL manufactured by STMicroelectronics NV and described in Data-sheet LIS302DL STMicroelectronics NV, ‘MEMS motion sensor 3-axis-±2 g/±8 g smart digital output “piccolo” accelerometer’, Rev. 4, October 2008, which is incorporated in its entirety for all purposes as if fully set forth herein. Alternatively or in addition, the motion sensor may be based on electrical tilt and vibration switch or any other electromechanical switch, such as the sensor described in U.S. Pat. No. 7,326,866 to Whitmore et al. entitled: “Omnidirectional Tilt and vibration sensor”, which is incorporated in its entirety for all purposes as if fully set forth herein. An example of an electromechanical switch is SQ-SEN-200 available from SignalQuest, Inc. of Lebanon, NH, USA, described in the data-sheet ‘DATASHEET SQ-SEN-200 Omnidirectional Tilt and Vibration Sensor’ Updated 2009 Aug. 3, which is incorporated in its entirety for all purposes as if fully set forth herein. Other types of motion sensors may be equally used, such as devices based on piezoelectric, piezoresistive and capacitive components to convert the mechanical motion into an electrical signal. Using an accelerometer to control is disclosed in U.S. Pat. No. 7,774,155 to Sato et al. entitled: “Accelerometer-Based Controller”, which is incorporated in its entirety for all purposes as if fully set forth herein. The Global Positioning System (GPS) is a space-based radio navigation system owned by the United States government and operated by the United States Air Force. It is a global navigation satellite system that provides geolocation and time information to a GPS receiver anywhere on or near the Earth where there is an unobstructed line of sight to four or more GPS satellites. 
The GPS system does not require the user to transmit any data, and it operates independently of any telephonic or internet reception, though these technologies can enhance the usefulness of the GPS positioning information. The GPS system provides critical positioning capabilities to military, civil, and commercial users around the world. The United States government created the system, maintains it, and makes it freely accessible to anyone with a GPS receiver. In addition to GPS, other systems are in use or under development, mainly because of a potential denial of access by the US government. The Russian Global Navigation Satellite System (GLONASS) was developed contemporaneously with GPS, but suffered from incomplete coverage of the globe until the mid-2000s. GLONASS can be added to GPS devices, making more satellites available and enabling positions to be fixed more quickly and accurately, to within two meters. There are also the European Union Galileo positioning system, China's BeiDou Navigation Satellite System and India's NAVIC. The GPS concept is based on time and the known position of specialized satellites, which carry very stable atomic clocks that are synchronized with one another and to ground clocks, and any drift from true time maintained on the ground is corrected daily. The satellite locations are known with great precision. GPS receivers have clocks as well; however, they are usually not synchronized with true time, and are less stable. GPS satellites continuously transmit their current time and position, and a GPS receiver monitors multiple satellites and solves equations to determine the precise position of the receiver and its deviation from true time. At a minimum, four satellites must be in view of the receiver for it to compute four unknown quantities (three position coordinates and clock deviation from satellite time). 
Each GPS satellite continually broadcasts a signal (carrier wave with modulation) that includes: (a) A pseudorandom code (sequence of ones and zeros) that is known to the receiver. By time-aligning a receiver-generated version and the receiver-measured version of the code, the Time-of-Arrival (TOA) of a defined point in the code sequence, called an epoch, can be found in the receiver clock time scale. (b) A message that includes the Time-of-Transmission (TOT) of the code epoch (in GPS system time scale) and the satellite position at that time. Conceptually, the receiver measures the TOAs (according to its own clock) of four satellite signals. From the TOAs and the TOTs, the receiver forms four Time-Of-Flight (TOF) values, which are (given the speed of light) approximately equivalent to receiver-satellite range differences. The receiver then computes its three-dimensional position and clock deviation from the four TOFs. In practice, the receiver position (in three dimensional Cartesian coordinates with origin at the Earth's center) and the offset of the receiver clock relative to the GPS time are computed simultaneously, using the navigation equations to process the TOFs. The receiver's Earth-centered solution location is usually converted to latitude, longitude and height relative to an ellipsoidal Earth model. The height may then be further converted to height relative to the geoid (e.g., EGM96) (essentially, mean sea level). These coordinates may be displayed, e.g., on a moving map display, and/or recorded and/or used by some other system (e.g., a vehicle guidance system). In one example, the idle condition may be considered to be met when the communication traffic through a network interface, such as over a PAN, LAN, WLAN, WAN or WWAN, is below a threshold. Portable or handheld devices, such as tablets, laptops, and smartphones, typically use a rechargeable smart battery. 
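The navigation equations described above may be illustrated numerically. The following Python sketch uses made-up satellite positions, a made-up receiver position, and a made-up clock bias solely to show that the residual of each navigation equation vanishes at the true receiver state; it does not solve the equations:

```python
import math

C = 299_792_458.0  # speed of light, m/s

# Hypothetical satellite positions (ECEF, metres) and a true receiver state,
# used only to fabricate consistent TOT/TOA pairs for the illustration.
SATS = [(20_000_000.0, 0.0, 0.0),
        (0.0, 20_000_000.0, 0.0),
        (0.0, 0.0, 20_000_000.0),
        (15_000_000.0, 15_000_000.0, 15_000_000.0)]
RECEIVER = (6_371_000.0, 0.0, 0.0)   # true position
CLOCK_BIAS = 1e-3                    # receiver clock offset, seconds

def geometric_range(sat, rx):
    return math.dist(sat, rx)

# Each satellite broadcasts its TOT; the receiver measures the TOA on its
# own (biased) clock: TOA_i = TOT_i + range_i / C + CLOCK_BIAS.
TOTS = [0.0] * len(SATS)
TOAS = [tot + geometric_range(s, RECEIVER) / C + CLOCK_BIAS
        for tot, s in zip(TOTS, SATS)]

# The four navigation equations: at the true (x, y, z, b) the residual
#   range_i(x, y, z) + C*b - C*(TOA_i - TOT_i)
# vanishes for every satellite i, which is what a receiver's solver exploits.
residuals = [geometric_range(s, RECEIVER) + C * CLOCK_BIAS - C * (toa - tot)
             for s, toa, tot in zip(SATS, TOAS, TOTS)]
print(all(abs(r) < 1e-6 for r in residuals))  # True
```

A real receiver iteratively adjusts the four unknowns (three coordinates and the clock bias) until these residuals vanish.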
A smart battery or a smart battery pack is a rechargeable battery pack with a built-in Battery Management System (BMS), usually designed for use in a portable computer such as a laptop. Besides the usual plus and minus terminals, it also has two or more terminals to connect to the BMS; typically the minus terminal is also used as the BMS “ground”. BMS interface examples are SMBus, PMBus, EIA-232, EIA-485, MIN BM and Local Interconnect Network. The smart battery can internally measure voltage and current, and deduce charge level and SoH (State of Health) parameters, indicating the state of the cells. Externally, the smart battery can communicate with a smart battery charger and a “smart energy user” via the bus interface. The smart battery can demand that the charging stops, ask for charging, or demand that the smart energy user stop using power from this battery. There are standard specifications for smart batteries, such as the Smart Battery System specification, as well as many ad-hoc specifications. A Battery Management System (BMS) is any electronic system that manages a rechargeable battery (cell or battery pack), such as by protecting the battery from operating outside its Safe Operating Area, monitoring its state, calculating secondary data, reporting that data, controlling its environment, authenticating it, and/or balancing it. A battery pack built together with a battery management system with an external communication data bus is a smart battery pack. A smart battery pack must be charged by a smart battery charger.
A BMS may monitor the state of the battery as represented by various items, such as: Voltage: total voltage, voltages of individual cells, minimum and maximum cell voltage or voltage of periodic taps; Temperature: average temperature, coolant intake temperature, coolant output temperature, or temperatures of individual cells; State of Charge (SOC) or Depth of Discharge (DOD), to indicate the charge level of the battery; State of Health (SOH), a variously-defined measurement of the overall condition of the battery; Coolant flow: for air or fluid cooled batteries; and Current: current in or out of the battery. In one example, the idle condition may be considered to be met when, based on the BMS output, the battery capacity is above a minimum threshold. For example, the idle condition may be considered to be met when the current capacity of the battery is above 40%, 50%, 60%, 70%, 80%, or 90%. In the case where the capacity is estimated or measured to be below the set threshold, the idle condition may be considered not to be met. Such threshold provides for not draining the battery by using the tunnel functionalities, rendering the device useless or powerless when the human user may want to use it after being used for tunneling. In the example of the state diagram310shown inFIG.31, being in an ‘IDLE’ state313or in ‘ACTIVE’ state312is determined by the tunnel device itself, such as based on detecting or sensing physical phenomenon or events, and notifying the TB server71, such as over the established connection, of the tunnel device determined state. For example, a tunnel device such as tunnel device #533e, may check the battery capacity and may use an associated threshold, and then the tunnel device itself may decide that the battery capacity is below the set threshold (e.g. 
35%), and in response shift from the ‘IDLE’ state313to the ‘ACTIVE’ state312, and may notify the change over the established connection112edescribed as part of the messaging chart110ashown inFIG.11a. As a result, the TB server71may update the tunnel device #533estatus, such as by updating the associated idling status as part of the related idling column102iin the status table330shown inFIG.33. Alternatively or in addition, while the tunnel device may still detect, sense, or measure various parameters or phenomena regarding its operation or the environment, the decision regarding the tunnel device state is performed by the TB server71. In such a scheme, the sensed or detected information or value is sent to the TB server71, such as over the established connection. For example, the connection112emay be used by the tunnel device #533e. Upon receiving the value or status information from the tunnel device, the tunnel device state is determined by the TB server71itself. For example, the value of the battery capacity may be sent by the tunnel device to the TB server71, which applies the comparison to a pre-set threshold for determining the state of the tunnel device. Such a mechanism allows centralized control of the criteria used for deciding on the tunnel device status. For example, in a case where a criterion for idling is changed, it is required to be updated only at a single location, at the TB server71, and not at each of the tunnel devices. Further, the threshold, criterion, or rules for idling may be changed over time according to various system requirements. For example, assume that a battery capacity of at least 50% is used as an idling criterion. In a case of having a large quantity of available tunnel devices, the threshold may be raised to 55% or 60%.
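Such a server-side determination may be sketched as follows. The class and method names below are illustrative assumptions; the sketch only shows how keeping the threshold at a single location allows the criterion to be adjusted centrally as the pool of available tunnel devices changes:

```python
# Minimal sketch (not the disclosed implementation) of a server determining
# tunnel state from a reported battery value. The threshold lives in one
# place, so it can be tuned centrally rather than at each tunnel device.

class TunnelBank:
    def __init__(self, battery_threshold=50.0):
        self.battery_threshold = battery_threshold  # percent, adjustable
        self.status = {}  # tunnel id -> 'IDLE' or 'ACTIVE'

    def report_battery(self, tunnel_id, battery_pct):
        """Apply the centralized criterion and update the status table."""
        state = 'IDLE' if battery_pct >= self.battery_threshold else 'ACTIVE'
        self.status[tunnel_id] = state
        return state

tb = TunnelBank(battery_threshold=50.0)
print(tb.report_battery('tunnel_3', 35.0))   # ACTIVE (below threshold)
tb.battery_threshold = 30.0                  # criterion reduced centrally
print(tb.report_battery('tunnel_3', 35.0))   # IDLE (now available)
```

Reducing the threshold at the server, as in the last two lines, immediately makes devices with intermediate battery levels available as tunnels, without any update to the devices themselves.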
In contrast, in a case of a low quantity of available tunnel devices (such as in a specific location), the threshold may be reduced to 40%, rendering many tunnel devices having a battery capacity between 40% and 50% available as tunnel devices. A tunnel device may notify the TB server71of the measured or sensed value regarding a criterion for idling periodically, upon sensing an event, as a response to a request from the TB server71, or any combination thereof. A state diagram310ashown inFIG.31aillustrates the idling determination by the TB server71. A tunnel device may be in an ‘IDLE’ state313a(corresponding to the ‘IDLE’ state313), and generally available to serve as a tunnel and to fetch a content from a web server, such as the web server22b, or may be non-available for tunnel functionality in an ‘ACTIVE’ state312a(corresponding to the ‘ACTIVE’ state312). However, the determination regarding the tunnel device state is made by the TB server71, rather than by the tunnel device itself as exampled above. After completing the ‘POWER-UP’ phase311by the tunnel device, a connection is established as part of a “Connection Established” state316, which corresponds to the “Registration and Connection” step81shown inFIG.8, and is further described as part of the “Connection Handler” flow chart140aand the tunnel flow chart170. As part of a “Value to TB” step317, the tunnel device sends a value to the TB server71, to be used by the TB server71for determining the tunnel device status or state, such as the ‘IDLE’ state313aor the ‘ACTIVE’ state312a. The value may correspond to a measured physical phenomenon, such as battery capacity or available bandwidth. Alternatively or in addition, the value may notify a state or an event, such as a screen saver status, or being in a “full screen” mode or not. The tunnel device may send multiple values (continuous or discrete), which may correspond to multiple phenomena, events, parameters, and criteria.
In one example, the value (or values) is periodically sent from the tunnel device to the TB server71, allowing for periodical refreshing of the tunnel device status. In such a scheme, when the tunnel device is in the ‘IDLE’ state313a(as determined by the TB server71), the value is measured or otherwise determined by the tunnel device, and after a time period from the former sending, an updated value is sent to the TB server71as shown in a dashed line319bin the state diagram310a. Similarly, when the tunnel device is in the ‘ACTIVE’ state312a(as determined by the TB server71), the value is measured or otherwise determined by the tunnel device, and after a time period from the former sending, an updated value is sent to the TB server71as shown in a dashed line319ain the state diagram310a. The time period may be at least 1 second, 2 seconds, 5 seconds, 10 seconds, 20 seconds, 30 seconds, 1 minute, 2 minutes, 5 minutes, 10 minutes, 20 minutes, 30 minutes, 1 hour, 2 hours, 5 hours, 10 hours, 1 day, 2 days, 4 days, 1 week, 2 weeks, 3 weeks, 1 months, 2 months, or 3 months, or may be less than 2 seconds, 5 seconds, 10 seconds, 20 seconds, 30 seconds, 1 minute, 2 minutes, 5 minutes, 10 minutes, 20 minutes, 30 minutes, 1 hour, 2 hours, 5 hours, 10 hours, 1 day, 2 days, 4 days, 1 week, 2 weeks, 3 weeks, 1 months, 2 months, or 6 months. Alternatively or in addition, as shown in the dashed lines319aand319b, a value is sent to the TB server71as part of the “Value to TB” step317upon sensing an event. Any event or occurrence, such as relating to the tunnel device operation or interaction with a user may be used to trigger the value sending. For example, any event that was described to trigger an ‘ACTIVE Detect’315dor ‘IDLE’ Detect315bas part of the state diagram310may trigger a sending of an updated value. 
As an alternative or in addition to the periodic update and the event-triggered update, a tunnel device may send a value as part of the “Value to TB” step317in response to a request from the TB server71as part of a “Request from TB” step318, for example over the connection established as part of the “Connection Established” state316. After such a request is received by the tunnel device, a value is sent as part of the “Value to TB” step317as shown in a dashed line319c. After the sent value is received by the TB server71, the TB server71determines the tunnel device status. A criterion, such as a threshold, may be used, and the TB server71may decide that the value received from the tunnel device justifies considering it as in the ‘IDLE’ state313aas described by the line319d, or justifies considering it as in the ‘ACTIVE’ state312aas described by the line319a. A flow chart320bof a tunnel device operation where the idling status is determined by the TB server71is shown inFIG.32b, corresponding to the flow chart170shown inFIG.17and to the flow chart320shown inFIG.32. After establishing a connection as part of the “Establish Connection” step173, the tunnel device enters a “Normal Operation” state323, where activities that are not related to fetching content for a client device may be performed. In a case of using periodic updating of the TB server71with the status of the tunnel device, corresponding to the periodic sending of the value to the TB server71as part of the “Value to TB” step317shown in the state diagram310a, the tunnel device periodically updates the TB server71by sending a value as part of a “Periodically Send Value To TB” step328. In a case where event-based triggering is used (as an alternative or in addition to the periodic updating), the tunnel device continuously or periodically checks for an event or occurrence as part of an “Event ?” step327a, and resumes the normal operation state323if no event has been identified.
Upon identifying an event as part of the “Event ?” step327a, a value is sent to the TB server71as part of a “Send Value To TB” step329, which corresponds to the “Value to TB” step317. In a case where a TB server71initiated update is used (as an alternative to, or in addition to, the periodic or event-based updating), when a request is received by the tunnel device as part of a “Receive Value Request” step324a, which corresponds to the “Request from TB” step318, the tunnel device responds by sending a value as part of a “Send Value to TB” step324b, and afterwards resumes the normal operation state323. Upon receiving a request for content as part of the “Receive Request from TB” step174, the tunnel device responds by sending the required content as part of the “Send Content to TB” step177, similar to the activity as part of the flow chart170shown inFIG.17. A flow chart320cdescribing the operation of the TB server71, where the idling status is determined by the TB server71, is shown inFIG.32c, corresponding to the flow chart140ashown inFIG.14. A value is received from a tunnel device as part of a “Receive Value” step325b, as a response to the tunnel device sending the value periodically as part of the “Periodically Send Value To TB” step328, or as a response to event-based sending as part of the “Send Value To TB” step329of the flow chart320bshown inFIG.32b. The TB server71applies a rule or a criterion as part of a “Change State ?” step326, such as a threshold applied to the received value, or a decision based on the value itself in the case of a discrete value. If the result provides that the status of the tunnel device is to be sustained, such as by staying in the ‘IDLE’ state313aor in the ‘ACTIVE’ state312a, no change is implied and the TB server71waits for another update from the tunnel device as part of the “Receive Value” step325b.
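The three update paths described above (periodic, event-triggered, and server-requested) may be sketched as follows; all names, the placeholder measurement, and the period value are illustrative assumptions:

```python
import time

# Illustrative sketch of the three update triggers: periodic, event-based,
# and server-requested. send() stands in for delivering a value over the
# established connection to the server; all names are assumptions.

class TunnelReporter:
    def __init__(self, send, period=60.0, clock=time.monotonic):
        self.send = send          # callable delivering a value to the server
        self.period = period      # seconds between periodic updates
        self.clock = clock
        self.last_sent = clock()

    def measure(self):
        return {'battery_pct': 72.0}  # placeholder measurement

    def tick(self):
        """Called from the device's normal-operation loop; sends a value
        when the period has elapsed (periodic updating)."""
        if self.clock() - self.last_sent >= self.period:
            self.send(self.measure())
            self.last_sent = self.clock()

    def on_event(self):
        """Event-triggered update."""
        self.send(self.measure())

    def on_server_request(self):
        """Server-initiated update, in response to a value request."""
        self.send(self.measure())

sent = []
r = TunnelReporter(sent.append, period=0.0)
r.tick(); r.on_event(); r.on_server_request()
print(len(sent))  # 3
```

Any combination of the three triggers may be enabled, matching the "any combination thereof" wording above.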
However, in a case where the TB server71decides, according to the rules or criteria as part of the “Change State ?” step326, that the tunnel device status needs to be updated, such as from the ‘IDLE’ state313ato the ‘ACTIVE’ state312aor vice versa, such a status change is executed as part of a “Table Update” step329, and only afterwards does the TB server71wait for another update from the tunnel device as part of the “Receive Value” step325b. In one example, the “Table Update” step329involves changing the status associated with the tunnel device as part of the column ‘IDLE’102iin the table330shown inFIG.33, such as marking ‘Y’ when changing the state to IDLE. In a case where the value is received from the tunnel device in response to a request initiated by the TB server71, such a request is sent to the tunnel device as part of a “Request Value” step325a. As part of the “Tunnel Selection” step83shown as part of the flow chart80, or as part of the “Select Tunnel” step146shown as part of the flow chart140b, a tunnel device is selected for serving the requesting client device #131awith the requested content from the web server22b. In one example, the tunnel device is selected from all available tunnels, such as from all the tunnels that are marked as idling ‘Y’ in the IDLE column102iof the table330(and that meet the criteria, if used), which is stored in the database73that is part of the Tunnel Bank Server71. In such a case, the whole pool of available tunnel devices shares the task of serving as tunnels, and the requested content from the web server22bis accessed by diversified tunnel devices.
In another example, a single tunnel device (uniquely identified by a single IP address) is used by the requesting client device #131a, so that the web server22bis always accessed by the same selected tunnel device, allowing the client device #131ato anonymously simulate a consistent accessing device to the web server22b, for example for experiencing and testing the web server22bperformance, responsiveness, or operation over time when accessed by the same device. For the sake of load balancing, a different tunnel device may be selected for use with the client device #131awhen accessing different web servers. For example, the tunnel device #133amay always be used when fetching content from the web server22b, and the tunnel device #433dmay always be used when fetching content from another web server, such as from the data server #122a. Alternatively or in addition, a client device, such as the client device #131a, may be associated with a defined group of IP addresses, each identifying a different tunnel device. Such a scheme allows for better manageability and control of resources. In such a case, a tunnel device is selected from the defined group as part of the “Tunnel Selection” step83shown as part of the flow chart80, or as part of the “Select Tunnel” step146shown as part of the flow chart140b. An example of an IP group341is shown as part of a view340shown inFIG.34. The IP group (designated as GIP) includes 16 IP addresses, ranging from IP #1341ato IP #16341p, namely IP #2341b, IP #3341c, IP #4341d, IP #5341e, IP #6341f, IP #7341g, IP #8341h, IP #9341i, IP #10341j, IP #11341k, IP #12341l, IP #13341m, IP #14341n, IP #15341o, and IP #16341p. Each of the IP addresses in the group341may be associated with the attributes shown in the table330inFIG.33. For example, IP #5341emay be the IP associated with the third row101cof the table330, and the IP #14341nmay be the IP associated with the seventh row101gof the table330.
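Such a sticky, per-server assignment of a tunnel device to a client device may be sketched as follows; the class name, identifiers, and IP addresses are made up for illustration:

```python
import random

# Sketch of sticky tunnel assignment: the same client always reaches the
# same web server through the same tunnel IP, while different web servers
# may be served by different tunnels.

class StickySelector:
    def __init__(self, idle_tunnels, seed=0):
        self.idle_tunnels = list(idle_tunnels)
        self.assignment = {}  # (client_id, server) -> tunnel IP
        self.rng = random.Random(seed)

    def select(self, client_id, server):
        key = (client_id, server)
        if key not in self.assignment:
            # First request for this (client, server) pair: pick once, keep.
            self.assignment[key] = self.rng.choice(self.idle_tunnels)
        return self.assignment[key]

sel = StickySelector(['10.0.0.1', '10.0.0.2', '10.0.0.3'])
a = sel.select('client_1', 'www.example.com')
b = sel.select('client_1', 'www.example.com')
print(a == b)  # True: consistent tunnel for the same client/server pair
```

Keying the assignment on the (client, server) pair gives the per-web-server consistency described above while still spreading different servers over different tunnels.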
When a content request is received from the client device #131a, an IP address (designating a tunnel device) is selected only from the IP addresses of the table341. Preferably, when a criterion (or multiple criteria) is associated with the client device #131a, the IP addresses in the associated IP address group341all satisfy that criterion (or criteria), thus obviating the need to scan and select from all the available tunnel devices in the TB server71. While the IP group341is exampled as having 16 IP addresses, any number of addresses may be used. Further, a different number of IP addresses may be used in different IP groups associated with different client devices. For example, the IP group that is used for the client device #131amay include 16 IP addresses as shown in the IP group341, while the IP group that is used for another client device, such as the client device #231b, may include 5 or 50 IP addresses. An IP group may include a number of IP addresses that is equal to or higher than 1, 2, 5, 10, 12, 15, 20, 30, 50, 80, 100, 120, 150, 200, 500, 1,000, 2,000, 5,000, or 10,000. Alternatively or in addition, an IP group may include fewer than 5, 10, 12, 15, 20, 30, 50, 80, 100, 120, 150, 200, 500, 1,000, 2,000, 5,000, 10,000, or 20,000 IP addresses. Further, a group may be formed and defined for a client device only for a limited time. For example, an IP group may be defined and used by a client device for at least 1 minute, 2 minutes, 5 minutes, 10 minutes, 20 minutes, 30 minutes, 1 hour, 2 hours, 5 hours, 10 hours, 1 day, 2 days, 4 days, 1 week, 2 weeks, 3 weeks, 1 month, 2 months, or 6 months. Alternatively or in addition, an IP group may be defined and used by a client device for less than 2 minutes, 5 minutes, 10 minutes, 20 minutes, 30 minutes, 1 hour, 2 hours, 5 hours, 10 hours, 1 day, 2 days, 4 days, 1 week, 2 weeks, 3 weeks, 1 month, 2 months, 6 months, or 1 year.
As part of the “Tunnel Selection” step83shown as part of the flow chart80, or as part of the “Select Tunnel” step146shown as part of the flow chart140b, a single IP address (associated with a specific tunnel device) is selected from the IP group341. In one example, the selected IP address is checked among the tunnel devices that are idling, such as those that are marked as idling ‘Y’ in the IDLE column102iin the table330. Alternatively or in addition, the single IP address is randomly selected from the available and idling tunnel devices. Alternatively or in addition, a load balancing scheme may be used, where the tunnel devices are sequentially selected, or where the available tunnel device that was last used earlier than all others is selected to be used. While the tunnel device to be used is exampled to be selected from the single group341that is associated with a single client device #131a, further partitions of the IP group may be used, providing for further manageability and control. In the example shown in the view340, the IP group341is further partitioned into 3 sub-groups, designated as GIP #1342a, GIP #2342b, and GIP #3342c. The sub-group GIP #1342aincludes 6 IP addresses, namely from IP #1341ato IP #6341f, the sub-group GIP #2342bincludes 5 IP addresses, namely from IP #7341gto IP #11341k, and the sub-group GIP #3342cincludes 5 IP addresses, namely from IP #12341lto IP #16341p. The sub-groups may have equal or different number of elements. The number of subgroups (such as GIP #1342a, GIP #2342b, or GIP #3342c) for a single group (such as IP group341) may be equal or more than 1, 2, 3, 4, 5, 8, 10, 12, 15, 20, 20, 30, 50, 80, 100, 120, 150, 200, 500, or 1,000 sub-groups. Alternatively or in addition, the number of subgroups (such as GIP #1342a, GIP #2342b, or GIP #3342c) for a single group (such as IP group341) may be less than 2, 3, 4, 5, 8, 10, 12, 15, 20, 20, 30, 50, 80, 100, 120, 150, 200, 500, 1,000, or 2,000 sub-groups. 
Each sub-group (such as GIP #1342a, GIP #2342b, or GIP #3342c) may include equal or more than 1, 2, 3, 4, 5, 8, 10, 12, 15, 20, 20, 30, 50, 80, 100, 120, 150, 200, 500, 1,000, 2,000, 5,000, or 10,000 IP addresses. Alternatively or in addition, each sub-group may include less than 2, 3, 4, 5, 8, 10, 12, 15, 20, 20, 30, 50, 80, 100, 120, 150, 200, 500, 1,000, 2,000, 5,000, 10,000 or 20,000 IP addresses. In the example shown in the view340, the IP group341is partitioned into mutually exclusive sub-groups, where each of the IP addresses in the IP group341is included in only one of the sub-groups. For example, each of the IP addresses in the range from IP #1341ato IP #6341fis exclusively part of the sub group GIP #1342a, each of the IP addresses in the range from IP #7341gto IP #11341kis exclusively part of the sub group GIP #2342b, and each of the IP addresses in the range from IP #12341lto IP #16341pis exclusively part of the sub group GIP #3342c. Alternatively or in addition, an IP address may be shared by two or more sub-groups, as exampled in a view340ashown inFIG.34a. In the example shown in the view340a, the IP group341is partitioned into 3 sub-groups, designated as GIP #4342d, GIP #5342e, and GIP #6342f. The sub-group GIP #4342dincludes 8 IP addresses, namely from IP #1341ato IP #8341h, the sub-group GIP #5342eincludes 5 IP addresses, namely from IP #7341gto IP #11341k, and the sub-group GIP #6342fincludes 7 IP addresses, namely from IP #10341jto IP #16341p. Some IP addresses are included in only a single sub-group, such as the IP #3341cthat is exclusively part of the sub-group GIP #4342dand the IP #9341ithat is exclusively part of the sub-group GIP #5342e. In addition, one or more IP addresses may be shared by more than one sub-group. For example, the IP #7341gis part of both the sub-groups GIP #4342dand the GIP #5342e, and the IP #10341jis part of both the sub-groups GIP #6342fand the GIP #5342e. 
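The difference between a mutually exclusive partition and an overlapping partition may be illustrated as follows, using small integers as stand-ins for the 16 IP addresses of the group341 (the helper function is an illustrative assumption):

```python
# Sketch contrasting a mutually exclusive partition with an overlapping one,
# using integers 1..16 as stand-ins for IP #1 .. IP #16.

ip_group = set(range(1, 17))  # IP #1 .. IP #16

# Mutually exclusive partition (as in FIG. 34): every address in exactly one sub-group.
exclusive = {'GIP1': set(range(1, 7)),    # IP #1 - IP #6
             'GIP2': set(range(7, 12)),   # IP #7 - IP #11
             'GIP3': set(range(12, 17))}  # IP #12 - IP #16

# Overlapping partition (as in FIG. 34a): some addresses shared between sub-groups.
overlapping = {'GIP4': set(range(1, 9)),    # IP #1 - IP #8
               'GIP5': set(range(7, 12)),   # IP #7 - IP #11
               'GIP6': set(range(10, 17))}  # IP #10 - IP #16

def is_exclusive(partition, universe):
    """True when the sub-groups cover the universe with no shared members."""
    total = sum(len(s) for s in partition.values())
    union = set().union(*partition.values())
    return union == universe and total == len(universe)

print(is_exclusive(exclusive, ip_group))    # True
print(is_exclusive(overlapping, ip_group))  # False: IP #7, #8, #10, #11 shared
```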
Such an overlapping partition may provide better utilization of the tunnel devices, while providing the benefits of using sub-groups. In one example, the partitioning of the IP addresses that are part of an IP group (such as the IP group341) into sub-groups may be random, where the IP addresses are randomly assigned to the various sub-groups. Alternatively or in addition, the partition may be based on any criterion. For example, any of the criteria described with regard to selecting a tunnel, such as the criteria described in the table100shown inFIG.10, may be used, and in such a scheme each sub-group may include all the IP addresses relating to tunnel devices that share a specific feature, attribute, or characteristic. In one example, each sub-group may be associated with a geographical location (relating to the column102cin the table100) and as such includes all the tunnel devices in the same city, country, or continent. For example, a sub-group GIP #1342amay be assigned a city (such as Boston, MA, USA, relating to the tunnel device in the row101c) or a country (such as France, relating to the tunnel device in the row101a), and includes all the IP addresses associated with that city or country. Alternatively or in addition, each sub-group may be associated with an ASN (relating to the column102din the table100) and as such includes all the tunnel devices having the same ASN. Similarly, each sub-group may be associated with a connection type (relating to the column102ein the table100), with an operating system (relating to the column102fin the table100), with a bandwidth (BW) (relating to the column102gin the table100), or with an RTT (relating to the column102hin the table100).
When sub-groups are used, the selection of a tunnel device (as part of the “Tunnel Selection” step83shown as part of the flow chart80, or as part of the “Select Tunnel” step146shown as part of the flow chart140b) to be used for a specific request of the client device (such as the client device #131a) involves two steps: selecting the sub-group, and then selecting a tunnel device (typically identified by its IP address) within the selected sub-group, shown as part of a “Select Tunnel” step146ainFIG.35, which corresponds to the “Tunnel Selection” step83shown as part of the flow chart80, or to the “Select Tunnel” step146shown as part of the flow chart140b. In a “Select Sub-Group” step351the sub-group is first selected, and a specific single tunnel device is selected from the selected sub-group as part of a “Select Tunnel in Sub-Group” step352. The sub-group selection may be random, or may use one or more criteria as shown by a “Selection Criteria” step353. For example, when the IP group341is used, in the “Select Sub-Group” step351the sub-group GIP #2342bmay be selected, followed by selecting the IP #10341jas part of the “Select Tunnel in Sub-Group” step352. The selection of a tunnel device from a sub-group as part of the “Select Tunnel in Sub-Group” step352may use any selection scheme described herein, such as a random selection, or alternatively a sequential selection, preferably based on any load balancing scheme. In one example, in response to a request of the client device31a, or initiated by the TB server71without any specific request, a new tunnel device may be selected. For example, a tunnel device that has never been selected or used with the client device31amay be used. Similarly, a tunnel device that has never been selected and used with the web server22bassociated with the request from the client device31amay be used.
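The two-step selection may be sketched as follows; the sub-group keys, IP addresses, and the round-robin choice within a sub-group are illustrative assumptions:

```python
import itertools

# Sketch of the two-step selection: first pick a sub-group by criterion
# (here, a requested country), then pick a tunnel inside it round-robin,
# a simple sequential load-balancing choice.

sub_groups = {'FR': ['10.1.0.1', '10.1.0.2'],
              'US': ['10.2.0.1', '10.2.0.2', '10.2.0.3']}
cursors = {name: itertools.cycle(ips) for name, ips in sub_groups.items()}

def select_tunnel(country):
    # "Select Sub-Group" step: the criterion here is the requested country.
    if country not in cursors:
        raise KeyError(f'no sub-group for {country}')
    # "Select Tunnel in Sub-Group" step: sequential (round-robin) choice.
    return next(cursors[country])

print(select_tunnel('FR'))  # 10.1.0.1
print(select_tunnel('FR'))  # 10.1.0.2
print(select_tunnel('FR'))  # 10.1.0.1  (cycles back)
```

Because each sub-group already contains only addresses satisfying its criterion, the per-request work reduces to one dictionary lookup plus one round-robin step, rather than a scan over all tunnels.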
Similarly, a tunnel device that has never been selected regarding the attributes or criteria described herein may be newly introduced and used as a response to a request. Alternatively or in addition, a tunnel device may be selected from available tunnel devices that have not been used with the client device31a, with the web server22b, with any other attribute or criteria, or any combination thereof, for more than a defined time period. The time period may be at least 1 second, 2 seconds, 5 seconds, 10 seconds, 20 seconds, 30 seconds, 1 minute, 2 minutes, 5 minutes, 10 minutes, 20 minutes, 30 minutes, 1 hour, 2 hours, 5 hours, 10 hours, 1 day, 2 days, 4 days, 1 week, 2 weeks, 3 weeks, 1 month, 2 months, or 3 months, or may be less than 2 seconds, 5 seconds, 10 seconds, 20 seconds, 30 seconds, 1 minute, 2 minutes, 5 minutes, 10 minutes, 20 minutes, 30 minutes, 1 hour, 2 hours, 5 hours, 10 hours, 1 day, 2 days, 4 days, 1 week, 2 weeks, 3 weeks, 1 month, 2 months, or 6 months. Such a mechanism allows for refreshing the tunnels used with a specific client device, a specific web server, any other attribute or criteria, or any combination thereof. Various criteria may be used for selecting a sub-group as part of the “Selection Criteria” step353. In the case where each sub-group is associated with a specific feature, attribute, or characteristic of the tunnel devices, and includes IP addresses that comply with a specific criterion or with the specific feature, attribute, or characteristic, the request from the client device #131amay include the identification of the specific feature, attribute, or characteristic, and the sub-group is selected to comply with the identification included in the request. For example, in a case where the sub-groups are associated with different countries (or different ASNs), the sub-group that is relevant to the request criterion of a specific country (or ASN) is selected.
In an example where the partitioning of the IP group341is country based, the GIP #1342amay include IP addresses relating to tunnel devices assumed to be geographically located in France, the GIP #2342bmay include IP addresses relating to tunnel devices assumed to be geographically located in the USA, and the GIP #3342cmay include IP addresses relating to tunnel devices assumed to be geographically located in China. If the client device request includes the criterion of tunnel devices in France, then the sub-group GIP #1342ais selected, while when the client device request includes the criterion of tunnel devices in China, the sub-group GIP #3342cis selected. Similarly, a sequential selecting mechanism may be used, where the sub-groups are selected sequentially and in a cycle. For example, assuming only three sub-groups as described in the view340, first the sub-group GIP #1342ais selected, followed by selecting the sub-group GIP #2342b, then followed by selecting the sub-group GIP #3342c, and then the sequence is cyclically repeated by selecting the sub-group GIP #1342a, and so forth. Such a mechanism provides load balancing and a substantially equal probability for each sub-group (or each tunnel device) to be selected. In one example, a sub-group may be selected as part of the “Select Sub-Group” step351based on timing information, such as based on the time the client device #131amakes the request for content, for example as part of the “Content Request” step82or the “Send Request to SP” step161. Similarly, the timing associated with any action or any other step of any flow chart herein may equally be used. The time periods involved may be a month, a week, a day of the week, an hour of a day, or a minute in an hour. In the example of a day of the week, the partitioning may involve 7 sub-groups, each associated with a day of the week. If the request is received on Monday, then the sub-group associated with Monday will be selected.
Similarly, when hours of the day are used, each sub-group may be associated with one or more hours. In one example, there may be 24 sub-groups, each associated with a specific hour of the day. In such a case, when a request is received at 13:25, the sub-group that is associated with the hour of 13:00-14:00 is selected. Similarly, fewer sub-groups may be defined, each associated with a few hours. In the example shown in the view 340 in FIG. 34, the sub-group GIP #1 342a may be assigned to a 'day time', ranging from 07:00 to 18:00 (and selected when a request is received in this time period), the sub-group GIP #2 342b may be assigned to an 'evening time', ranging from 18:00 to 22:00 (and selected when a request is received in this time period), and the sub-group GIP #3 342c may be assigned to a 'night time', ranging from 22:00 to 07:00 (and selected when a request is received in this time period). Alternatively or in addition, the criterion used as the "Selection Criteria" 353 for selecting a sub-group relates to the content that is requested by the client device #1 31a, for example as part of the "Content Request" step 82 or the "Send Request to SP" step 161. In one example, each sub-group is associated with a specific content type, such as including video data, audio data, or a web-page without any multimedia. In the example shown in the view 340 in FIG. 34, the sub-group GIP #1 342a may be assigned to video data content, the sub-group GIP #2 342b may be assigned to audio data content, and the sub-group GIP #3 342c may be assigned to non-multimedia content, such as simple web-pages that only contain images and text.
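The 'day time' / 'evening time' / 'night time' partitioning above maps directly to a small selection function. A minimal sketch, assuming the exact hour boundaries given in the text:

```python
def sub_group_for_hour(hour):
    """Map the hour of a request (0-23) to a sub-group name,
    per the day/evening/night partitioning of the view 340."""
    if 7 <= hour < 18:
        return "GIP#1"  # 'day time', 07:00-18:00
    if 18 <= hour < 22:
        return "GIP#2"  # 'evening time', 18:00-22:00
    return "GIP#3"      # 'night time', 22:00-07:00
```

A request received at 13:25 thus selects GIP #1, matching the example in the text.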
In the case where the requested content is believed to include video data, the relevant sub-group GIP #1 342a is selected as part of the "Select Sub-Group" step 351; in the case where the requested content is believed to include audio data, the relevant sub-group GIP #2 342b is selected as part of the "Select Sub-Group" step 351; while in all other cases the sub-group GIP #3 342c is selected as part of the "Select Sub-Group" step 351. Alternatively or in addition, the criterion used as the "Selection Criteria" 353 for selecting a sub-group relates to the server from which the content is requested by the client device #1 31a, for example as part of the "Content Request" step 82 or the "Send Request to SP" step 161. Such a server may be identified by an IP address, a domain name, a web-site name, or a URL. Such an example is described in a view 340b shown in FIG. 34b. The GIP #1 342a is selected when the content requested is associated with the web-site having a domain name www.xxx.com 342g, and is selected upon accessing content from this domain; the GIP #2 342b is selected when the content requested is associated with the web-site having a domain name www.yyy.com 342h, and is selected upon accessing content from this domain; and the GIP #3 342c is selected when the content requested is associated with the web-site having a domain name www.zzz.com 342i, and is selected upon accessing content from this domain. Similarly, other identifications of the server that stores the content may be used. For example, the GIP #1 342a is selected when the content is requested from the data server #1 22a identified by a first IP address, the GIP #2 342b is selected when the content is requested from the data server #2 22b, and the GIP #3 342c is selected when the content is requested from a third data server using its IP address.
While the IP group 341 is exampled in the view 340 shown in FIG. 34 as exclusively used by a single client device (such as the client device #1 31a), an IP group (such as the IP group 341) may equally be shared by two or more client devices, offering better utilization of the available tunnel devices. In such a case, the partitioning into sub-groups may be identical for two or more client devices, or may be different. An example of different partitioning is described in a view 340c shown in FIG. 34c, illustrating two client devices sharing the IP group 341. The first client device, designated as 'Customer #1' and which may correspond to the client device #1 31a described above, uses the partitioning into 3 sub-groups: the GIP #1 342a that is associated with the web-site having a domain name www.xxx.com 342g, the GIP #2 342b that is associated with the web-site having a domain name www.yyy.com 342h, and the GIP #3 342c that is associated with the web-site having a domain name www.zzz.com 342i. However, a second client device (such as the client device #2 31b) uses a different partitioning into 3 sub-groups, where one GIP (including the IP #1 341a to the IP #5 341e) is associated with the web-site having a domain name www.zzz.com 342j, another GIP (including the IP #6 341f to the IP #9 341i) is associated with the web-site having a domain name www.mmm.com 342k, and another GIP (including the IP #10 341j to the IP #16 341p) is associated with the web-site having a domain name www.ppp.com 342l. Preferably, there is no overlapping of IP addresses associated with the same domain name between the two client devices. While the view 340c describes the example of sub-groups that are based on domain names, any other partition may equally be applied. In one example, an IP group, such as the IP group 341 shown as part of the view 340 in FIG. 34, is defined once, and is static and unchanged during the system operation.
In such a case, the number of, and the identity of, the IP addresses that are included in the group are fixed and unchanged over time. Alternatively or in addition, an IP group may be dynamically changed over time, by adding, or by deleting, IP addresses from the group. Similarly, the partitioning into sub-groups of an IP group may be defined once, and be static and unchanged during the system operation. In such a case, the number of, and the identity of, the IP addresses that are included in each of the sub-groups are fixed and unchanged over time. Alternatively or in addition, the sub-groups may be dynamically changed over time, by adding, or by deleting, IP addresses from the sub-group. An allocation of IP addresses to an IP group may be performed as part of a "Group Allocation" step 354 illustrated in a flow chart 350a shown in FIG. 35a. When a request for content is received from a client device that is associated with an IP group, the tunnel to be used is selected from the IP group as part of a "Select Tunnel From Group" step 146a, which corresponds to the "Tunnel Selection" step 83 shown as part of the flow chart 80, or to the "Select Tunnel" step 146 shown as part of the flow chart 140b. An example of dynamically forming an IP group, such as the IP group 341, is illustrated in a flow chart 360 shown in FIG. 36. For a start, a single IP address, such as the IP #1 341a, is assigned to the IP group as part of an "Assign IP #1 to GIP" step 361. A request for content is then received, as part of a "Request Received" step 366, from the client device (such as the client device #1 31a) that is associated with the IP group, for example as part of the "Content Request" step 82 or the "Send Request to SP" step 161. The idling status of this single tunnel device (identified by the IP #1 341a) is checked as part of a "Check Tunnel Status" step 362, which may correspond to the idling status described relating to the "IDLE?" step 321 in the flow chart 320.
In the case where, as part of an "IDLE?" step 363, the tunnel device is idling and is available to serve as a tunnel device, this single tunnel device (identified by the IP #1 341a) is selected as part of a "Use IP #1" step 364. However, in the case where this single tunnel device is not available (such as not idling), another tunnel device is suggested, such as the one associated with the IP #2 341b. In case a criterion is defined for the content request, the suggested tunnel device is selected so that it satisfies the criterion. As part of a "Check Next Tunnel Status" step 362a, the idling status of this suggested tunnel device (identified for example by the IP #2 341b) is checked. If it is decided as part of an "IDLE?" step 363a that the suggested tunnel device is available for operation as a tunnel device (such as being in an idling state), the suggested tunnel device is added to the IP group 341 as part of an "Add Next Tunnel to GIP" step 365, followed by using the tunnel for retrieving the required content as part of a "Use Added Tunnel" step 364a. At this point, the IP group 341 includes two tunnel devices, namely the original IP #1 341a and the newly added IP #2 341b. However, in the case where it is determined in the "IDLE?" step 363a that the suggested tunnel device is not available as a tunnel device, yet another tunnel device is suggested, and if available, that tunnel device is used and added to the IP group 341. In a steady state, each time a request is received as part of the "Request Received" step 366, the process is repeated, and the availability of a tunnel device that is already part of the IP group is checked as part of a "Check Tunnels Status" step 362b, which may correspond to the idling status described relating to the "IDLE?" step 321 in the flow chart 320. In the case where, as part of a "One IDLE?" step 363b, one of the tunnel devices already in the group is idling and is available to serve as a tunnel device, this tunnel device is selected as part of a "Use Idle One" step 364a.
However, if no tunnel device in the group is found to be available, another tunnel device is suggested, and its availability is checked as part of the "Check Next Tunnel Status" step 362a. If it is decided as part of an "IDLE?" step 363a that the newly suggested tunnel device is available for operation as a tunnel device (such as being in an idling state), the suggested tunnel device is added to the IP group 341 as part of an "Add Next Tunnel to GIP" step 365, followed by using the tunnel for retrieving the required content as part of a "Use Added Tunnel" step 364a. However, in the case where it is determined in the "IDLE?" step 363a that the suggested tunnel device is not available as a tunnel device, another tunnel device is suggested, and if available, that tunnel device is used, added to the IP group 341, and used for the content retrieving. Over time, after multiple iterations, the IP group 341 will include a suitable number of IP addresses of tunnel devices, where at least one is generally expected to be available when required. The number of tunnel devices that may be handled by the system may be very high, and may reach hundreds of thousands. In order to better manage and control such a large number of entities, it may be preferable to aggregate a few tunnel devices into a group or collection, and to handle the group as a single unit, offering better manageability. In one example, a number of IP addresses, which identify corresponding tunnel devices in the system, are collectively identified by a single label. The label may be any character set, any alphanumeric string, any number, or any other identification. Any two labels may identify the same number of IP addresses, a similar number of IP addresses, or different numbers of IP addresses. In one example, a label may identify a collection of at least 1, 2, 3, 5, 10, 12, 15, 20, 50, 80, 100, 120, 150, 200, 300, 500, or 1,000 IP addresses.
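The dynamic group formation of the flow chart 360 can be sketched as one request-handling function: first try an idle tunnel already in the group, and only if none is available probe candidate tunnels and add the first idle one. This is an illustrative sketch; `is_idle` is an assumed callable standing in for the "Check Tunnel Status" steps, and the data shapes are not taken from the attached code.

```python
def serve_request(group, candidates, is_idle):
    """Select a tunnel for one content request, growing `group` on demand.

    group: list of IPs already in the IP group (mutated in place).
    candidates: iterable of all potential tunnel IPs.
    is_idle: callable(ip) -> bool, the availability check.
    """
    for ip in group:                  # "Check Tunnels Status" / "One IDLE?"
        if is_idle(ip):
            return ip                 # "Use Idle One"
    for ip in candidates:             # "Check Next Tunnel Status" / "IDLE?"
        if ip not in group and is_idle(ip):
            group.append(ip)          # "Add Next Tunnel to GIP"
            return ip                 # "Use Added Tunnel"
    return None                       # no tunnel currently available
```

Repeated calls grow the group only when every current member is busy, so over many iterations the group converges to a size where at least one member is generally available.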
Alternatively or in addition, a label may identify a collection of less than 2, 3, 5, 10, 12, 15, 20, 50, 80, 100, 120, 150, 200, 300, 500, 1,000, or 2,000 IP addresses. Further, the format of the label may be similar or identical to an IP address, referred to herein as a Virtual IP (VIP). Preferably, each label identifies multiple IP addresses that are associated with the same attribute, feature, or characteristic, such as the same geographical location (relating to the column 102c in the table 100), the same ASN (relating to the column 102d in the table 100), the same connection type (relating to the column 102e in the table 100), the same operating system (relating to the column 102f in the table 100), the same bandwidth (BW) (relating to the column 102g in the table 100), or the same RTT (relating to the column 102h in the table 100). An example of such a labelling scheme is illustrated in a view 370 shown in FIG. 37, exampling the IP addresses collection 341 of 16 IP addresses. A first label VIP #1 371a identifies IP #1 341a, IP #3 341c, IP #8 341h, and IP #11 341k; a second label VIP #2 371b identifies IP #2 341b, IP #6 341f, and IP #14 341n; a third label VIP #3 371c identifies IP #4 341d, IP #7 341g, IP #12 341l, and IP #15 341o; and a fourth label VIP #4 371d identifies IP #5 341e, IP #9 341i, IP #10 341j, IP #13 341m, and IP #16 341p. The labeling may make use of a mapping table that associates a label with its members. In some cases, such a table may be too big to handle, and may consume substantial computing resources. Alternatively or in addition, a function may be defined, which maps each of the IP addresses to a single label, such as a single VIP. Such a function operation is illustrated in a view 370a shown in FIG. 37a.
A defined function maps the IP #1 341a via a function operation 373a to the VIP #1 371a; the IP #3 341c is mapped via the same function operation 373b to the VIP #1 371a; the IP #8 341h is mapped via the same function operation 373c to the VIP #1 371a; and similarly the IP #11 341k is mapped via the same function operation 373d to the VIP #1 371a. While the IP group 341 and the sub-groups (such as GIP #1 342a or GIP #2 342b) were described in the views 340-340c (in FIGS. 34-34c) as containing IP addresses, such as IP #1 341a and IP #2 341b, which represent available tunnel devices, a group (or a sub-group) may include labels as a substitute for, or in addition to, specific IP addresses. An example of such a VIP group 374 is illustrated in a view 370b shown in FIG. 37b. The VIP group is exampled to include 14 VIP labels, ranging from a VIP #1 371a to a VIP #14 371n, and includes the labels VIP #2 371b, VIP #3 371c, VIP #4 371d, VIP #5 371e, VIP #6 371f, VIP #7 371g, VIP #8 371h, VIP #9 371i, VIP #10 371j, VIP #11 371k, VIP #12 371l, VIP #13 371m, and VIP #14 371n. All of, or part of, the VIP labels in the VIP group 374 may be associated with one or more of the attributes shown in the table 330 in FIG. 33. Similarly, sub-groups may be defined to include a collection of VIP labels, such as a GVIP #1 372a that is shown to include the VIP #1 371a, VIP #2 371b, VIP #3 371c, VIP #4 371d, VIP #5 371e, and the label VIP #6 371f. Similarly, a second sub-group GVIP #2 372b may be defined and may include the labels VIP #7 371g, VIP #8 371h, and the VIP #9 371i, and a third sub-group GVIP #3 372c may be defined and may include the labels VIP #10 371j, VIP #11 371k, VIP #12 371l, VIP #13 371m, and the VIP #14 371n. A VIP group (such as the VIP group 374), or a sub-group such as the GVIP #1 372a, may include a number that is equal to or higher than 1, 2, 5, 10, 12, 15, 20, 30, 50, 80, 100, 120, 150, 200, 500, 1,000, 2,000, 5,000, or 10,000 VIP labels.
Alternatively or in addition, a VIP group (such as the VIP group 374), or a sub-group such as the GVIP #1 372a, may include less than 5, 10, 12, 15, 20, 30, 50, 80, 100, 120, 150, 200, 500, 1,000, 2,000, 5,000, 10,000, or 20,000 VIP labels. In one example, the mapping function (such as the function operating as 373a, 373b, 373c, and 373d) may be a hash function, where the labels are the resulting hash values. The hash function may include a checksum, a check digit, a fingerprint, lossy compression, a randomization function, an error-correcting code, or a cipher. In one example, the hash function (such as the function operating as 373a, 373b, 373c, and 373d) may be based on, or comprise, a Secure Hash Algorithm (SHA). In another example, the mapping function uses, includes, or is based on, a modulo function or operation, which assigns a remainder after division of one number by a number N (sometimes called the modulus), for example according to IEEE standard 754-1985. In such a configuration, N is the number of required labels for the group, a number is assigned to each tunnel device, and the associated label is the assigned number modulo N. N may be a number that is equal to or higher than 1, 2, 5, 10, 12, 15, 20, 30, 50, 80, 100, 120, 150, 200, 500, 1,000, 2,000, 5,000, or 10,000 labels. Alternatively or in addition, N may be less than 5, 10, 12, 15, 20, 30, 50, 80, 100, 120, 150, 200, 500, 1,000, 2,000, 5,000, 10,000, or 20,000 labels. The number assigned to each of the tunnel devices may correspond to, may be based on, or may include, any identifier of the specific tunnel device, such as the associated tunnel device IP address, a random number, or a sequential number according to the order of registering in the system or the order upon which the tunnel device was first listed as part of the database 73 of the TB server 71, or based on the order of establishing a connection with the TB server 71, such as part of the "Establish Connection" step 173.
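Combining the two mapping options above, one possible sketch derives a number from the tunnel device's IP address via an SHA-based hash and then takes it modulo N to pick one of N labels. The `VIP#` label format and the use of the first 8 digest bytes are illustrative assumptions, not the attached code.

```python
import hashlib

def vip_label(ip, n_labels):
    """Map a tunnel device IP address to one of n_labels VIP labels,
    using an SHA-256 hash followed by a modulo-N operation."""
    digest = hashlib.sha256(ip.encode()).digest()
    number = int.from_bytes(digest[:8], "big")  # number assigned to the device
    return f"VIP#{number % n_labels}"           # the remainder selects the label
```

The mapping is deterministic (the same IP always yields the same label) and needs no per-IP mapping table, which addresses the resource concern raised in the text.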
Alternatively or in addition, the assigned number may be based on any other attribute of the tunnel devices as shown in the table 100, such as according to a feature, attribute, or characteristic, using their associated numerical value (e.g., IP address value), or according to their alphanumeric identifier (e.g., host name or location name in ASCII value). Any selecting of an element (or multiple elements) from a collection or a group of elements herein, such as the selecting of a tunnel device (for example, by selecting its associated IP address) as part of the "Tunnel Selection" step 83 shown as part of the flow chart 80 or the "Select Tunnel" step 146 shown as part of the flow chart 140b, as well as part of a "Select Tunnel From Group" step 146a, may be based on random, quasi-random, or deterministic selection. Similarly, the selection of a sub-group or a label (such as a VIP label) may be based on random, quasi-random, or deterministic selection. Using random selection allows for load balancing, preferably by equally distributing the workload across the elements, which may optimize resource use, maximize throughput, minimize response time, and avoid overload of any single resource. The randomness may be based on using a random signal generator. The random signal generator may be based on a digital random signal generator having a digital output. Alternatively, the random signal generator may be based on an analog random signal generator having an analog output. An analog random signal generator may use a digital random signal generator whose output is converted to analog using a digital-to-analog converter, or can use a repetitive analog signal generator (substantially not synchronized to any other timing in the system) whose output is randomly time sampled by a sample and hold.
A random signal generator (having either an analog or a digital output) can be hardware based, using a physical process such as thermal noise, shot noise, nuclear decaying radiation, the photoelectric effect, or other quantum phenomena, or can be software based, using a processor executing an algorithm for generating pseudo-random numbers which approximates the properties of random numbers. Alternatively or in addition, the selection may be deterministic based. In one example, the elements to select from are listed in an orderly fashion, such as according to a feature, attribute, or characteristic, using their associated numerical value (e.g., IP address value), according to their alphanumeric identifier (e.g., host name or location name in ASCII value), according to the order in which they joined the collection or group, or according to the order in which they were formerly selected from the group or collection. In such a case, the elements are sequentially selected according to the list order. In one example, a LIFO (last in first out) like scheme may be used, where the lastly selected entity is re-selected, and upon its unavailability, the entity that was selected before the last is selected. Alternatively or in addition, a FIFO (first in first out) like scheme is used, where the oldest formerly selected entity is selected. In order to better control or manage the large number of potential tunnel devices, tunnel groups may be defined. A tunnel group may include more than 10, 20, 50, 100, 200, 500, 1,000, 2,000, 5,000, or 10,000 different tunnel devices (or different IP addresses) that may be used for tunneling as described herein. Alternatively or in addition, a group may include less than 20, 50, 100, 200, 500, 1,000, 2,000, 5,000, 10,000, or 20,000 different tunnel devices or different IP addresses. A tunnel group may be identified by a tunnel group identifier, so that the tunnel list memory 73 associates a tunnel group identifier to all the members in that group.
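The random, ordered, LIFO-like, and FIFO-like policies described above can be sketched in one selection function. This is an illustrative sketch: the policy names and the `history` list (previously selected elements, oldest first) are assumptions introduced here.

```python
import random

def select(elements, policy, history=None):
    """Select one element from `elements` according to the named policy.

    history: prior selections, oldest first (used by 'lifo' and 'fifo').
    """
    if policy == "random":      # uniform choice, for load balancing
        return random.choice(elements)
    if policy == "ordered":     # deterministic: first by sort order
        return sorted(elements)[0]
    if policy == "lifo":        # re-select the most recently selected entity
        return history[-1]
    if policy == "fifo":        # select the oldest formerly selected entity
        return history[0]
    raise ValueError(f"unknown policy: {policy}")
```

In practice the LIFO-like scheme would fall back to `history[-2]` when the last entity is unavailable, as the text notes; the sketch omits the availability check for brevity.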
In one example, the database storing the tunnel device identifiers, such as the table 330, further includes a column associating each potential tunnel device with its tunnel group, such as by the tunnel group identifier. In one example, the tunnel groups are non-overlapping, where a tunnel device may be included only in a single tunnel group. Alternatively or in addition, a tunnel device may be included in multiple groups. Preferably, the tunnel devices in a tunnel group share the same value (or values or a value range). For example, a tunnel group may include only potential tunnel devices that are associated with the same ASN, as defined in the ASN column 102d in the table 330. In another example, a tunnel group may include only potential tunnel devices that are associated with the same geographical location, such as being in the same country or city, as defined in the Geographical Location column 102c in the table 330. As described herein, as part of the "Send Request to SP" step 161, the client device 31a may influence the selected tunnel by defining an attribute type and an attribute value (or values or a value range). Further, as part of the "Send Tunnel IP to SP" step 161a, the client device 31a may select a specific tunnel device that will be used for the specific content fetching. In a case where tunnel groups are defined, the client device 31a may, as part of the "Send Request to SP" step 161, select a tunnel group by using its tunnel group identifier. In such a case, the SP server 72 forwards the received tunnel group identifier to the TB server 71, which in turn selects, as part of the "Select Tunnel" step 146 (such as by random selection or by any orderly selection), a tunnel device from the defined group.
For example, the TB server 71 may identify the available tunnel devices from the group, such as by identifying those devices (or IP addresses) having a respective 'Y' value in the "IDLE" column 102i, and select from these available-to-be-used tunnel devices (such as by random selection or by any orderly selection). In such a way, the client device 31a may repeatedly select from the same tunnel group, allowing for better control and management of the selected tunnel devices. Such a mechanism may further be used to emulate, to the web server 22b, a consistency of content fetched from a group of tunnel devices that share an attribute value (such as an ASN or a geographical location). An example of using tunnel groups is attached in the vipdb.js, customer.js, and vipdb_make.js files. Further, IP allocation of data center IPs is described in the attached code ip_alloc.js, IP allocation for a customer of data center IPs is described in the attached code vip_alloc.js, and IP/VIP allocation for customers is described in the attached code using ip_alloc.js and vip_alloc.js. In one example, the database that includes the IP list 341, the sub-group lists (such as VIP #1 371a and the VIP #2 371b), the labels list 374, the label groups (such as the GVIP #1 372a and the GVIP #2 372b), or any combination thereof, that is associated with one of, multiple of, or each of, the client devices (such as the client device #1 31a), is stored in the TB server 71, for example as part of the database 73. In such a case, the selecting of a tunnel device from the group, the selecting of a sub-group, or the selecting of a tunnel device from the sub-group, is also performed by the TB server 71, as part of the "Select Tunnel" step 146 of the flow chart 140b that is performed by the TB server 71, and may include part of, or all of, the "Select Tunnel" step 146a shown in FIG. 35, or the flow chart 350a shown in FIG. 35a.
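The group-then-IDLE selection just described can be sketched as a filter over a tunnels table followed by a random pick. The row layout below is an illustrative stand-in for the tunnels database 73 (with the group identifier column and the "IDLE" column 102i), not the attached code.

```python
import random

def select_from_group(table, group_id):
    """Select an available tunnel from the named tunnel group:
    filter rows of the group whose IDLE flag is 'Y', then pick one
    at random (any orderly selection could be substituted)."""
    idle = [row["ip"] for row in table
            if row["group"] == group_id and row["idle"] == "Y"]
    return random.choice(idle) if idle else None
```

Because the filter keeps only members of one group, repeated selections by a client stay within tunnels sharing the group's attribute value (e.g., the same ASN), which supports the consistency emulation noted above.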
Alternatively or in addition, the selecting of a tunnel device from the group, the selecting of a sub-group, or the selecting of a tunnel device from the sub-group, as well as the storing of the database that includes the IP list 341, the sub-group lists (such as VIP #1 371a and the VIP #2 371b), the labels list 374, or the label groups (such as the GVIP #1 372a and the GVIP #2 372b), or any combination thereof, that is associated with one of, multiple of, or each of, the client devices (such as the client device #1 31a), is performed by the SP server 72. The examples above illustrated a TB server 71 that is involved in the tunnel registration and connection, such as part of the "Registration and Connection" step 81, as well as in the tunnel selection, such as part of the "Tunnel Selection" step 83. For example, the flow chart 140 shown in FIG. 14 describes the Connection handler flow chart 140a, dealing with the registration and tracking, such as by updating a tunnels table, and the selecting of a tunnel for serving a client request as part of the Request Handler flow chart 140b. Alternatively or in addition, the selecting of a tunnel to serve a content request from the client device 31a may be handled, in whole or in part, by the SP server 72. In such a scheme, the full list, or part thereof, of the available tunnels that may be used is made available to the SP server by the TB server 71. The tunnel selecting, such as part of the "Select Tunnel" step 146 shown in FIG. 14, as part of the "Select Tunnel" step 146 shown as part of the Selection Handler 201 in FIG. 20, or as part of the "Select Tunnel" flow chart 146a shown in FIG. 35, is performed by the SP server 72, as a substitute for, or in addition to, the TB server 71. In one example, the full list, or a part thereof, of the available tunnels is periodically sent to update the SP server 72, shown as a data path 382 in a messaging chart 380 shown in FIG. 38.
Such updating may take place at least every 1 second, 2 seconds, 5 seconds, 10 seconds, 20 seconds, 30 seconds, 1 minute, 2 minutes, 5 minutes, 10 minutes, 20 minutes, 30 minutes, 1 hour, 2 hours, 5 hours, 10 hours, 1 day, 2 days, 4 days, 1 week, 2 weeks, 3 weeks, 1 month, 2 months, or 3 months, or less than every 2 seconds, 5 seconds, 10 seconds, 20 seconds, 30 seconds, 1 minute, 2 minutes, 5 minutes, 10 minutes, 20 minutes, 30 minutes, 1 hour, 2 hours, 5 hours, 10 hours, 1 day, 2 days, 4 days, 1 week, 2 weeks, 3 weeks, 1 month, 2 months, or 6 months. Alternatively or in addition, the list update may be provided in response to a request from the SP server 72, shown as a request path 381 in the messaging chart 380. For example, such a request may be initiated by the SP server 72 upon receiving one or more requests (such as above a pre-set threshold) from the client devices, or upon using many tunnel devices (such as above a pre-set threshold). In a case where the tunnels list stored by the TB server 71 is composed of groups, as described above, the TB server 71 may only send to the SP server 72 selected groups of the list. For example, only frequently used groups may be updated or sent to the SP server 72. Further, the SP server 72 may request, such as by using the request data path 381, one or more groups according to specific criteria or attributes, similar or identical to the criteria or attributes described above regarding the selection by the client device 31a. A flow chart 140'a shown in FIG. 39, which corresponds to the flow chart 140a shown in FIG. 14, describes the operation of the TB server 71 in a scenario where the tunnel selection is performed by the SP server 72. A part of, or the whole of, the tunnels table is sent to the SP server 72 as part of a "Send Table to SP" step 391, which may correspond to the data path 382. Such updating may be performed periodically, or as a response to a request from the SP server 72 as part of a "Table Request from SP" step 392 (which may correspond to the request path 381).
Further, the available-tunnels updating of the SP server 72 as part of the "Send Table to SP" step 391 may be initiated by the TB server 71 itself, such as based on the number of changes in the table, for example after exceeding a pre-set threshold number of changes in tunnel device status, such as the number of tunnel devices added to the table, the number of tunnel devices removed from the table, or any combination thereof. A flow chart 390a shown in FIG. 39a, which corresponds to the flow chart 210 shown in FIG. 21, describes the operation of the SP server 72 in a scenario where the tunnel selection is performed by the SP server 72. The full list of available tunnel devices, or a part thereof, is received as part of a "Receive Group from TB Server" step 394, which may correspond to the data path 382 shown in the messaging chart 380. After receiving a request from the client device 31a (that may include criteria or attributes for the tunnel selecting) as part of the "Receive Request from Client" step 151, the SP server 72 selects a tunnel for fetching the requested content as part of the "Select Tunnel" step 146'. The receipt of the list of available tunnels may be initiated by the SP server 72 as part of a "Send Request to TB Server" step 393, which may correspond to the data path 381 shown in the messaging chart 380. While the TB server 71 is exampled above to perform the opening of a connection with the selected tunnel, such connection opening and establishing may be performed (as an alternative to, or in addition to, the TB server 71) by the SP server 72 itself, as shown in the flow chart 390b shown in FIG. 39b. The exemplary arrangement 130 shown in FIG. 13 above, as well as other examples herein, involves selecting a single tunnel device, such as the tunnel device #4 33d, for fetching the required content from the web server 22b to the requesting client 31a. Alternatively or in addition, multiple tunnel devices may be selected for fetching the same content from the same web server 22b.
The selecting of redundant multiple tunnel devices may be used for increasing the fetching resiliency and reliability, since in a case where one of the selected tunnel devices is unable to fetch the required content, the requested content may still be fetched by another selected tunnel device. For example, a selected tunnel device may become unavailable by transferring, such as by detecting non-idling activity 315d, to the ACTIVE state 312 from the IDLE state 313, as described in the state diagram 310 shown in FIG. 31. Alternatively or in addition, the selected tunnel device may be switched off by a user, or may become faulty. Similarly, the connection links or the message transfers involved in the fetching of the content, such as each of the message path 131b to the selected tunnel device #4 33d, the content request 131c to the web server 22b, the web server reply 131d, or the content transfer 131e to the TB server 71, may become faulty or otherwise unavailable, rendering the selected tunnel device #4 33d unavailable for such content fetching. Further, the web server 22b may block the tunnel device #4 33d from accessing any content in general, or the requested content in particular, thus again rendering the selected tunnel device #4 33d unavailable for the required content fetching. Further, selecting multiple tunnel devices and using them in parallel may accelerate the fetching operation by using the first content that is fetched, and discarding the others that may be received later. Such a mechanism allows for using the quickest tunnel, and thus improves the total responsiveness to the content request. When using multiple tunnel devices, the "Tunnel Selection" step 83 shown as part of the flow chart 87 in FIG. 8 includes selecting multiple tunnel devices for the same "Content Request" step 82, and each of the selected tunnel devices is used as part of the "Using Tunnel" step 84 shown as part of the flow chart 87 in FIG. 8.
The number of tunnel devices that may be selected for a specific single content request may be equal to, or more than, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 15, 20, 30, 35, 40, 45, 50, 60, 70, or 100. Alternatively or in addition, the number of tunnel devices that may be selected for a specific single content request may be less than 3, 4, 5, 6, 7, 8, 9, 10, 12, 15, 20, 30, 35, 40, 45, 50, 60, 70, 100, or 150. Preferably, the selected tunnels may be selected based on the same criteria, be part of the same group, or be associated with the same label. Further, the selection or the using of the multiple tunnel devices may be partly or fully in parallel. Alternatively or in addition, the selection or the using of the multiple tunnel devices may be sequential. Further, the selection or the using of the multiple tunnel devices may be a combination of parallel and sequential steps. For example, the selection mechanism may be sequential, where a first tunnel device is selected, and only afterwards a second one is selected, followed by a third one to be selected. Alternatively, the multiple tunnels are selected together. Further, the process of selecting a second tunnel may be initiated only after the first tunnel was selected and the content request was sent thereto, to be followed by selecting a third tunnel only after the second tunnel was selected and the content request was sent thereto. Each of the selected tunnel devices may execute part of, or whole of, the tunnel device related functionalities, steps, or methods, such as the flow chart170shown inFIG.17, the flow chart220shown inFIG.22, the flow chart240bshown inFIG.24b, the flow chart240cshown inFIG.24c, the flow chart300shown inFIG.30, the flow charts300-320ashown inFIGS.30-32a, or any combination thereof. The execution of such related functionalities, steps, or methods, may be executed in parallel, or sequentially, with the other selected tunnels.
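Selecting a bounded group of tunnels that share the same label may be sketched as follows. The field name `label` and the count bounds are assumptions made for illustration:

```python
def select_tunnel_group(tunnel_table, label, min_count=2, max_count=10):
    """Select between min_count and max_count tunnels that share the same
    label (group), mirroring the 'same criteria / same group / same label'
    preference for redundant selection.

    Returns a list of matching tunnel entries, or [] when fewer than
    min_count matching tunnels exist."""
    matching = [t for t in tunnel_table if t.get("label") == label]
    if len(matching) < min_count:
        return []
    return matching[:max_count]
```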
An example of using two tunnel devices for fetching the same content from the same web server22bis shown in a messaging chart400shown inFIG.40, which corresponds to the messaging chart130shown inFIG.13. In addition to fetching the content by using the selected tunnel device #433d, the TB server71further selects the tunnel device #133a, as an example, as another tunnel device to be used, in parallel to, or after, the selecting and accessing of the tunnel device #433dfor the content. Upon completing the selection of the tunnel #133a, the TB server71forwards the requested content identification to the selected tunnel #133a, shown as a message path411ain the messaging chart400shown inFIG.40. Such communication uses the established connection (such as the TCP connection) that was established during the “Registration and Connection” phase81. The message sent over the message path411amay use a proprietary protocol, agreed upon between the two communicating nodes. Preferably, the HTTP, HTTPS, Socket Secure (SOCKS), WebSocket (ws), which may be WebSocket Secure (wss), or HTTP Proxy protocol may be used, where the TB server71executes a server side protocol, and the tunnel #133aexecutes a client side protocol. Alternatively or in addition, the TB server71may execute a client side protocol, and the tunnel #133amay execute a server side protocol. In response to the request message411a, the selected tunnel #133asends a request for the identified content to the appropriate server that stores the required content, exampled to be the web server22b, shown as a message path411bin the messaging chart400inFIG.40. Thus, the “Using Tunnel” phase84is completed where the request arrives at the content source, namely the web server22b. The message sent over the message path411bmay use a proprietary protocol, agreed upon between the two communicating nodes.
Preferably, the HTTP or HTTPS protocol may be used, where the web server22bexecutes a server side protocol, and the tunnel #133amay execute a client side protocol. Further, any tunneling protocol or mechanism may be used where the selected tunnel, which is the tunnel #133ain the example herein, serves as a tunnel between the TB server71and the web server22b. The requested content is then fetched from the web server22bto the requesting client31a, as part of the “Content Fetching” phase85, along the ‘opposite’ route of the request flow. As shown in the messaging chart400shown inFIG.40, the content is first sent from the web server22bto the selected tunnel #133aalong a message path411c, which in turn sends it to the TB server71along a message path411d, which in turn sends it to the SP server72along a message path131f, arriving at the requesting client31aalong a message path131g, completing the second request/response cycle from the client device31apoint of view. The protocol or protocols, as well as the message formats, or any other attribute or functionality involved with the using of the tunnel device #133a, may be identical, similar, or different, from the corresponding protocols or message formats used as part of employing the tunnel device #433d. For example, the content request path411arelating to employing the tunnel device #133amay be identical, similar, or different, from the corresponding request path131brelating to the employing of the tunnel device #433d. In one example, the content request path411amay use a Socket Secure (SOCKS) based protocol, while the corresponding request path131bmay use, or may be based on, the HTTP Proxy protocol. In a case of selecting and using more than two tunnel devices as exampled in the messaging chart400, the process of fetching content from each such selected device may be identical, similar, or different from any other.
A “Request Handler” flow chart140′bshown inFIG.41is based on the “Request Handler” flow chart140b, where multiple tunnel devices are selected and used. In this example, three tunnel devices, designated as #a, #b, and #c, are selected and used. For example, the tunnel device #a may correspond to the tunnel device #433d, and the tunnel device #b may correspond to the tunnel device #133a, as illustrated in the messaging scheme400inFIG.40. Instead of selecting a single tunnel device as illustrated by the “Select Tunnel” step146shown in the flowchart140binFIG.14, three different or distinct tunnel devices are selected as part of a “Select Tunnel #a” step146a, a “Select Tunnel #b” step146b, and a “Select Tunnel #c” step146c. Each of these selecting steps may be identical, similar, or different from, the “Select Tunnel” step146shown in the flowchart140binFIG.14, or any other selecting step described herein. After the selecting, instead of employing a single tunnel device as illustrated by the “Send Request to Tunnel” step147shown in the flowchart140binFIG.14, three distinct or different requests are sent to the three selected tunnel devices, as part of a “Send Request to Tunnel #a” step147a, a “Send Request to Tunnel #b” step147b, and a “Send Request to Tunnel #c” step147c. Each of these request sending steps may be identical, similar, or different from, the “Send Request to Tunnel” step147shown in the flowchart140binFIG.14, or any other request sending step described herein. While in the flow chart140bshown inFIG.14the TB server71waits for a single response from the single selected tunnel device, the “Request Handler” flow chart140′bshown inFIG.41waits for three responses, one from each of the three selected tunnel devices, in response to the “Send Request to Tunnel #a” step147a, the “Send Request to Tunnel #b” step147b, and the “Send Request to Tunnel #c” step147c, as part of a “Receive Content from Tunnels” step148a. 
In a case where the three selected tunnel devices are operative and available to serve as tunnel devices, three (typically identical) responses are expected, one from each of the selected tunnel devices. However, in a case of a failure or unavailability of one (or more) of the tunnel devices, no response is expected from it within a pre-defined time period. A handling of the responses is performed as part of a “Select Tunnel Response” step411. For example, for the sake of a speedy response, the first received response, via the quickest fetching path, is used for sending to the requesting client device31a, such as via the SP server72, as part of the “Send Content to SP” step149. In such a case, the two later received responses may be discarded. In one example, the “Select Tunnel #a” step146a, the “Select Tunnel #b” step146b, and the “Select Tunnel #c” step146c, may be performed, in whole or in part, in parallel. Alternatively or in addition, these selecting steps may be performed sequentially. Similarly, the “Send Request to Tunnel #a” step147a, the “Send Request to Tunnel #b” step147b, and the “Send Request to Tunnel #c” step147c, may be performed, in whole or in part, in parallel. Alternatively or in addition, these request sending steps may be performed sequentially. An example of a sequential operation is illustrated in a flow chart140″bshown inFIG.41a. In such a scheme, only after the selection of all of the tunnel devices is completed are they used for fetching the content. As shown, after the first tunnel device is selected as part of the “Select Tunnel #a” step146a, the second one is selected as part of the “Select Tunnel #b” step146b, followed by selecting the third tunnel device as part of the “Select Tunnel #c” step146c.
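The parallel fan-out with the first-received response winning, as in the “Receive Content from Tunnels” step148aand the “Select Tunnel Response” step411, may be sketched with Python's standard thread pool. In this sketch, `fetch` is an assumed per-tunnel fetch callable:

```python
from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED

def fetch_first_response(tunnels, fetch, url, timeout=30.0):
    """Send the same request through every selected tunnel in parallel and
    return (tunnel, body) for the first response received; the slower
    responses are discarded. fetch(tunnel, url) is an assumed per-tunnel
    fetch callable."""
    with ThreadPoolExecutor(max_workers=len(tunnels)) as pool:
        futures = {pool.submit(fetch, t, url): t for t in tunnels}
        done, pending = wait(futures, timeout=timeout,
                             return_when=FIRST_COMPLETED)
        for fut in pending:       # discard the slower fetches
            fut.cancel()
        if not done:
            raise TimeoutError("no tunnel responded in time")
        winner = next(iter(done))
        return futures[winner], winner.result()
```

Using the quickest tunnel this way improves responsiveness, at the cost of the redundant fetches whose results are thrown away.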
After the selection is completed, the first selected tunnel device is used as part of the “Send Request to Tunnel #a” step147a, followed by using the second selected one as part of the “Send Request to Tunnel #b” step147b, and then followed by fetching using the third selected tunnel device as part of the “Send Request to Tunnel #c” step147c. While the messaging chart400shown inFIG.40illustrates the scenario where the fetched content is routed via the TB server71based on the messaging chart130shown inFIG.13, the NAT traversal scheme may equally be used for a scenario of multiple tunnel devices. Such a messaging chart420is shown inFIG.42, based on the chart190bshown inFIG.19b. In addition to fetching the content by using the selected tunnel device #433d, the TB server71further selects the tunnel device #233b, as an example, as another tunnel device to be used, in parallel to, or after, the selecting and accessing of the tunnel device #433dfor the content. Upon completing the selection of the tunnel #233b, the TB server71forwards the requested content identification to the selected tunnel #233b, shown as a message path421ain the messaging chart420shown inFIG.42. In response to the request message421a, the selected tunnel #233bsends a request for the identified content to the appropriate server that stores the required content, exampled to be the web server22b, shown as a message path421bin the messaging chart420inFIG.42. Thus, the “Using Tunnel” phase84is completed where the request arrives at the content source, namely the web server22b. The message sent over the message path421bmay use a proprietary protocol, agreed upon between the two communicating nodes. Preferably, the HTTP or HTTPS protocol may be used, where the web server22bexecutes a server side protocol, and the tunnel #233bmay execute a client side protocol.
Further, any tunneling protocol or mechanism may be used where the selected tunnel, which is the tunnel #233bin the example herein, serves as a tunnel between the TB server71and the web server22b. The requested content is then fetched from the web server22bto the requesting client31a, as part of the “Content Fetching” phase85, along the ‘opposite’ route of the request flow. As shown in the messaging chart420shown inFIG.42, the content is first sent from the web server22bto the selected tunnel #233balong a message path421c, which in turn sends it to the SP server72along a message path421d, which in turn sends it to the requesting client31aalong a message path131g, completing the second request/response cycle from the client device31apoint of view. The protocol or protocols, as well as the message formats, or any other attribute or functionality involved with the using of the tunnel device #233b, may be identical, similar, or different, from the corresponding protocols or message formats used as part of employing the tunnel device #433d. For example, the content request path421arelating to employing the tunnel device #233bmay be identical, similar, or different, from the corresponding request path131brelating to the employing of the tunnel device #433d. In one example, the content request path421amay use a Socket Secure (SOCKS) based protocol, while the corresponding request path131bmay use, or may be based on, the HTTP Proxy protocol. In a case of selecting and using more than two tunnel devices as exampled in the messaging chart420, the process of fetching content from each such selected device may be identical, similar, or different from any other. The TB server71operation in a NAT traversal scheme is shown in a flow chart420ashown inFIG.42a, based on the corresponding flow chart201shown inFIG.20.
As part of processing a content request from the client device31a, the TB server71receives from the SP server72, over the message path131′ashown in the messaging chart420, criteria (or a criterion) for selecting a tunnel device to be used for delivering the requested content, as part of a “Receive Criteria from SP” step202. While as part of the “Receive Request from SP” step145that is part of the flow chart140bthe TB server71was also notified of the identification of the requested content, such identification is not required in this alternative scheme, since the TB server71is no longer part of the actual content request and fetching data paths. In one example, the same message, including also the content identification, is sent from the SP server72to the TB server71over the message path131′a, so that the “Receive Criteria from SP” step202may be rendered to be the same as the “Receive Request from SP” step145described above. Instead of selecting a single tunnel device as part of the step146in the flow chart201, the TB server71selects multiple tunnels (such as two in the example of the messaging chart420) as part of a “Select Multiple Tunnels” step146′, followed by connecting and directing the selected tunnel devices as part of the “Connect and Direct Tunnels” step203′, in which each tunnel is handled according to the “Connect and Direct Tunnel” step203. The operation of the SP server72in a NAT traversal scheme using three tunnel devices for fetching the same content from the same web server22bis described in a flow chart420bshown inFIG.42b, which corresponds to the flow chart210shown inFIG.21. The three tunnel devices are designated #a, #b, and #c. While exampled using three tunnel devices, any number of tunnel devices may equally be used.
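The per-tunnel handling of the “Connect and Direct Tunnels” step203′ may be sketched as a loop applying the single-tunnel “Connect and Direct Tunnel” step203to each selected device. In this sketch, `connect` and `direct` are assumed callables standing for the connection-opening and directing operations, and a failure on one tunnel does not stop the handling of the others:

```python
def connect_and_direct_tunnels(selected, connect, direct, sp_address):
    """For each tunnel chosen in the 'Select Multiple Tunnels' step, open a
    connection and direct it at the SP server, handling each tunnel
    individually. connect(tunnel) and direct(conn, sp_address) are assumed
    callables; connect is assumed to raise OSError on failure.

    Returns the list of tunnels that were successfully directed."""
    directed = []
    for tunnel in selected:
        try:
            conn = connect(tunnel)
            direct(conn, sp_address)
            directed.append(tunnel)
        except OSError:
            continue              # skip tunnels that cannot be reached
    return directed
```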
Instead of sending the request to a single selected tunnel device as described regarding the “Send Request to Tunnel” step215in the flow chart210, the three selected tunnel devices are used as part of a “Send Request to Tunnel #a” step215a, a “Send Request to Tunnel #b” step215b, and a “Send Request to Tunnel #c” step215c. The content fetched from the three tunnels is received as part of a “Receive Content from Tunnels” step216a(corresponding to the “Receive Content from Tunnel” step216in the flow chart210), and one of the responses, such as the first one received, is selected as part of the “Select Tunnel Response” step411. In one example, the using of the two (or three) multiple tunnel devices as part of the “Send Request to Tunnel #a” step215a, the “Send Request to Tunnel #b” step215b, and the “Send Request to Tunnel #c” step215c, may be partly or fully in parallel. Alternatively or in addition, the using of the multiple tunnel devices may be sequential. An example of sequential operation is illustrated in a flow chart420cshown inFIG.42c. In this scheme, the “Send Request to Tunnel #b” step215bis initiated only after the “Send Request to Tunnel #a” step215ais completed, and the “Send Request to Tunnel #c” step215cis initiated only after the “Send Request to Tunnel #b” step215bis completed. As exampled above, the requesting client31asends a request for content, and the SP server72, the TB server71, or any combination thereof, select and use multiple tunnel devices for fetching the required content for the requesting client device31a. Alternatively or in addition, the requesting client itself may initiate the using of multiple tunnels for the same requested content, as an alternative or in addition to the SP server72, the TB server71, or the combination thereof.
In such a configuration, the client device31amay initiate multiple requests for the same content, and the system (such as the SP server72, the TB server71, or any combination thereof) treats each such request as a separate and independent request, and as such selects and uses a different single tunnel device for each request. Thus, the system executes the URL fetch flow chart87shown inFIG.8multiple times, where the same content request is involved in the “Content Request” step82. Alternatively or in addition, the system (such as the SP server72, the TB server71, or any combination thereof) may select and use multiple different tunnel devices for each request, according to, or based on, any multiple tunnel selection or using scheme described herein, or any combination thereof. Such a repeating mechanism for the same requested content may be used by the client device31ato ensure that indeed the proper content is received, and that there are no errors or mistakes in the system operation. For example, if the same content is indeed fetched as a response to the multiple identical requests, it may be used as an indication that the proper content was received in response to the request. The number of requests for the same content by a client device may be equal to, or more than, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 15, 20, 30, 35, 40, 45, 50, 60, 70, or 100. Alternatively or in addition, the number of requests for the same content by a client device may be less than 3, 4, 5, 6, 7, 8, 9, 10, 12, 15, 20, 30, 35, 40, 45, 50, 60, 70, 100, or 150. Preferably, the requests may use the same tunnel selection criteria, be part of the same group, or be associated with the same label. Alternatively or in addition, the requests may use different tunnel selection criteria, be part of different groups, or be associated with different labels. Further, the requests for the same content by a client device may be partly or fully in parallel.
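Using repeated identical requests as an integrity indication may be sketched as follows. In this sketch, `fetch` is an assumed request callable (each call standing for an independent request served by a different tunnel), and full equality of all responses is used as the agreement test:

```python
def fetch_with_agreement(n, fetch, url):
    """Issue the same content request n times, each treated as an
    independent request, and report whether every response agrees, as an
    indication that the proper content was received.

    Returns (content, True) when all n responses are identical, or
    (None, False) when they differ. fetch(url) is an assumed callable."""
    responses = [fetch(url) for _ in range(n)]
    agreed = all(r == responses[0] for r in responses)
    return (responses[0] if agreed else None), agreed
```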
Alternatively or in addition, the requests for the same content by a client device may be sequential. Further, the requests for the same content by a client device may be a combination of parallel and sequential steps. For example, the requesting mechanism may be sequential, where a first request is performed, and only afterwards a second one is performed, followed by a third one to be performed. Alternatively, the requests are processed together. Further, the second request may be initiated only after the first request was completed, to be followed by a third request after the second one is completed. Each of the requests may execute part of, or whole of, the client device request related functionalities, steps, or methods, such as the flow chart160shown inFIG.16, the flow chart160ashown inFIG.16a, the flow chart260shown inFIG.26, the flow chart390bshown inFIG.39b, or any combination thereof. The execution of such related functionalities, steps, or methods, may be executed in parallel, or sequentially, with the other requests. An example of using two requests for the same content by the requesting client31ais illustrated in a messaging chart430shown inFIG.43. As exampled herein, a first request over the path121ais directed to the tunnel device #433dthat relays the request to the web server22bover the path131c, and the path131ddescribes the content path as a response of the web server22bto the tunnel device #433d. The fetched content is then relayed (such as by using the SP proxy72, the TB server71, or any combination thereof) to the requesting client31aover the data path131g. Sequentially or in parallel, the client31amay submit to the SP server72(over a data path121′a) another request for the same content. The second request shown as the data path121′amay be identical to the first one sent over the data path121a, or may be different, such as providing different rules or criteria for selecting a tunnel device for serving the request.
The second request over the path121′amay be directed to the same, or to a different tunnel device, such as the tunnel device #233bthat relays the request to the web server22bover a data path421d, and a data path421cdescribes the content path as a response of the web server22bto the tunnel device #233b. The fetched content is then relayed (such as by using the SP proxy72, the TB server71, or any combination thereof) to the requesting client31aover a data path131′g. An example of sending three content requests for the same content by the client device31ais illustrated in a flow chart430ashown inFIG.43a. As part of a “Define Request” step431, the content request, intended to be used with multiple requests for the same content, is defined and prepared. Then three different requests are sent, designated as #1, #2, and #3, where a first request is sent as part of a “Send Request #1” step161a, a second request (for the same content as request #1) is sent as part of a “Send Request #2” step161b, in parallel or sequentially to the first request, and a third request (for the same content as request #1) is sent as part of a “Send Request #3” step161c, in parallel or sequentially to the first or second request. The three instances of the same content are fetched and received as part of a “Receive Content from SP” step162a, and the content to be actually used by the client device31ais selected in a “Select Response” step411a. For example, the first received content may be used, while the others received later are discarded. In another example, the first two content instances received are checked to include the same content, and only then is the content used. Alternatively or in addition, the client device31amay use a sequential operation, where the requests for the same content are sequentially submitted, where a request is sent only after the content of a former request is obtained, as exampled in a flow chart430bshown inFIG.43b.
A first request is sent as part of the “Send Request #1” step161a, and the client device31awaits until the content is received in response to the first request #1 as part of a “Receive Content #1” step162b. At this stage, the client device31amay use this fetched content as part of the “Select Response” step411a, as shown by the dashed line432a. Alternatively or in addition, in response to receiving the response as part of the “Receive Content #1” step162b, the client device31amay initiate another request for the same content as part of a “Send Request #2” step161b, and waits for a response as part of a “Receive Content #2” step162c. At this stage, the client device31amay use this second fetched content as part of the “Select Response” step411a, as shown by the dashed line432b, or may select between the first and second responses. In one example, the received content may be used only if both fetched contents are the same. Alternatively or in addition, in response to receiving the response as part of the “Receive Content #2” step162c, the client device31amay initiate a third request for the same content as part of a “Send Request #3” step161c, and waits for a response as part of a “Receive Content #3” step162d. At this stage, the client device31amay use this third fetched content as part of the “Select Response” step411a, as shown by the dashed line432c, or may select between the first, second, and third responses (if different, for example). A redundancy scheme is exampled herein in the messaging chart400where two tunnels (tunnel #133aand tunnel #433d) are used for fetching the same content from the same web server22b.
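The sequential scheme of the flow chart430b, where a new request is sent only after the former response is obtained and the content is accepted once two fetched copies agree, may be sketched as follows (`fetch` is an assumed request callable):

```python
def fetch_until_two_agree(fetch, url, max_requests=3):
    """Send a request, wait for its response, and send the next request
    only afterwards; accept the content once two fetched copies are
    identical. Returns the agreed content, or None if no two responses
    agree within max_requests. fetch(url) is an assumed callable."""
    seen = []
    for _ in range(max_requests):
        body = fetch(url)
        if body in seen:          # two responses agree -> use this content
            return body
        seen.append(body)
    return None
```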
Similarly, a redundancy scheme is exampled herein in the flow chart410where three tunnels (designated #a, #b, and #c) are used for fetching the same content from the same web server22b. Further, a redundancy may employ multiple requests from the client device31a, wherein each request may use a different tunnel, a different data path, or both, as described in the flow chart430a. Further, a redundancy may be used by employing multiple data paths or multiple components. Such a redundancy may be used in order to improve the accuracy, reliability, or availability. The redundancy may be further implemented where two or more components may be used for the same functionality. The components may be similar, substantially or fully the same, identical, different, substantially different, or distinct from each other, or any combination thereof. The redundant components may be concurrently operated, allowing for improved robustness and for overcoming a single point of failure (SPOF), or alternatively one or more of the components serves as a backup. The redundancy may be a standby redundancy, which may be ‘Cold Standby’ or ‘Hot Standby’. In the case where three redundant components are used, Triple Modular Redundancy (TMR) may be used, and Quadruple Modular Redundancy (QMR) may be used in the case of four components. A 1:N Redundancy logic may be used for three or more components. Deciding which unit is correct, such as by the TB server71receiving multiple content responses from selected multiple tunnel devices, or by the client device31awhen using multiple content requests, may be challenging if only two units are used. If more than two units are used, the problem is simpler: usually the majority wins, or the two that agree win. In N Modular Redundancy, there are three main topologies: Dual Modular Redundancy, Triple Modular Redundancy, and Quadruple Redundancy.
Quadruple Modular Redundancy (QMR) is fundamentally similar to TMR, but uses four units instead of three to increase the reliability. The obvious drawback is the 4× increase in system cost. Dual Modular Redundancy (DMR) uses two functionally equivalent units, thus either can control or support the system operation. The most challenging aspect of DMR is determining when to switch over to the secondary unit. Because both units are monitoring the application, a mechanism is needed to decide what to do if they disagree. Either a tiebreaker vote is used, or the secondary unit may simply be designated as the default winner, assuming it is more trustworthy than the primary unit. Triple Modular Redundancy (TMR) uses three functionally equivalent units to provide a redundant backup. This approach is very common in aerospace applications where the cost of failure is extremely high. TMR is more reliable than DMR due to two main aspects. The most obvious reason is that two “standby” units are used instead of just one. The other reason is that a technique called diversity platforms or diversity programming may be applied. In this technique, different software or hardware platforms are used on the redundant systems to prevent common mode failure. The voter decides which unit will actively control the application. With TMR, the decision of which system to trust is made democratically and the majority rules. If three different answers are obtained, the voter must decide which system to trust or shut down the entire system, thus the switchover decision is straightforward and fast. Another redundancy topology is 1:N Redundancy, where a single backup is used for multiple systems, and this backup is able to function in the place of any single one of the active systems. This technique offers redundancy at a much lower cost than the other models by using one standby unit for several primary units.
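The TMR majority voting described above, where the majority rules and three differing answers leave no trustworthy result, may be sketched as follows (responses are compared for exact equality):

```python
from collections import Counter

def tmr_vote(responses):
    """Triple Modular Redundancy voter: with three responses the majority
    rules; if all three differ there is no trustworthy answer and the
    voter signals failure (here by returning None)."""
    counts = Counter(responses)
    value, votes = counts.most_common(1)[0]
    return value if votes >= 2 else None
```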
This approach only works well when the primary units all have very similar functions, thus allowing the standby to back up any of the primary units if one of them fails. While the redundant data paths, content requests, or selected tunnel devices, have been exampled with regard to the added reliability and availability, redundant data paths may as well be used in order to provide higher aggregated data rate, allowing for faster response and faster transfer of data over the multiple data paths. Each of the devices denoted herein as servers, such as the SP server72, the TB server71, the web server22b, or the dedicated tunnel33a(when implemented as a server), may function as a server in the meaning of client/server architecture, providing services, functionalities, and resources, to other devices (clients), commonly in response to the clients' request. Each of the server devices may further employ, store, integrate, or operate a server-oriented operating system, such as the Microsoft Windows Server® (2003 R2, 2008, 2008 R2, 2012, or 2012 R2 variant), Linux™ (or GNU/Linux) variants (such as Debian based: Debian GNU/Linux, Debian GNU/kFreeBSD, or Debian GNU/Hurd, Fedora™, Gentoo™, Linspire™ Mandriva, Red Hat® Linux available from Red Hat, Inc. headquartered in Raleigh, North Carolina, U.S.A., Slackware®, SuSE, or Ubuntu®), or UNIX®, including commercial UNIX® variants such as Solaris™ (available from Oracle Corporation headquartered in Redwood City, California, U.S.A.), AIX® (available from IBM Corporation headquartered in Armonk, New York, U.S.A.), or Mac™ OS X (available from Apple Inc. headquartered in Cupertino, California, U.S.A.), or free variants such as FreeBSD®, OpenBSD, and NetBSD®. Alternatively or in addition, each of the devices denoted herein as servers, may equally function as a client in the meaning of client/server architecture. 
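The use of multiple data paths for a higher aggregated data rate, noted above, may be sketched as a byte-range split across paths. This is an illustration only; the actual per-path transfer (for example via HTTP Range requests) is outside the sketch:

```python
def split_ranges(content_length, paths):
    """Divide a download of content_length bytes into per-path byte ranges
    so that multiple data paths transfer distinct parts in parallel,
    yielding a higher aggregated data rate.

    Returns [(start, end_inclusive), ...], one range per path, covering
    every byte exactly once."""
    n = len(paths)
    base, extra = divmod(content_length, n)
    ranges, start = [], 0
    for i in range(n):
        size = base + (1 if i < extra else 0)
        ranges.append((start, start + size - 1))
        start += size
    return ranges
```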
Devices that are not denoted herein as servers, such as client devices (such as the client device31a) or any of the tunnel devices (including the dedicated tunnel33awhen implemented as a server), may typically function as a client in the meaning of client/server architecture, commonly initiating requests for receiving services, functionalities, and resources, from other devices (servers or clients). Each of these devices may further employ, store, integrate, or operate a client-oriented (or end-point dedicated) operating system, such as Microsoft Windows® (including the variants: Windows 7, Windows XP, Windows 8, and Windows 8.1, available from Microsoft Corporation, headquartered in Redmond, Washington, U.S.A.), Linux, and Google Chrome OS available from Google Inc. headquartered in Mountain View, California, U.S.A. Further, each of these devices may further employ, store, integrate, or operate a mobile operating system such as Android (available from Google Inc. and includes variants such as version 2.2 (Froyo), version 2.3 (Gingerbread), version 4.0 (Ice Cream Sandwich), version 4.2 (Jelly Bean), and version 4.4 (KitKat)), iOS (available from Apple Inc., and includes variants such as versions 3-7), Windows® Phone (available from Microsoft Corporation and includes variants such as version 7, version 8, or version 9), or the Blackberry® operating system (available from BlackBerry Ltd., headquartered in Waterloo, Ontario, Canada). Alternatively or in addition, each of the devices that are not denoted herein as servers may equally function as a server in the meaning of client/server architecture. The method and system described herein allow for a client device (such as the client device31a, whose operation is described in the flow chart160inFIG.16or the flow chart160ainFIG.16a) to effectively fetch content from a data server (such as the web server22b).
The method and system may be used by the client device for supporting an application, such as a web browser application, when the application is requesting a content from the Internet in general, and from a data server in particular. The request for Internet-related content may be intercepted by the ‘client’ application and process, initiating the client flowchart160shown inFIG.16, or the flowchart160ashown inFIG.16a. In one example, the client device uses a communication-related application to be used by the application when no ‘client’ application is present, such as HTTP stack handling application. The request from the requesting application to the communication-related application is intercepted and routed to be handled as part of the ‘client’ application or process. Such interception may be in the form of a filter driver (or any other intermediate driver), enabling the interception as part of the OS kernel. Alternatively or in addition, the interception may be in the form of extension or a plug-in of the requesting application, such as a browser plug-in or a browser extension in the case where the application is a web browser. Alternatively or in addition, the interception of the request may use hooking of the requesting application or of the communication-related application. Alternatively or in addition, the application and the steps described herein may communicate using an Inter-Process Communication (IPC), such as a file sharing, a signal, a socket, a pipe, a message queue, a shared memory, a semaphore, or memory mapped file. In Windows environment, the IPC may be based on a clipboard, a Component Object Model (COM), a data copy, a DDE protocol, or mailslots. 
Examples of web browsers include Microsoft Internet Explorer (available from Microsoft Corporation, headquartered in Redmond, Washington, U.S.A.), Google Chrome which is a freeware web browser (developed by Google, headquartered in Googleplex, Mountain View, California, U.S.A.), Opera™ (developed by Opera Software ASA, headquartered in Oslo, Norway), and Mozilla Firefox® (developed by Mozilla Corporation headquartered in Mountain View, California, U.S.A.). The web-browser may be a mobile browser, such as Safari (developed by Apple Inc. headquartered in Apple Campus, Cupertino, California, U.S.A.), Opera Mini™ (developed by Opera Software ASA, headquartered in Oslo, Norway), and the Android web browser. Any communication between any two nodes may use the Socket Secure (SOCKS), WebSocket (ws), which may be WebSocket Secure (wss), or HTTP Proxy protocol. Further, any communication between any two nodes may use the HTTP or HTTPS protocol. In one example, a communication between the client device 31a or any tunnel device (such as the tunnel #1 33a, the tunnel #2 33b, the tunnel #3 33c, the tunnel #4 33d, or the tunnel #5 33e) and any server, such as the TB server 71, the SP server 72, or the Web Server 22b, may use the SOCKS, WebSocket, or HTTP Proxy protocol, wherein the respective device, such as the client device 31a or the tunnel device, executes the respective SOCKS, WebSocket, or HTTP Proxy client side protocol, and the respective server executes the respective SOCKS, WebSocket, or HTTP Proxy server side protocol. Alternatively or in addition, the respective device, such as the client device 31a or the tunnel device, executes the respective SOCKS, WebSocket, or HTTP Proxy server side protocol, and the respective server executes the respective SOCKS, WebSocket, or HTTP Proxy client side protocol.
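For illustration, the client-side openings of two of the protocols mentioned above, HTTP Proxy tunneling and SOCKS5, can be sketched as follows (a simplified sketch only; the helper names are hypothetical, and the byte layout for SOCKS5 follows RFC 1928):

```python
def build_connect_request(host: str, port: int) -> str:
    # Client side of HTTP Proxy tunneling: a CONNECT request asks the
    # proxy to open a raw relay to the target host and port.
    return (
        f"CONNECT {host}:{port} HTTP/1.1\r\n"
        f"Host: {host}:{port}\r\n"
        "\r\n"
    )

def socks5_greeting() -> bytes:
    # Client side of SOCKS5 (RFC 1928): version 0x05, one offered
    # authentication method, 'no authentication required' (0x00).
    return bytes([0x05, 0x01, 0x00])
```

In either case, once the handshake completes, the intermediate node relays raw bytes between the two endpoints, which is what allows a tunnel device to sit between a client and a target server.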
Further, a communication between the client device 31a or any tunnel device (such as the tunnel #1 33a, the tunnel #2 33b, the tunnel #3 33c, the tunnel #4 33d, or the tunnel #5 33e) and any server, such as the TB server 71, the SP server 72, or the Web Server 22b, may use the HTTP (or HTTPS) protocol, wherein the respective device, such as the client device 31a or the tunnel device, executes the HTTP (or HTTPS) client side protocol, and the respective server executes the HTTP (or HTTPS) server side protocol. Alternatively or in addition, the respective device, such as the client device 31a or the tunnel device, executes the HTTP (or HTTPS) server side protocol, and the respective server executes the HTTP (or HTTPS) client side protocol. The term ‘network element’ (or ‘element’) or ‘network node’ (or ‘node’) is used herein to include, but not limited to, the client device 31a, a tunnel device (such as the tunnel device #1 33a), the SP server 72, the TB server 71, or a web server (such as the web server #1 22a). Any memory, storage, database, or cache mentioned herein may consist of, comprise, use, or be included in, the local cache as described in U.S. Pat. No. 8,135,912 to Shribman et al., entitled: “System and Method of Increasing Cache Size”. Any device, component, or apparatus herein may be structured as, may be shaped or configured to serve as, or may be integrated with, a wearable device. In one example, any one or more of the tunnel devices herein, such as the tunnel device #1 33a, the tunnel device #2 33b, or the tunnel device #3 33c, may consist of, may comprise, may be integrated with, or may be part of, a wearable device. Similarly, any one or more of the client devices herein, such as the client device #1 31a, or the client device #2 31b, may consist of, may comprise, may be integrated with, or may be part of, a wearable device.
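The complementary HTTP client-side and server-side roles described above can be illustrated in-process with Python's standard library (a sketch only; `EchoHandler` and `fetch_once` are hypothetical names, not elements of the described system, and either role may be assigned to either node as the text notes):

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class EchoHandler(BaseHTTPRequestHandler):
    # Server-side HTTP role: answer any GET with 200 and a short body.
    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        # Silence per-request logging for the illustration.
        pass

def fetch_once():
    # One node executes the server side protocol; the peer executes
    # the client side protocol against it, then both shut down.
    server = HTTPServer(("127.0.0.1", 0), EchoHandler)
    threading.Thread(target=server.handle_request, daemon=True).start()
    conn = http.client.HTTPConnection("127.0.0.1", server.server_address[1])
    conn.request("GET", "/")
    resp = conn.getresponse()
    data = resp.read()
    conn.close()
    server.server_close()
    return resp.status, data
```

Swapping which process constructs the `HTTPServer` and which constructs the `HTTPConnection` corresponds to the role reversal described above.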
Any wearable device or any apparatus or device herein may be wearable on an organ such as on the person's head, and the organ may be an eye, ear, face, cheek, nose, mouth, lip, forehead, or chin. Alternatively or in addition, any wearable device or any apparatus or device herein may be constructed to have a form substantially similar to, may be constructed to have a shape allowing mounting or wearing identical or similar to, or may be constructed to have a form to at least in part substitute for, headwear, eyewear, or an earpiece. Any headwear herein may consist of, may be structured as, or may comprise, a bonnet, a headband, a cap, a crown, a fillet, a hair cover, a hat, a helmet, a hood, a mask, a turban, a veil, or a wig. Any eyewear herein may consist of, may be structured as, or may comprise, glasses, sunglasses, a contact lens, a blindfold, or a goggle. Any earpiece herein may consist of, may be structured as, or may comprise, a hearing aid, a headphone, a headset, or an earplug. Alternatively or in addition, any enclosure herein may be permanently or releasably attachable to, or may be part of, a clothing piece of a person. The attaching may use taping, gluing, pinning, enclosing, encapsulating, a pin, or a latch and hook clip, and the clothing piece may be a top, bottom, or full-body underwear, or headwear, footwear, an accessory, outerwear, a suit, a dress, a skirt, or a top. Any system or device herein may use virtualization. Any system or device herein may further comprise a Virtual Machine (VM) executing a virtualized application. Any device herein, or any part thereof, such as the client device, the web server, at least one of the tunnel devices, the first server, or the second server, may be implemented as virtual hardware as part of the VM. At least one of any action or step herein by any device may be executed as part of the virtualized application.
Any network herein may be used with virtualization, and any network herein may be executed as a virtualized network as part of a Virtual Machine (VM). The virtualization may be implemented by a host computer that may implement the VM, and any method herein may further comprise executing, by the host computer, a hypervisor or a Virtual Machine Monitor (VMM), and the virtualized network may use or interface virtual hardware. Any virtualization herein may include, may be based on, or may use, full virtualization, para-virtualization, or hardware assisted virtualization. For example, any communication between two entities selected from a group consisting of the client device, the web server, at least one of the multiple tunnel devices, the first server, and the second server, may be executed as a virtualized network as part of a Virtual Machine (VM). Any method herein, any step herein, any flow-chart herein, or any part thereof, may be used with virtualization, and at least one of the steps or methods herein may be executed as part of a virtualized application as part of a Virtual Machine (VM). Any device herein, such as the analyzer device, the first device, or any part thereof, may be implemented as virtual hardware. Any virtualization herein may be used with a host computer that implements the VM, and any method herein may further comprise executing, by the host computer, a hypervisor or a Virtual Machine Monitor (VMM). Any virtualized application herein or any hardware virtualization herein may use or may interface virtual hardware. Any virtualization herein may include, may be based on, or may use, full virtualization, para-virtualization, or hardware assisted virtualization. Any operating system herein may be used with virtualization, and any operating system herein may be executed as a guest operating system as part of a Virtual Machine (VM).
The virtualization may be implemented by a host computer that may implement the VM, and any method herein may further comprise executing, by the host computer, a hypervisor or a Virtual Machine Monitor (VMM), and the guest operating system may use or interface virtual hardware. Any such virtualization herein may include, may be based on, or may use, full virtualization, para-virtualization, or hardware assisted virtualization. Any element or entity herein, such as the client device, the web server, at least one of the multiple tunnel devices, the first server, and the second server, may be implemented as a virtualized entity. Any virtualization may include, may be based on, or may use, desktop virtualization, network virtualization, storage virtualization, application virtualization, server virtualization, or any combination thereof. Further, any virtualization herein may include, may be based on, or may use, full virtualization, para-virtualization, or hardware assisted virtualization. Further, any virtualization herein may include, may be based on, or may use, a virtual machine (VM) on a host computer that executes a hypervisor or Virtual Machine Monitor (VMM), and the operating system may be a guest operating system that may use or interface virtual hardware. Any method herein may be used with virtualization, where at least one of the steps may be executed as part of a virtualized application as part of a Virtual Machine (VM). Alternatively or in addition, the client device or any part thereof, the web server or any part thereof, at least one of the multiple tunnel devices or any part thereof, the first server or any part thereof, or the second server or any part thereof, may be implemented as virtual hardware.
Further, any method herein may be used with a host computer that may implement the VM, and any method herein may further comprise executing, by the host computer, a hypervisor or a Virtual Machine Monitor (VMM), and any virtualized application herein or any hardware herein may use or may interface virtual hardware. Any virtualization herein may include, may be based on, or may use, full virtualization, para-virtualization, or hardware assisted virtualization. At least two devices that may be selected from a group consisting of the client device, the web server, at least one of the multiple tunnel devices, the first server, and the second server, may be implemented as virtual hardware, and the at least two devices may be virtualized by the same host computer that implements the VM. The steps described herein may be sequential, and performed in the described order. For example, in a case where a step is performed in response to another step, or upon completion of another step, the steps are executed one after the other. However, in a case where two or more steps are not explicitly described as being sequentially executed, these steps may be executed in any order, or may be simultaneously performed. Two or more steps may be executed by two different network elements, or in the same network element, and may be executed in parallel using multiprocessing or multitasking. For example, any two actions or steps of sending, any two actions or steps of receiving, any two actions or steps of selecting, any two actions or steps of processing, or any combination thereof, may be performed in full or in part in parallel by the same entity (e.g., server, client, or tunnel) or separate entities, using multitasking or multiprocessing.
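As a minimal sketch of the multitasking case above (illustrative names only, not part of the described system), steps that are not required to be sequential can be dispatched to a thread pool and executed in parallel:

```python
from concurrent.futures import ThreadPoolExecutor

def process_chunk(chunk: str) -> str:
    # Stand-in for any independent 'processing' step mentioned above.
    return chunk.upper()

def process_in_parallel(chunks):
    # Steps with no ordering constraint between them may run
    # simultaneously; map() dispatches each per-chunk step to the
    # pool while preserving the input ordering of the results.
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(process_chunk, chunks))
```

Steps that *are* ordered (for example, a step performed in response to another) would instead be chained sequentially, as the text requires.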
Similarly, any steps of sending and receiving, sending and selecting, sending and processing, receiving and selecting, receiving and processing, or any combination thereof, may be performed in full or in part in parallel by the same entity (e.g., server, client, or tunnel) or separate entities, using multitasking or multiprocessing. A tangible machine-readable medium (such as a storage) may have a set of instructions detailing part (or all) of the methods and steps described herein stored thereon, so that, when executed by one or more processors, the instructions may cause the one or more processors to perform part of, or all of, the methods and steps described herein. Any of the network elements may be a computing device that comprises a processor and a computer-readable memory (or any other tangible machine-readable medium), and the computer-readable memory may comprise computer-readable instructions such that, when read by the processor, the instructions cause the processor to perform one or more of the methods or steps described herein. Any part of, or the whole of, any of the methods described herein may be provided as part of, or used as, an Application Programming Interface (API), defined as an intermediary software serving as the interface allowing the interaction and data sharing between an application software and the application platform, across which some or all services are provided, and commonly used to expose or use a specific software functionality, while protecting the rest of the application. The API may be based on, or according to, the Portable Operating System Interface (POSIX) standard, defining the API along with command line shells and utility interfaces for software compatibility with variants of Unix and other operating systems, such as POSIX.1-2008 that is simultaneously IEEE STD.
1003.1™—2008 entitled: “Standard for Information Technology—Portable Operating System Interface(POSIX(R)) Description”, and The Open Group Technical Standard Base Specifications, Issue 7, IEEE STD. 1003.1™, 2013 Edition. Any server, client, tunnel, or other device herein, such as the SP server 72, the TB server 71, the client device 31a, the tunnel device #1 33a, the tunnel device #2 33b, the tunnel device #3 33c, the tunnel device #4 33d, the tunnel device #5 33e, or any combination thereof, may execute any part of, or the whole of, any one or more of the JavaScript program code of the modules, subroutines, programs, or functions included in any of the U.S. Provisional Application Ser. No. 62/550,834, which was filed on Aug. 28, 2017, U.S. Provisional Application Ser. No. 62/563,157, which was filed on Sep. 26, 2017, U.S. Provisional Application Ser. No. 62/624,208, which was filed on Jan. 31, 2018, U.S. Provisional Application Ser. No. 62/684,211, which was filed on Jun. 13, 2018, or any combination thereof. Any server, client, tunnel, or other device herein, such as the SP server 72, the TB server 71, the client device 31a, the tunnel device #1 33a, the tunnel device #2 33b, the tunnel device #3 33c, the tunnel device #4 33d, the tunnel device #5 33e, or any combination thereof, may comprise any element or functionality, or may execute any step, method, or action, described in the “BACKGROUND” section above, including in any of the documents incorporated therein. Any device or network element herein may comprise, consist of, or include a Personal Computer (PC), a desktop computer, a mobile computer, a laptop computer, a notebook computer, a tablet computer, a server computer, a handheld computer, a handheld device, a Personal Digital Assistant (PDA) device, a cellular handset, a handheld PDA device, an on-board device, an off-board device, a hybrid device, a vehicular device, a non-vehicular device, a mobile or portable device, or a non-mobile or non-portable device.
Further, any device or network element herein may comprise, consist of, or include a major appliance (white goods), such as an air conditioner, dishwasher, clothes dryer, drying cabinet, freezer, refrigerator, kitchen stove, water heater, washing machine, trash compactor, microwave oven, or induction cooker. The appliance may similarly be a ‘small’ appliance such as a TV set, CD or DVD player, camcorder, still camera, clock, alarm clock, video game console, HiFi or home cinema system, telephone, or answering machine. Any system or apparatus herein may further be operative for storing, operating, or using, an operating system. Any system herein may comprise a Virtual Machine (VM) for virtualization, and the operating system may be executed as a guest operating system. Any system herein may further comprise a host computer that implements the VM, and the host computer may be operative for executing a hypervisor or a Virtual Machine Monitor (VMM), and the guest operating system may use or may interface virtual hardware. Any virtualization herein, such as any operating system virtualization, may include, may be based on, or may use, full virtualization, para-virtualization, or hardware assisted virtualization. The term ‘host’ or ‘network host’ is used herein to include, but not limited to, a computer or other device connected to a computer network, such as the Internet. A network host may offer information resources, services, and applications to users or other nodes on the network, and is typically assigned a network layer host address. Computers participating in networks that use the Internet Protocol Suite may also be called IP hosts, and computers participating in the Internet are called Internet hosts, or Internet nodes. Internet hosts and other IP hosts have one or more IP addresses assigned to their network interfaces.
The addresses are configured either manually by an administrator, automatically at start-up by means of the Dynamic Host Configuration Protocol (DHCP), or by stateless address autoconfiguration methods. Network hosts that participate in applications that use the client-server model of computing are classified as server or client systems. Network hosts may also function as nodes in peer-to-peer applications, in which all nodes share and consume resources in an equipotent manner. The arrangements and methods described herein may be implemented using hardware, software, or a combination of both. The term “software integration” or any other reference to the integration of two programs or processes herein is used herein to include, but not limited to, software components (e.g., programs, modules, functions, processes, etc.) that are (directly or via another component) combined, working or functioning together or forming a whole, commonly for sharing a common purpose or set of objectives. Such software integration can take the form of sharing the same program code, exchanging data, being managed by the same manager program, being executed by the same processor, being stored on the same medium, sharing the same GUI or other user interface, sharing peripheral hardware (such as a monitor, printer, keyboard and memory), sharing data or a database, or being part of a single package. The term “hardware integration” or integration of hardware components is used herein to include, but not limited to, hardware components that are (directly or via another component) combined, working or functioning together or forming a whole, commonly for sharing a common purpose or set of objectives.
Such hardware integration can take the form of sharing the same power source (or power supply) or sharing other resources, exchanging data or control (e.g., by communicating), being managed by the same manager, being physically connected or attached, sharing peripheral hardware connection (such as a monitor, printer, keyboard and memory), being part of a single package or mounted in a single enclosure (or any other physical collocating), sharing a communication port, or being used or controlled with the same software or hardware. The term “integration” herein is used herein to include as applicable, but not limited to, a software integration, a hardware integration, or any combination thereof. Any networking protocol may be utilized for exchanging information between the network elements (e.g., clients, tunnels, peers, servers) within the network (such as the Internet). For example, it is contemplated that communications can be performed using TCP/IP. Generally, HTTP and HTTPS are utilized on top of TCP/IP as the message transport envelope. These two protocols are able to deal with firewall technology better than other message management techniques. However, partners may choose to use a message-queuing system instead of HTTP and HTTPS if greater communications reliability is needed. Non-limiting examples of message queuing systems are IBM's MQ-Series and the Microsoft Message Queue (MSMQ). The system described hereinafter is suited for HTTP/HTTPS, message-queuing systems, and other communications transport protocol technologies. Furthermore, depending on the differing business and technical requirements of the various partners within the network, the physical network may embrace and utilize multiple communication protocol technologies. Any network herein, such as the first network or the second network, may be implemented as a virtualized network as part of a Virtual Machine (VM). Any system herein may comprise a host computer that implements the VM.
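The message-queuing transport mentioned above (e.g., MQ-Series or MSMQ) gains its reliability by decoupling the sender from the receiver: messages are enqueued regardless of whether a consumer is ready. A minimal in-process analogue, with hypothetical names and Python's standard-library queue standing in for a real message broker, is:

```python
import queue
import threading

def consume(q, out):
    # Consumer drains the queue until a sentinel (None) arrives.
    # Delivery is decoupled from the sender, which is the property
    # that makes queued transports tolerant of transient failures.
    while True:
        msg = q.get()
        if msg is None:
            break
        out.append(msg)

def send_via_queue(messages):
    q = queue.Queue()
    received = []
    t = threading.Thread(target=consume, args=(q, received))
    t.start()
    for m in messages:
        q.put(m)        # producer enqueues without waiting for delivery
    q.put(None)         # sentinel ends the consumer loop
    t.join()
    return received
```

A production system would replace the in-memory queue with a durable broker, but the enqueue/dequeue contract is the same.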
The host computer may further be operative for executing a hypervisor or a Virtual Machine Monitor (VMM). Any virtualized network herein may use or may interface virtual hardware. Any virtualization herein may include, may be based on, or may use, full virtualization, para-virtualization, or hardware assisted virtualization. The term “port” refers to a place of access to a device, electrical circuit or network, where energy or signal may be supplied or withdrawn. The term “interface” of a networked device refers to a physical interface, a logical interface (e.g., a portion of a physical interface or sometimes referred to in the industry as a sub-interface—for example, such as, but not limited to a particular VLAN associated with a network interface), and/or a virtual interface (e.g., traffic grouped together based on some characteristic—for example, such as, but not limited to, a tunnel interface). As used herein, the term “independent” relating to two (or more) elements, processes, or functionalities, refers to a scenario where one does not affect nor preclude the other. For example, independent communication such as over a pair of independent data routes means that communication over one data route does not affect nor preclude the communication over the other data routes. 
Some embodiments may be used in conjunction with various devices, network elements, and systems, for example, a Personal Computer (PC), a desktop computer, a mobile computer, a laptop computer, a notebook computer, a tablet computer, a server computer, a handheld computer, a handheld device, a Personal Digital Assistant (PDA) device, a cellular handset, a handheld PDA device, an on-board device, an off-board device, a hybrid device, a vehicular device, a non-vehicular device, a mobile or portable device, a non-mobile or non-portable device, a wireless communication station, a wireless communication device, a wireless Access Point (AP), a wired or wireless router, a wired or wireless modem, a wired or wireless network, a Local Area Network (LAN), a Wireless LAN (WLAN), a Metropolitan Area Network (MAN), a Wireless MAN (WMAN), a Wide Area Network (WAN), a Wireless WAN (WWAN), a Personal Area Network (PAN), a Wireless PAN (WPAN), devices and/or networks operating substantially in accordance with existing IEEE 802.11, 802.11a, 802.11b, 802.11g, 802.11k, 802.11n, 802.11r, 802.16, 802.16d, 802.16e, 802.20, 802.21 standards and/or future versions and/or derivatives of the above standards, units and/or devices which are part of the above networks, one way and/or two-way radio communication systems, cellular radio-telephone communication systems, a cellular telephone, a wireless telephone, a Personal Communication Systems (PCS) device, a PDA device which incorporates a wireless communication device, a mobile or portable Global Positioning System (GPS) device, a device which incorporates a GPS receiver or transceiver or chip, a device which incorporates an RFID element or chip, a Multiple Input Multiple Output (MIMO) transceiver or device, a Single Input Multiple Output (SIMO) transceiver or device, a Multiple Input Single Output (MISO) transceiver or device, a device having one or more internal antennas and/or external antennas, Digital Video Broadcast (DVB) devices or 
systems, multi-standard radio devices or systems, a wired or wireless handheld device (e.g., BlackBerry, Palm Treo), a Wireless Application Protocol (WAP) device, or the like. While the communication sessions between the elements herein, such as between servers and clients, are exemplified to be over the Internet 113 using the Internet Protocol (IP) or TCP/IP, any other communication protocols may equally be used, such as those of a Local Area Network (LAN), a Wireless LAN (WLAN), a Metropolitan Area Network (MAN), a Wireless MAN (WMAN), a Wide Area Network (WAN), a Wireless WAN (WWAN), a Personal Area Network (PAN), a Wireless PAN (WPAN), or devices and/or networks operating substantially in accordance with existing IEEE 802.11, 802.11a, 802.11b, 802.11g, 802.11k, 802.11n, 802.11r, 802.16, 802.16d, 802.16e, 802.20, 802.21 standards. For example, each of, or all of, the communication path 111a between the tunnel device #1 33a and the TB server 71, the communication path 111b between the tunnel device #2 33b and the TB server 71, the communication path 111c between the tunnel device #3 33c and the TB server 71, the communication path 111d between the tunnel device #4 33d and the TB server 71, and the communication path 111e between the tunnel device #5 33e and the TB server 71, may use any one of the protocols associated with a Local Area Network (LAN), a Wireless LAN (WLAN), a Metropolitan Area Network (MAN), a Wireless MAN (WMAN), a Wide Area Network (WAN), a Wireless WAN (WWAN), a Personal Area Network (PAN), a Wireless PAN (WPAN), or devices and/or networks operating substantially in accordance with existing IEEE 802.11, 802.11a, 802.11b, 802.11g, 802.11k, 802.11n, 802.11r, 802.16, 802.16d, 802.16e, 802.20, 802.21 standards.
Similarly, each of, or all of, the communication path 121a between the client device 31a and the SP server 72, the communication path 131a between the SP server 72 and the TB server 71, the communication path 131c or 131d between the tunnel device #4 33d and the web server 22b, and the communication path 191 or 192 between the SP server 72 and the tunnel device #4 33d, may use a Local Area Network (LAN), a Wireless LAN (WLAN), a Metropolitan Area Network (MAN), a Wireless MAN (WMAN), a Wide Area Network (WAN), a Wireless WAN (WWAN), a Personal Area Network (PAN), a Wireless PAN (WPAN), or devices and/or networks operating substantially in accordance with existing IEEE 802.11, 802.11a, 802.11b, 802.11g, 802.11k, 802.11n, 802.11r, 802.16, 802.16d, 802.16e, 802.20, 802.21 standards. As used herein, the terms “program”, “programmable”, and “computer program” are meant to include any sequence of human or machine cognizable steps which perform a function. Such programs are not inherently related to any particular computer or other apparatus, and may be rendered in virtually any programming language or environment including, for example, C/C++, Fortran, COBOL, PASCAL, assembly language, markup languages (e.g., HTML, SGML, XML, VoXML), and the like, as well as object-oriented environments such as the Common Object Request Broker Architecture (CORBA), Java™ (including J2ME, Java Beans, etc.) and the like, as well as in firmware or other implementations. Generally, program modules include routines, programs, objects, components, data structures, etc., that perform particular tasks or implement particular abstract data types. The term “application program” (also referred to as ‘application’, ‘software application’, or ‘application software’) is used herein to include, but not limited to, a computer program designed to perform a specific function directly for a user, or for another application program.
Application software is typically a set of one or more programs designed to carry out operations for a specific application. Commonly, application software depends, in order to execute, on system software, such as an operating system, that manages and integrates computer capabilities but does not directly perform tasks that benefit the user. Examples of types of application software may include accounting software, media players, and office suites. Applications may be bundled with the computer and its system software, or may be published separately, and further may be developed and coded as proprietary, or as open-source, software. Most applications are designed to help people perform an activity. The terms “task” and “process” are used generically herein to describe any type of running program, including, but not limited to, a computer process, task, thread, executing application, operating system, user process, device driver, native code, machine or other language, etc., and can be interactive and/or non-interactive, executing locally and/or remotely, executing in foreground and/or background, executing in the user and/or operating system address spaces, a routine of a library and/or a standalone application, and is not limited to any particular memory partitioning technique.
The steps, connections, and processing of signals and information illustrated in the figures, including, but not limited to, any block and flow diagrams and message sequence charts, may typically be performed in the same or in a different serial or parallel ordering and/or by different components and/or processes, threads, etc., and/or over different connections, and may be combined with other functions in other embodiments, unless this disables the embodiment or a sequence is explicitly or implicitly required (e.g., for a sequence of reading a value and then processing the value, the value must be obtained prior to processing it, although some of the associated processing may be performed prior to, concurrently with, and/or after the read operation). Where certain process steps are described in a particular order or where alphabetic and/or alphanumeric labels are used to identify certain steps, the embodiments are not limited to any particular order of carrying out such steps. In particular, the labels are used merely for convenient identification of steps, and are not intended to imply, specify or require a particular order for carrying out such steps. Furthermore, other embodiments may use more or fewer steps than those discussed herein. The embodiments may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote memory storage devices. The corresponding structures, materials, acts, and equivalents of all means-plus-function elements in the claims below are intended to include any structure, or material, for performing the function in combination with other claimed elements as specifically claimed. The description of the present invention has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the invention in the form disclosed.
The present invention should not be considered limited to the particular embodiments described above, but rather should be understood to cover all aspects of the invention as fairly set out in the attached claims. Various modifications, equivalent processes, as well as numerous structures to which the present invention may be applicable, will be readily apparent to those skilled in the art to which the present invention is directed upon review of the present disclosure. All publications, standards, patents, and patent applications cited in this specification are incorporated herein by reference as if each individual publication, patent, or patent application were specifically and individually indicated to be incorporated by reference and set forth in its entirety herein. Any of the arrangements or actions described herein (or any part thereof) may be implemented as a system, a method, and/or a computer program product. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention. The computer readable storage medium may be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. 
A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM or Flash memory), a Static Random Access Memory (SRAM), a portable Compact Disc Read-Only Memory (CD-ROM), a Digital Versatile Disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire. Any computer readable program instructions described herein may be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. Any network herein may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device. 
Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, Instruction-Set-Architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like, and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network mentioned herein. In some embodiments, electronic circuitry including, for example, programmable logic circuitry, Field-Programmable Gate Arrays (FPGA), or Programmable Logic Arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention. Aspects of the various arrangements are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. Further, each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, may be implemented by computer readable program instructions. 
Any computer readable program instructions or steps herein may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks. The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks. The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). 
In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions. Any program described herein may be identified based upon the application for which it is implemented in a specific embodiment of the invention. However, it should be appreciated that any particular program nomenclature herein is used merely for convenience, and thus the invention should not be limited to use solely in any specific application identified and/or implied by such nomenclature.
11863340
DETAILED DESCRIPTION Throughout the disclosure, the expression “at least one of a, b or c” indicates any of: only a, only b, only c, both a and b, both a and c, both b and c, all of a, b, and c, or variations thereof. Terms used herein will be described in brief, and the disclosed embodiments of the disclosure will be described in detail. Although terms used in the disclosure are selected with general terms popularly used at present under the consideration of functions in the disclosure, the terms may vary according to the intention of those of ordinary skill in the art, judicial precedents, introduction of new technology, etc. In addition, in a specific case, the applicant may voluntarily select terms, and in this case, the meaning of the terms is disclosed in a corresponding description part of an embodiment of the disclosure. Thus, the terms used in the disclosure should be defined not by the simple names of the terms but by the meaning of the terms and the contents throughout the disclosure. Throughout the entirety of the specification of the disclosure, when it is assumed that a certain part includes a certain component, the term ‘including’ means that a corresponding component may further include other components unless specially described to the contrary. The term used in the embodiments of the disclosure such as “unit” or “module” indicates a unit for processing at least one function or operation, and may be implemented in hardware, software, or in a combination of hardware and software. Hereinafter, embodiments of the disclosure will be described in detail with reference to the attached drawings to allow those of ordinary skill in the art to easily carry out the embodiments of the disclosure. However, an embodiment of the disclosure may be implemented in several different forms, and is not limited to the embodiment of the disclosure described herein. 
To clearly describe an embodiment of the disclosure, parts that are not associated with the description have been omitted from the drawings, and throughout the entire disclosure, identical reference numerals refer to identical parts. The disclosure will primarily describe various components as “home appliances” or “appliances” as an example use case. However, it is noted that the principles disclosed within are broadly applicable to a variety of devices and are not limited to the contexts of appliances or residential homes. FIG. 1 is a diagram illustrating a system for controlling a home appliance according to an embodiment of the disclosure. Referring to FIG. 1, a system for controlling a home appliance (hereinafter, a home appliance control system) according to an embodiment of the disclosure may include a server device 110, a first home appliance 120, a user equipment 130, and a second home appliance 140. However, not all the illustrated components are essential components. A home appliance control system may be implemented with more or fewer components than those illustrated. The first home appliance 120 and the second home appliance 140, according to an embodiment of the disclosure, may communicate with the user equipment 130 through wireless fidelity (WiFi) communication. The first home appliance 120 and the second home appliance 140 may connect to the server device 110 through an access point (AP) device 150. The first home appliance 120 and the second home appliance 140 may perform WiFi communication with the AP device 150 and may connect to the server device 110 by connecting to the Internet through the AP device 150. The second home appliance 140, according to an embodiment of the disclosure, may be an ultra-wideband (UWB) device including a UWB communication module, and may measure a location of the user equipment 130 based on a UWB measurement signal. 
The first home appliance 120 according to an embodiment of the disclosure may not include a UWB communication module. The first home appliance 120 may provide various functions while communicating with the user equipment 130 and the server device 110. For example, the first home appliance 120 may connect to the server device 110 and may be registered in the server device 110. The first home appliance 120 may provide various functions through an application executed on the user equipment 130. The application may operate in conjunction with the server device 110. The application may provide a function such as monitoring, control, automation, voice assistant, etc., of the first home appliance 120. When the first home appliance 120 is registered in the server device 110, it may mean that device information about the first home appliance 120 (a model name, a serial number, a manufacturing date, etc.), user account information about the first home appliance 120, network information about the first home appliance 120 (an Internet protocol (IP) address, etc.), and so forth are stored in the server device 110. Thus, on an application executed on the user equipment 130, a user logging in through a user account may transmit a control command for controlling the first home appliance 120 to the server device 110, and the server device 110 may transmit the control command to the first home appliance 120 based on the network information about the first home appliance 120. To provide such an application function, the first home appliance 120 needs to be registered in the server device 110 and has to establish communication with the server device 110. The first home appliance 120 may not be able to provide an application function and various functions provided in the server device 110, in a new product state of not yet being registered in the server device 110 after release from a factory. Such a new product state may be referred to as an out-of-box (OOB) state. 
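The relay flow described above, in which the server device 110 stores device, account, and network information at registration and then forwards a logged-in user's control command using the stored network information, can be sketched as follows. This is a minimal illustration under stated assumptions, not the patent's implementation: the registry layout, field names, serial number, and account string are all hypothetical, and the function only returns the routing decision instead of opening a connection.

```python
# Hypothetical in-memory registry standing in for the server device 110;
# a real server would persist this and actually contact the appliance.
registry = {
    "SN-0001": {  # serial -> data stored when the appliance was registered
        "model": "air-cleaner",
        "account": "user@example.com",   # user account information
        "ip": "192.168.0.23",            # network information (IP address)
    },
}

def relay_command(account, serial, command):
    """Forward a control command from the user's app to the appliance,
    routed via the network information recorded at registration."""
    entry = registry.get(serial)
    if entry is None or entry["account"] != account:
        raise LookupError("appliance not registered to this account")
    # Return the routing decision (target IP, command) instead of sending.
    return (entry["ip"], command)

print(relay_command("user@example.com", "SN-0001", "power_on"))
# → ('192.168.0.23', 'power_on')
```

An appliance still in the out-of-box state has no registry entry, so the relay raises a `LookupError`, matching the text's point that an unregistered appliance cannot yet be served application functions.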
As described above, the new product state may mean a state where the first home appliance 120 is not registered in the server device 110. To register the first home appliance 120 in the server device 110, the first home appliance 120 has to switch to a network connection mode to establish a WiFi communication connection with the user equipment 130, and has to receive information for connection to the AP device 150 (a service set identifier (SSID) of the AP device 150, an ID of the AP device 150, a password, an authentication scheme, an encryption method, an authentication key, etc.) from the user equipment 130. In this case, to switch the first home appliance 120 to the network connection mode for WiFi communication connection with the user equipment 130, it is necessary for the user to operate a button of the first home appliance 120. For example, when the first home appliance 120 is an air cleaner, the user may switch the air cleaner to the network connection mode to establish WiFi communication with the user equipment 130 by long pressing an air volume button of the air cleaner or long pressing a reservation button of a remote control of the air cleaner. The method of switching to the network connection mode varies from home appliance to home appliance, so the user may capture a quick response (QR) code attached to a surface of the first home appliance 120 using the user equipment 130, or may perform near field communication (NFC) tagging by touching an NFC tag region of the first home appliance 120 using the user equipment 130, whereby the user equipment 130 may obtain a guide for switching the first home appliance 120 to the network connection mode. In a general registration process of a home appliance, information (a product name, a product serial number, a manufacturing date, etc.) of the home appliance may be automatically transmitted to the server device 110, but a location of the home appliance may be designated by a user input. 
For example, when a home appliance is a refrigerator, a user input to set a location of the refrigerator as a “kitchen” may be required. In this case, a user input to designate a location of a home appliance is required, causing a problem in terms of user convenience and a difficulty in designating an exact location of the home appliance. When a home appliance to be registered includes a UWB communication module, a UWB signal may be received from another home appliance including a UWB communication module, and an exact location of the home appliance to be registered may be measured based on the received UWB signal. However, in the case of a home appliance without a UWB communication module, it is difficult to measure an exact location of the home appliance. Herein, the first home appliance 120 to be registered is assumed to be a home appliance without a UWB communication module. Thus, according to an embodiment of the disclosure, a technique of utilizing the user equipment 130 including a UWB communication module and the second home appliance 140 including a UWB communication module to register the location of the first home appliance 120 without a UWB communication module is proposed. In order for the user to register the first home appliance 120 in the server device 110, a process of communication connection between the user equipment 130 and the first home appliance 120 is required. The first home appliance 120 to which the user equipment 130 is to connect, and/or a type of the first home appliance 120, may be recognized by an action of capturing a QR code displayed on the first home appliance 120 by using the user equipment 130, or by an action of placing the user equipment 130 adjacent to the NFC tag region of the first home appliance 120 to perform NFC tagging. Either action requires the user equipment 130 and the first home appliance 120 to be located very close to each other during registration of the first home appliance 120. 
Thus, when the location of the user equipment 130 is measured, the location of the user equipment 130 may be registered as the location of the first home appliance 120. According to an embodiment of the disclosure, the second home appliance 140 may include a UWB communication module, and the second home appliance 140 may be a home appliance already registered in the server device 110. According to an embodiment of the disclosure, the second home appliance 140 may serve as a reference point for measuring a relative location of the user equipment 130 located adjacent to the first home appliance 120, and thus may be secured in position due to a fixed location thereof in the house. For example, the second home appliance 140 may include, but is not limited to, an artificial intelligence (AI) speaker, an induction range, an illuminating device, etc. According to an embodiment of the disclosure, to receive a guide for registering the first home appliance 120 in the server device 110, the user may capture the QR code displayed on the first home appliance 120 at a distance very close to the first home appliance 120, or may touch the user equipment 130 to the NFC tag region of the first home appliance 120 to perform NFC tagging. When the user selects a QR capturing button of the user equipment 130 or performs NFC tagging, a location identification request signal may be transmitted from the user equipment 130 to the second home appliance 140. The second home appliance 140 according to an embodiment of the disclosure may measure a location of the user equipment 130 based on the received location identification request signal. The second home appliance 140 may measure a relative location measurement value of the user equipment 130 with respect to the second home appliance 140, based on a UWB signal that is the location identification request signal transmitted from a UWB antenna embedded in the user equipment 130. 
In this case, the location measurement value of the user equipment 130 may include azimuth information and elevation information about the user equipment 130 measured with respect to the second home appliance 140 and distance information about the user equipment 130 with respect to the second home appliance 140. That is, the second home appliance 140 may measure, as the location measurement value, coordinates at which the user equipment 130 is located in a spherical coordinate system having the second home appliance 140 as the origin. The second home appliance 140 according to an embodiment of the disclosure may transmit the measured location measurement value of the user equipment 130 to the server device 110. The server device 110 according to an embodiment of the disclosure may determine location information about the user equipment 130 based on a registered location information lookup table of a home appliance (for brevity, hereinafter, a “lookup table”). The server device 110 according to an embodiment of the disclosure may store a lookup table including location information about a registered home appliance in the house and a location measurement value of the registered home appliance. For example, when an air conditioner in the house is registered in the server device 110, the lookup table may store a model name of the air conditioner, location information about the air conditioner (a “living room” where the air conditioner is installed), and a location measurement value of the air conditioner (coordinates at which the air conditioner is located with respect to the second home appliance 140). The server device 110 according to an embodiment of the disclosure may compare the received location measurement value of the user equipment 130 with the location measurement value of the registered home appliance, included in the lookup table, to determine the location information about the user equipment 130. 
For example, when it is determined that the location measurement value of the user equipment 130 is most similar to the location measurement value of the registered air conditioner as a result of comparison between the location measurement value of the user equipment 130 and the lookup table, the location information about the user equipment 130 may be determined as the location information about the registered air conditioner. Thus, when the location information about the registered air conditioner is a “living room”, the location information about the user equipment 130 may be determined as the “living room”. Registration of the first home appliance 120 may be performed together with the process, performed by the server device 110 according to an embodiment of the disclosure, of determining the location information about the user equipment 130. When the user captures the QR code attached onto the surface of the first home appliance 120 to be registered, by using the user equipment 130, or performs NFC tagging by touching the user equipment 130 to the NFC tag region of the first home appliance 120, a registration guide for the first home appliance 120 may be displayed on the user equipment 130. According to an embodiment of the disclosure, when the first home appliance 120 switches to the network connection mode based on the registration guide and WiFi communication is established between the user equipment 130 and the first home appliance 120, information about the AP device 150 may be transmitted from the user equipment 130 to the first home appliance 120. The first home appliance 120 may connect to the server device 110 based on the received information about the AP device 150, and transmit information (a product name, a product serial number, a manufacturing date, etc.) of the first home appliance 120 to the server device 110. 
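The nearest-match comparison described above can be sketched as follows. The patent does not specify a similarity metric, so this sketch assumes one plausible choice: convert each (azimuth, elevation, distance) measurement, taken with the second home appliance 140 at the origin, to Cartesian coordinates and pick the registered appliance with the smallest Euclidean distance. The table contents, function names, and numeric values are illustrative assumptions.

```python
import math

def spherical_to_cartesian(azimuth_deg, elevation_deg, distance):
    """Convert an (azimuth, elevation, distance) measurement, taken with
    the UWB-capable second home appliance at the origin, to Cartesian."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = distance * math.cos(el) * math.cos(az)
    y = distance * math.cos(el) * math.sin(az)
    z = distance * math.sin(el)
    return (x, y, z)

def determine_location(ue_measurement, lookup_table):
    """Return the room label of the registered appliance whose stored
    measurement is closest to the user equipment's measurement."""
    ue_xyz = spherical_to_cartesian(*ue_measurement)

    def distance_to(entry):
        return math.dist(ue_xyz, spherical_to_cartesian(*entry["measurement"]))

    return min(lookup_table, key=distance_to)["location"]

lookup_table = [
    {"model": "air-conditioner", "location": "living room",
     "measurement": (40.0, 5.0, 3.2)},   # (azimuth deg, elevation deg, distance m)
    {"model": "refrigerator", "location": "kitchen",
     "measurement": (-120.0, 0.0, 6.5)},
]

print(determine_location((42.0, 4.0, 3.0), lookup_table))  # → living room
```

In the example, the equipment's measurement lies close to the air conditioner's stored coordinates, so the equipment (and hence the appliance being registered) inherits the "living room" label, as in the text.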
The first home appliance 120 may transmit user account information received from the user equipment 130 to the server device 110 to allow the server device 110 to store the user account information together. The server device 110, according to an embodiment of the disclosure, may use, as the location information about the first home appliance 120, the location information about the user equipment 130 determined based on the location measurement value of the user equipment 130 transmitted from the second home appliance 140. The server device 110, according to an embodiment of the disclosure, may register the first home appliance 120 in the server device 110, based on the information about the first home appliance 120 transmitted from the first home appliance 120 and the location information about the first home appliance 120. The server device 110, according to an embodiment of the disclosure, may update the lookup table by matching the information about the first home appliance 120, the location information about the first home appliance 120, and the location measurement value of the first home appliance 120 (which is the same as the location measurement value of the user equipment 130). According to an embodiment of the disclosure, in registration of the first home appliance 120, an exact location of which is difficult to measure because of the absence of a UWB communication module therein, the location of the user equipment 130 located closest to the first home appliance 120 may be used as the location of the first home appliance 120. In this case, by using the UWB communication module included in the user equipment 130 and the UWB communication module included in the second home appliance 140, the location of the user equipment 130 may be accurately measured. Thus, the location of the first home appliance 120 including no UWB communication module may be accurately recognized by the server device 110. 
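The lookup-table update performed at registration, matching the new appliance's information, its inherited location, and its measurement value into one stored record, could be modeled as below. The record fields mirror the items listed in the text (model name, serial number, installed place, coordinates with respect to the second home appliance 140); the class and method names are assumptions for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class ApplianceRecord:
    """One row of the registered-home-appliance location lookup table."""
    model: str
    serial: str
    location: str       # place where the appliance is installed, e.g. "kitchen"
    measurement: tuple  # (azimuth deg, elevation deg, distance m) w.r.t. appliance 140

@dataclass
class LookupTable:
    rows: list = field(default_factory=list)

    def register(self, model, serial, location, measurement):
        """Append a record for a newly registered appliance. Per the text,
        the location and measurement are those determined for the user
        equipment standing next to the appliance."""
        self.rows.append(ApplianceRecord(model, serial, location, measurement))

table = LookupTable()
# The first home appliance inherits the user equipment's location fix.
table.register("air-cleaner", "SN-0001", "living room", (42.0, 4.0, 3.0))
print(table.rows[0].location)  # → living room
```

Once the record is stored, later measurements (for example, from a phone standing near this appliance) can be matched against it the same way the air conditioner was matched in the earlier example.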
FIG. 2 is a block diagram illustrating a structure of a server device according to an embodiment of the disclosure. Referring to FIG. 2, the server device 110 may include a communication interface 210, a processor 220, and a memory 230. However, not all the illustrated components are essential components. The server device 110 may be implemented with more or fewer components than those illustrated. Hereinafter, the aforementioned components will be described sequentially. The communication interface 210 may include one or more components that enable communication between the server device 110 and the first home appliance 120, between the server device 110 and the user equipment 130, or between the server device 110 and the second home appliance 140. According to an embodiment of the disclosure, the communication interface 210 may receive a distance measurement request signal from the user equipment 130. The distance measurement request signal may be transmitted from the user equipment 130 to the server device 110 when a user input to select a QR capturing menu or an NFC tagging menu is received in a device registration graphical user interface (GUI) of the user equipment 130. The distance measurement request signal, according to an embodiment of the disclosure, may be a signal allowing the server device 110 to identify the existence of a home appliance including a UWB communication module among registered home appliances in relation to a user account. According to an embodiment of the disclosure, the communication interface 210 may transmit a UWB communication module activation signal to the user equipment 130 and the second home appliance 140 based on the received distance measurement request signal. The UWB communication module activation signal may induce activation of the UWB antennas respectively included in the user equipment 130 and the second home appliance 140 to induce a preparation stage for the second home appliance 140 to measure the location of the user equipment 130. 
According to an embodiment of the disclosure, the communication interface 210 may receive a location identification request signal from the user equipment 130, and transmit the received location identification request signal to the second home appliance 140. The location identification request signal may be transmitted from the user equipment 130 to the server device 110 when the user presses the QR capturing button of the user equipment 130, or when the user touches the NFC tag region of the first home appliance 120 using the user equipment 130 to perform NFC tagging. The location identification request signal, according to an embodiment of the disclosure, may induce the second home appliance 140 to measure a location measurement value of the user equipment 130 based on a UWB signal transmitted from the UWB antenna of the user equipment 130. According to an embodiment of the disclosure, the communication interface 210 may receive a location measurement value of the user equipment 130 from the second home appliance 140. In this case, the location measurement value of the user equipment 130 may include azimuth information and elevation information about the user equipment 130 with respect to the second home appliance 140 and distance information about the user equipment 130 with respect to the second home appliance 140. According to an embodiment of the disclosure, the communication interface 210 may receive information about the first home appliance 120 from the first home appliance 120. The first home appliance 120 may receive information about the AP device 150 from the user equipment 130 in a registration process, connect to the server device 110 based on the received information about the AP device 150, and transmit the information about the first home appliance 120 to the server device 110. According to an embodiment of the disclosure, the communication interface 210 may receive a control GUI screen request signal from the user equipment 130. 
In this case, the control GUI screen request signal may request a control GUI screen of a home appliance toward which the user equipment 130 is oriented. The processor 220 may control an overall operation of the server device 110 by using a program, an instruction, or information stored in the memory 230. The processor 220 may be implemented as one or more processors. The processor 220 may control an operation of components included in the server device 110. According to an embodiment of the disclosure, the processor 220 may determine whether there is a home appliance including a UWB communication module among home appliances registered in the server device 110, based on the distance measurement request signal transmitted from the user equipment 130. The processor 220 may identify a home appliance having these qualities and designate it for use in later processes as the second home appliance 140. According to an embodiment of the disclosure, the processor 220 may determine the location information about the user equipment 130, based on the location measurement value of the user equipment 130 transmitted from the second home appliance 140. The processor 220 may determine the location information about the user equipment 130 by using a registered-home-appliance location information lookup table 231 in the memory 230. According to an embodiment of the disclosure, the processor 220 may compare a location measurement value of a registered home appliance stored in the registered-home-appliance location information lookup table 231 with the location measurement value of the user equipment 130, in order to use location information about the registered home appliance determined to be closest as the location information about the user equipment 130. According to an embodiment of the disclosure, the processor 220 may register the first home appliance 120 based on the information about the first home appliance 120 received from the first home appliance 120. 
As the location information about the first home appliance 120, the location information about the user equipment 130 may be used. According to an embodiment of the disclosure, when the processor 220 receives the location measurement value of the user equipment 130 from the second home appliance 140 after receiving the control GUI screen request signal from the user equipment 130, the processor 220 may determine a third home appliance toward which the user equipment 130 is oriented. The processor 220 may determine the third home appliance toward which the user equipment 130 is oriented by using the registered-home-appliance location information lookup table 231 in the memory 230. According to an embodiment of the disclosure, the processor 220 may recognize the location of the user equipment 130 in the house through the location measurement value of the user equipment 130, and compare the location of the user equipment 130 with a location measurement value of a registered home appliance stored in the registered-home-appliance location information lookup table 231 to determine the third home appliance toward which the user equipment 130 is oriented. According to an embodiment of the disclosure, the processor 220 may select a GUI screen for controlling the determined third home appliance from a GUI list 232 for controlling registered home appliances in the memory 230, and provide the selected GUI screen for controlling the third home appliance to the user equipment 130. The memory 230 may store a program for processing by the processor 220 and store input/output data. For example, the memory 230 may store the registered-home-appliance location information lookup table 231 and the GUI list 232 for controlling the registered home appliances. 
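The text does not detail how the processor 220 decides which appliance the user equipment 130 is oriented toward. One plausible sketch, assuming the equipment's position and heading are known in a shared 2-D frame, is to pick the registered appliance whose bearing from the equipment deviates least from that heading. Everything below (the names, coordinates, and the bearing heuristic itself) is an illustrative assumption, not the patent's method.

```python
import math

def bearing_to(src, dst):
    """Azimuth in degrees from point src to point dst (x east, y north)."""
    return math.degrees(math.atan2(dst[1] - src[1], dst[0] - src[0]))

def angle_diff(a, b):
    """Smallest absolute difference between two angles, in degrees."""
    return abs((a - b + 180) % 360 - 180)

def pointed_at(ue_pos, ue_heading_deg, appliances):
    """Pick the registered appliance whose bearing from the user
    equipment deviates least from the equipment's heading."""
    return min(
        appliances,
        key=lambda a: angle_diff(bearing_to(ue_pos, a["pos"]), ue_heading_deg),
    )

appliances = [
    {"name": "TV", "pos": (4.0, 0.0)},
    {"name": "air-conditioner", "pos": (0.0, 5.0)},
]
print(pointed_at((0.0, 0.0), 85.0, appliances)["name"])  # → air-conditioner
```

With the equipment at the origin heading roughly north (85 degrees), the air conditioner at bearing 90 degrees wins over the TV at bearing 0 degrees, so its control GUI screen would be the one selected from the GUI list 232.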
The registered-home-appliance location information lookup table 231 may be information in which information (a model name, a serial number, and a manufacturing date) of home appliances registered in the server device 110, location information (places where the home appliances are installed) of the registered home appliances, and location measurement values (coordinates at which the home appliances are located with respect to the second home appliance 140) of the registered home appliances are matched and stored. The GUI list 232 for controlling the registered home appliances may be information where GUIs that may be provided to the user equipment 130 to control the home appliances registered in the server device 110 are stored. For example, when the air conditioner is registered in the server device 110, a GUI for controlling a temperature of the air conditioner or changing a mode of the air conditioner may be included in the GUI list 232 for controlling the registered home appliances. FIG. 3 is a diagram illustrating a user equipment, a second home appliance, and a server device according to an embodiment of the disclosure. According to an embodiment of the disclosure, the second home appliance 140 may include a processor 320, a communication module 322, a UWB communication module 324, and a memory 326. The second home appliance 140 may be an electronic device that performs a certain function. The second home appliance 140 may be arranged in a certain position in the house. The second home appliance 140 may serve as a reference point for measuring a relative location of the user equipment 130 located close to the first home appliance 120 and may be fixed at a certain position in the house without moving. 
The second home appliance140may include, for example, an AI speaker, an induction range, an illuminating device, a refrigerator, a kimchi refrigerator, a laundry machine, a television (TV), an air conditioner, an air cleaner, a steam closet, an oven, a microwave, an audio output device, a smart home hub device, etc. The second home appliance140may perform its original function by including a certain home appliance function module. For example, in a refrigerator, the home appliance function module may include a cooler, a container, a door, a temperature sensor, a door opening/closing sensor, a lamp, etc. In another example, in a laundry machine, the home appliance function module may include a washing tub, a motor, a door, a door opening/closing sensor, a water supply unit, a drain unit, etc. In another example, in a vacuum cleaner, the home appliance function module may include a vacuum suction assembly, a dust container, a brush, etc. The processor320may control an overall operation of the second home appliance140. The processor320may be implemented as one or more processors. The processor320may perform a certain operation by executing an instruction or a command stored in the memory326. The processor320may control an operation of components included in the second home appliance140. The communication module322may wirelessly or wiredly communicate with an external device. The communication module322may communicate with the user equipment130and the server device110. The communication module322may communicate with the user equipment130by using a short-range communication scheme. For example, the communication module322may communicate with the user equipment130through Bluetooth or WiFi communication connection. The communication module322may communicate with the server device110by using a long-range communication scheme. 
For example, the communication module322may communicate with the AP device150through WiFi and with the server device110through a long-range communication network connected to the AP device150. The communication module322may include a wireless communication module (e.g., a cellular communication module, a short-range wireless communication module, a global navigation satellite system (GNSS) communication module) or a wired communication module (e.g., a local area network (LAN) communication module or a power line communication module). The communication module322may perform short-range communication, and may use, for example, Bluetooth, Bluetooth Low Energy (BLE), short-range wireless communication (near field communication (NFC)), a wireless local area network (WLAN) (Wireless Fidelity (WiFi)), Zigbee, infrared data association (IrDA) communication, WiFi Direct (WFD), Ant+ communication, etc. In another example, the communication module322may perform long-range communication, and may communicate with an external device, for example, through a legacy cellular network, a 5th generation (5G) network, a next-generation communication network, the Internet, a computer network (e.g., a local area network (LAN) or a wide area network (WAN)), etc. The communication module322may establish communication with the user equipment130and the server device110under control of the processor320. The communication module322may transmit a control signal and data to the user equipment130and the server device110or receive a control signal and data from the user equipment130and the server device110. The second home appliance140may be registered at an account registered in the server device110and may communicate with the server device110. The second home appliance140may communicate with the user equipment130through a communication connection such as Bluetooth, WiFi, etc.
According to an embodiment of the disclosure, the second home appliance140may communicate with another home appliance through a home network. The UWB communication module324may include one or more components for allowing reception of a UWB signal transmitted from the UWB antenna included in the user equipment130. The UWB communication module324may include one or more components for causing communication between the user equipment130and the second home appliance140. According to an embodiment of the disclosure, the UWB communication module324may include one or more components for allowing reception of a UWB signal. According to an embodiment of the disclosure, the UWB communication module324may include one or more components for causing UWB communication. UWB communication, which is ultra-wideband communication, may mean wireless communication to transmit large-volume information with low power over a wider band than an existing spectrum. Unlike a BLE communication technique for inferring existence of a user equipment in a specific space, a UWB communication technique may accurately determine a location where the user equipment exists. When the UWB communication technique is used, azimuth information and elevation information about the user equipment130with respect to the second home appliance140and distance information about the user equipment130with respect to the second home appliance140may be accurately measured. That is, when the UWB communication technique is used, the location of the user equipment130with respect to the second home appliance140may be accurately measured as coordinates of a spherical coordinate system. The UWB communication module324according to an embodiment of the disclosure may include at least one UWB antenna. The at least one UWB antenna may receive a UWB signal transmitted from the UWB communication module314of the user equipment130, and measure the exact location of the user equipment130based on the received UWB signal.
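Since a UWB measurement yields azimuth, elevation, and distance, it can be treated as spherical coordinates and, where convenient, converted to Cartesian coordinates for comparison with stored values. A minimal sketch, assuming a conventional axis layout (the convention itself is an assumption, not mandated by the disclosure):

```python
import math

# Hedged sketch: convert a UWB measurement (azimuth, elevation, distance)
# into Cartesian coordinates relative to the second home appliance.
def spherical_to_cartesian(azimuth_deg, elevation_deg, distance_m):
    az, el = math.radians(azimuth_deg), math.radians(elevation_deg)
    x = distance_m * math.cos(el) * math.cos(az)
    y = distance_m * math.cos(el) * math.sin(az)
    z = distance_m * math.sin(el)
    return (x, y, z)

# A device 2 m away, straight ahead at the same height:
print(spherical_to_cartesian(0.0, 0.0, 2.0))
```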
The UWB communication module324according to an embodiment of the disclosure may establish UWB communication with the user equipment130under control of the processor320. The UWB communication module324may transmit a control signal and data to the user equipment130or receive a control signal and data from the user equipment130. The memory326may store various information, data, an instruction, a program, etc., required for an operation of the second home appliance140. The memory326may include at least one of or a combination of volatile memory or non-volatile memory. The memory326may include a storage medium of at least one type of a flash memory type, a hard disk type, a multimedia card micro type, a card type memory (e.g., an SD or XD memory, etc.), a RAM, an SRAM, a ROM, an EEPROM, a PROM, a magnetic memory, a magnetic disk, an optical disk, or the like. The memory326may correspond to a web storage or a cloud server that performs a storage function on the Internet. The user equipment130may include a processor310, a communication module312, a UWB communication module314, a memory316, and an input/output interface318. The processor310may control an overall operation of the user equipment130. The processor310may be implemented as one or more processors. The processor310may perform a certain operation by executing an instruction or a command stored in the memory316. The communication module312may wirelessly or wiredly communicate with an external device. The communication module312may communicate with the second home appliance140and the server device110. The communication module312may communicate with the second home appliance140through a short-range communication scheme. The communication module312may communicate with the server device110by using a long-range communication scheme.
The communication module312may include a wireless communication module (e.g., a cellular communication module, a short-range wireless communication module, a GNSS communication module) or a wired communication module (e.g., a LAN communication module or a power line communication module). The communication module312may perform short-range communication, and may use, for example, Bluetooth, BLE, short-range wireless communication (NFC), a WLAN (WiFi), Zigbee, IrDA communication, WFD, Ant+ communication, etc. In another example, the communication module312may perform long-range communication, and may communicate with an external device, for example, through a legacy cellular network, a 5G network, a next-generation communication network, the Internet, a computer network (e.g., a LAN or a WAN), etc. The communication module312may establish communication with the second home appliance140and the server device110under control of the processor310. The communication module312may transmit a control signal and data to the second home appliance140and the server device110or receive a control signal and data from the second home appliance140and the server device110. The UWB communication module314may include one or more components for allowing transmission of a UWB signal that may be received by the UWB antenna included in the second home appliance140. The UWB communication module314may include one or more components for causing communication between the user equipment130and the second home appliance140. The UWB communication module314according to an embodiment of the disclosure may include a UWB antenna. The UWB communication module314according to an embodiment of the disclosure may include at least three UWB antennas.
After receiving the UWB communication module activation signal from the server device110, when the user equipment130receives a user input to select the QR capturing button or performs NFC tagging, the location identification request signal may be transmitted from the user equipment130to the second home appliance140. The location identification request signal according to an embodiment of the disclosure may be a UWB signal transmitted from the UWB antenna of the user equipment130to the UWB antenna of the second home appliance140. The second home appliance140may accurately measure the location of the user equipment130with respect to the second home appliance140based on the location identification request signal that is transmitted to the second home appliance140by using the UWB antenna. The UWB communication module314may establish communication with the second home appliance140under control of the processor310, and may transmit or receive a control signal and data to or from the second home appliance140. The memory316may store various information, data, an instruction, a program, etc., required for an operation of the user equipment130. The memory316may include at least one of or a combination of volatile memory or non-volatile memory. The memory316may include a storage medium of at least one type of a flash memory type, a hard disk type, a multimedia card micro type, a card type memory (e.g., an SD or XD memory, etc.), a RAM, an SRAM, a ROM, an EEPROM, a PROM, a magnetic memory, a magnetic disk, an optical disk, or the like. The memory316may correspond to a web storage or a cloud server that performs a storage function on the Internet. The memory316may store an application for registering the first home appliance120or controlling the first home appliance120and the second home appliance140. The processor310may execute an application to register the first home appliance120or control the first home appliance120and the second home appliance140.
The application may provide registration of the first home appliance120and monitoring, control, automation, voice assistance, etc., of the first home appliance120and the second home appliance140. The memory316may previously store an application or receive an application from a cloud server and store the application. The input/output interface318may receive commands or data to be used in a component (e.g., the processor310) of the user equipment from the outside (e.g., from a user) of the user equipment130. The input/output interface318may include, for example, a touch screen, a touch pad, a key, a microphone, a mouse, a keyboard, or a digital pen (e.g., a stylus pen). The input/output interface318may include, for example, a display, a speaker, a vibration device, etc. The input/output interface318may provide a GUI related to an application and receive a user input being input through the GUI. The input/output interface318may have more abundant features than input/output interfaces of the first home appliance120and the second home appliance140. For example, the input/output interface318may include a touch screen, a key, a microphone, a speaker, a vibration device, etc., but the first home appliance120and the second home appliance140may include a limited number of keys and a small-size display. Embodiments of the disclosure may receive a control input to control the first home appliance120and the second home appliance140through the user equipment130, taking advantage of the fact that the user equipment130has more abundant input/output features than the first home appliance120and the second home appliance140. FIG.4is a block diagram illustrating a structure of a user equipment according to an embodiment of the disclosure. The user equipment130according to an embodiment of the disclosure may include the processor310, the communication module312, the UWB communication module314, the memory316, the input/output interface318, and a sensor410.
The user equipment130may include abundant input/output and sensor features when compared to the first home appliance120and the second home appliance140. For example, the input/output interface318may include a touch screen421, a touch panel422, a key423, a pen recognition panel424, a microphone425, a speaker426, etc. The sensor410may include an image sensor411, an acceleration sensor412, a gyro sensor413, an iris sensor414, a fingerprint sensor415, an illuminance sensor416, etc. The user equipment130may control the first home appliance120and the second home appliance140by using the input/output interface318and the sensor410. The user equipment130may execute an application for controlling the first home appliance120and the second home appliance140, and establish communication connection with the first home appliance120and the second home appliance140. The user equipment130may receive a control signal in various forms through an application. The control signal may be input through the touch screen421, the touch panel422, the key423, the pen recognition panel424, the microphone425, etc. The user equipment130may provide an output in various forms through an application. The output of the application may be output through the touch screen421, the speaker426, etc. In an embodiment of the disclosure, the touch screen421and the touch panel422may be formed as one piece. FIG.5is a flow diagram illustrating a method, performed by a server device, of controlling a home appliance according to an embodiment of the disclosure. In operation S501, the server device110, according to an embodiment of the disclosure, may receive a distance measurement request signal from the user equipment130. According to an embodiment of the disclosure, the user may register the first home appliance120in the server device110and desire to conveniently control the first home appliance120through an application installed in the user equipment130. 
In a process of registering the first home appliance120by using the user equipment130, information (a product name, a product serial number, a manufacturing date, etc.) of the first home appliance120may be automatically transmitted to the server device110. However, the first home appliance120lacks a UWB antenna, such that there may be a difficulty in automatically designating the location of the first home appliance120in the server device110by using the UWB communication technique. Thus, according to an embodiment of the disclosure, to register the first home appliance120in the server device110, a scheme may be proposed to measure the location of the user equipment130including the UWB antenna located close to the first home appliance120by using the UWB communication technique, and to use the measured location of the user equipment130as the location of the first home appliance120. According to an embodiment of the disclosure, the user equipment130may display, on a display, a registered-device control GUI for controlling home appliances registered in the server device110. The user equipment130may display a device registration GUI on the display by receiving a user input to select a certain button (e.g., a plus (+) indication) displayed on the registered-device control GUI. According to an embodiment of the disclosure, the user may register the first home appliance120in the server device110through the device registration GUI displayed on the user equipment130. According to an embodiment of the disclosure, to register the first home appliance120in the server device110, the first home appliance120has to receive information for connection to the AP device150from the user equipment130. To receive the information for connection to the AP device150, WiFi communication has to be established between the first home appliance120and the user equipment130such that the first home appliance120has to switch to the network connection mode. 
In this case, a method to switch to the network connection mode varies from home appliance to home appliance, such that the user may capture a QR code attached onto the first home appliance120using the user equipment130or perform NFC tagging by touching an NFC tag region of the first home appliance120using the user equipment130, whereby the user may obtain a guide for switching the first home appliance120to the network connection mode. To this end, the user equipment130may receive a user input to select a QR code capturing menu or an NFC tagging menu in the device registration GUI. In this case, when the user equipment130receives the user input to select the QR code capturing menu or the NFC tagging menu from the user, the user equipment130may display a QR code capturing screen or a screen on which a method for performing NFC tagging is described. A process of receiving the user input to select the QR code capturing menu or the NFC tagging menu from the user equipment130may be the first process to register the first home appliance120in the server device110. Thus, at this time, the user equipment130may transmit a distance measurement request signal to the server device110. According to an embodiment of the disclosure, when the user equipment130receives the user input to select the QR code capturing menu or the NFC tagging menu in the device registration GUI, the user equipment130may transmit the distance measurement request signal to the server device110. According to an embodiment of the disclosure, when the server device110receives the distance measurement request signal from the user equipment130, the server device110may identify existence of a home appliance having embedded therein a UWB antenna capable of measuring the location of the user equipment130(hereinafter, a UWB home appliance) among home appliances registered in the server device110.
In operation S502, the server device110, according to an embodiment of the disclosure, may transmit a UWB communication module activation signal to the user equipment130and to the second home appliance140having embedded therein a UWB antenna, based on the received distance measurement request signal. According to an embodiment of the disclosure, when the server device110receives the distance measurement request signal from the user equipment130, the server device110may identify existence of the UWB home appliance registered in the same user account as a user account of the user equipment130in the server device110. In this case, the registered UWB home appliance may be a home appliance that serves as a reference point with respect to which the location of the user equipment130is measured. The server device110may store location information about another registered home appliance in the house with respect to the registered UWB home appliance, as a lookup table (hereinafter, a registered-home appliance location information lookup table). According to an embodiment of the disclosure, the UWB home appliance registered in the server device110may be the second home appliance140. According to an embodiment of the disclosure, where there is a registered UWB home appliance, the server device110may transmit the UWB communication module activation signal to the user equipment130and the second home appliance140. According to an embodiment of the disclosure, the UWB antenna included in the second home appliance140may receive a UWB signal transmitted from the UWB antenna included in the user equipment130. In this case, the UWB communication module activation signal may be a signal for activating sensors of the UWB antennas respectively included in the user equipment130and the second home appliance140. 
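Operations S501 and S502 can be sketched as a small server-side routine: on a distance measurement request, check whether a UWB home appliance is registered under the same account and, if so, issue activation signals for both endpoints. The record layout, function name, and return value are assumptions for illustration only.

```python
# Illustrative sketch of operations S501-S502; all names are assumptions.
def handle_distance_measurement_request(account_id, registered_appliances):
    # Find registered appliances with an embedded UWB antenna on this account.
    uwb_appliances = [a for a in registered_appliances
                      if a["account"] == account_id and a["has_uwb_antenna"]]
    if not uwb_appliances:
        return None  # no reference point available for UWB ranging
    reference = uwb_appliances[0]  # e.g., the second home appliance
    # Activation signals for the user equipment and the reference appliance.
    return {
        "activate_user_equipment_uwb": True,
        "activate_appliance_uwb": reference["serial"],
    }

appliances = [
    {"account": "user1", "serial": "SN-AC", "has_uwb_antenna": False},
    {"account": "user1", "serial": "SN-HUB", "has_uwb_antenna": True},
]
result = handle_distance_measurement_request("user1", appliances)
print(result["activate_appliance_uwb"])
```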
In operation S503, the server device110, according to an embodiment of the disclosure, may receive, from the second home appliance140, a location measurement value of the user equipment130measured with respect to the second home appliance140based on the UWB signals of the user equipment130and the second home appliance140. On the other hand, the server device110may receive the location measurement value of the user equipment130measured with respect to the second home appliance140, from the user equipment130. According to an embodiment of the disclosure, to receive a guide for switching the first home appliance120to the network connection mode, the user may capture the QR code attached to the first home appliance120by using the user equipment130or touch an NFC tag region of the first home appliance120using the user equipment130to perform NFC tagging. In this case, to capture the QR code attached to the first home appliance120, the user equipment130needs to be located very close to the first home appliance120. To cause NFC tagging between the first home appliance120and the user equipment130, the user equipment130has to be attached to a certain region of the first home appliance120or be placed very close to the certain region, such that the user equipment130may be located very close to the first home appliance120in this case. Thus, in the case that the location of the user equipment130may be measured at a moment when the user presses a QR code capturing button or NFC tagging is performed, the measured location of the user equipment130may be used as the location of the first home appliance120. According to an embodiment of the disclosure, when the user equipment130receives a user input to select the QR capturing button or NFC tagging is performed between the user equipment130and the first home appliance120, the location identification request signal may be transmitted from the user equipment130to the second home appliance140. 
According to an embodiment of the disclosure, the location identification request signal may be a UWB signal transmitted from the UWB antenna included in the user equipment130to the UWB antenna included in the second home appliance140. According to an embodiment of the disclosure, the second home appliance140may measure a relative location measurement value of the user equipment130with respect to the second home appliance140, based on a UWB signal that is the received location identification request signal. In this case, the location measurement value of the user equipment130may include azimuth information and elevation information about the user equipment130measured with respect to the second home appliance140and distance information between the second home appliance140and the user equipment130. The server device110according to an embodiment of the disclosure may receive the location measurement value of the user equipment130from the second home appliance140. In operation S504, the server device110, according to an embodiment of the disclosure, may determine the location information about the user equipment130based on the location measurement value. According to an embodiment of the disclosure, the server device110may determine the location information about the user equipment130based on a location information lookup table of registered home appliances (hereinafter, a "lookup table") stored in the server device110. According to an embodiment of the disclosure, the server device110may have stored a lookup table including location information and location measurement values of home appliances registered in the server device110. Herein, the location measurement values of the registered home appliances may mean values measured with respect to the second home appliance140, like the location measurement value of the user equipment130.
The server device110according to an embodiment of the disclosure may compare the location measurement value of the user equipment130with the location measurement values of the registered home appliances, included in the lookup table, to identify a registered home appliance determined to be most similar to the location measurement value of the user equipment130. The server device110according to an embodiment of the disclosure may determine the location information about the user equipment130by using the location information about the registered home appliance, which is determined to be most similar to the location measurement value of the user equipment130, as the location information about the user equipment130. Herein, the location information may mean information about a place, such as a room or section of a building, where an appliance is installed. For example, when a TV is installed in a living room of a house, the location information about the TV may indicate the living room. In operation S505, the server device110, according to an embodiment of the disclosure, may receive information about the first home appliance120from the first home appliance120having no UWB antenna embedded therein. According to an embodiment of the disclosure, a process of measuring the location of the user equipment130with respect to the second home appliance140may be performed simultaneously with a process of registering the first home appliance120. The process of registering the first home appliance120may be performed through WiFi communication established between the first home appliance120and the user equipment130. According to an embodiment of the disclosure, the user equipment130may receive a user input to capture a QR code attached to the first home appliance120, thus identifying a uniform resource locator (URL) address included in the QR code and provide a guide for switching the first home appliance120to the network connection mode to the user. 
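The most-similar comparison described above is essentially a nearest-neighbor lookup over the stored measurement values: the registered appliance closest to the user equipment's measurement lends its place label. A hedged sketch, with illustrative table rows and names:

```python
import math

# Sketch: find the registered appliance whose stored measurement value is
# closest (by Euclidean distance) to the user equipment's measurement value,
# and reuse its place label as the user equipment's location information.
def infer_location(ue_value, lookup_table):
    def dist(a, b):
        return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))
    closest = min(lookup_table, key=lambda row: dist(row["value"], ue_value))
    return closest["place"]

table = [
    {"name": "TV", "place": "living room", "value": (3.0, 0.5, 0.0)},
    {"name": "oven", "place": "kitchen", "value": (-2.0, 4.0, 0.0)},
]
print(infer_location((2.8, 0.6, 0.1), table))
```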
The user equipment130may establish NFC communication with the first home appliance120by performing NFC tagging with the first home appliance120. The user equipment130may be provided with the guide for switching the first home appliance120to the network connection mode from the first home appliance120through the established NFC communication, and provide the provided guide to the user. In this case, the user may manipulate the first home appliance120according to the provided guide to switch the first home appliance120to the network connection mode, and establish WiFi communication between the first home appliance120and the user equipment130. The first home appliance120, according to an embodiment of the disclosure, may use a software enabled AP (SoftAP) that enables the first home appliance120to be recognized as a virtual AP. The SoftAP may be a WLAN client, but may be implemented as software serving as a wireless AP. The SoftAP may operate like the wireless AP. The first home appliance120may drive the SoftAP by using a WiFi module. The user equipment130may establish WiFi communication connection with the first home appliance120by connecting to the SoftAP of the first home appliance120, and perform WiFi communication with the first home appliance120. WiFi communication connection between the user equipment130and the first home appliance120by using the SoftAP may correspond to WiFi Direct. According to an embodiment of the disclosure, through WiFi communication established between the first home appliance120and the user equipment130, information about the AP device150(an SSID of the AP device150, an ID of the AP device150, a password, an authentication scheme, an encryption method, an authentication key, etc.) may be transmitted from the user equipment130to the first home appliance120. 
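The access-point information listed above (SSID, ID, password, authentication scheme, encryption method, authentication key) might be packaged for transmission over the SoftAP connection roughly as follows; the key names and message shape are assumptions, not a protocol defined by the disclosure.

```python
# Hypothetical provisioning payload sent from the user equipment to the
# first home appliance over the SoftAP (WiFi Direct-like) connection.
ap_info = {
    "ssid": "home-ap",
    "ap_id": "ap-001",
    "password": "example-password",
    "auth_scheme": "WPA2-PSK",
    "encryption": "AES",
}

def build_provisioning_message(ap_info, user_account):
    # The appliance uses these credentials to join the AP device, then
    # registers itself with the server under the given user account.
    return {"type": "provision", "ap": ap_info, "account": user_account}

msg = build_provisioning_message(ap_info, "user1")
print(msg["ap"]["ssid"])
```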
The first home appliance120may connect to the server device110based on the information about the AP device150, and transmit information (a product name, a product serial number, a manufacturing date, etc.) of the first home appliance120to the server device110. The first home appliance120may transmit user account information received from the user equipment130to the server device110. In operation S506, the server device110, according to an embodiment of the disclosure, may register the first home appliance120in the server device110based on the received information about the first home appliance120and the location information about the user equipment130. The server device110, according to an embodiment of the disclosure, may use the determined location information about the user equipment130as the location information about the first home appliance120. According to an embodiment of the disclosure, when the user registers the first home appliance120in the server device110, the user needs to locate the user equipment130at a distance very close to the first home appliance120and thus may use the location information about the user equipment130as the location information about the first home appliance120. For example, to register a refrigerator installed in the kitchen in the server device110, the user may obtain a guide for registering the refrigerator in the server device110by capturing the QR code attached onto a front surface of the refrigerator, and at this time, the location information about the user equipment130may be determined. The location of the user equipment130determined when the user captures the QR code on the front surface of the refrigerator is the same as the location of the refrigerator, such that the location information about the user equipment130determined at this time may be used as the location information about the refrigerator. 
The server device110according to an embodiment of the disclosure may register the first home appliance120in the server device110, based on the location information about the first home appliance120that is the same as the location information about the user equipment130and the information about the first home appliance120. The server device110, according to an embodiment of the disclosure, may register the first home appliance120at a certain account of the server device110, based on the location information about the first home appliance120and the information about the first home appliance120. The server device110, according to an embodiment of the disclosure, may match the information about the first home appliance120, the location information about the first home appliance120(which is the same as the location information about the user equipment130), and the location measurement value of the first home appliance120(which is the same as the location measurement value of the user equipment130) to update them in the location information lookup table of registered home appliances. According to an embodiment of the disclosure, after the first home appliance120is registered in the server device110, the first home appliance120may transmit state information, sensor information, monitoring information, a support request, a data processing request, etc., of the first home appliance120to the server device110. After the first home appliance120is registered in the server device110, the first home appliance120may receive a control signal from the server device110and operate. Referring toFIG.6, a detailed description will be made of an operation, performed by the server device110, of receiving a location measurement value of the user equipment130from the second home appliance140and determining location information about the user equipment130. 
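The registration step above, in which the first home appliance inherits the user equipment's location information and measurement value and the lookup table is updated, can be sketched as follows; the field names and function signature are illustrative assumptions.

```python
# Illustrative sketch of operation S506: register the first home appliance
# using the user equipment's location as its own, then update the
# registered-home-appliance location information lookup table.
def register_appliance(lookup_table, appliance_info, ue_location_info, ue_measurement):
    entry = {
        "product_name": appliance_info["product_name"],
        "serial": appliance_info["serial"],
        "manufacturing_date": appliance_info["manufacturing_date"],
        "location_info": ue_location_info,    # same as the user equipment's
        "measurement_value": ue_measurement,  # same as the user equipment's
    }
    lookup_table.append(entry)
    return entry

table = []
entry = register_appliance(
    table,
    {"product_name": "refrigerator", "serial": "SN789", "manufacturing_date": "2023-06-01"},
    "kitchen",
    (1.2, -0.4, 0.0),
)
print(entry["location_info"])
```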
FIG.6is a sequence diagram illustrating a method, performed by a server device in cooperation with other devices, of determining location information about a user equipment according to an embodiment of the disclosure. The user equipment130according to an embodiment of the disclosure may execute an IoT application in operation S601, and transmit an ID and a password for execution of the IoT application to the server device110in operation S602. The IoT application installed in the user equipment130, according to an embodiment of the disclosure, may be an application capable of providing a function such as monitoring, control, automation, voice assistance, etc., of a home appliance registered at a certain account of the server device110. The server device110according to an embodiment of the disclosure may perform user authentication in operation S603and transmit an authentication result to the user equipment130in operation S604. The server device110according to an embodiment of the disclosure may determine whether the ID and the password received from the user equipment130match an ID and a password stored in the server device110to perform user authentication. The server device110according to an embodiment of the disclosure may perform user authentication, and when the user equipment130is authenticated as a legitimate user, an authentication result may be transmitted to the user equipment130. The server device110, according to an embodiment of the disclosure, may provide the registered-device control GUI to the user equipment130. Referring to an illustrative scenario700adepicted inFIG.7, registered home appliances (an illuminating device, an air conditioner, a TV, a speaker) may be controlled through the registered-device control GUI displayed on the display of the user equipment130. 
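The authentication check in operations S602 to S604 — comparing the received ID and password against the values stored in the server device — might look like the following sketch. The credential store, function name, and use of SHA-256 hashing are assumptions for illustration; a production server would use per-user salted hashes rather than a plain hash.

```python
# Minimal sketch of the ID/password check performed by the server device.
# STORED_CREDENTIALS and the hashing scheme are illustrative assumptions.
import hashlib
import hmac

STORED_CREDENTIALS = {"user@example.com": hashlib.sha256(b"s3cret").hexdigest()}

def authenticate(user_id: str, password: str) -> bool:
    """Return True when the received ID/password pair matches the stored pair."""
    stored_hash = STORED_CREDENTIALS.get(user_id)
    if stored_hash is None:
        return False
    candidate = hashlib.sha256(password.encode("utf-8")).hexdigest()
    # Constant-time comparison avoids leaking information through timing.
    return hmac.compare_digest(stored_hash, candidate)
```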
In operation S605, the user equipment130, according to an embodiment of the disclosure, may receive a user input to select a QR capturing menu or an NFC tagging menu for obtaining a registration guide of a home appliance to be registered, from a device registration GUI. Referring to the illustrative scenario700aofFIG.7, the registered-device control GUI may be displayed on the display of the user equipment130, and the user may control not only the registered home appliances, but also attempt registration of a new home appliance, through the registered-device control GUI. According to an embodiment of the disclosure, the user equipment130may display the device registration GUI on the display by receiving a user input to select a plus button710that is a certain button displayed on the registered-device control GUI. Referring to an illustrative scenario700bdepicted inFIG.7, the device registration GUI may be displayed on the display of the user equipment130, and a QR capturing menu720or an NFC tagging menu730may be included in the device registration GUI. The user may start a series of processes for registering a home appliance by selecting the QR capturing menu720or the NFC tagging menu730displayed on the device registration GUI. In operation S606, the user equipment130, according to an embodiment of the disclosure, may transmit a distance measurement request signal from the user equipment130to the server device110in response to selection of the QR capturing menu or the NFC tagging menu. The distance measurement request signal according to an embodiment of the disclosure may be a signal serving as a trigger for determining whether a home appliance having embedded therein a UWB antenna capable of measuring a location of the user equipment130is registered in the server device110. 
In operation S607, the server device110, according to an embodiment of the disclosure, may determine whether there is a UWB device having embedded therein a UWB antenna among home appliances registered at a certain account of the server device110. The UWB device, according to an embodiment of the disclosure, may be a device that serves as a reference point with respect to which the location of the user equipment130may be measured. For the described method ofFIG.6, the second home appliance140is such a UWB device, and is so identified at operation S607. In operations S608and S609, the server device110, according to an embodiment of the disclosure, may transmit the UWB communication module activation signal to the user equipment130and the second home appliance140. Referring to an illustrative scenario700cdepicted inFIG.7, the server device110may transmit the UWB communication module activation signal to the user equipment130, and to the second home appliance140that is a UWB device registered in the server device110. According to an embodiment of the disclosure, the UWB communication module activation signal may be a signal inducing activation of the UWB antennas respectively included in the user equipment130and the second home appliance140to exchange UWB signals with the user equipment130and the second home appliance140. In operation S610, the user equipment130, according to an embodiment of the disclosure, may receive an input to select a QR capturing button or detect that NFC tagging is performed. According to an embodiment of the disclosure, to receive a guide for switching a home appliance to be registered to the network connection mode, the user may capture the QR code attached to the home appliance by using the user equipment130or touch a home appliance using the user equipment130to perform NFC tagging. 
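The check in operation S607 — scanning the home appliances registered at an account for one with an embedded UWB antenna that can serve as the reference point — reduces to a simple search. The registry layout and the `has_uwb` flag below are illustrative assumptions.

```python
# Sketch of operation S607: find a registered appliance with a UWB antenna
# to act as the reference point for measuring the user equipment's location.
def find_uwb_reference(registered_appliances):
    for appliance in registered_appliances:
        if appliance.get("has_uwb"):
            return appliance
    return None  # no UWB device registered; the location cannot be measured

devices = [
    {"name": "TV", "has_uwb": False},
    {"name": "AI speaker", "has_uwb": True},   # plays the role of the second home appliance
    {"name": "air conditioner", "has_uwb": False},
]
reference = find_uwb_reference(devices)
```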
Referring to an illustrative scenario800adepicted inFIG.8, when a home appliance to be registered is a refrigerator, the user may capture the QR code attached onto the front surface of the refrigerator, or may touch a certain region (an NFC tag region) on the front surface of the refrigerator using the user equipment130to perform NFC tagging between the refrigerator and the user equipment130. When the user equipment130captures the QR code attached onto the front surface of the refrigerator or touches the NFC tag region of the refrigerator to perform NFC tagging, the user equipment130is expected to be located at a distance very close to the refrigerator. Thus, when the user equipment130receives the user input to capture the QR code or detects that NFC tagging is performed, and the location of the user equipment130can be measured, that location may be used as the location of the refrigerator. In operation S611, the user equipment130, according to an embodiment of the disclosure, may transmit the location identification request signal to the second home appliance140. According to an embodiment of the disclosure, when the user equipment130captures the QR code attached to a home appliance (the refrigerator) to be registered or performs NFC tagging with the home appliance (the refrigerator) to be registered, the user equipment130may transmit the location identification request signal to the second home appliance140. According to an embodiment of the disclosure, the location identification request signal may be a UWB signal transmitted from the UWB antenna included in the user equipment130to the UWB antenna included in the second home appliance140. Referring to an illustrative scenario800bdepicted inFIG.8, the location identification request signal may be transmitted from the user equipment130, which is located very close to the refrigerator, to the second home appliance140.
The second home appliance140may measure the location of the user equipment130based on the location identification request signal. In operation S612, the second home appliance140, according to an embodiment of the disclosure, may measure the location measurement value of the user equipment130based on the UWB signal that is the location identification request signal. The location measurement value of the user equipment130, according to an embodiment of the disclosure, may include azimuth information and elevation information about the user equipment130measured with respect to the second home appliance140and distance information between the second home appliance140and the user equipment130. In addition, the location measurement value of the user equipment130, according to an embodiment of the disclosure, may include information related to existence of a wall on a linear distance between the second home appliance140and the user equipment130. Referring to an illustrative scenario900adepicted inFIG.9, the user equipment130may include three UWB antennas ANT1, ANT2, and ANT3. A tag A910, a tag B920, and a tag C930may indicate UWB antennas included in the second home appliance140. The tag A910may detect UWB signals received from the antenna ANT1and the antenna ANT2, but may fail to detect a UWB signal received from the antenna ANT3. The tag B920may detect the UWB signals received from the antennas ANT1, ANT2, and ANT3. The tag C930may detect the UWB signals received from the antenna ANT2and the antenna ANT3, but may fail to detect the UWB signal received from the antenna ANT1. Thus, the relative location of the user equipment130may be measured as a different value according to the location of the second home appliance140. 
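The location measurement value described above bundles four pieces of information: distance, azimuth, elevation, and the wall/obstacle indicator. A minimal sketch of such a record is shown below; the class and field names are assumptions, not the format used by the disclosure.

```python
# Illustrative container for a location measurement value measured by the
# second home appliance with respect to itself. Names are assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class LocationMeasurement:
    distance_m: float      # D: distance between the UWB anchor and the user equipment
    azimuth_deg: float     # Φ: azimuth measured with respect to the anchor
    elevation_deg: float   # θ: elevation measured with respect to the anchor
    obstacle: bool         # whether a wall lies on the straight path between them

ue_measurement = LocationMeasurement(
    distance_m=1.0, azimuth_deg=220.0, elevation_deg=100.0, obstacle=False
)
```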
Referring to data of a data chart900bpresented inFIG.9, the UWB antenna included in the second home appliance140may measure a time-of-flight (ToF) result (or distance) between the user equipment130and the second home appliance140, an angle of arrival (AoA) azimuth result of the user equipment130with respect to the second home appliance140, and an AoA elevation result. According to an embodiment of the disclosure, the UWB antenna included in the second home appliance140may measure a distance between the second home appliance140and the user equipment130by using a double-sided two-way ranging (DS-TWR) technique. The DS-TWR technique may determine a time of flight of a UWB signal transmitted from the user equipment130and multiply the determined time of flight by the speed of light to measure a distance between the second home appliance140and the user equipment130. According to an embodiment of the disclosure, the UWB antenna included in the second home appliance140may measure an azimuth and an elevation of the user equipment130with respect to the second home appliance140by using an AoA-based positioning method. In this case, referring to an example positional chart900cpresented inFIG.9, the azimuth (Φ) may be an angle formed between a z axis and the user equipment130when the second home appliance140is assumed to exist at the origin. The elevation (θ) may be an angle formed between an x axis and the user equipment130when the second home appliance140is assumed to exist at the origin. In operation S613, the second home appliance140, according to an embodiment of the disclosure, may determine the location measurement value of the user equipment130. In operation S614, the second home appliance140, according to an embodiment of the disclosure, may transmit the location measurement value of the user equipment130to the server device110.
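The ranging arithmetic above can be sketched numerically: DS-TWR yields a time of flight, which multiplied by the speed of light gives the distance, and the AoA azimuth and elevation can then place the user equipment in space. Note that the conversion below uses a conventional spherical-coordinate layout as an assumption; the positional chart inFIG.9defines its own axis convention.

```python
# Sketch of the DS-TWR distance computation and an illustrative AoA-based
# placement of the user equipment relative to the UWB anchor at the origin.
# The spherical-to-Cartesian convention here is an assumption for illustration.
import math

SPEED_OF_LIGHT = 299_792_458.0  # m/s

def tof_to_distance(tof_seconds: float) -> float:
    """Distance between the UWB antennas from the measured time of flight."""
    return tof_seconds * SPEED_OF_LIGHT

def aoa_to_cartesian(distance_m, azimuth_deg, elevation_deg):
    """Place the user equipment relative to the anchor at the origin."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = distance_m * math.sin(el) * math.cos(az)
    y = distance_m * math.sin(el) * math.sin(az)
    z = distance_m * math.cos(el)
    return (x, y, z)

# A time of flight of about 3.34 ns corresponds to roughly one metre.
d = tof_to_distance(3.3356e-9)
point = aoa_to_cartesian(1.0, 0.0, 90.0)
```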
Alternatively, the second home appliance140may transmit the location measurement value of the user equipment130to the user equipment130, which may then transmit the location measurement value thereof to the server device110. In operation S615, the server device110, according to an embodiment of the disclosure, may determine the location information about the user equipment130based on the registered-home-appliance location information lookup table. A method, performed by the server device110, of determining the location information about the user equipment130will be described in detail with reference toFIG.10. In the illustrative embodiment described with reference toFIGS.6to9, an example is described where the first home appliance120is a home appliance not registered in the server device110, but the disclosure is not limited thereto. That is, the first home appliance120may be a home appliance already registered in the server device110, wherein the server device110does not yet store the location information measurement value of the first home appliance120. The user equipment130may perform an NFC tagging or QR code capturing process of the first home appliance120, and in this case, the location information measurement value measured between the user equipment130and the second home appliance140may be matched to the first home appliance120and stored. FIG.10depicts an illustrative example of operation of a method, performed by a server device, of determining location information about a user equipment according to an embodiment of the disclosure. According to an embodiment of the disclosure, the server device110may have stored a lookup table including location information and location measurement values of home appliances registered in the server device110. Herein, a lookup table including location information and location measurement values of registered home appliances may be referred to as a “registered-home-appliance location information lookup table”.
Referring toFIG.10, when an induction range, a laundry machine, and an air conditioner are registered in the server device110, the location information and the location measurement values of the registered home appliances may be stored in a lookup table1010. The location information about the registered home appliance, according to an embodiment of the disclosure, may mean a location where the registered home appliance is installed. Referring to the lookup table1010ofFIG.10, location information about the induction range may be stored as a kitchen, location information about the laundry machine may be stored as a laundry room, and location information about the air conditioner may be stored as a living room. The location measurement value of the registered home appliance according to an embodiment of the disclosure may include an azimuth (Φ) parameter value and an elevation (θ) parameter value of the registered home appliance with respect to the second home appliance140, a distance (D) between the second home appliance140and the registered home appliance, and a line of sight indicator value indicating whether an obstacle exists on a linear path between the registered home appliance and the second home appliance140. Referring to the lookup table1010ofFIG.10, a location measurement value of the induction range may include a distance value (D: 0.6), an azimuth parameter value (Φ: 240 degrees), an elevation parameter value (θ: 90 degrees), and information indicating that no obstacle exists on the linear path. The location measurement value of the laundry machine may include a distance value (D: 3.5), an azimuth parameter value (Φ: 120 degrees), an elevation parameter value (θ: 95 degrees), and information indicating that an obstacle exists on the linear path.
The location measurement value of the air conditioner may include a distance value (D: 2.2), an azimuth parameter value (Φ: 45 degrees), an elevation parameter value (θ: 80 degrees), and information indicating that no obstacle exists on the linear path. The server device110according to an embodiment of the disclosure may compare location measurement values of home appliances stored in the lookup table1010with the location measurement value of the user equipment130and determine, as the location information about the user equipment130, location information about a registered home appliance determined to be most similar to the location measurement value of the user equipment130. A location measurement value1020of the user equipment130ofFIG.10may include a distance value (D: 1.0), an azimuth parameter value (Φ: 220 degrees), an elevation parameter value (θ: 100 degrees), and information indicating that no obstacle exists on the linear path. In this case, the server device110may compare the location measurement value1020of the user equipment130with the location measurement values of the induction range, the laundry machine, and the air conditioner and determine that the location measurement value1020of the user equipment130is most similar to the location measurement value of the induction range. When the server device110determines that the location measurement value1020of the user equipment130is most similar to the location measurement value of the induction range, the location information about the induction range may be determined as the location information about the user equipment130. The server device110according to an embodiment of the disclosure may determine a kitchen, which is location information about the induction range, as the location information about the user equipment130.
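The comparison described above — scoring each registered appliance's stored measurement against the user equipment's measurement, ruling out candidates whose line-of-sight indicator contradicts the user equipment's, and selecting the most similar entry — can be sketched as follows. The disclosure only says "most similar", so the particular scoring and weighting of distance versus angles below is an assumption; the sample values come from the lookup table1010and the measurement value1020ofFIG.10.

```python
# Sketch of the "most similar" lookup. The similarity metric (distance plus
# scaled angular differences) and the field names are illustrative assumptions.
def angle_diff(a: float, b: float) -> float:
    """Smallest absolute difference between two angles, in degrees."""
    d = abs(a - b) % 360
    return min(d, 360 - d)

def similarity_score(m1: dict, m2: dict) -> float:
    """Lower is more similar; 90 degrees is weighted like one distance unit."""
    return (abs(m1["D"] - m2["D"])
            + angle_diff(m1["azimuth"], m2["azimuth"]) / 90.0
            + angle_diff(m1["elevation"], m2["elevation"]) / 90.0)

def locate(ue: dict, table: dict) -> str:
    # Rule out appliances whose line-of-sight flag contradicts the UE's.
    candidates = {k: v for k, v in table.items()
                  if v["measurement"]["obstacle"] == ue["obstacle"]}
    best = min(candidates.items(),
               key=lambda kv: similarity_score(ue, kv[1]["measurement"]))
    return best[1]["location"]

table = {
    "induction": {"location": "kitchen",
                  "measurement": {"D": 0.6, "azimuth": 240, "elevation": 90, "obstacle": False}},
    "laundry":   {"location": "laundry room",
                  "measurement": {"D": 3.5, "azimuth": 120, "elevation": 95, "obstacle": True}},
    "aircon":    {"location": "living room",
                  "measurement": {"D": 2.2, "azimuth": 45, "elevation": 80, "obstacle": False}},
}
ue = {"D": 1.0, "azimuth": 220, "elevation": 100, "obstacle": False}
room = locate(ue, table)
```

With the FIG.10 values, the induction range scores closest, so the user equipment is placed in the kitchen; the laundry room entry is excluded up front because its obstacle flag disagrees with the user equipment's measurement.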
The server device110, according to an embodiment of the disclosure, may store information1030, including the determined location information and the location measurement value of the user equipment130. FIG.11further depicts an illustrative example of operation of a method, performed by a server device, of determining location information about a user equipment according to an embodiment of the disclosure. Referring toFIG.11, a refrigerator may be a home appliance not registered in the server device110, and the user equipment130may be located at a distance very close to the refrigerator to register the refrigerator in the server device110. Thus, the location information about the user equipment130may be used as the location information about the refrigerator to be registered. Referring toFIG.11, the induction range may be located in the kitchen in the house, the air conditioner may be located in the living room in the house, and the laundry machine may be located in the laundry room in the house. All of the induction range, the air conditioner, and the laundry machine may be home appliances previously registered in the server device110. The location information about the induction range, the air conditioner, and the laundry machine and the location measurement value measured with respect to an AI speaker that is the second home appliance140may be stored in lookup tables1110,1120, and1130of the server device110. According to an embodiment of the disclosure, the server device110may compare location measurement values included in the lookup tables1110,1120, and1130of registered home appliances, stored in the server device110, with a location measurement value1140of the user equipment130and determine that the user equipment130is located closest to the induction range. In this case, the server device110may determine location information1150of the user equipment130as the kitchen that is the location information about the induction range.
According to an embodiment of the disclosure, the server device110may determine the location information about the user equipment130, considering the registered home appliance and a line of sight indicating whether an obstacle exists on the linear path of the AI speaker that is the second home appliance140. For example, referring toFIG.11, the laundry machine may be installed in the laundry room, and a wall may exist between the AI speaker and the laundry machine. In this case, information indicating whether an obstacle exists in the location measurement value of the laundry machine may be a value indicating that “an obstacle exists”. Thus, when the line of sight indicating whether the obstacle exists in the location measurement value of the user equipment130indicates that “no obstacle exists”, then it is highly likely that the location information about the user equipment130is not the laundry room. FIG.12is a sequence diagram illustrating a method, performed by a server device in cooperation with other devices, of receiving information about a first home appliance according to an embodiment of the disclosure. In the series of processes illustrated inFIG.12, the user equipment130is provided with a guide for registering the first home appliance120in the server device110by capturing the QR code attached to the front surface of the first home appliance120, and registers the first home appliance120in the server device110based on the provided guide. According to an embodiment of the disclosure, to register the first home appliance120in the server device110, information about the first home appliance120has to be transmitted to the server device110. In operation S1201, the user equipment130, according to an embodiment of the disclosure, may receive the user input to capture the QR code attached to the front surface of the first home appliance120. 
The user equipment130may display a camera GUI for capturing the QR code when receiving the user input to select the QR capturing menu in the device registration GUI. The user may capture the QR code attached onto the front surface of the first home appliance120by using the camera GUI displayed on the user equipment130. In operation S1202, the user equipment130, according to an embodiment of the disclosure, may display a device registration guide by using a URL of the QR code. The user equipment130according to an embodiment of the disclosure may identify a URL address in the captured QR code. The user equipment130may display a device registration guide included in the identified URL address on the user equipment130. In this case, the device registration guide may include a manipulation method of the first home appliance120for switching the first home appliance120to the network connection mode. For example, when the first home appliance120is a laundry machine, a WiFi button on the front surface of the laundry machine needs to be pressed to switch the laundry machine to the network connection mode. In this case, the device registration guide may include information indicating that a user input to press a WiFi button of the laundry machine for several seconds is required. In operation S1203, the first home appliance120, according to an embodiment of the disclosure, may receive a user input to operate in an AP mode. The user input, according to an embodiment of the disclosure, may be an input to press a certain button of the first home appliance120for switching of the first home appliance120to the network connection mode that is the AP mode. For example, when the first home appliance120is the laundry machine, a user input to select a WiFi button on the front surface of the laundry machine may have to be received to switch the laundry machine to the network connection mode that is the AP mode.
In operation S1204, the first home appliance120, according to an embodiment of the disclosure, may operate in the AP mode in response to the user input. The first home appliance120, according to an embodiment of the disclosure, may use a soft AP to allow the first home appliance120to be recognized as a virtual AP, so as to establish WiFi communication with the user equipment130. When the first home appliance120, according to an embodiment of the disclosure, receives the user input to operate in the AP mode, the first home appliance120may operate the soft AP by using the WiFi module. In operation S1205, according to an embodiment of the disclosure, WiFi communication may be established between the first home appliance120and the user equipment130. The user equipment130, according to an embodiment of the disclosure may connect to the soft AP of the first home appliance120to establish WiFi communication connection with the first home appliance120, and perform WiFi communication with the first home appliance120. By using the soft AP according to an embodiment of the disclosure, WiFi communication connection established between the user equipment130and the first home appliance120may correspond to a WiFi Direct scheme. In operation S1206, the user equipment130according to an embodiment of the disclosure may receive information about the AP device150from the user. The information about the AP device150according to an embodiment of the disclosure may include an SSID of the AP device150, and an ID, a password, an authentication scheme, an encryption method, and an authentication key, etc., of the AP device150. In operation S1207, the user equipment130, according to an embodiment of the disclosure, may transmit the received information about the AP device150to the first home appliance120. 
In operation S1208, the first home appliance120, according to an embodiment of the disclosure, may connect to the server device110by using the received information about the AP device150, and then may transmit the information about the first home appliance120to the server device110. The first home appliance120, according to an embodiment of the disclosure, may connect to the AP device150by using the information (the ID, password, etc.) of the AP device150, and connect to the server device110through the Internet connected to the AP device150. The information about the first home appliance120, according to an embodiment of the disclosure, may be information including a product name, a product serial number, a manufacturing date, etc., of the first home appliance120. The information about the first home appliance120may lack information related to the location of the first home appliance120. In operation S1209, the server device110, according to an embodiment of the disclosure, may match the received information about the first home appliance120and the location information about the user equipment130, and register the first home appliance120in the server device110. According to an embodiment of the disclosure, when the user registers the first home appliance120in the server device110, the user may use the location information about the user equipment130as the location information about the first home appliance120because the user equipment130is located at a distance very close to the first home appliance120. According to an embodiment of the disclosure, the server device110may update registered-home-appliance location information lookup table by matching the information about the first home appliance120and the location information about the user equipment130simultaneously with registering the first home appliance120in the server device110. 
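Operations S1206 to S1209 above can be condensed into a short flow: the appliance receives the AP credentials over the soft-AP link, connects out through the AP device, and transmits its location-free product information, which the server then pairs with the stored user-equipment location. The sketch below models only the data handoff; the classes, field names, and the elided network step are illustrative assumptions.

```python
# Condensed sketch of the provisioning and registration handoff. All names
# are hypothetical; the actual Wi-Fi join and transport are elided.
from dataclasses import dataclass, field

@dataclass
class ApCredentials:
    ssid: str
    password: str

@dataclass
class Server:
    ue_location: str                  # determined earlier via the UWB measurement
    registry: dict = field(default_factory=dict)

    def register(self, appliance_info: dict) -> dict:
        # S1209: match the received appliance info with the UE location.
        entry = dict(appliance_info, location=self.ue_location)
        self.registry[appliance_info["serial_number"]] = entry
        return entry

def appliance_registration(server: Server, creds: ApCredentials, product_info: dict):
    # S1208: a real appliance would join the Wi-Fi network using `creds` before
    # reaching the server; here only the precondition is checked.
    assert creds.ssid, "AP credentials are required to reach the server"
    return server.register(product_info)

server = Server(ue_location="kitchen")
entry = appliance_registration(
    server,
    ApCredentials(ssid="home-ap", password="hunter2"),
    {"product_name": "refrigerator", "serial_number": "RF-100"},
)
```

Note that `product_info` carries no location of its own, mirroring the point above that the appliance's information lacks location data until the server supplies it.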
FIG.13is a sequence diagram illustrating another method, performed by a server device in cooperation with other devices, of receiving information about a first home appliance according to an embodiment of the disclosure. In the series of processes illustrated inFIG.13, a guide for registering the first home appliance120in the server device110is provided using an NFC scheme by locating the user equipment130close to the NFC tag region of the first home appliance120, and the first home appliance120is registered based on the provided guide. According to an embodiment of the disclosure, to register the first home appliance120in the server device110, information about the first home appliance120has to be transmitted to the server device110. In operation S1301, the user equipment130, according to an embodiment of the disclosure, may perform NFC tagging and transmit, to the first home appliance120, information indicating that NFC tagging is performed. According to an embodiment of the disclosure, the user equipment130may establish NFC communication with the first home appliance120as the user equipment130is located adjacent to the NFC tag region included in the first home appliance120. In operation S1302, the first home appliance120, according to an embodiment of the disclosure, may transmit device information to the user equipment130. According to an embodiment of the disclosure, the device information may include the manipulation method of the first home appliance120for switching the first home appliance120to the network connection mode. According to an embodiment of the disclosure, the first home appliance120may transmit the device information to the user equipment130through NFC communication established with the user equipment130. In operation S1303, the user equipment130, according to an embodiment of the disclosure, may display the device registration guide.
The device registration guide, according to an embodiment of the disclosure, may be a guide including the manipulation method of the first home appliance120for switching the first home appliance120to the network connection mode. The user equipment130may display the device registration guide on the display of the user equipment130. A description of operations S1304to S1310is the same as a description of operations S1203to S1209ofFIG.12, and thus will be omitted. FIGS.14A and14Bdepict illustrative examples of operation of a GUI provided by a server device to a user equipment to register a first home appliance in a certain account, according to an embodiment of the disclosure. Referring to an illustrative scenario1400adepicted inFIG.14A, the user equipment130may display the registered-device control GUI on the display. According to an embodiment of the disclosure, the user equipment130may receive a user input to control registered home appliances through the registered-device control GUI. Referring to the illustrative scenario1400aofFIG.14A, the registered-device control GUI may display a list of the registered home appliances (e.g., an illuminating device, a speaker device, an air conditioner, and a TV), and the user may control a home appliance by selecting the home appliance to be controlled from the list. According to an embodiment of the disclosure, the user equipment130may receive a user input to register a new home appliance through the registered-device control GUI. According to an embodiment of the disclosure, the user equipment130may provide the device registration GUI for device registration to the user by receiving a user input to select a plus button1410that is a certain button displayed on the registered-device control GUI. Referring to an illustrative scenario1400bdepicted inFIG.14A, the user equipment130may display the device registration GUI on the display.
According to an embodiment of the disclosure, the device registration GUI may include a QR code capturing menu1420or an NFC tagging menu1430. The user may select the QR code capturing menu1420or the NFC tagging menu1430, displayed on the device registration GUI, thereby starting a series of processes for registering a home appliance in the server device110. When a QR code is attached to a home appliance to be registered, the user may be provided with a guide for home appliance registration by capturing the QR code attached to the home appliance with the user equipment130. Thus, in this case, in the device registration GUI, the QR code capturing menu1420may be selected. When an NFC tag region is included in the home appliance to be registered, the user may be provided with a guide for home appliance registration by locating the user equipment130adjacent to the NFC tag region. Thus, in this case, in the device registration GUI, the NFC tagging menu1430may be selected. Referring to an illustrative scenario1400cdepicted inFIG.14A, the user equipment130may display an interface1440for capturing a QR code in response to reception of a user input to select the QR code capturing menu1420. The interface1440for capturing the QR code may display a camera screen for capturing the QR code, together with a guide phrase such as “Capture the QR code attached to the home appliance to be registered”. The user may capture the QR code attached to the home appliance by using the camera screen. Referring to the illustrative scenario1400cofFIG.14A, the user equipment130may display an interface1450for NFC tagging in response to reception of a user input to select the NFC tagging menu1430. The interface1450for NFC tagging may display a guide phrase such as “For NFC tagging, touch the home appliance to be registered, with your smartphone”.
By placing the user equipment130adjacent to an NFC tag region of the home appliance to be registered, the user may induce NFC tagging between the user equipment130and the home appliance. Referring to an illustrative scenario1400ddepicted inFIG.14B, the user equipment130may display a device registration guide1460. When the user captures a QR code of the home appliance to be registered using the user equipment130, the user equipment130may identify a URL address from the captured QR code. The user equipment130may display the device registration guide included in the identified URL address. When the user locates the user equipment130on the NFC tag region of the home appliance, the user equipment130may be provided with the device registration guide1460from the home appliance through NFC communication established with the home appliance. The user equipment130may display the provided device registration guide1460. For example, when the home appliance to be registered is the laundry machine, the device registration guide1460may include a manipulation method of the laundry machine to register the laundry machine in the server device110. At this time, the device registration guide1460may include a guide such as “Turn on the laundry machine and press the WiFi button at the top of the menu of the laundry machine for 3 seconds”. Referring to an illustrative scenario1400edepicted inFIG.14B, the user equipment130may display an interface1470for receiving AP information. According to an embodiment of the disclosure, when the home appliance to be registered receives the user input and operates in the AP mode, WiFi communication may be established between the user equipment130and the home appliance. In this case, to enable the home appliance to be registered to connect to the server device110, the user has to provide information about the AP device150.
Thus, when the user equipment 130 determines that the home appliance operates in the AP mode and WiFi communication is established between the home appliance and the user equipment 130, the user equipment 130 may display the interface 1470 for receiving a user input with respect to the information about the AP device 150. The user may provide information for connecting to the AP device 150 to the home appliance, by inputting the ID and password of the AP device 150. According to an embodiment of the disclosure, the home appliance, having received the information about the AP device 150, may connect to the AP device 150 and connect to the server device 110 through the Internet connected to the AP device 150. The home appliance, according to an embodiment of the disclosure, may connect to the server device 110, provide information about the home appliance (e.g., a product name, a serial number, and a manufacturing date of the home appliance), and register the home appliance in the server device 110. In this case, the information about the home appliance does not include information related to the location of the home appliance; instead, the server device 110 may use, as the location information for the home appliance, location information about the user equipment 130 transmitted from a UWB device (e.g., the second home appliance 140) existing in the house. Referring to an illustrative scenario 1400f depicted in FIG. 14B, the user equipment 130 may display an interface 1480 indicating that the home appliance is registered. For example, when the laundry machine is registered, the interface 1480 may show a product name and a location of the laundry machine. FIG. 15 is a sequence diagram illustrating a method, performed by a server device in cooperation with other devices, of providing a registered-home appliance control GUI to a user equipment, according to an embodiment of the disclosure.
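The AP-mode registration flow described above (user equipment passes the AP credentials to the appliance; the appliance then connects to the server and registers itself, without any location data of its own) can be sketched as follows. This is a minimal illustration only: the class names, field names, and the dictionary-based registry are assumptions, not an actual API of the disclosed system.

```python
# Hypothetical sketch of the AP-mode provisioning and registration flow.
# All names here are illustrative assumptions, not taken from the disclosure.

class ServerDevice:
    """Stands in for the server device 110; keeps a registry of appliances."""
    def __init__(self):
        self.registry = {}

    def register(self, info):
        # The appliance-provided info carries no location; the server later
        # attaches location data measured by a UWB device in the house.
        self.registry[info["serial_number"]] = dict(info, location=None)
        return True


class HomeAppliance:
    """An appliance operating in AP mode, waiting for AP credentials."""
    def __init__(self, product_name, serial_number, manufacturing_date):
        self.info = {
            "product_name": product_name,
            "serial_number": serial_number,
            "manufacturing_date": manufacturing_date,
        }
        self.connected_ap = None

    def receive_ap_credentials(self, ssid, password, server):
        # A real device would join the WiFi network here; this sketch only
        # records the SSID and then registers with the server "over the Internet".
        self.connected_ap = ssid
        return server.register(self.info)


server = ServerDevice()
washer = HomeAppliance("Laundry Machine", "SN-001", "2023-01-15")
ok = washer.receive_ap_credentials("home-wifi", "secret", server)
```

After the call, the server holds the appliance's identifying information, with the location field left empty for the UWB-based location step that follows.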
According to an embodiment of the disclosure, the user may control the home appliance registered in the server device 110 through the registered-device control GUI displayed on the user equipment 130. When the UWB sensor of the user equipment 130 is already activated, the location of the user equipment 130 may be measured. The server device 110 may determine a third home appliance toward which the user equipment 130 is oriented, based on the measured location of the user equipment 130, and provide a GUI for controlling the third home appliance to the user equipment 130. The user equipment 130 according to an embodiment of the disclosure may receive a user input to select a UWB icon in operation S1501, and the user equipment 130 may transmit a control GUI request signal to the server device 110 in operation S1502. Referring to an illustrative example of a GUI depicted in FIG. 16, the user equipment 130, according to an embodiment of the disclosure, may transmit the control GUI request signal to the server device 110 by receiving a user input to select a UWB icon 1610 displayed on the registered-device control GUI. The user equipment 130, according to another embodiment of the disclosure, may transmit the control GUI request signal to the server device 110 by receiving a user input to select a "UWB Device" icon 1620 from a list of registered home appliances, displayed on the registered-device control GUI. The user equipment 130, according to another embodiment of the disclosure, may transmit the control GUI request signal to the server device 110 by receiving a user input to pull down an upper bar of the user equipment 130 and to select a "UWB Device" icon displayed on the upper bar. In operations S1503 and S1504, the server device 110, according to an embodiment of the disclosure, may transmit the UWB communication module activation signal to the user equipment 130 and the second home appliance 140.
According to an embodiment of the disclosure, the UWB communication module activation signal may be a signal for activating the sensors of the UWB antennas respectively included in the user equipment 130 and the second home appliance 140, so that the user equipment 130 and the second home appliance 140 may exchange UWB signals. In operation S1505, the user equipment 130, according to an embodiment of the disclosure, may transmit the location identification request signal to the second home appliance 140. According to an embodiment of the disclosure, the location identification request signal may be a UWB signal transmitted from the UWB antenna included in the user equipment 130 to the UWB antenna included in the second home appliance 140. In operation S1506, the second home appliance 140, according to an embodiment of the disclosure, may measure the location measurement value of the user equipment 130 based on the UWB signal that is the location identification request signal. The location measurement value of the user equipment 130, according to an embodiment of the disclosure, may include azimuth information and elevation information about the user equipment 130 measured with respect to the second home appliance 140, and distance information between the second home appliance 140 and the user equipment 130. The location measurement value of the user equipment 130 may include information about a direction in which the user equipment 130 is oriented. The second home appliance 140, according to an embodiment of the disclosure, may determine the location measurement value of the user equipment 130 in operation S1507 and transmit the location measurement value of the user equipment 130 to the server device 110 in operation S1508. In operation S1509, the server device 110, according to an embodiment of the disclosure, may determine a third home appliance toward which the user equipment 130 is oriented, based on the registered-home-appliance location information lookup table.
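The location measurement value described above bundles azimuth, elevation, distance, and the pointing direction of the user equipment. One plausible encoding of that value, with a conversion to anchor-relative coordinates, is sketched below; the field names and the spherical-coordinate convention are assumptions for illustration, not taken from the disclosure.

```python
import math
from dataclasses import dataclass

# Illustrative shape of the UWB location measurement value.
# Field names and the coordinate convention are assumptions.

@dataclass
class LocationMeasurement:
    azimuth_deg: float           # azimuth of the user equipment, seen from the UWB anchor
    elevation_deg: float         # elevation of the user equipment, seen from the anchor
    distance_m: float            # anchor-to-user-equipment distance
    pointing_azimuth_deg: float  # direction the user equipment is oriented toward

    def to_cartesian(self):
        """Anchor-relative (x, y, z) position in metres."""
        az = math.radians(self.azimuth_deg)
        el = math.radians(self.elevation_deg)
        return (self.distance_m * math.cos(el) * math.cos(az),
                self.distance_m * math.cos(el) * math.sin(az),
                self.distance_m * math.sin(el))


m = LocationMeasurement(30.0, 0.0, 2.0, 120.0)
x, y, z = m.to_cartesian()
```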
The server device 110, according to an embodiment of the disclosure, may compare the location measurement values of home appliances, included in the registered-home-appliance location information lookup table, with the location measurement value of the user equipment 130, to determine the third home appliance toward which the user equipment 130 is oriented. The server device 110 may determine the third home appliance, that is, the home appliance toward which the user equipment 130 is oriented among the registered home appliances, based on a direction of the user equipment 130 included in the location measurement value and a difference in location measurement value between the user equipment 130 and the home appliances. In operation S1510, the server device 110, according to an embodiment of the disclosure, may select a GUI screen for controlling the determined third home appliance from a GUI list for controlling the registered home appliances. According to an embodiment of the disclosure, the GUI list for controlling the registered home appliances may be information in which GUIs that may be provided to the user equipment 130 to control the home appliances registered in the server device 110 are stored. For example, when the user equipment 130 is determined to be oriented toward the air conditioner, the server device 110 may select a GUI for controlling the air conditioner from the GUI list for controlling the registered home appliances. The server device 110, according to an embodiment of the disclosure, may provide the selected GUI to the user equipment 130 in operation S1511, and the user equipment 130 may display the provided GUI on the display in operation S1512. In an embodiment of the disclosure, operations S1508 to S1511 are described as being performed by the server device 110, but the disclosure is not limited thereto. That is, the second home appliance 140 may transmit the location measurement value of the user equipment 130 to the user equipment 130.
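The comparison in operation S1509 can be sketched as a nearest-direction search over the lookup table: pick the registered appliance whose stored azimuth is closest to the direction the user equipment is pointing. This is a minimal sketch under the assumption that the lookup table maps appliance names to azimuths measured with respect to the UWB device; the table layout and the angular scoring rule are illustrative, not the disclosed algorithm.

```python
# Sketch of operation S1509: choose the appliance closest to the pointing
# direction of the user equipment. Table layout is an assumption.

def angular_difference(a_deg, b_deg):
    """Smallest absolute difference between two azimuths, in degrees."""
    d = abs(a_deg - b_deg) % 360.0
    return min(d, 360.0 - d)


def determine_target_appliance(lookup_table, ue_pointing_azimuth_deg):
    """lookup_table: {name: azimuth_deg measured with respect to the UWB device}."""
    return min(lookup_table,
               key=lambda name: angular_difference(lookup_table[name],
                                                   ue_pointing_azimuth_deg))


registered = {"air conditioner": 10.0, "TV": 95.0, "refrigerator": 200.0}
target = determine_target_appliance(registered, 100.0)  # pointing near the TV
```

A production system would also weight the distance between the user equipment and each appliance, as the paragraph above indicates, rather than azimuth alone.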
Thereafter, the user equipment 130 may determine the home appliance toward which the user equipment 130 is oriented, based on the registered-home-appliance location information lookup table stored in the user equipment 130. Thereafter, the user equipment 130 may display a GUI screen for controlling the determined home appliance thereon. FIG. 17 depicts an illustrative example of a home appliance control GUI, displayed by a user equipment, according to an embodiment of the disclosure. Referring to FIG. 17, the user equipment 130, according to an embodiment of the disclosure, may display a GUI for controlling the home appliance toward which the user equipment 130 is determined to be oriented, on the display. Hereinbelow, a description will be made on the assumption that the user equipment 130 is oriented toward the air conditioner among the home appliances registered in the server device 110. Referring to an illustrative scenario 1700a depicted in FIG. 17, the user equipment 130, according to an embodiment of the disclosure, may provide, to the user, an air conditioner control GUI for controlling the air conditioner toward which the user equipment 130 is oriented. According to an embodiment of the disclosure, the user may control the current temperature of the air conditioner, control a mode (a cooling mode or a dehumidifying mode) of the air conditioner, obtain information related to the current temperature, control a speed of a fan of the air conditioner, control ON/OFF of a windless mode of the air conditioner, etc., through an air conditioner control GUI 1710. Referring to an illustrative scenario 1700b depicted in FIG. 17, the user equipment 130, according to an embodiment of the disclosure, may display information 1720 related to other peripheral devices under the air conditioner control GUI 1710 for controlling the air conditioner toward which the user equipment 130 is determined to be oriented.
The information 1720 related to the other peripheral devices may be a text or an image indicating a device name, a device type, etc. In this case, the user equipment 130 may display the information 1720 related to the other peripheral devices by default, or may display the information 1720 related to the other peripheral devices only when the user performs a drag input under a screen of the air conditioner control GUI 1710. The information 1720 related to the other peripheral devices may include a list of home appliances that are registered in the server device 110 and have location measurement values measured with respect to a UWB device and stored in the server device 110. The user may select a home appliance included in the information 1720 related to the other peripheral devices, thus being provided with a GUI for controlling the home appliance. For example, even when the current air conditioner control GUI 1710 is displayed on the user equipment 130, the user may select a TV from the information 1720 related to the other peripheral devices at the bottom, thus being provided with a GUI for controlling the TV. Referring to an illustrative scenario 1700c depicted in FIG. 17, the user equipment 130, according to an embodiment of the disclosure, may display the other peripheral devices as indicated by 1730 under the air conditioner control GUI 1710, taking account of the location measurement values of the other peripheral devices with respect to the UWB device. In this case, a home appliance located close to the direction in which the user equipment 130 is oriented may be displayed in bold or in a large font, and a home appliance located in a direction far from the direction in which the user equipment 130 is oriented may be displayed in a default font or displayed small.
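The emphasis rule in scenario 1700c (peripherals near the pointing direction shown bold or large, others in a default style) can be sketched as a sort-and-threshold over angular distances. The 45-degree threshold and the dictionary layout below are assumptions chosen for illustration.

```python
# Sketch of the bold/default styling of the peripheral-device list.
# Threshold value and data layout are illustrative assumptions.

def style_peripherals(peripherals, pointing_deg, emphasis_threshold_deg=45.0):
    """peripherals: {name: azimuth_deg}. Returns (name, style) pairs
    ordered from nearest to farthest from the pointing direction."""
    def diff(az):
        d = abs(az - pointing_deg) % 360.0
        return min(d, 360.0 - d)

    ordered = sorted(peripherals.items(), key=lambda kv: diff(kv[1]))
    return [(name, "bold" if diff(az) <= emphasis_threshold_deg else "default")
            for name, az in ordered]


devices = {"TV": 15.0, "refrigerator": 40.0,
           "water purifier": 170.0, "light bulb": 250.0}
styled = style_peripherals(devices, pointing_deg=20.0)
```

With the user equipment pointing at 20 degrees, the TV and refrigerator come first and are emphasized, while the water purifier and light bulb stay in the default style.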
For example, while the user equipment 130 is determined to be oriented toward the air conditioner and thus the air conditioner control GUI 1710 is mainly provided, when the user equipment 130 is determined to be located in a direction adjacent to the TV and the refrigerator, then the TV and the refrigerator may be displayed in bold or in large fonts when compared to a water purifier and a light bulb device. FIG. 18 depicts an illustrative example of operation of a method, performed by the user equipment 130, of providing a UWB mode and a fixed mode in a registered-device control GUI, according to an embodiment of the disclosure. Referring to an illustrative scenario 1800a depicted in FIG. 18, an air conditioner control GUI 1810 may include a UWB icon 1820 for an operation in the UWB mode. When the UWB icon 1820 is displayed on the air conditioner control GUI 1810, a to-be-controlled-home appliance GUI may change with the direction in which the user equipment 130 is oriented. For example, when the direction of the user equipment 130 is changed from the direction in which the user equipment 130 is oriented toward the air conditioner to the direction in which the user equipment 130 is oriented toward the TV, the user equipment 130 may change from the air conditioner control GUI 1810 to a TV control GUI and provide the TV control GUI to the user. Referring to an illustrative scenario 1800b depicted in FIG. 18, the air conditioner control GUI 1810 for controlling the air conditioner may include a fixing icon 1830 for an operation in the fixed mode. Upon receiving an input to touch the UWB icon 1820 from the user, the user equipment 130, according to an embodiment of the disclosure, may change the UWB icon 1820 to the fixing icon 1830 and provide the air conditioner control GUI 1810 in the fixed mode.
The user equipment 130, according to an embodiment of the disclosure, may receive a particular gesture or a particular voice command as well as a user input to touch the UWB icon 1820, thereby changing the UWB icon 1820 to the fixing icon 1830. According to an embodiment of the disclosure, when the air conditioner control GUI 1810 is changed to the fixed mode, the user equipment 130 may fixedly display the existing to-be-controlled-home appliance GUI even when the direction in which the user equipment 130 is oriented is changed. For example, when the air conditioner control GUI 1810 is in the fixed mode, the user equipment 130 may fixedly provide the air conditioner control GUI 1810 even when the user changes the direction of the user equipment 130 from the direction in which the user equipment 130 is oriented toward the air conditioner to the direction in which the user equipment 130 is oriented toward the TV. According to an embodiment of the disclosure, the user equipment 130 may automatically switch the device control GUI from the UWB mode to the fixed mode. When receiving a touch input to control a device by using the device control GUI from the user, the user equipment 130, according to an embodiment of the disclosure, may determine that the user is to continue device control, and automatically switch from the UWB mode to the fixed mode. For example, when the user controls the temperature of the air conditioner by using the air conditioner control GUI 1810, the user equipment 130 may determine that the user continues to control the air conditioner, and automatically switch the air conditioner control GUI 1810 to the fixed mode. According to an embodiment of the disclosure, the user equipment 130 may automatically switch the device control GUI from the fixed mode to the UWB mode.
When the user equipment 130, according to an embodiment of the disclosure, does not receive any input from the user for a specific time, the user equipment 130 may determine that the user does not intend to control the device further, and may automatically switch the device control GUI from the fixed mode to the UWB mode. For example, when the user does not perform any manipulation using the air conditioner control GUI 1810, the user equipment 130 may determine that the user does not intend to control the air conditioner further, and may automatically switch the air conditioner control GUI 1810 to the UWB mode. FIG. 19 depicts an illustrative example of operation of a method, performed by a user equipment, of providing information related to a neighboring home appliance in a fixed mode, according to an embodiment of the disclosure. Referring to an illustrative scenario 1900a depicted in FIG. 19, the user equipment 130, according to an embodiment of the disclosure, may display a GUI for controlling the home appliance toward which the user equipment 130 is determined to be oriented. When the user equipment 130 is determined to be oriented toward the air conditioner, the user equipment 130 may display an air conditioner control GUI 1910. The user equipment 130, according to an embodiment of the disclosure, may provide the air conditioner control GUI 1910 in the "fixed mode", and in the fixed mode, the to-be-controlled home appliance GUI may continue to provide the air conditioner control GUI 1910 even when the direction in which the user equipment 130 is oriented is changed. The user equipment 130, according to an embodiment of the disclosure, may provide the air conditioner control GUI 1910 in the fixed mode, and at the same time, display information 1920 related to other peripheral devices under the air conditioner control GUI 1910.
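The automatic UWB/fixed mode switching described above (a control touch switches to the fixed mode; a period with no input falls back to the UWB mode) amounts to a small state machine. The sketch below illustrates it; the 30-second idle timeout and all names are assumptions, since the disclosure only says "a specific time".

```python
# Minimal state machine for automatic UWB <-> fixed mode switching.
# Timeout value and method names are illustrative assumptions.

class DeviceControlGui:
    IDLE_TIMEOUT_S = 30.0  # assumed value for the "specific time" of no input

    def __init__(self):
        self.mode = "UWB"
        self.last_input_time = 0.0

    def on_control_input(self, now):
        # A touch that controls the device implies the user wants to keep
        # controlling it: switch UWB -> fixed automatically.
        self.last_input_time = now
        self.mode = "fixed"

    def tick(self, now):
        # No input for the idle timeout: fall back from fixed to UWB mode,
        # so the GUI again follows the direction the device points toward.
        if self.mode == "fixed" and now - self.last_input_time >= self.IDLE_TIMEOUT_S:
            self.mode = "UWB"


gui = DeviceControlGui()
gui.on_control_input(now=0.0)  # user adjusts the temperature -> fixed mode
gui.tick(now=10.0)             # still within the timeout -> stays fixed
gui.tick(now=40.0)             # idle long enough -> back to UWB mode
```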
The information 1920 related to the other peripheral devices may include a list of home appliances that are registered in the server device 110 and have location measurement values measured with respect to a UWB device and stored in the server device 110. For example, the user equipment 130 may display icons for selecting a TV, a refrigerator, a water purifier, and a light bulb 1 in the information 1920 related to the other peripheral devices. For example, when the user selects the TV, the user equipment 130 may change the air conditioner control GUI 1910 to a TV control GUI. Referring to an illustrative scenario 1900b depicted in FIG. 19, when the direction in which the user equipment 130 is oriented is changed, the air conditioner control GUI 1910 is not changed, but the list of home appliances included in information 1930 related to the other peripheral devices may be changed. According to an embodiment of the disclosure, in the fixed mode, even when the direction in which the user equipment 130 is oriented is changed, the air conditioner control GUI 1910, which is the main control GUI, is not changed. However, according to the direction in which the user equipment 130 is oriented, the information 1930 related to the other peripheral devices may be updated by changing the order of the information 1930 or adding information to or deleting information from the information 1930. For example, when it is determined that the direction in which the user equipment 130 is oriented is closer to the water purifier and farther away from the TV, the information 1930 related to the other peripheral devices may be updated such that the water purifier is moved to a higher order and the TV is deleted. Referring to an illustrative scenario 1900c depicted in FIG. 19, when it becomes clear that the direction in which the user equipment 130 is oriented is toward a particular home appliance, the user equipment 130 may highlight and display the particular home appliance in information 1940 related to the other peripheral devices.
For example, when it becomes apparent that the direction in which the user equipment 130 is oriented is toward the water purifier, the air conditioner control GUI 1910 is still displayed as the main GUI because of the fixed mode. In this case, the user equipment 130 may highlight and display an icon indicating the water purifier in the information 1940 related to the other peripheral devices. FIG. 20 depicts an illustrative example of operation of a method, performed by a user equipment, of providing a plurality of home appliance control GUIs, according to an embodiment of the disclosure. Referring to an illustrative scenario 2000a depicted in FIG. 20, the user equipment 130, according to an embodiment of the disclosure, may simultaneously display a plurality of home appliance control GUIs. When the user equipment 130 is determined to be oriented toward a plurality of home appliances rather than a single home appliance, the user equipment 130 may simultaneously display the plurality of home appliance control GUIs. For example, the user equipment 130 may be oriented toward both the air conditioner and the TV. Alternatively, it may be difficult to determine whether the user equipment 130 is oriented toward the air conditioner or the TV. In this case, the user equipment 130 may display an air conditioner control GUI 2010 and a TV control GUI 2020 at the same time. According to an embodiment of the disclosure, the user may simultaneously control the air conditioner and the TV by using the air conditioner control GUI 2010 and the TV control GUI 2020 displayed on the user equipment 130. The user equipment 130, according to an embodiment of the disclosure, may display information 2030 related to the other peripheral devices under the air conditioner control GUI 2010 and the TV control GUI 2020.
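The ambiguity case in scenario 2000a (the pointing direction cannot be resolved to a single appliance, so several control GUIs are shown at once) can be sketched by keeping every appliance whose angular distance lies within a margin of the best match. The 15-degree margin and the selection rule are assumptions for illustration, not the disclosed decision logic.

```python
# Sketch of selecting multiple control GUIs when the pointing direction
# is ambiguous between appliances. Margin value is an assumption.

def select_guis(appliances, pointing_deg, ambiguity_margin_deg=15.0):
    """appliances: {name: azimuth_deg}. Returns every appliance whose
    angular distance is within the margin of the closest match."""
    def diff(az):
        d = abs(az - pointing_deg) % 360.0
        return min(d, 360.0 - d)

    best = min(diff(az) for az in appliances.values())
    return sorted(name for name, az in appliances.items()
                  if diff(az) - best <= ambiguity_margin_deg)


# Pointing roughly between the air conditioner (80 deg) and the TV (100 deg):
shown = select_guis({"air conditioner": 80.0, "TV": 100.0,
                     "refrigerator": 200.0}, 90.0)
```

With the user equipment at 90 degrees, both the air conditioner and the TV qualify, so both control GUIs would be displayed, while the refrigerator is excluded.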
Referring to an illustrative scenario 2000b depicted in FIG. 20, the user equipment 130, according to an embodiment of the disclosure, may provide the air conditioner control GUI 2010 in the fixed mode and the TV control GUI 2020 in the UWB mode. In this case, when the direction in which the user equipment 130 is oriented is changed, the air conditioner control GUI 2010 provided in the fixed mode may continue to be provided. However, the TV control GUI 2020 provided in the UWB mode may be changed to a GUI for controlling a home appliance located in the direction in which the user equipment 130 is actually oriented. FIG. 21 depicts another illustrative example of operation of a method, performed by a user equipment, of providing a plurality of home appliance control GUIs, according to an embodiment of the disclosure. Referring to an illustrative scenario 2100a depicted in FIG. 21, the user equipment 130 may display an air conditioner control GUI 2110 and information 2120 related to the other peripheral devices. According to an embodiment of the disclosure, the user equipment 130 may receive an input to drag a particular home appliance icon in the information 2120 related to the other peripheral devices to a main screen. For example, the user may desire to control the air conditioner and the TV at the same time. In this case, the user may desire to display the air conditioner control GUI 2110 and a TV control GUI at the same time on the user equipment 130. According to an embodiment of the disclosure, the user may be provided with the TV control GUI by dragging a TV icon included in the information 2120 related to the other peripheral devices to the main screen. Referring to an illustrative scenario 2100b depicted in FIG. 21, the user equipment 130 may display the air conditioner control GUI 2110 at the top and a TV control GUI 2130 at the bottom.
According to an embodiment of the disclosure, in response to an input to drag the TV icon from the user, the TV control GUI 2130 may be displayed under the air conditioner control GUI 2110. In this case, the order of the GUI screen displayed at the upper end and the GUI screen displayed at the lower end may be changed according to the user's drag input. For example, when the user desires to display the TV control GUI 2130 at the upper end, the user may drag the TV control GUI 2130 to the upper end while selecting the TV control GUI 2130. In this case, the TV control GUI 2130 may be displayed at the upper end and the air conditioner control GUI 2110 may be displayed at the lower end. FIG. 22 is a block diagram illustrating structures of a first home appliance and a second home appliance according to an embodiment of the disclosure. The first home appliance 120 and the second home appliance 140, according to an embodiment of the disclosure, may each correspond to a home appliance 2200. The home appliance 2200, according to an embodiment of the disclosure, may include a sensor 2210, an output interface 2220, an input interface 2230, a memory 2240, a communication module 2250, a home appliance function module 2260, a power module 2280, and a processor 2290. The home appliance 2200 may include various combinations of the components shown in FIG. 22, and not all the components shown in FIG. 22 are essential. The home appliance 2200 of FIG. 22 may correspond to the second home appliance 140 described with reference to FIG. 3, the memory 2240 may correspond to the memory 326 described with reference to FIG. 3, the processor 2290 may correspond to the processor 320 described with reference to FIG. 3, and the communication module 2250 may correspond to the communication module 322 described with reference to FIG. 3.
The sensor 2210 may include various types of sensors, for example, an image sensor, an infrared sensor, an ultrasonic sensor, a lidar sensor, a human detection sensor, a motion detection sensor, a proximity sensor, an illuminance sensor, etc. A function of each sensor may be intuitively construed from its name by those of ordinary skill in the art, and thus will not be described in detail. The output interface 2220 may include a display 2221, an audio output module 2222, etc. The output interface 2220 may output various notifications, messages, information, etc., generated by the processor 2290. The input interface 2230 may include a key 2231, a touch screen 2232, etc. The input interface 2230 may receive a user input and transmit it to the processor 2290. The memory 2240 may store various information, data, instructions, programs, etc., required for the operation of the home appliance 2200. The memory 2240 may include at least one of volatile memory or non-volatile memory, or a combination thereof. The memory 2240 may include a storage medium of at least one type among a flash memory type, a hard disk type, a multimedia card micro type, a card type memory (e.g., a secure digital (SD) or extreme digital (XD) memory, etc.), a random-access memory (RAM), a static random-access memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, or an optical disk. Moreover, the home appliance 2200 may use a web storage or a cloud server that performs a storage function on the Internet. The communication module 2250 may include at least one of a short-range wireless communication module 2252 or a long-range wireless communication module 2254, or a combination thereof. The communication module 2250 may include at least one antenna for wireless communication with another device.
The short-range wireless communication module 2252 may include, but is not limited to, a Bluetooth communication module, a Bluetooth Low Energy (BLE) communication module, a near field communication (NFC) module, a wireless local area network (WLAN) (Wireless Fidelity (WiFi)) communication module, a ZigBee communication module, an Infrared Data Association (IrDA) communication module, a WiFi Direct (WFD) communication module, an ultra-wideband (UWB) communication module, an Ant+ communication module, a microwave (uWave) communication module, etc. The long-range wireless communication module 2254 may include a communication module performing various types of long-range communication, and may include a mobile communication module. The mobile communication module may transmit and receive a radio signal to and from at least one of a base station, an external terminal, or a server over a mobile communication network. Herein, the radio signal may include various forms of data corresponding to transmission/reception of a voice call signal, a video communication call signal, or a text/multimedia message. The home appliance function module 2260 may include an operation module that performs the original function of the home appliance 2200. When the home appliance 2200 is a laundry machine, the home appliance function module 2260 may include a washing module. The washing module may include a washing tub, a water supply unit, a motor, a door, a detergent inlet, and the like. When the home appliance 2200 is a refrigerator, the home appliance function module 2260 may include a refrigerating/freezing module. The refrigerating/freezing module may include a container, a cooler, a door, a temperature sensor, etc. When the home appliance 2200 is a drying machine, the home appliance function module 2260 may include a drying module. The drying module may include a laundry container, a motor, a dehumidifying unit, a drain unit, a door, a dust filter, a condenser, and the like.
When the home appliance 2200 is a cleaning machine, the home appliance function module 2260 may include a cleaning module. The cleaning module may include a vacuum suction unit, a dust bin, a filter, a dust transfer pipe, and so forth. The processor 2290 may control the overall operation of the home appliance 2200. The processor 2290 may control the components of the home appliance 2200 by executing a program stored in the memory 2240. According to an embodiment of the disclosure, the processor 2290 may include a separate neural processing unit (NPU) that executes a machine learning model. The processor 2290 may also include a central processing unit (CPU), a graphics processing unit (GPU), etc. FIG. 23 is a block diagram illustrating a structure of a user equipment 2301 in a network environment 2300, according to various embodiments of the disclosure. The user equipment 2301 of FIG. 23 may correspond to the user equipment 130 described above. The processor 310 described with reference to FIG. 3 may correspond to a processor 2320, and the communication module 312 described with reference to FIG. 3 may correspond to a communication module 2390. The memory 316 described with reference to FIG. 3 may correspond to a memory 2330, and the input/output interface 318 described with reference to FIG. 3 may correspond to an input module 2350, an audio output module 2355, a display module 2360, an audio module 2370, and a haptic module 2379. The first home appliance 120 and the second home appliance 140 may correspond to an electronic device 2302 or an electronic device 2304. Referring to FIG. 23, in the network environment 2300, the user equipment 2301 may communicate with the electronic device 2302 via a first network 2398 (e.g., a short-range wireless communication network), or may communicate with at least one of the electronic device 2304 or a server 2308 via a second network 2399 (e.g., a long-range wireless communication network).
According to an embodiment of the disclosure, the user equipment 2301 may communicate with the electronic device 2304 via the server 2308. According to an embodiment of the disclosure, the user equipment 2301 may include a processor 2320, a memory 2330, an input module 2350, an audio output module 2355, a display module 2360, an audio module 2370, a sensor module 2376, an interface 2377, a connection terminal 2378, a haptic module 2379, a camera module 2380, a power management module 2388, a battery 2389, a communication module 2390, a subscriber identification module 2396, or an antenna module 2397. In some embodiments of the disclosure, at least one of the components (e.g., the connection terminal 2378) may be omitted from the user equipment 2301, or one or more other components may be added to the user equipment 2301. In some embodiments of the disclosure, some of the components (e.g., the sensor module 2376, the camera module 2380, or the antenna module 2397) may be integrated into one component (e.g., the display module 2360). The processor 2320 may control at least one other component (e.g., a hardware or software component) of the user equipment 2301 connected to the processor 2320 by executing software (e.g., a program 2340), and may perform various data processing or operations. According to an embodiment of the disclosure, as at least a part of the data processing or operations, the processor 2320 may store a command or data received from another component (e.g., the sensor module 2376 or the communication module 2390) in a volatile memory 2332, process the command or data stored in the volatile memory 2332, and store resulting data in a non-volatile memory 2334. According to an embodiment of the disclosure, the processor 2320 may include a main processor 2321 (e.g., a CPU or an application processor) or an auxiliary processor 2323 (e.g., a GPU, an NPU, an image signal processor (ISP), a sensor hub processor, or a communication processor) capable of operating independently of or together with the main processor 2321.
For example, when the user equipment 2301 includes the main processor 2321 and the auxiliary processor 2323, the auxiliary processor 2323 may use less power than the main processor 2321 or may be configured to be specialized for a specific function. The auxiliary processor 2323 may be implemented separately from or as a part of the main processor 2321. The auxiliary processor 2323 may control at least some of the functions or states related to at least one (e.g., the display module 2360, the sensor module 2376, or the communication module 2390) of the components of the user equipment 2301, in place of the main processor 2321 while the main processor 2321 is in an inactive (e.g., sleep) state, or together with the main processor 2321 while the main processor 2321 is in an active (e.g., application execution) state. According to an embodiment of the disclosure, the auxiliary processor 2323 (e.g., the image signal processor or the communication processor) may be implemented as a part of another component (e.g., the camera module 2380 or the communication module 2390) functionally related thereto. According to an embodiment of the disclosure, the auxiliary processor 2323 (e.g., the NPU) may include a hardware structure specialized for processing an AI model. The AI model may be generated through machine learning. Such learning may be performed by the user equipment 2301 that executes the AI model, or through a separate server (e.g., the server 2308). Examples of the learning algorithm may include, but are not limited to, supervised learning, unsupervised learning, semi-supervised learning, or reinforcement learning. The AI model may include a plurality of artificial neural network layers.
The artificial neural network may be, but is not limited to, a deep neural network (DNN), a convolutional neural network (CNN), a recurrent neural network (RNN), a restricted Boltzmann machine (RBM), a deep belief network (DBN), a bidirectional recurrent deep neural network (BRDNN), a deep Q-network, or a combination of two or more thereof. The AI model may additionally or alternatively include a software structure as well as the hardware structure. The memory 2330 may store various data used by at least one component (e.g., the processor 2320 or the sensor module 2376). The data may include input data or output data regarding, for example, software (e.g., the program 2340) and a command related thereto. The memory 2330 may include the volatile memory 2332 or the non-volatile memory 2334. The program 2340 may be stored as software in the memory 2330, and may include an operating system 2342, middleware 2344, or an application 2346. The input module 2350 may receive commands or data to be used in a component (e.g., the processor 2320) of the user equipment 2301 from the outside (e.g., a user) of the user equipment 2301. The input module 2350 may include, for example, a microphone, a mouse, a keyboard, a key (e.g., a button), or a digital pen (e.g., a stylus pen). The audio output module 2355 may output an audio signal to the outside of the user equipment 2301. The audio output module 2355 may include, for example, a speaker or a receiver. The speaker may be used for a general purpose such as multimedia reproduction or record play. The receiver may be used to receive an incoming call. According to an embodiment of the disclosure, the receiver may be implemented separately from or as a part of the speaker. The display module 2360 may visually provide information to the outside (e.g., the user) of the user equipment 2301. The display module 2360 may include, for example, a display, a hologram display device, or a projector, and a control circuit for controlling the corresponding device.
According to an embodiment of the disclosure, the display module 2360 may include a touch sensor configured to detect a touch, or a pressure sensor configured to measure the strength of a force generated by the touch. The audio module 2370 may convert sound into an electrical signal or convert an electrical signal into sound. According to an embodiment of the disclosure, the audio module 2370 may obtain sound through the input module 2350, or output sound through the audio output module 2355 or an external electronic device (e.g., the electronic device 2302, for example, a speaker or a headphone) directly or wirelessly connected to the user equipment 2301. The sensor module 2376 may sense an operating state (e.g., power or a temperature) of the user equipment 2301, or an external environmental state (e.g., a user state), and may generate an electrical signal or a data value corresponding to the sensed state. According to an embodiment of the disclosure, the sensor module 2376 may include, e.g., a gesture sensor, a gyro sensor, a pressure sensor, a magnetic sensor, an acceleration sensor, a grip sensor, a proximity sensor, a color sensor, an infrared (IR) sensor, a biometric sensor, a temperature sensor, a humidity sensor, or an illuminance sensor. The interface 2377 may support one or more designated protocols that may be used for the user equipment 2301 to be directly or wirelessly connected to the external electronic device (e.g., the electronic device 2302). According to an embodiment of the disclosure, the interface 2377 may include, for example, a high-definition multimedia interface (HDMI), a universal serial bus (USB) interface, a secure digital (SD) card interface, or an audio interface. The connection terminal 2378 may include a connector through which the user equipment 2301 may be physically connected to the external electronic device (e.g., the electronic device 2302).
According to an embodiment of the disclosure, the connection terminal 2378 may include, for example, an HDMI connector, a USB connector, an SD card connector, or an audio connector (e.g., a headphone connector). The haptic module 2379 may convert an electrical signal into a mechanical stimulation (e.g., vibration or motion) or an electric stimulation that the user may sense through a tactile or motion sensation. According to an embodiment of the disclosure, the haptic module 2379 may include, for example, a motor, a piezoelectric element, or an electric stimulation device. The camera module 2380 may capture a still image and a moving image. According to an embodiment of the disclosure, the camera module 2380 may include one or more lenses, image sensors, image signal processors, or flashes. The power management module 2388 may manage the power supplied to the user equipment 2301. According to an embodiment of the disclosure, the power management module 2388 may be implemented as at least a part of, for example, a power management integrated circuit (PMIC). The battery 2389 may supply electric power to at least one component of the user equipment 2301. According to an embodiment of the disclosure, the battery 2389 may include, for example, a non-rechargeable primary battery, a rechargeable secondary battery, or a fuel cell. The communication module 2390 may support establishment of a direct (wired) communication channel or a wireless communication channel between the user equipment 2301 and an external electronic device (e.g., the electronic device 2302, the electronic device 2304, or the server 2308), and execution of communication through the established communication channel. The communication module 2390 may operate independently of the processor 2320 (e.g., the application processor), and may include one or more communication processors that support direct (e.g., wired) communication or wireless communication.
According to an embodiment of the disclosure, the communication module 2390 may include a wireless communication module 2392 (e.g., a cellular communication module, a short-range wireless communication module, or a global navigation satellite system (GNSS) communication module) or a wired communication module 2394 (e.g., a local area network (LAN) communication module or a power line communication (PLC) module). The communication module 2390 may communicate with the external electronic device 2304 through the first network 2398 (e.g., a short-range communication network such as Bluetooth, WiFi Direct, or IrDA) or the second network 2399 (e.g., a long-range communication network such as a legacy cellular network, a 5th-Generation (5G) network, a next-generation communication network, the Internet, or a computer network (e.g., a LAN or WAN)). These various kinds of communication modules may be integrated as one component (e.g., a single chip) or may be implemented as a plurality of components (e.g., a plurality of chips) separate from one another. The wireless communication module 2392 may identify or authenticate the user equipment 2301 in a communication network such as the first network 2398 or the second network 2399 by using subscriber information (e.g., an international mobile subscriber identifier (IMSI)) stored in the subscriber identification module 2396. The wireless communication module 2392 may support a 5G network after a 4th-Generation (4G) network, and next-generation communication technology, e.g., new radio (NR) access technology. The NR access technology may support high-speed transmission of high-volume data (enhanced mobile broadband (eMBB)), terminal power minimization and access of multiple terminals (massive machine type communications (mMTC)), or high reliability and low latency (ultra-reliable and low-latency communications (URLLC)).
The wireless communication module 2392 may support, for example, a high-frequency band (e.g., an mmWave band) to achieve a high data transmission rate. The wireless communication module 2392 may support various techniques for securing performance in the high-frequency band, e.g., beamforming, massive multiple-input and multiple-output (MIMO), full dimensional (FD)-MIMO, an array antenna, analog beamforming, or a large-scale antenna. The wireless communication module 2392 may support various requirements prescribed for the user equipment 2301, an external electronic device (e.g., the electronic device 2304), or a network system (e.g., the second network 2399). According to an embodiment of the disclosure, the wireless communication module 2392 may support a peak data rate (e.g., 20 Gbps or more) for eMBB implementation, a loss coverage (e.g., 164 dB or less) for mMTC implementation, or a user-plane latency (e.g., 0.5 ms or less, or a round trip of 1 ms or less for each of a downlink (DL) and an uplink (UL)) for URLLC implementation. The antenna module 2397 may transmit or receive a signal or power to or from the outside (e.g., an external electronic device). According to an embodiment of the disclosure, the antenna module 2397 may include an antenna including a conductor formed on a substrate (e.g., a printed circuit board (PCB)) or a radiator having a conductive pattern. According to an embodiment of the disclosure, the antenna module 2397 may include a plurality of antennas (e.g., an array antenna). In this case, at least one antenna suitable for a communication scheme used in a communication network such as the first network 2398 or the second network 2399 may be selected from the plurality of antennas by the communication module 2390. The signal or power may be transmitted or received between the communication module 2390 and the external electronic device through the selected at least one antenna.
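The antenna-selection behaviour described above, where the communication module picks at least one suitable antenna from an array for a given network, can be sketched in a few lines. This is an illustrative assumption only: the disclosure does not specify a selection criterion, so an RSSI-based choice and all names below are hypothetical.

```python
# Illustrative sketch (not from the disclosure): choosing one antenna from
# a plurality of antennas for a given network, as the communication module
# 2390 is described as doing. The RSSI-based criterion is an assumption.

def select_antenna(rssi_by_antenna: dict) -> int:
    """Return the index of the antenna with the strongest measured RSSI (dBm)."""
    if not rssi_by_antenna:
        raise ValueError("no antennas available")
    return max(rssi_by_antenna, key=rssi_by_antenna.get)

# Example: antenna 2 reports the strongest signal, so it is selected.
chosen = select_antenna({0: -85.0, 1: -72.5, 2: -64.0, 3: -90.1})
```

In practice the selection could also weigh the band and communication scheme of the target network, which a real module would factor in alongside signal strength.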
According to some embodiments of the disclosure, a component (e.g., a radio frequency integrated circuit (RFIC)) in addition to the radiator may be formed as a part of the antenna module 2397. According to various embodiments of the disclosure, the antenna module 2397 may form an mmWave antenna module. According to an embodiment of the disclosure, the mmWave antenna module may include a PCB, an RFIC disposed on or adjacent to a first surface (e.g., a bottom surface) of the PCB and capable of supporting a designated high-frequency band (e.g., an mmWave band), and a plurality of antennas (e.g., an array antenna) disposed on or adjacent to a second surface (e.g., a top surface or a side surface) of the PCB and capable of transmitting or receiving a signal in the designated high-frequency band. At least some of the components may be connected to one another via a communication scheme between peripheral devices (e.g., a bus, general purpose input and output (GPIO), a serial peripheral interface (SPI), or a mobile industry processor interface (MIPI)) and may exchange signals (e.g., a command or data). According to an embodiment of the disclosure, a command or data may be transmitted or received between the user equipment 2301 and the external electronic device 2304 through the server 2308 connected to the second network 2399. Each of the external electronic devices 2302 and 2304 may be of a type that is the same as or different from the user equipment 2301. According to an embodiment of the disclosure, all or some of the operations performed in the user equipment 2301 may be performed in one or more of the external electronic devices 2302, 2304, and 2308.
For example, when the user equipment 2301 has to perform a certain function or service automatically or in response to a request from the user or another device, the user equipment 2301 may request one or more external electronic devices to perform at least a part of the function or service, instead of performing the function or service itself. The one or more external electronic devices having received the request may execute at least a part of the requested function or service, or an additional function or service related to the request, and transmit a result of the execution to the user equipment 2301. The user equipment 2301 may provide the result, as it is or after additional processing, as at least a part of a response to the request. To this end, for example, cloud computing, distributed computing, mobile edge computing (MEC), or client-server computing may be used. The user equipment 2301 may provide an ultra-low latency service by using the distributed computing or the MEC. In another embodiment of the disclosure, the external electronic device 2304 may include an Internet of Things (IoT) device. The server 2308 may be an intelligent server using machine learning and/or a neural network. According to an embodiment of the disclosure, the external electronic device 2304 or the server 2308 may be included in the second network 2399. The user equipment 2301 may apply an intelligent service (e.g., a smart home, a smart city, a smart car, or health care) based on 5G communication technology and IoT-related technology. The term "module" used in various embodiments of this document may include a unit implemented with hardware, software, or firmware, and may be used interchangeably with a term such as logic, a logic block, a part, or a circuit. The module may be a component configured as one piece, or a minimum unit of the component that performs one or more functions, or a part thereof.
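The offloading pattern described above, in which the user equipment either runs a function locally or delegates it to an external electronic device and post-processes the returned result, can be sketched as follows. The function and parameter names are illustrative assumptions, not an API from the disclosure.

```python
# Hedged sketch of the request/response offloading flow: run locally when
# possible; otherwise ask an external device (e.g., an MEC server) to run
# the task and use its result as part of the response. All names assumed.

def run_with_offload(task, can_run_locally, remote_execute):
    if can_run_locally:
        result = task()
    else:
        # Delegate at least part of the function to an external device,
        # then provide the returned result as part of the response.
        result = remote_execute(task)
    return {"status": "ok", "result": result}

# Example: a task offloaded to a stand-in "remote" executor.
response = run_with_offload(lambda: 6 * 7, False, lambda t: t())
```

In a real deployment the `remote_execute` step would carry the network round trip, which is why the text singles out distributed computing and MEC for ultra-low-latency services.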
For example, according to an embodiment of the disclosure, the module may be implemented in the form of an application specific integrated circuit (ASIC). Various embodiments of this document may be implemented as software (e.g., a program) including one or more instructions stored in a storage medium readable by a machine (e.g., the user equipment 130, the first home appliance 120, or the second home appliance 140). For example, a processor of the machine (e.g., the user equipment 130, the first home appliance 120, or the second home appliance 140) may call at least one of the one or more stored instructions from the storage medium and execute the instruction. This enables the machine to operate to perform at least one function according to the at least one called instruction. The one or more instructions may include code generated by a compiler or code executable by an interpreter. A machine-readable storage medium may be provided in the form of a non-transitory storage medium. Here, the term "non-transitory" simply means that the storage medium is a tangible device and does not include a signal (e.g., an electromagnetic wave); this term does not differentiate between a case where data is semi-permanently stored in the storage medium and a case where the data is temporarily stored in the storage medium. According to an embodiment of the disclosure, a method according to various embodiments of the disclosure may be included and provided in a computer program product. The computer program product may be traded as a product between a seller and a buyer. The computer program product may be distributed in the form of a machine-readable storage medium (e.g., compact disc read only memory (CD-ROM)), or be distributed (e.g., downloaded or uploaded) online via an application store (e.g., Play Store™), or directly between two user devices (e.g., smart phones).
In the case of online distribution, at least a part of the computer program product may be at least temporarily stored in a machine-readable storage medium such as a memory of a server of a manufacturer, a server of an application store, or a relay server, or may be generated temporarily. According to various embodiments of the disclosure, each of the above-described components (e.g., a module or program) may include a single entity or a plurality of entities, some of which may be separately disposed in other components. According to various embodiments of the disclosure, one or more of the above-described components or operations may be omitted, or one or more other components or operations may be added. Alternatively or additionally, a plurality of components (e.g., modules or programs) may be integrated into one component. In this case, the integrated component may perform one or more functions of each of the plurality of components in a manner that is the same as or similar to that of the corresponding component of the plurality of components before the integration. According to various embodiments of the disclosure, operations performed by a module, a program, or other components may be executed sequentially, in parallel, repeatedly, or heuristically; one or more of the operations may be executed in a different order or omitted, or one or more other operations may be added.
DETAILED DESCRIPTION Preferred embodiments of this disclosure shall be described below with reference to the accompanying drawings. Embodiment 1 The embodiment of this disclosure provides an integrated control panel of a household appliance. FIG. 1 is a modularized schematic diagram of the integrated control panel of a household appliance of Embodiment 1 of this disclosure. FIG. 2 is a modularized schematic diagram of household appliances of Embodiment 1 of this disclosure. FIG. 3 is a modularized schematic diagram of loads of Embodiment 1 of this disclosure. As shown in FIG. 1 and FIG. 2, an integrated control panel 100 of a household appliance 1 includes a substrate 150, and a processor module 110 and a drive module 120 disposed on the substrate 150. The processor module 110 generates at least two drive control signals and outputs them to the drive module 120, and the drive module 120 generates and outputs, according to the at least two drive control signals, at least two drive signals driving at least two loads. In the embodiment of this disclosure, the household appliance 1 may be a household appliance with multiple loads; for example, the household appliance 1 is a washing machine, a refrigerator, a dishwasher, a range hood, or an oven, etc. For example, the household appliance 1 may also be an air conditioner. In the embodiment of this disclosure, the substrate 150 may be any of various types of circuit substrates, and the modules of the integrated control panel 100 are provided on the substrate 150. In this way, the modules of the integrated control panel 100 capable of driving at least two loads are disposed on the same substrate 150, thereby achieving integration. In the embodiment of this disclosure, the drive module 120 is able to generate and output at least two drive signals to drive at least two loads respectively. In other words, the drive control signals, the drive signals, and the loads are in a one-to-one correspondence.
In the embodiment of this disclosure, each drive control signal may include one or more paths of control signals. For example, one drive control signal includes multiple paths of control signals outputted from the processor module 110 for the drive module 120 to generate a drive signal driving a load. In the embodiment of this disclosure, the drive signals driving the loads include the power required to drive the loads. In the embodiment of this disclosure, the loads driven by the drive module 120 may include a variable frequency load, or a fixed frequency load (i.e., a non-variable frequency load), or may include both a variable frequency load and a fixed frequency load. In the embodiment of this disclosure, the number of loads driven by the drive module 120 is at least two, and a particular number may be determined as actually demanded. For example, the number of loads driven by the drive module 120 is 2-5. In the embodiment of this disclosure, the loads driven by the drive module 120 may include a variable frequency load and/or a fixed frequency load; for example, they may include at least two variable frequency loads. In the embodiment of this disclosure, as shown in FIG. 3, the loads driven by the drive module 120 may be various loads in the household appliance 1. For example, the loads driven by the drive module 120 may include at least one of an electric motor 11, a fan 12, a water pump 13, a compressor 14, and a light source 15. In one implementation of the embodiment of this disclosure, the loads may include at least two motors of different types. In one implementation of the embodiment of this disclosure, the loads may include a main drive motor of a washing machine drum and a drainage pump. In the embodiment of this disclosure, the loads driven by the drive module 120 may be multiple loads of the same type, or may be multiple loads of different types.
In the case of multiple loads of different types, the types of the loads may all be different from each other, or the types of a part of the loads may be identical. In one implementation of the embodiment of this disclosure, the loads may include at least one main drive motor of a washing machine drum and at least one main drive motor of a clothes dryer drum. In one implementation of the embodiment of this disclosure, the loads may include at least two electric motors of identical types. In one implementation of the embodiment of this disclosure, the loads may include at least two main drive motors of washing machine drums. In one implementation of the embodiment of this disclosure, the loads may include a permanent magnet synchronous motor. In one implementation of the embodiment of this disclosure, the loads may include an induction motor. In one implementation of the embodiment of this disclosure, the loads may include an internal-rotor electric machine. In one implementation of the embodiment of this disclosure, the loads may include an external-rotor electric machine. For example, the loads driven by the drive module 120 include a variable frequency motor, a variable frequency fan, and a variable frequency water pump. In the embodiment of this disclosure, the drive module 120 outputs the at least two drive signals in a synchronous manner or a time-division manner. That is, the drive module 120 performs synchronous driving or time-division driving on the at least two loads, and a particular driving mode may be designed as actually demanded. A structure of the drive module 120 shall be described in detail below. FIG. 4 is a modularized schematic diagram of the drive module of Embodiment 1 of this disclosure.
In one implementation of the embodiment of this disclosure, as shown in FIG. 4, the drive module 120 includes N variable frequency drive modules 121 configured to, according to N drive control signals, generate N variable frequency drive signals driving N variable frequency loads 200 respectively, where N is an integer greater than or equal to 2. In this way, the drive module 120 may drive multiple variable frequency loads. As the cost of a variable frequency control panel is relatively high, integrating multiple variable frequency control panels into one control panel may further reduce the cost of the control panel. In the embodiment of this disclosure, the N variable frequency loads 200 may be variable frequency loads of identical types, or may be variable frequency loads of different types, or a part of them may be of identical types and a part of them of different types. FIG. 5 is a modularized schematic diagram of the variable frequency drive module of Embodiment 1 of this disclosure. As shown in FIG. 5, a variable frequency drive module 121 includes: a voltage inverting drive module 1211 configured to, according to the drive control signals, invert direct currents into the variable frequency drive signals; and a first output interface 1212 configured to connect the variable frequency load to which the variable frequency drive module 121 corresponds, and output the variable frequency drive signal to the variable frequency load. In one implementation of the embodiment of this disclosure, the variable frequency drive signal is a three-phase alternating current with variable frequency and voltage. That is, the voltage inverting drive module 1211 inverts the direct current into the three-phase alternating current with variable frequency and voltage for driving the variable frequency load to which the variable frequency drive module 121 corresponds.
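One common way an inverter produces the three-phase alternating current with variable frequency and voltage described above is sinusoidal PWM, where three duty-cycle references are spaced 120 degrees apart and scaled by a modulation index. The disclosure does not name a modulation scheme, so the SPWM sketch below is an assumption chosen for illustration.

```python
# Assumed sketch: sinusoidal PWM references for a three-phase inverter
# output (phases U, V, W). The modulation scheme and names are not from
# the disclosure; they illustrate "variable frequency and voltage" only.

import math

def three_phase_duties(t, freq_hz, modulation):
    """Return PWM duty cycles (0..1) for phases U, V, W at time t (seconds)."""
    w = 2.0 * math.pi * freq_hz
    offsets = (0.0, -2.0 * math.pi / 3.0, -4.0 * math.pi / 3.0)
    # Centre each sinusoid on 50% duty so the switched output averages to AC.
    return tuple(0.5 + 0.5 * modulation * math.sin(w * t + o) for o in offsets)

duties = three_phase_duties(t=0.0, freq_hz=50.0, modulation=0.9)
```

Varying `freq_hz` changes the output frequency and varying `modulation` changes the effective output voltage, which is how a variable frequency drive adjusts motor speed and torque.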
In one implementation of the embodiments of this disclosure, a rated output current of the variable frequency drive module 121 is 0-20 A, and an alternating current peak value thereof is 20 A. For example, the rated output current of the variable frequency drive module 121 is 0-10 A, and the alternating current peak value thereof is 10 A. For another example, the rated output current of the variable frequency drive module 121 is 0-6 A, and the alternating current peak value thereof is 6 A. In one implementation of the embodiments of this disclosure, a switching frequency of the variable frequency drive module 121 is 0-20 kHz. For example, the switching frequency of the variable frequency drive module 121 is 10-20 kHz. In one implementation of the embodiment of this disclosure, the voltage inverting drive module 1211 includes an overcurrent protection circuit. The overcurrent protection circuit is configured to perform overcurrent detection on the variable frequency load to which the variable frequency drive module 121 corresponds, lock the voltage inverting drive module 1211 when an overcurrent is detected, and transmit a load failure signal to the processor module 110. Thus, load overcurrent protection via hardware may be achieved. In one implementation of the embodiments of this disclosure, the voltage inverting drive modules 1211 of the at least two variable frequency drive modules 121 are voltage inverting drive modules of the same type, or the voltage inverting drive modules 1211 of the at least two variable frequency drive modules 121 include at least two types of voltage inverting drive modules.
For example, the voltage inverting drive modules 1211 of the at least two variable frequency drive modules 121 are all IGBT discrete circuits, or are all IPM modules, or a part of them are IGBT discrete circuits and the other part are IPM modules. In one implementation of the embodiments of this disclosure, as shown in FIG. 4, the drive module 120 may further include M fixed frequency drive modules 122 configured to, according to M drive control signals, generate M fixed frequency drive signals driving M fixed frequency loads 300, M being an integer greater than or equal to 1. In this way, the drive module 120 may drive not only multiple variable frequency loads but also fixed frequency loads, thereby further reducing the cost of the control panel and the overall cost of the household appliance 1. In one implementation of the embodiments of this disclosure, the M fixed frequency loads 300 may be fixed frequency loads of the same type, or fixed frequency loads of different types, or a part of the fixed frequency loads may be of the same type and a part of them of different types. FIG. 6 is a modularized schematic diagram of the fixed frequency drive module of Embodiment 1 of this disclosure. As shown in FIG. 6, the fixed frequency drive module 122 includes: a relay module 1221 configured to, according to the drive control signal, generate the fixed frequency drive signal; and a second output interface 1222 configured to connect the fixed frequency load to which the fixed frequency drive module corresponds, and output the fixed frequency drive signal to the fixed frequency load.
In one implementation of the embodiments of this disclosure, the processor module 110 may include at least one processor unit 111, the processor unit 111 being, for example, a micro-control unit (MCU). In a case where the processor module 110 includes only one processor unit, that processor unit generates the at least two drive control signals; and in a case where the processor module 110 includes multiple processor units, each processor unit generates a respective drive control signal. In one implementation of the embodiment of this disclosure, the processor unit includes multiple I/O interfaces, a part of which are used to connect to the drive module. For example, six I/O interfaces of the processor unit output six paths of control signals to drive one voltage inverting drive module 1211; that is, the six paths of control signals are configured as one drive control signal outputted to one voltage inverting drive module 1211. In one implementation of the embodiments of this disclosure, in a case where the processor unit receives a load failure signal from the drive module 120, the processor unit stops outputting, to the drive module 120, the drive control signal corresponding to the load where the failure occurs. Thus, load overcurrent protection via software may be achieved. For example, when one I/O interface of the processor unit receives a load failure signal from one voltage inverting drive module 1211, the processor unit stops outputting the drive control signal to that voltage inverting drive module 1211. Thus, as described above, in a case where the voltage inverting drive module 1211 includes an overcurrent protection circuit, overcurrent protection via both software and hardware may be achieved.
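The software-side protection described above, where the processor unit stops outputting the drive control signal only for the failed load while the other loads keep running, can be sketched with simple per-load bookkeeping. The dictionary-based structure and load names are assumptions for illustration.

```python
# Hedged sketch of the software overcurrent protection: on a load failure
# signal, disable only that load's drive control signal output. The
# bookkeeping structure and load names are illustrative assumptions.

class ProcessorUnit:
    def __init__(self, load_ids):
        self.enabled = {lid: True for lid in load_ids}

    def on_load_failure(self, load_id):
        self.enabled[load_id] = False  # stop that drive control signal

    def active_outputs(self):
        return [lid for lid, on in self.enabled.items() if on]

mcu = ProcessorUnit(["motor", "fan", "pump"])
mcu.on_load_failure("pump")  # failure signal received for one load
```

Combined with the hardware latch in the inverter, this gives the two-layer (software and hardware) protection the text describes.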
In one implementation of the embodiments of this disclosure, the processor unit may further include at least one of the following analog-to-digital conversion interfaces: a first analog-to-digital conversion interface configured to detect a temperature of the drive module; a second analog-to-digital conversion interface configured to detect a current of the load; and a third analog-to-digital conversion interface configured to detect a voltage of a bus. Thus, the processor unit has at least one of an over-temperature protection function, an overcurrent protection function, and an overvoltage protection function. In one implementation of the embodiments of this disclosure, the second analog-to-digital conversion interface detects the current of the load by using a one-resistor sampling scheme, a two-resistor sampling scheme, or a three-resistor sampling scheme. In one implementation of the embodiments of this disclosure, the integrated control panel 100 may further include a power supply module 130 disposed on the substrate 150, the power supply module 130 being configured to supply power to at least one module in the integrated control panel 100. In one implementation of the embodiments of this disclosure, the power supply module 130 may further be configured to supply power to at least one load in the household appliance 1. In this way, the power supply module 130 may be used as a power supply for at least one load; hence, no separate power supply needs to be additionally provided for the load, thereby simplifying the circuit structure and further reducing the overall cost of the household appliance 1. FIG. 7 is a modularized schematic diagram of the power supply module of Embodiment 1 of this disclosure.
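The three protection functions enabled by the analog-to-digital conversion interfaces described above (over-temperature, overcurrent, and overvoltage on the bus) amount to comparing each reading against a limit. The limit values in this sketch are placeholders, not figures from the disclosure.

```python
# Illustrative sketch of the three ADC-based protection checks. The
# thresholds are placeholder assumptions, not values from the text.

def protection_faults(temp_c, load_amps, bus_volts,
                      temp_max=100.0, amps_max=10.0, volts_max=400.0):
    """Return the list of protection faults triggered by the ADC readings."""
    faults = []
    if temp_c > temp_max:
        faults.append("over-temperature")
    if load_amps > amps_max:
        faults.append("overcurrent")
    if bus_volts > volts_max:
        faults.append("overvoltage")
    return faults

# Example: only the load current exceeds its limit.
faults = protection_faults(temp_c=85.0, load_amps=12.0, bus_volts=310.0)
```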
As shown in FIG. 7, the power supply module 130 includes: a first power supply unit 131 configured to convert an alternating current into a direct current of a first voltage; and a second power supply unit 132 configured to perform voltage reduction processing on the direct current of the first voltage, and output a direct current of a second voltage to the drive module. In one implementation of the embodiments of this disclosure, the first voltage and the second voltage may be set as actually demanded. For example, the first power supply unit 131 is inputted with AC power, and obtains DC power of a voltage of 310 V by filtering and rectification. For example, the second power supply unit 132 forms a BUCK step-down circuit with a switching power supply chip, an inductor, a diode, and a capacitor, and performs voltage reduction processing on the DC power of a voltage of 310 V to obtain DC power of a voltage of 15 V. The DC power of a voltage of 15 V may be outputted to the drive module 120, such as to the multiple voltage inverting drive modules 1211 in the drive module 120. In this way, the multiple voltage inverting drive modules 1211 may share one power supply module. In one implementation of the embodiments of this disclosure, as shown in FIG. 1, the integrated control panel 100 may further include a communication module 140 disposed on the substrate 150; the processor module 110 is in communication with the main control panel of the household appliance 1 via the communication module 140, and generates the drive control signals according to an instruction from the main control panel.
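For the 310 V to 15 V BUCK stage described above, an ideal buck converter operating in continuous conduction has a duty cycle of D = Vout / Vin. The disclosure gives only the voltages, so the calculation below is a textbook idealization, not a claim about the actual circuit's switching behaviour.

```python
# Worked example (ideal CCM buck assumption, not from the disclosure):
# duty cycle needed to step the 310 V DC bus down to the 15 V rail.

def buck_duty_cycle(v_in, v_out):
    """Ideal continuous-conduction buck converter duty cycle D = Vout / Vin."""
    if not 0 < v_out < v_in:
        raise ValueError("buck requires 0 < Vout < Vin")
    return v_out / v_in

d = buck_duty_cycle(310.0, 15.0)  # roughly a 4.8% duty cycle
```

A real switching power supply chip would adjust this duty cycle in closed loop to hold the 15 V output against load and input variations.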
For example, according to an input signal of an input panel of the household appliance 1, the main control panel generates a control instruction for a load and transmits it to the integrated control panel 100; the communication module 140 of the integrated control panel 100 receives the control instruction and transmits it to the processor module 110, and the processor module 110 generates a drive control signal for the load according to the control instruction and outputs it to the drive module 120 to generate a corresponding drive signal for driving the load. In a case where the main control panel generates multiple control instructions, the main control panel transmits the multiple control instructions to the integrated control panel 100 in turn, that is, the processor module 110 is in time-division communication with the main control panel. For example, the control instructions include operating parameters and operating time of the load, etc. For example, for an electric motor, the control instructions include a speed of rotation, a direction of rotation, an operational time, etc., of the electric motor. In one implementation of the embodiments of this disclosure, the communication module 140 may include an optical coupler configured to provide low-current and high-current separation between the integrated control panel 100 and the main control panel. The number of optical couplers may be determined as actually demanded. For example, the communication module 140 includes two optical couplers. In one implementation of the embodiments of this disclosure, as shown in FIG. 7, the power supply module 130 further includes: a third power supply unit 133 configured to perform voltage reduction processing on the direct current of the second voltage outputted by the second power supply unit 132 and output a direct current of a third voltage to the processor module 110 and/or the communication module 140. In the embodiment of this disclosure, the third voltage may be set as actually demanded.
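The time-division exchange for an electric motor described above can be sketched as follows. The field names (`speed_rpm`, `direction`, `run_time_s`) are illustrative assumptions; the disclosure only states that an instruction carries operating parameters and an operating time.

```python
from dataclasses import dataclass

@dataclass
class MotorInstruction:
    load_id: int       # which load the instruction targets
    speed_rpm: int     # speed of rotation
    direction: str     # direction of rotation, e.g. "cw" or "ccw"
    run_time_s: float  # operational time

def time_division_transmit(instructions):
    """The main control panel sends multiple instructions in turn, one per
    communication slot; the processor module turns each into one drive
    control signal for the drive module."""
    drive_control_signals = []
    for instr in instructions:  # one slot per instruction, in order
        drive_control_signals.append((instr.load_id, instr.speed_rpm, instr.direction))
    return drive_control_signals
```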
For example, the third power supply unit 133 performs voltage reduction processing on the 15 V DC power outputted by the second power supply unit 132 to obtain 3.3 V or 5 V DC power, and outputs the 3.3 V or 5 V DC power to the processor module 110 and/or the communication module 140. In this way, the power supply module 130 supplies power to the modules in the integrated control panel 100 that need power. It can be seen from the above embodiment that integration of the drive module is implemented by integrating the drive module driving multiple loads on the control panel, thereby reducing the cost of the control panel and simplifying the installation process. The overall cost of the household appliance 1 may be reduced while ensuring the performance of the household appliance 1. And as relatively few devices and electrical connections are employed, the reliability of the system may be notably improved. Embodiment 2 The embodiment of this disclosure provides a control system for a household appliance, the control system including the integrated control panel of a household appliance described in Embodiment 1. FIG. 8 is a modularized schematic diagram of the control system of a household appliance of Embodiment 2 of this disclosure. As shown in FIG. 8, a control system 10 of a household appliance includes: the integrated control panel 100; and a main control panel 400 in communication with the integrated control panel 100. In the embodiment of this disclosure, a structure of the integrated control panel 100 is as shown in FIG. 1. The processor module 110 of the integrated control panel 100 generates at least two drive control signals according to an instruction from the main control panel 400 and outputs them to the drive module 120, and according to the at least two drive control signals, the drive module 120 generates and outputs at least two drive signals driving at least two loads.
Particular contents of the integrated control panel 100 are identical to those contained in Embodiment 1 and shall not be described herein any further. In the embodiment of this disclosure, the main control panel 400 is also referred to as an upper computer or an upper computer drive panel. For example, after a user operates the input panel of the household appliance 1, the main control panel 400 generates a control instruction for a load according to an input signal generated by the input panel in response to the operation of the user and transmits the control instruction to the integrated control panel 100. The communication module 140 of the integrated control panel 100 receives the control instruction and transmits it to the processor module 110. According to the control instruction, the processor module 110 generates a drive control signal for the load and outputs it to the drive module 120 for generating a corresponding drive signal so as to drive the load. In a case where the main control panel 400 generates multiple control instructions, the main control panel 400 sequentially transmits the multiple control instructions to the integrated control panel 100, that is, the processor module 110 is in time-division communication with the main control panel 400. In the embodiment of this disclosure, the control system 10 of the household appliance 1 may further include other components as demanded, and reference may be made to related art for a particular structure of the control system 10, which shall not be described herein any further. In the embodiment of this disclosure, the household appliance 1 may be a household appliance with multiple loads; for example, the household appliance 1 is a washing machine, a refrigerator, an air conditioner, a dishwasher, a range hood, an oven, etc.
It can be seen from the above embodiment that integration of the drive module is implemented by integrating the drive module driving multiple loads on the control panel, thereby reducing the cost of the control panel and simplifying the installation process. The overall cost of the household appliance 1 may be reduced while ensuring the performance of the household appliance 1. And as relatively few devices and electrical connections are employed, the reliability of the system may be notably improved. The above apparatuses and methods of this disclosure may be implemented by hardware, or by hardware in combination with software. This disclosure relates to such a computer-readable program that, when the program is executed by a logic device, the logic device is enabled to carry out the apparatus or components as described above, or to carry out the methods or steps as described above. This disclosure also relates to a storage medium for storing the above program, such as a hard disk, a floppy disk, a CD, a DVD, a flash memory, etc. This disclosure is described above with reference to particular embodiments. However, it should be understood by those skilled in the art that such a description is illustrative only and not intended to limit the protection scope of this disclosure. Various variants and modifications may be made by those skilled in the art according to the principle of this disclosure, and such variants and modifications fall within the scope of this disclosure.
DETAILED DESCRIPTION Preferred implementation modes of the present application will now be described with reference to the accompanying drawings. It should be understood by those skilled in the art that these implementation modes are merely illustrative of the technical principles of the present application and are not intended to limit the scope of the present application. It is to be noted that, in the description of the present application, although each step of the control method of the present application is described in a particular sequence, these sequences are not limiting, and those skilled in the art can perform the steps in a different sequence without departing from the basic principles of the present application. In view of the problem that an existing home system has difficulty meeting the demand of running a plurality of household appliances in the same time period, the present application provides a control method for a home system, which aims to maximize the number of household appliances in a working state in the same time period on the premise of the safe use of electricity, so as to meet the demand of a user simultaneously using a plurality of household appliances. As shown in FIG. 1, the control method for a home system of the application includes: step S1: obtaining the power curve of a household appliance to be run; step S2: obtaining the power curves of all currently running household appliances; and step S3: if the power curve of the household appliance to be run and that of at least one currently running household appliance meet an off-peak running condition, making the household appliance to be run and the currently running household appliance run within an off-peak period of time.
In the above steps, the "power curve" of the household appliance to be run or of a currently running household appliance refers to the following: the whole running program that the household appliance to be run or the currently running household appliance is going to execute or is executing includes a plurality of stages of working conditions; when the appliance is in the working conditions of different stages, its real-time power differs, and the curve formed by the real-time power of the working conditions of all the stages is the power curve. For example, when the household appliance is clothes treatment equipment, the overall running program of the equipment includes a washing working condition, a dewatering working condition, a rinsing working condition, a dewatering working condition and a drying working condition, and the real-time power corresponding to each working condition is P1, P2, P3, P4 and P5 respectively, at least some of P1, P2, P3, P4 and P5 being different. In this case, the real-time power over the overall running procedure forms one fluctuating power curve. Once the power curves of the household appliance to be run and all the currently running household appliances are obtained, the specific manner of judging whether the household appliance to be run and the currently running household appliances meet the off-peak running condition can be as follows: in the case where the power curve of the household appliance to be run and the power curve of the currently running household appliance both fluctuate, i.e. the two power curves have peaks and troughs, it is judged whether the peaks (i.e. the high power sections) and troughs (i.e. the low power sections) of the two power curves meet the interleaving condition of "one running with high power (or low power) while the other runs with low power (or high power)".
All the high power sections of the two power curves must be strictly interleaved; that is to say, the household appliance to be run and the currently running household appliance must never run in a high power working condition at the same moment, although they may run in a low power working condition at the same moment. Of course, in the off-peak running procedure, the total power of all the currently running household appliances and the household appliance to be run never exceeds the total rated power defined by the circuit, and the basic premise of off-peak running is that, after the household appliance to be run is connected into the circuit, the circuit is never overloaded during the whole running procedure. The conditions of the above-mentioned off-peak running are exemplified below with reference to one household appliance to be run and one currently running household appliance. As shown in FIG. 2, each of the whole running procedures of the household appliance to be run and the currently running household appliance includes two working conditions, and the powers under the two working conditions differ in magnitude. The working condition of the currently running household appliance in the first stage is a low power working condition with duration T1, and its working condition in the second stage is a high power working condition with duration T2; the working condition of the household appliance to be run in the first stage is a high power working condition with duration t1, and its working condition in the second stage is a low power working condition with duration t2.
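The strict-interleaving test above can be sketched as follows, modeling each power curve as a list of (is_high_power, duration) stages sampled in whole time units. This representation is an assumption chosen for illustration; the method itself only requires knowing when each curve is in a high power section.

```python
def expand(stages):
    """Expand [(is_high, duration), ...] into one high/low flag per time unit."""
    flags = []
    for is_high, duration in stages:
        flags.extend([is_high] * duration)
    return flags

def off_peak_compatible(running_stages, to_run_stages, delay):
    """True if, with the appliance to be run connected after `delay` time
    units, no instant has both appliances in a high power working condition."""
    a = expand(running_stages)
    b = [False] * delay + expand(to_run_stages)
    length = max(len(a), len(b))
    a += [False] * (length - len(a))  # after a program ends, its power is low
    b += [False] * (length - len(b))
    return not any(x and y for x, y in zip(a, b))
```

For curves in the style of FIG. 2, `off_peak_compatible([(False, 4), (True, 3)], [(True, 2), (False, 3)], 2)` holds, because the high section of the appliance to be run fits inside the remaining low section of the running one; delaying one more unit makes the high sections collide.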
After the currently running household appliance runs for a time Δt, the household appliance to be run is intended to be connected into the circuit to run simultaneously with the currently running household appliance. In this case, if it is desired to make the household appliance to be run and the currently running household appliance run within an off-peak period of time, the high power section of the household appliance to be run (i.e. the t1 section) must be strictly staggered from the high power section (i.e. the T2 section) of the currently running appliance; that is, if it is desired to connect the household appliance to be run into the circuit after the currently running household appliance runs for the time Δt, the duration of the t1 section of the household appliance to be run cannot exceed the remaining duration (i.e., T1−Δt) of the T1 section of the currently running household appliance. Of course, since the case where the household appliance to be run and the currently running household appliance run in a low power working condition at the same time, or where the currently running household appliance ends its running program, leaving only the household appliance to be run running in a low power working condition, will not cause a circuit overload, in practice the duration of the low power section (i.e., the t2 section) of the household appliance to be run can be greater than the duration of the T2 section of the currently running household appliance. In the case where the household appliance to be run and the currently running household appliance run strictly within an off-peak period of time, the duration of the t1 section equals T1−Δt, and the duration of the t2 section equals that of the T2 section.
As shown in FIG. 3, the power curves of the currently running household appliance and the household appliance to be run may be opposite to the power curves shown in FIG. 2: the working condition of the currently running household appliance in the first stage is a high power working condition with duration T1, its working condition in the second stage is a low power working condition with duration T2, the working condition of the household appliance to be run in the first stage is a low power working condition with duration t1, and its working condition in the second stage is a high power working condition with duration t2. If the household appliance to be run is connected into the circuit after the currently running household appliance runs for a time Δt, so as to run simultaneously with the currently running household appliance, the high power section (i.e., the t2 section) of the household appliance to be run and the high power section (i.e., the T1 section) of the currently running household appliance still need to be strictly staggered. In this case, if it is desired to connect the household appliance to be run into the circuit after the currently running household appliance runs for the time Δt, the duration of the t1 section of the household appliance to be run must not be less than the remaining running duration (i.e., T1−Δt) of the high power section of the currently running household appliance. In the second stage, when the household appliance to be run runs in a high power working condition and the currently running household appliance runs in a low power working condition, the load condition of the circuit can be met, so that obviously no circuit overload will be caused even if the ending time of the household appliance to be run is after that of the currently running household appliance.
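Under the two-stage curves of FIG. 2 and FIG. 3, both constraints reduce to a bound on the connection delay Δt. A small sketch, with variable names following the text (treating the bound as a latest delay in the FIG. 2 case and an earliest delay in the FIG. 3 case is my reading of the constraints above):

```python
def max_delay_fig2(T1, t1):
    """FIG. 2 case: the high section t1 must satisfy t1 <= T1 - dt,
    so the latest admissible connection delay is dt = T1 - t1."""
    return T1 - t1

def min_delay_fig3(T1, t1):
    """FIG. 3 case: the low section t1 must satisfy t1 >= T1 - dt,
    so the earliest admissible connection delay is dt = T1 - t1
    (zero if the low section already covers the whole high section)."""
    return max(0, T1 - t1)
```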
It will be appreciated by a person skilled in the art that, although both the above-mentioned cases are described in connection with the case where only one currently running household appliance is connected to the circuit and the power curves of the currently running household appliance and the household appliance to be run each include two stages, this is merely for the purpose of illustrating the off-peak running conditions of the present application by example and should not constitute any limitation to the protection scope of the present application. Without departing from the basic principles of the present application, the running environment of the circuit and the power curve of each household appliance are not limited to the above-mentioned cases. For example, under the condition that the power curves of a plurality (e.g., two) of currently running household appliances fluctuate similarly and the plurality of power curves and the power curve of the household appliance to be run meet the off-peak running condition, the household appliance to be run can also run within an off-peak period of time with the plurality of currently running household appliances, i.e., the household appliance to be run runs in a high power mode while the plurality of currently running household appliances run in a low power mode. When the power curves of the household appliance to be run and the currently running household appliance both include a plurality of peaks and troughs, the duration of each peak and trough needs to be compared so as to judge whether each high power section of the household appliance to be run is strictly staggered from each high power section of the currently running household appliance, and further to judge whether the power curve of the household appliance to be run and the power curve of the currently running household appliance meet the conditions of off-peak running.
In summary, in the case that the currently running household appliance is connected to the main circuit, suppose that substituting the maximum power of the program to be run by the household appliance to be run into the current circuit environment of the main circuit would overload the main circuit. If the household appliance to be run and the currently running household appliance can nevertheless run within an off-peak period of time, the stage of maximum power of the household appliance to be run coincides with the low power stage of the currently running household appliance. Therefore, when the household appliance to be run runs at high power, the currently running household appliance running at low power leaves the household appliance to be run a circuit environment with a large power margin (i.e., the remaining capacity in the main circuit), so that the household appliance to be run can be connected into the circuit to run without causing a circuit overload. Preferably, the control method of the present application further includes: if the power curve of the household appliance to be run and the power curve of any one of the currently running household appliances do not meet the off-peak running condition, obtaining the priority sequence of all currently running household appliances and the priority of the household appliance to be run; and selectively adjusting the running states of the currently running household appliances according to the priority sequence of all currently running household appliances and the priority of the household appliance to be run. In the above steps, the manner of setting the priority sequence of the currently running household appliances and the running priority of the household appliance to be run is not unique. For example, the running priority of all household appliances can be set in advance by a user.
For example, the running priority of each household appliance (or the priority sequence of a plurality of household appliances) is directly set and then uploaded to a cloud server or a control center of the home system; it is also possible that a household appliance is given its running priority before each run. Preferably, the home system of the present application further includes a smart plug capable of connecting a plurality of household appliances to a circuit, the smart plug having a plurality of sockets with different priorities, each socket being capable of connecting to and supplying power to one household appliance. In the case where a socket supplies power to a household appliance, the priority of the socket is the running priority of the household appliance inserted into the socket. Since the priorities of the sockets to which the household appliances are connected differ, a plurality of household appliances can be sorted according to the priorities of the sockets to which they are connected; that is, the priorities of the sockets determine the priorities of the currently running household appliances linked to each socket, so that all the currently running household appliances have a running priority sequence. The running priority of the household appliance to be run is the priority of the socket that supplies power to it (the case where sockets of the same priority exist is not considered here). Further, the one-to-one correspondence between the sequence of priorities and each currently running household appliance is changeable. The user may adjust the priority sequence by adjusting the correspondence between the sockets and their priorities.
For example, the user can adjust the priority sequence by changing the socket positions to which the household appliances are connected; alternatively, the user may change the priority sequence by directly changing the priority of each socket, e.g., setting the socket with a low priority to a medium priority and the socket with a medium priority to a low priority. The priority sequence can be adjusted either by equipment/program setting or by manual setting. More preferably, the step of "selectively adjusting the running state of the currently running household appliance according to the priority sequence of all currently running household appliances and the priority of the household appliance to be run" specifically includes: determining the position of the priority of the household appliance to be run in the priority sequence of all the currently running household appliances; and selectively adjusting the running states of the currently running household appliances according to the position of the priority of the household appliance to be run in the priority sequence of all the currently running household appliances. By comparing the priority of the household appliance to be run with each priority in the priority sequence of all the currently running household appliances, the relationship between the priority of the household appliance to be run and the priorities of the currently running household appliances can be obtained, so that which currently running household appliance has its running state adjusted is determined according to the priorities of the household appliance to be run and all the currently running household appliances.
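The socket-derived priority sequence can be sketched as below. Representing a priority as a number where a higher value means a higher priority is an assumption; the disclosure does not fix a scale.

```python
def priority_sequence(socket_priority, appliance_socket):
    """Sort appliances by the priority of the socket each one is plugged
    into, highest priority first. `socket_priority` maps socket id to a
    numeric priority; `appliance_socket` maps appliance name to socket id."""
    return sorted(appliance_socket,
                  key=lambda name: socket_priority[appliance_socket[name]],
                  reverse=True)
```

Re-plugging an appliance into a different socket, or reassigning a socket's priority, changes the resulting sequence without touching the appliances themselves, which matches the two adjustment routes described above.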
Therefore, by selectively adjusting the running states of the currently running household appliances, the circuit environment of the main circuit is changed and the household appliance to be run is connected as far as possible. In one possible implementation mode, the step of "selectively adjusting the running states of the currently running household appliances according to the position of the priority of the household appliance to be run in the priority sequence" includes: when the priority of the household appliance to be run is at the highest level, adjusting the running states of the currently running household appliances according to the priority sequence. In the above step, since the priorities of all the currently running household appliances are lower than that of the household appliance to be run, all the currently running household appliances are adjustable objects. "Adjusting the running states of the currently running household appliances according to the priority sequence" specifically means that, when the running states of the currently running household appliances are adjusted, the currently running household appliance with the lowest priority is adjusted first. If the circuit environment after the currently running household appliance with the lowest priority is adjusted does not meet the connecting demand of the household appliance to be run, the currently running household appliance with the next lowest priority is adjusted, and so on, until the currently running household appliance with the highest priority is adjusted.
As an example, the manners for adjusting a currently running household appliance include at least one of "enabling the currently running household appliance to run in a low power mode with small occupied power", "enabling the currently running household appliance to suspend the running program", "enabling the currently running household appliance to end the running program and enter a standby state", and "enabling the currently running household appliance to enter an off/to-be-awakened state". When there are multiple adjustment manners, the execution sequence of the adjustment manners can be set according to the usage habits, usage demands and the like of users. For example, the execution sequence is: switch to low power mode > suspend running > standby > off/to be awakened. Of course, the adjustment manner specifically adopted is not limited to the above four types, so long as the power occupation of the main circuit can be reduced on the premise of meeting the usage demand of users. Furthermore, the control method of the present application further includes, at the same time as or after the step of "adjusting the running states of the currently running household appliances according to the priority sequence": obtaining a current power margin; obtaining a maximum power of the household appliance to be run; comparing the current power margin with the maximum power; and if the current power margin is greater than the maximum power, making the household appliance to be run run. The above-mentioned "current power margin" refers to the power margin in the main circuit after the running states of the currently running household appliances are adjusted (equivalent to the remaining loading capacity which the main circuit can bear after the adjustment).
Specifically, after the running state of the household appliance with the lowest priority is adjusted, the power margin in the main circuit at that moment can be obtained, and whether the household appliance to be run can be connected to the circuit can be judged through the above-mentioned steps. If so, the household appliance to be run is connected to the circuit and starts running; otherwise, the household appliance with the next lowest priority is adjusted and the judgment procedure is repeated. Of course, although the embodiment is described in connection with adjusting only one currently running household appliance each time, in practice the number of currently running household appliances adjusted each time is not limited. For example, the currently running household appliances with the lowest and next lowest priorities can be adjusted at the same time, and if the connecting condition is still not met, the currently running household appliances with middle and high priorities are adjusted. In another possible implementation mode, the step of "selectively adjusting the running states of the currently running household appliances according to the position of the priority of the household appliance to be run in the priority sequence" includes: when the priority of the household appliance to be run is at the middle level, adjusting the running states of the currently running household appliances with priorities lower than that of the household appliance to be run according to the priority sequence.
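The lowest-priority-first adjustment loop with the margin check can be sketched as follows. As a simplification (an assumption, since the text allows four different adjustment manners), each adjustment here frees the appliance's full current power.

```python
def try_connect(max_power_to_run, total_rated_power, running):
    """`running` is a list of (priority, current_power) pairs, where a lower
    number means a lower priority (an assumed convention). Appliances are
    adjusted from the lowest priority upward until the margin exceeds the
    maximum power of the appliance to be run. Returns (connected, adjusted)."""
    ordered = sorted(running, key=lambda r: r[0])  # lowest priority first
    margin = total_rated_power - sum(power for _, power in ordered)
    adjusted = []
    for priority, power in ordered:
        if margin > max_power_to_run:
            break
        margin += power  # simplification: the adjustment frees all of its power
        adjusted.append(priority)
    return margin > max_power_to_run, adjusted
```

With a 10-unit rated circuit carrying two 4-unit loads, connecting a 3-unit appliance needs only the lowest-priority load to be adjusted; higher-priority loads are left untouched.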
In the above step, since the priorities of some of the currently running household appliances are higher than the priority of the household appliance to be run while the priorities of the others are lower, in this case the objects whose running states are adjustable are the currently running household appliances with priorities lower than that of the household appliance to be run. Likewise, the control method of the present application further includes, at the same time as or after the step of "adjusting the running states of the currently running household appliances with priorities lower than that of the household appliance to be run according to the priority sequence": obtaining a current power margin; obtaining a maximum power of the household appliance to be run; comparing the current power margin with the maximum power; and if the current power margin is greater than the maximum power, making the household appliance to be run run. Since the judging procedure has been described in the foregoing, it will not be described in detail herein. In yet another possible implementation mode, the step of "selectively adjusting the running states of the currently running household appliances according to the position of the priority of the household appliance to be run in the priority sequence" includes: when the priority of the household appliance to be run is at the lowest level, not adjusting the running states of any of the currently running household appliances. In the above step, since the priorities of all the currently running household appliances are higher than the priority of the household appliance to be run, in this situation there is no object whose running state is adjustable among the currently running household appliances.
Preferably, the control method of the present application further includes, at the same time as or after the step of "not adjusting the running states of all currently running household appliances": determining whether the household appliance to be run needs to run in a low power mode; if the household appliance to be run needs to run in the low power mode, making the household appliance to be run run in the low power mode; and if the household appliance to be run does not need to run in the low power mode, prohibiting the household appliance to be run from running. That is to say, in the case where the running states of all the currently running household appliances are not adjustable, the to-be-run state of the household appliance to be run can be changed on the premise of meeting the usage demand of the user, so that the household appliance to be run can be connected to the main circuit in a running mode with small occupied power. Of course, when the household appliance to be run has only one running mode, or does not have a running mode with lower occupied power than the to-be-run mode, these steps are not executed; in that case, with the running states of all the currently running household appliances not adjustable, the running of the household appliance to be run is directly prohibited. Further, the step of "making the household appliance to be run run in the low power mode" includes: obtaining a current power margin; obtaining a minimum power of the household appliance to be run; comparing the current power margin with the minimum power; and if the current power margin is greater than the minimum power, making the household appliance to be run run in the low power mode. In the above-mentioned procedure, the "current power margin" refers to the power margin in the main circuit when the running states of all the currently running household appliances are not adjusted.
“Minimum power” refers to the maximum power occupied in the main circuit by the household appliance to be run in the low power mode. In the case where the current power margin is greater than the minimum power, the remaining power in the main circuit can allow the household appliance to be run to be connected so that the household appliance to be run can be electrified at the moment and run in the low power mode. Since the above-mentioned three implementation modes are described separately in connection with one type of priority position, the control method of the present application, in practical applications, may include at least one running procedure of the above-mentioned three implementation modes. Preferably, the control method of the present application includes the running steps in the above-mentioned three cases at the same time so as to accurately judge whether the household appliance to be run can be connected to the circuit in terms of a plurality of possible connecting positions of the household appliance to be run. 
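The lowest-priority branch just described can be illustrated with a minimal sketch; the function name, the string return values, and the parameter names are assumptions for illustration, not the patent's implementation.

```python
def resolve_lowest_priority(needs_low_power_mode: bool,
                            current_power_margin: float,
                            min_power: float) -> str:
    """No running appliance may be adjusted, so the appliance to be run
    either starts in its low power mode (when the unchanged margin still
    exceeds its minimum power) or is prohibited from running."""
    if needs_low_power_mode and current_power_margin > min_power:
        return "run in low power mode"
    return "prohibit"
```

For instance, a 400 W margin admits an appliance whose low power mode occupies at most 300 W, while a 200 W margin does not.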
As shown in FIG. 4, the detailed steps of the preferred implementation mode of the control method of the present application include:

step S1001: obtaining the power curve of a household appliance to be run;
step S1002: obtaining the power curves of all currently running household appliances;
step S1003: judging whether the power curve of the household appliance to be run and that of at least one currently running household appliance meet an off-peak running condition; if they meet the off-peak running condition, executing step S1004, and otherwise, executing step S1005;
step S1004: making the household appliance to be run and the currently running household appliance run within an off-peak period of time;
step S1005: obtaining the priority sequence of all the currently running household appliances and the priority of the household appliance to be run;
step S1006: determining the position of the priority of the household appliance to be run in the priority sequence; if the priority of the household appliance to be run is at the highest level, executing step S1007; if it is at the middle level, executing step S1012; and if it is at the lowest level, executing step S1017.

In the case where the priority of the household appliance to be run is at the highest level:

step S1007: adjusting the current running state according to the priority sequence and executing step S1008;
step S1008: obtaining a current power margin Δp in the adjusted main circuit;
step S1009: obtaining the maximum power pmax of the household appliance to be run;
step S1010: judging whether the current power margin Δp is greater than the maximum power pmax; if so, executing step S1011, and otherwise, returning to step S1007;
step S1011: making the household appliance to be run run.

In the case where the priority of the household appliance to be run is at the middle level:

step S1012: adjusting the running state of the currently running household appliance with a priority lower than that of the household appliance to be run according to the priority sequence and executing step S1013;
step S1013: obtaining the current power margin Δp in the adjusted main circuit;
step S1014: obtaining the maximum power pmax of the household appliance to be run;
step S1015: judging whether the current power margin Δp is greater than the maximum power pmax; if so, executing step S1016, and otherwise, returning to step S1012;
step S1016: making the household appliance to be run run.

In the case where the priority of the household appliance to be run is at the lowest level:

step S1017: not adjusting the running states of all currently running household appliances, and executing step S1018;
step S1018: judging whether the household appliance to be run needs to run in the low power mode; if so, executing step S1019, and otherwise, executing step S1023;
step S1019: obtaining the current power margin Δp of the main circuit, which is not adjusted;
step S1020: obtaining the minimum power pmin of the household appliance to be run;
step S1021: judging whether the current power margin Δp is greater than the minimum power pmin; if so, executing step S1022, and otherwise, executing step S1023;
step S1022: making the household appliance to be run run in the low power mode; and
step S1023: prohibiting the household appliance to be run from running.
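Steps S1005 through S1023 can be combined into one hedged sketch. All names are invented; the numeric convention that a larger number means a higher priority, the simplification that "adjusting" a running appliance stops it and releases its occupied power, and the assumption that at least one appliance is currently running are likewise illustrative choices, not the patent's implementation.

```python
def control(to_run_priority: int, max_power: float, min_power: float,
            needs_low_power_mode: bool, running: list, margin: float) -> str:
    """running: (priority, occupied_power) pairs for currently running
    appliances; margin: current power margin Δp of the main circuit."""
    priorities = [p for p, _ in running]
    if to_run_priority > max(priorities):        # highest level: S1007-S1011
        adjustable = sorted(running)             # lowest priority adjusted first
    elif to_run_priority > min(priorities):      # middle level: S1012-S1016
        adjustable = sorted(x for x in running if x[0] < to_run_priority)
    else:                                        # lowest level: S1017-S1023
        if needs_low_power_mode and margin > min_power:
            return "run in low power mode"       # S1022
        return "prohibit"                        # S1023
    # Adjust (here: stop) adjustable appliances one at a time, re-checking
    # the power margin after each adjustment (S1008-S1010 / S1013-S1015).
    for _, occupied_power in adjustable:
        if margin > max_power:
            break
        margin += occupied_power  # stopping an appliance releases its power
    return "run" if margin > max_power else "prohibit"
```

For example, a highest-priority appliance needing 1000 W with only 500 W of margin is admitted once an 800 W lower-priority appliance is stopped, while a middle-priority appliance is refused if the only adjustable appliance releases too little power.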
In summary, according to the control method of the present application, the household appliance to be run can be connected into the circuit on the premise of not influencing the running states of all the currently running household appliances. The application not only meets the demand of a user that the household appliance to be run and the currently running household appliances work simultaneously, but also does not exceed the load of the circuit, thereby improving the overall working efficiency of the home system on the basis of ensuring the safe use of electricity, and greatly improving the user experience.

The technical solution of the present application has thus far been described in connection with the preferred implementation modes shown in the accompanying drawings, but it will be readily understood by those skilled in the art that the scope of the present application is obviously not limited to these specific implementation modes. Those skilled in the art can make equivalent alterations or substitutions to the relevant technical features without departing from the principles of the present application, and the technical solutions obtained after such alterations or substitutions are intended to fall within the scope of the present application.
11863343

DETAILED DESCRIPTION OF ILLUSTRATIVE EMBODIMENTS

FIG. 1 shows an automation environment 100 which includes a multi-role automation host 104. A first network cloud 102 is accessible by multi-role automation host 104. Multi-role automation host 104 includes a system configuration (not shown) which, in turn, includes a component or device profile (not shown) associated with each profiled device 108a . . . 108b controlled by multi-role automation host 104. In general, a device profile is a file or other data structure which describes the inputs, outputs, and available set of commands for the associated device. Through either internal storage, network cloud 102, or another external source, multi-role automation host 104 must have access to an associated device profile in order to control a given device. Examples of a system configuration, component profile, and hardware and software suitable for multi-role automation host 104 may be found in U.S. Pat. No. 9,153,125 incorporated by reference above. A user may interact with, and effectively control, multi-role automation host 104 by operating a user device 116a running a host control application 120 which is compatible with multi-role automation host 104. User device 116a may be implemented with a smartphone, tablet, or other device capable of running host control application 120 and having appropriate network connectivity to communicate with multi-role automation host 104. Alternatively, a user may interact with and effectively control multi-role automation host 104 using a user remote 118a which is compatible with multi-role automation host 104. Multi-role automation host 104 may, for example, be based upon a Smart Host Model SHC-S2-00, a surround sound soundbar, a "smart" Multistat thermostat, a "smart" audio speaker (as described below in connection with FIG. 8), a "smart" amplifier (as described below in connection with FIG. 9), or other devices offered by Savant Systems, LLC of Hyannis, MA.
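For illustration only, such a device profile might be modeled as a small data structure like the following; the field names and command strings are invented for this sketch and are not taken from the patent or the referenced U.S. Pat. No. 9,153,125.

```python
# Hypothetical device profile: the inputs, outputs, and available set
# of commands for the associated device (here, a lamp).
lamp_profile = {
    "device": "lamp",
    "inputs": ["power_state", "brightness_level"],
    "outputs": ["status"],
    "commands": {"on": "PWR 1", "off": "PWR 0", "dim": "DIM {level}"},
}

# A host with access to this profile can form a concrete device command:
command = lamp_profile["commands"]["dim"].format(level=40)
```

The host only needs access to a profile in this shape (whether from internal storage, the network cloud, or another external source) to translate an abstract request into the command string the device understands.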
A second network cloud 110 is accessible by an automation hub 112. Unlike multi-role automation host 104, which is capable of controlling any profiled device, automation hub 112 is conventionally capable of controlling only devices 114a-114c which have been certified as control protocol compliant (i.e., devices which have been certified as compliant with the control protocol specified by the manufacturer of automation hub 112). A user may interact with and effectively control automation hub 112 by operating a user device 116b running a hub control application 122 which is compatible with automation hub 112. User device 116b may be implemented with a smartphone, tablet, or other device capable of running hub control application 122 and having appropriate network connectivity to communicate with automation hub 112. Alternatively, a user may interact with and effectively control automation hub 112 using a user remote 118b which is compatible with automation hub 112. Automation hub 112 may, for example, be based on an Apple TV®, HomePod™, or iPad® running HomeKit®, all of which are offered by Apple Inc. of Cupertino, CA.

Multi-role automation host 104 includes an automation bridge 106. In one embodiment, an SDK available from Apple Inc. may be used to create automation bridge 106 and embed it in multi-role automation host 104. As described in greater detail below, automation bridge 106 functions, in part, to communicate (bidirectionally) with automation hub 112 using the same manufacturer-specified control protocol as used by certified control protocol compliant devices 114a-114c. Automation bridge 106 also functions to convert control protocol messages received from automation hub 112 to an appropriate form for processing by multi-role automation host 104 (and vice versa). Advantageously, through such communication and protocol translation, and depending upon the type of device, profiled devices 108a . . . 108b as well as user remote 118a are exposed to automation hub 112. That is, profiled devices 108a . . . 108b and user remote 118a, even though not certified as compliant with the control protocol used by automation hub 112, become visible to and controllable by automation hub 112 by virtue of multi-role automation host 104.

FIG. 2 shows a block diagram of a powered surround sound soundbar 200 which may be used to implement multi-role automation host 104 (FIG. 1). A microprocessor 202 is coupled in bidirectional communication with an audio digital signal processor (DSP) 204. Audio DSP 204 is also coupled to a TosLink 214. Audio DSP 204 is coupled to digital to analog converters (not shown) whose outputs are coupled to an amplifier 206 whose outputs are coupled to various audio speakers 208. Microprocessor 202 is also coupled in bidirectional communication with a WISA module 210 as well as a WiFi dual band module 212. In addition, microprocessor 202 is coupled to a LAN/AVB port 216, a microphone 218, and control ports 220 which may include RS232 control ports, infrared (IR) control ports, or other control ports. An AC/DC power supply 224 provides power to soundbar 200.

FIG. 3 shows a message flow diagram illustrating an example of higher level messaging needed to make profiled devices 108a . . . 108b and user remote 118a (FIG. 1) visible to and controllable by automation hub 112. Host control application 120, running on user device 116a, initiates a Discovery protocol 300 with multi-role automation host 104. Host control application 120 next sends a Host Provision message 302. In response, multi-role automation host 104 proceeds to Query 304 its configuration to identify any (profiled) device(s) that is of an appropriate type to be supported/controlled by automation hub 112. Assuming at least one profiled device of an appropriate type is identified, multi-role automation host 104, through bridge 106, transmits an Add Bridged Devices to Hub message 306 to automation hub 112.
Subsequently, host control application 120 issues a Control Device message 308 to multi-role automation host 104, which responds by exerting control 310 over the profiled device to which message 308 is directed. In contrast, when hub control application 122 issues a Control Device message 312 that is directed to a profiled device, automation hub 112 responds by forwarding a Control Request message 314 to multi-role automation host 104. In turn, multi-role automation host 104 responds by exerting control 316 over the profiled device to which message 312 is directed.

FIG. 4 shows a message flow diagram illustrating an example of lower level messaging that occurs when hub control application 122 and automation hub 112 are exerting control over profiled device 108a that has become visible through multi-role automation host 104 as described above. In this example, automation hub 112 represents an Apple TV®, HomePod™, or iPad® running HomeKit®. Hub control application 122 issues an HMActionSet execute message 400 to automation hub 112 which, in turn, issues a Device Characteristic Write message 402 to multi-role automation host 104. Device Characteristic Write message 402 could represent, for example, a command to dim a lamp. In response, multi-role automation host 104 locates the profile corresponding with the device 404 to which message 402 is directed, and then sends a profiled command 406 to device 108a. Profiled command 406 is in substance comparable to message 402 but is prepared by multi-role automation host 104 in a form that is expected and understood by device 108a. Following receipt and execution of profiled command 406, device 108a sends a command response 408 to multi-role automation host 104 which, in turn, sends a Characteristic Update message 410 to automation hub 112.

Turning now to FIG. 5, an automation environment 500 is shown. In general, each component or device shown in FIG. 5 may include comparable structure and function to the corresponding component or device shown in FIG. 1 and described above, with one exception.
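The FIG. 4 sequence (characteristic write in, profiled command out, device response back, characteristic update to the hub) can be sketched roughly as below. The `AutomationBridge` class, its profile layout, and the command strings are assumptions for illustration, not the SDK's or patent's actual API.

```python
class AutomationBridge:
    """Translates hub control-protocol messages into profiled commands,
    and device responses back into characteristic updates."""

    def __init__(self, profiles):
        # profiles: device id -> {characteristic name: command template}
        self.profiles = profiles

    def on_characteristic_write(self, device_id, characteristic, value):
        # Locate the profile for the target device and prepare the command
        # in the form the device expects (message 402 -> profiled command 406).
        template = self.profiles[device_id][characteristic]
        return template.format(value=value)

    def on_command_response(self, device_id, characteristic, response):
        # Convert the device's command response (408) into a characteristic
        # update for the hub (410).
        return {"device": device_id, "characteristic": characteristic,
                "state": response}

bridge = AutomationBridge({"lamp_108a": {"brightness": "DIM {value}"}})
cmd = bridge.on_characteristic_write("lamp_108a", "brightness", 40)
update = bridge.on_command_response("lamp_108a", "brightness", "OK 40")
```

The point of the sketch is the two-way translation: the hub never sees the device's native command form, and the device never sees the hub's control protocol.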
A multi-role automation hub 512, in its role as a hub, functions in a substantially comparable manner to automation hub 112 (FIG. 1) described above. However, multi-role automation hub 512, in an additional role, produces an enhanced on screen display (EOSD) for a television 524. In a preferred embodiment, as referenced above, automation hub 112 may be based on Apple TV® running HomeKit®. While Apple TV® includes a limited on screen display which allows a user to select audio/video to play, or power on/off an attached TV, that limited on screen display is not capable of controlling other devices. However, Apple TV® does include a high performance graphics processing capability. In accordance with one aspect of the present invention, an application is provided which runs on Apple Inc.'s tvOS®, and leverages the high performance graphics processing built into Apple TV® to produce an EOSD 600 as shown in FIGS. 6A-6D. By placing an Apple TV® into a lockdown mode, the Apple TV® will automatically boot up into the application which produces EOSD 600.

As shown in FIG. 6A, EOSD 600 may be formatted as three horizontal bands of rotating icons, graphics, images, video or other content. For example, band 602 displays a series of "Scenes" corresponding to various rooms or areas of a home. When a user selects a particular "Scene" (by interacting with EOSD 600 using user devices 516a or 516b, or user remotes 518a or 518b (FIG. 5)) to center and select the corresponding image, previously defined conditions or states of lighting, devices, shades, media, or any other resource that is under the control of multi-role automation host 504 or multi-role automation hub 512, will be set in the selected room or area. Band 604 of EOSD 600 displays icons corresponding to various systems and services. When a user selects a centered "Security" icon, EOSD 600 transitions to a screen shown in FIG. 6B in which band 608 displays live feeds from several closed circuit security cameras.
Similarly, as shown in FIG. 6C, when a user selects a centered "Lighting" icon, EOSD 600 transitions to a screen shown in FIG. 6D in which band 610 displays true images of the current lighting conditions of individual rooms or lamps along with a vertical slider bar control. Exemplary techniques for generating and displaying such true images of current lighting conditions are described in U.S. Pat. Nos. 8,296,669 and 10,613,704, both of which are incorporated by reference above. EOSD 600 may also provide advanced video display management including windowing, tiling, and picture-in-picture among others. Details regarding hardware and software for generating EOSD 600 may be found in the patents and pending patent applications incorporated by reference above.

FIG. 7 shows a message flow diagram illustrating an example of lower level messaging that occurs when multi-role automation hub 512 with EOSD 600 interacts with multi-role automation host 504. Host control application 520 issues a Control Profiled Device message 700 to multi-role automation host 504. Because message 700 is directed to a profiled device, multi-role automation host 504 proceeds to Execute Profiled Device Command 702. Subsequently, host control application 520 issues a Control Certified Device message 704. Because message 704 is not directed to a profiled device, multi-role automation host 504 issues a Forward Request message 706, via bridge 506, to multi-role automation hub 512. Multi-role automation hub 512, in turn, issues a Control Request message 708 to automation hub 112, which proceeds to Execute Certified Device Command 710.

In accordance with one aspect of the invention, an application is provided which runs on Apple Inc.'s tvOS®, and provides the necessary functionality to transform an Apple TV® into a multi-role automation host. Other devices which are controllable by a dedicated remote may also be suitable to serve as a multi-role automation host.
In general, the application embodies all necessary automation host functionality including the ability to recognize and control profiled devices. Because of the native functionality of an Apple TV®, this type of multi-role automation host is capable of effectively controlling a mix of profiled devices and certified devices without an additional hub or other intermediate device. In addition, the capability of this type of multi-role automation host may be further enhanced by the addition of AVB network connectivity. Such connectivity, in the case of an Apple TV® multi-role automation host, permits distribution of music in digital form directly to AVB enabled speakers, soundbars or other devices.

In accordance with another aspect of the invention, multiple Apple TVs may be pooled together to become a shared resource for multiple TVs. As shown in FIG. 7A, an automated environment 700 includes a group of TVs 712a-712c, each of which is coupled to a video switch 714. A multi-role automation host 716 is coupled to video switch 714, as are multiple Apple TVs 718a and 718b, and multiple hubs with enhanced OSD 726a and 726b (which are substantially similar to hub with enhanced OSD 512 (FIG. 5) described above). In general, in response to commands from multi-role automation host 716, video switch 714 functions to set up and tear down video signal connections or paths between various ones of TVs 712a-712c, and various ones of Apple TVs 718a-718b or hubs with enhanced OSD 726a-726b. Multi-role automation host 716 may also command video switch 714 to set up a video signal connection between a single Apple TV 718a-718b and multiple TVs 712a-712c. Similarly, multi-role automation host 716 may also command video switch 714 to set up a video signal connection between a single hub with enhanced OSD 726a-726b and multiple TVs 712a-712c.

FIG. 7B is a message flow diagram illustrating communication among various devices in FIG. 7A needed to display an enhanced OSD on one of TVs 712a-712c.
The communication begins with a user (not shown) pressing a button on user remote 724, which causes transmission of an OSD Request for Zone X message 728 to multi-role automation host 716. Multi-role automation host 716 responds first by dequeuing Menu Token Y, which, in effect, dedicates a particular Apple TV 718a-718b (associated with the token) to the requested OSD session. Next, multi-role automation host 716 transmits an Activate OSD message 732 to a Menu Resource Y 762 (i.e., a particular Apple TV associated with Token Y). This is followed by a Start Stream 1 message 734 from multi-role automation host 716 to Menu Resource Y's Video Transmitter 758 which, in turn, is followed by a Start Stream 1 With Resized Video message 736 to Current Transmitter Used By Zone X 764. Next, multi-role automation host 716 sends a message 738 to Zone X Video Receiver 760 which sets up picture in picture and assigns the appropriate transmitters. At this point, the enhanced OSD appears in a picture in picture 740. A User OSD Navigation message 742 is then transmitted from user remote 724 to Menu Resource Y 762. This is followed by a Cancel or Select message 744 from user remote 724. If a Cancel message 744 was transmitted, an OSD Deactivated message 746 is transmitted from Menu Resource Y to multi-role automation host 716. This is followed by Menu Token Y being returned to the Queue 748 because the OSD session is over. Subsequently, a message 750 is transmitted from multi-role host 716 to restore full screen video for the original transmitter. This is followed by a Stop Stream 1 message 752 to Current Transmitter 764, a Stop Stream 1 message to Menu Resource Y's Video Transmitter 758, and restoration of full screen video 756. In addition, similar to the enhanced OSD example described above, other services available on an Apple TV 718a-718b may be shared with multiple TVs 712a-712c.
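The token handling in FIG. 7B (dequeue a menu token to dedicate one Apple TV to an OSD session, return it to the queue when the session ends) follows a simple resource-pool pattern. The sketch below uses invented names and is only an illustration of that pattern, not the host's actual implementation.

```python
from collections import deque

class MenuTokenPool:
    """Pool of menu tokens, one per shareable Apple TV (menu resource)."""

    def __init__(self, tokens):
        self._queue = deque(tokens)

    def dequeue(self):
        # Dedicate the associated menu resource to an OSD session;
        # None means every menu resource in the pool is already in use.
        return self._queue.popleft() if self._queue else None

    def release(self, token):
        # Return the token to the queue once the OSD session is over.
        self._queue.append(token)

pool = MenuTokenPool(["token_Y", "token_Z"])
session_token = pool.dequeue()  # dedicates the first free menu resource
```

A deque keeps the pool fair: tokens released first are reissued first, so no single Apple TV is monopolized across sessions.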
Thus, services such as Netflix, Weather, or many others may be shared with multiple TVs 712a-712c by placing an Apple TV in lockdown mode and transmitting the selected video over IP, taking advantage of video switch 714 to set up the appropriate video connections.

In accordance with another aspect of the invention, a ceiling or wall mountable speaker, such as those disclosed in co-pending application Ser. No. 16/254,245 and incorporated by reference above, or a bookshelf or other speaker, is provided enhanced functionality and becomes a multi-role speaker. The previously disclosed speakers include a LAN/AVB port through which both digital audio and power over Ethernet (PoE) are received. As shown in FIG. 8, a multi-role speaker 800 includes a LAN/AVB port 802 coupled to a microprocessor 804. A microphone 806 is coupled to microprocessor 804, as is a digital signal processor (DSP) 808. An amplifier 810 is coupled to DSP 808 as well as speaker 812. Multi-role speaker 800 also includes a WISA module 814 and a WiFi dual band module 816, both of which provide wireless connectivity. Through its WISA module 814 or LAN/AVB port 802, multi-role speaker 800 may distribute digital audio in either of two forms: 16-bit, 44.1 kHz form (used in Airplay 2), or higher fidelity 24-bit, 96 kHz form. Through its WiFi dual band module 816, multi-role speaker 800 provides support for Airplay 2 as well as serving as a wireless access point. To provide whole house audio, speakers are frequently distributed throughout most rooms or other areas of a house. By providing such speakers with wireless network connectivity so that they become multi-role speakers, robust wireless network coverage is attained without the necessity of separate wireless access points. Through its WISA module 814, multi-role speaker 800 may be paired with a WISA speaker 820 to provide stereo sound. Paired WISA speaker 820 includes a microcontroller 822 which is coupled to a WISA module 830 and a DSP 824.
DSP 824 is coupled to an amplifier 826, which in turn is coupled to a speaker 828. In general, paired WISA speaker 820 receives digital audio from multi-role speaker 800 via WISA module 830. Within an automation environment 100 (FIG. 1), multi-role speaker 800 is capable of serving in any combination of four roles: audio speaker; multi-role automation host 104 (FIG. 1); wireless access point; and pairing with WISA speaker 820.

FIG. 9 shows a multi-role amplifier 900 which includes a microprocessor 902 coupled to a DSP 904 which in turn is coupled to an amplifier 906. Microprocessor 902 is coupled, respectively, to a LAN/AVB port 908, an analog input 910, a TosLink input 912, and control ports 914 which may include RS232, IR, or other control ports. An AC/DC power supply 916 provides power to multi-role amplifier 900. Multi-role amplifier 900 provides an RCA pre-amp output 920 and may be used to drive external passive speakers 918. Within an automation environment 100 (FIG. 1), multi-role amplifier 900 is capable of serving as both an audio amplifier and a multi-role automation host 104 (FIG. 1).

The foregoing description has been directed to specific embodiments of this invention. It will be apparent, however, that other variations and modifications may be made to the described embodiments, with the attainment of some or all of their advantages. For example, it is expressly contemplated that the teachings of this invention can be implemented as software, including a computer-readable medium having program instructions executing on a computer, hardware, firmware, or a combination thereof. Accordingly, this description is to be taken only by way of example and not to otherwise limit the scope of the invention. It is thus the object of the appended claims to cover all such variations and modifications as come within the true spirit and scope of the invention.
11863344

DETAILED DESCRIPTION

The description below provides methods, computer program products, and systems for enabling global quality of service for real-time selection of best available data communications channel in autonomous driving vehicle communications. One of ordinary skill in the art will recognize many additional variations made possible by the succinct description of techniques below. For example, although CAN bus technology is referred to herein for illustration, different bus technologies can also be substituted given the teachings of the disclosure herein.

I. Systems for Best Vehicle Communication QOS Among Independent Silos (FIGS. 1-2)

FIG. 1 is a high-level block diagram illustrating a vehicle QOS system 100 for enabling global quality of service for real-time selection of data communications channel in autonomous driving vehicles, according to one embodiment. The vehicle communication system 100 includes a CAN orchestrator 110 and autonomous devices 110A, B, C which are capable of communication over any of silo channel 1, silo channel 2 or silo channel 3. Besides peer-to-peer communication, there can also be centralized communications over a wide area network 199 through vehicle communication server 110. There can be many other network components integrated within the autonomous devices 110A, B, C or as independent components, such as a firewall server, an access point, and stations, coupled through a wide area network. Many other embodiments are possible, for example, with more access points, more or fewer stations, and additional components, such as firewalls, routers, switches, and the like. Hardware and software components can be implemented similar to the example of FIG. 6.

The wide area network 199 links components of the system 100 with a channel for data communication, for example, over the Internet through CAN orchestrator 110. The CAN orchestrator 110 receives, as an intermediary, communications from one vehicle to another.
The wide area network 199 links components of the system 100 with a channel for data communication, for example, over the Internet through centralized CAN server 120. The centralized CAN server 120 receives, as an intermediary, communications from one vehicle to another.

The CAN orchestrator 110, in an embodiment, intercepts data transfers intended for a first vehicle technology and reroutes them to a second vehicle technology, based on available vehicle technologies. Otherwise, the CAN data bus line passively allows devices to make data transfers as determined and intended by the device. The CAN standard is designed to allow microcontrollers and devices to communicate with each other's applications without a host computer. In some cases, device type is used as a priority. However, the CAN orchestrator 110 can override this to interject quality of service by directing data transfers to the best available vehicle communication technology. The CAN orchestrator 110 can be a software module plugged into the vehicle operating system. In another embodiment, an application is downloaded to execute on a processor utilizing the operating system. Still another embodiment includes a hardware control system that is coupled in data communication and electrical communication with other vehicle components, on a master system of the autonomous vehicle. The CAN orchestrator 110 of an embodiment is subject to a higher-level system, i.e., the autonomous vehicle master control system, that overrides best silo algorithms.

The autonomous vehicles 110A, B, C communicate data with each other using peer-to-peer vehicle communication technologies, as individual nodes. Various wireless technologies are available over various bandwidths and ranges. Generally, autonomous vehicles are self-driving cars controlled by robots without a human driver. To do so, sensor inputs from the environment are critical, and passing autonomous vehicles can provide data for visibility beyond the individual vehicle.
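The orchestrator's best-available selection can be sketched minimally as follows, assuming per-silo signal strengths and per-silo supported data types; the function name, the strength values, and the capability map are illustrative assumptions, not the patented algorithm.

```python
def select_best_silo(data_type, silo_strengths, silo_capabilities):
    """Return the strongest silo that supports the given data type, or
    None when no independent silo can carry the transfer."""
    candidates = {silo: strength
                  for silo, strength in silo_strengths.items()
                  if data_type in silo_capabilities.get(silo, ())}
    if not candidates:
        return None
    return max(candidates, key=candidates.get)

best = select_best_silo(
    "safety",
    {"silo_1": 0.9, "silo_2": 0.4, "silo_3": 0.7},
    {"silo_1": ("safety",), "silo_2": ("media",),
     "silo_3": ("safety", "media")},
)
```

Here the orchestrator would direct a safety-data transfer over silo_1, the strongest of the two silos that support that data type, regardless of which silo the sending device originally intended.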
The sensor inputs are communicated by different devices such as radar, lidar, sonar, GPS, odometry and inertial measurement units. A centralized processing system considers all of the inputs for advanced control systems that respond by interpretations of sensory information to identify appropriate navigation paths, as well as obstacles and relevant signage. In other embodiments, partially autonomous vehicles or human-driven vehicles also participate in communications.

In one example, as the autonomous vehicles 110A, B, C sit at a stop light, data can be exchanged between vehicles. One part of the data can be road conditions, hazards, traffic data, average speed, safety info, nearby pedestrian info, and the like. Another part of the data can be more personal, such as starting point, destination point, cargo data, or passenger data for various applications. In another example, the autonomous vehicles 110A, B can be passing each other at full speed and only have a momentary data exchange. In some cases, peer-to-peer communication can be continued over a centralized communication channel if needed. In another example, rather than sitting at a stop light, the autonomous vehicles pass each other in traffic briefly. Some embodiments exchange only critical information for short connections. Yet other embodiments are implemented on a traditional vehicle that is modified either partially or fully for the autonomous environment.

There can be many devices connected to the CAN bus. Communication technologies built into the vehicle can be tied into the CAN internally. Others can be plugged in or wirelessly connected. As a result of quality of service guarantees over the CAN data bus architecture shown in FIG. 1, applications such as traffic prediction applications are able to run reliably. More robust data exchanges, especially over a brief connection, lead to more robust analytics. In one embodiment, network security features are enforced by the orchestrator.
The network components of the system100can implemented in any of the computing devices discussed herein, for example, a personal computer, a laptop computer, a tablet, a smart phone, a smart watch, a mobile computing device, a server, a cloud-based device, a virtual device, an Internet appliance, an IoT (Internet of things) device, or any of the computing devices described herein, using hardware and/or software (see e.g.,FIG.6). II. Methods for Best Vehicle Communication QOS Among Independent Silos (FIGS.2-5) FIGS.2-5are block diagram showing how quality of service is injected to independent vehicle communication silos for real-time best availability. Starting withFIG.2, an orchestrator is injected to a CAN line200A relative to200B. Each of the nodes represent independent silos of vehicle communication technologies for autonomous driving vehicle technologies. The plurality of independent silos broadcast intended data transfers. Next,FIG.3shows the introduction of real-time accurate strength signals associated with the plurality of independent silos being received300A relative to300B. In an embodiment, a data transfer over one of the plurality of independent silos is intercepted, with the orchestrator as master over the CAN line. Or the orchestrator is located downstream from the nodes to allow interceptions. A data type involved in the data transfer is determined before allowing the data transfer to continue on any silo of independent technology. In still another embodiment,FIG.4shows flow chart400A with one of the plurality of independent silos of communication is selected for rerouting the data transfer, based on a type of data involved in the data transfer, and based on a best available of the plurality of independent silos for the data transfer type. The data transfer is directed over the selected independent silo over the best available vehicle communication technology, as summarized inFIG.5in500A. III. 
Generic Computing Device (FIG.6) FIG.6is a block diagram illustrating an example computing device600for use in the system100ofFIG.1, according to one embodiment. The computing device600is implementable for each of the components of the system100. The computing device600can be an autonomous vehicle or a control system on an autonomous vehicle, a vehicle communication device, a mobile computing device, a laptop device, a smartphone, a tablet device, a phablet device, a video game console, a personal computing device, a stationary computing device, a server blade, an Internet appliance, a virtual computing device, a distributed computing device, a cloud-based computing device, or any appropriate processor-driven device. The computing device600, of the present embodiment, includes a memory610, a processor620, a storage drive630, and an I/O port640. Each of the components is coupled for electronic communication via a bus699. Communication can be digital and/or analog, and use any suitable protocol. The memory610further comprises network applications612and an operating system614. The network applications612can include a web browser, a mobile application, an application that uses networking, a remote application executing locally, a network protocol application, a network management application, a network routing application, or the like. The operating system614can be one of the Microsoft Windows® family of operating systems (e.g., Windows 95, 98, Me, Windows NT, Windows 2000, Windows XP, Windows XP x64 Edition, Windows Vista, Windows CE, Windows Mobile, Windows 7 or Windows 8), Linux, HP-UX, UNIX, Sun OS, Solaris, Mac OS X, Alpha OS, AIX, IRIX32, IRIX64, or Android. Other operating systems may be used. Microsoft Windows is a trademark of Microsoft Corporation. 
The processor620can be a network processor (e.g., optimized for IEEE 802.11, IEEE 802.11AC or IEEE 802.11AX), a general purpose processor, an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA), a reduced instruction set controller (RISC) processor, an integrated circuit, or the like. Qualcomm Atheros, Broadcom Corporation, and Marvell Semiconductors manufacture processors that are optimized for IEEE 802.11 devices. The processor620can be single core, multiple core, or include more than one processing element. The processor620can be disposed on silicon or any other suitable material. The processor620can receive and execute instructions and data stored in the memory610or the storage drive630. The storage drive630can be any non-volatile type of storage such as a magnetic disc, EEPROM (electronically erasable programmable read-only memory), Flash, or the like. The storage drive630stores code and data for applications. The I/O port640further comprises a user interface642and a network interface644. The user interface642can output to a display device and receive input from, for example, a keyboard. The network interface644(e.g., RF antennae) connects to a medium such as Ethernet or Wi-Fi for data input and output. Many of the functionalities described herein can be implemented with computer software, computer hardware, or a combination. Computer software products (e.g., non-transitory computer products storing source code) may be written in any of various suitable programming languages, such as C, C++, C#, Oracle® Java, JavaScript, PHP, Python, Perl, Ruby, AJAX, and Adobe® Flash®. The computer software product may be an independent application with data input and data display modules. Alternatively, the computer software products may be classes that are instantiated as distributed objects. 
The computer software products may also be component software such as Java Beans (from Sun Microsystems) or Enterprise Java Beans (EJB from Sun Microsystems). Some embodiments can be implemented with artificial intelligence. Furthermore, the computer that is running the previously mentioned computer software may be connected to a network and may interface with other computers using this network. The network may be an intranet or the Internet, among others. The network may be a wired network (e.g., using copper), telephone network, packet network, an optical network (e.g., using optical fiber), or a wireless network, or any combination of these. For example, data and other information may be passed between the computer and components (or steps) of a system of the invention using a wireless network using a protocol such as Wi-Fi (IEEE standards 802.11, 802.11a, 802.11b, 802.11e, 802.11g, 802.11i, 802.11n, and 802.11ac, just to name a few examples). For example, signals from a computer may be transferred, at least in part, wirelessly to components or other computers. In an embodiment, with a Web browser executing on a computer workstation system, a user accesses a system on the World Wide Web (WWW) through a network such as the Internet. The Web browser is used to download web pages or other content in various formats including HTML, XML, text, PDF, and postscript, and may be used to upload information to other parts of the system. The Web browser may use uniform resource locators (URLs) to identify resources on the Web and hypertext transfer protocol (HTTP) in transferring files on the Web. The phrase “network appliance” generally refers to a specialized or dedicated device for use on a network in virtual or physical form. 
Some network appliances are implemented as general-purpose computers with appropriate software configured for the particular functions to be provided by the network appliance; others include custom hardware (e.g., one or more custom Application Specific Integrated Circuits (ASICs)). Examples of functionality that may be provided by a network appliance include, but are not limited to, Layer 2/3 routing, content inspection, content filtering, firewall, traffic shaping, application control, Voice over Internet Protocol (VoIP) support, Virtual Private Networking (VPN), IP security (IPSec), Secure Sockets Layer (SSL), antivirus, intrusion detection, intrusion prevention, Web content filtering, spyware prevention and anti-spam. Examples of network appliances include, but are not limited to, network gateways and network security appliances (e.g., FORTIGATE family of network security appliances and FORTICARRIER family of consolidated security appliances), messaging security appliances (e.g., FORTIMAIL family of messaging security appliances), database security and/or compliance appliances (e.g., FORTIDB database security and compliance appliance), web application firewall appliances (e.g., FORTIWEB family of web application firewall appliances), application acceleration appliances, server load balancing appliances (e.g., FORTIBALANCER family of application delivery controllers), vulnerability management appliances (e.g., FORTISCAN family of vulnerability management appliances), configuration, provisioning, update and/or management appliances (e.g., FORTIMANAGER family of management appliances), logging, analyzing and/or reporting appliances (e.g., FORTIANALYZER family of network security reporting appliances), bypass appliances (e.g., FORTIBRIDGE family of bypass appliances), Domain Name Server (DNS) appliances (e.g., FORTIDNS family of DNS appliances), wireless security appliances (e.g., FORTIWIFI family of wireless security gateways), DDoS mitigation appliances (e.g., FORTIDDOS family of DDoS mitigation appliances), wireless access point 
appliances (e.g., FORTIAP wireless access points), switches (e.g., FORTISWITCH family of switches) and IP-PBX phone system appliances (e.g., FORTIVOICE family of IP-PBX phone systems). This description of the invention has been presented for the purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise form described, and many modifications and variations are possible in light of the teaching above. The embodiments were chosen and described in order to best explain the principles of the invention and its practical applications. This description will enable others skilled in the art to best utilize and practice the invention in various embodiments and with various modifications as are suited to a particular use. The scope of the invention is defined by the following claims.
DETAILED DESCRIPTION Various embodiments will now be described in detail with reference to the accompanying drawings in which some example embodiments are illustrated. Optional features or components are shown in dotted lines. Although embodiments may be modified and altered in various ways, embodiments are illustrated as examples in the figures and are described in detail herein. However, it should be made clear that it is not the intention to limit embodiments to the respective forms disclosed, but rather that embodiments should cover all functional and/or structural modifications, equivalents and alternatives that lie within the scope of the invention. It is noted that an element which is referred to as being “connected” or “coupled” to another element may be directly connected or coupled to the other element, or intervening elements may be present. If an element is referred to as being “directly connected” or “directly coupled” to another element, however, no intervening elements are present. Other terms used to describe a relationship between elements ought to be interpreted likewise (e.g. “between” versus “directly between”, “adjacent” versus “directly adjacent”, etc.). The terminology used herein only serves the description of specific embodiments and should not limit the embodiments. As used herein, the singular forms such as “a,” “an” and “the” also include the plural forms, as long as the context does not indicate otherwise. It will be further understood that the terms e.g. “contain”, “containing”, “comprises,” “comprising,” “includes” and/or “including,” as used herein, specify the presence of the stated features, integers, steps, operations, elements and/or components, but do not preclude the presence or addition of one and/or more other features, integers, steps, operations, elements, components and/or any group thereof. 
FIG.1illustrates a flowchart of an embodiment of a method10for a monitoring entity30.FIG.2shows an associated flowchart of an embodiment of a method20for a communication component40.FIG.3shows an overview of an embodiment of a system80with embodiments of monitoring entities30,60and embodiments of communication components40,50. FIG.1shows a method10for a monitoring entity30for monitoring at least one communication component40. The communication component40is configured to communicate with one or more other communication components50via a data bus70on an event-based basis. The method10includes regularly checking12the function of the communication component40using the monitoring entity30. The method further includes cyclically communicating14with at least one other monitoring entity60assigned to another communication component50in order to monitor the function of the communication component40, the function of the other communication component50and the function of the data bus70. Here and in the following, “regular” is understood to mean a time sequence based on rules. For example, a rule may provide that such communication takes place within a cycle, i.e. within a specified time period. A regular check may therefore be carried out on the basis of a defined time base, a clock or a period, for example every 10 ms, 20 ms, 50 ms, 100 ms, 200 ms, 500 ms, 1 s, 2 s, 5 s, etc. A certain tolerance is to be expected, adapted to the practical conditions, and a (theoretically) perfect period cannot be assumed. In this respect, “regular” communication may also mean “periodic” or “cyclic” communication, in the sense that a maximum time span between two messages is not exceeded according to a specified probability. Cyclic communication may take place according to a guaranteed or at least highly predictable schedule. Another rule would be linked to an event, so that a malfunction may be detected with a high degree of certainty within a certain time. 
In this case the event occurs with a sufficiently high probability. Finally, a combination is also conceivable, namely that a check cycle is adapted depending on events. For example, some communication components may then be checked more frequently (so that an error may be detected more quickly) when certain events have occurred. For example, a frost alert may be monitored relatively rarely or not at all at temperatures above 20° C., whereas it is monitored more frequently at low temperatures. Analogously,FIG.2shows a method20for a communication component40to communicate with one or more other communication components50via a data bus70. The method20includes event-based communication22with the one or more other communication components50via the data bus70and cyclic communication24with the monitoring entity30. In embodiments, monitoring of the communication components40,50in a communication system80may therefore be carried out by monitoring entities30,60associated with the communication components40,50. Such systems80may, for example, be used in vehicles, e.g. motor vehicles, ships, trains, aircraft, trucks, passenger cars, two-wheelers, etc. Here, for example, controllers communicate with each other, which have been entrusted with different functions. On the one hand, central controllers exist, where information is collected and evaluated, and on the other hand, controllers are used which monitor, control or regulate components such as lighting, indicators, brake lights, sensors, actuators, warning lights, displays, input devices such as buttons or levers, etc. In the following, these controllers are also referred to as communication components which communicate with each other via a data bus70, as shown inFIG.3. FIG.3shows a system80with two monitoring entities30,60and two communication components40,50, which communicate with each other via a data bus70. In embodiments, the data bus70may be the CAN (Controller Area Network) bus or another field bus. 
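The event-adapted check cycle described above, using the frost-alert example, might look like the following sketch. The concrete temperature thresholds and check intervals are assumptions for illustration; the description only specifies that warm conditions are monitored rarely or not at all and cold conditions more frequently.

```python
# Illustrative sketch of an event-adapted check cycle (frost-alert example).
# The 20 °C threshold comes from the text; all other values are assumed.

def check_interval_s(temperature_c):
    """Return how often (in seconds) the frost-alert component is checked,
    or None when it is effectively not monitored."""
    if temperature_c > 20.0:
        return None    # warm conditions: rarely or not at all
    if temperature_c > 5.0:
        return 60.0    # relaxed cycle at moderate temperatures
    return 5.0         # frequent checks near or below freezing

print(check_interval_s(25.0), check_interval_s(10.0), check_interval_s(-2.0))
```

Adapting the cycle this way keeps the detection latency small where an error matters while avoiding needless bus and processor load elsewhere.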
AsFIG.3shows, the individual components30,40,50,60each have one or more interfaces32,42,52,62as well as one control unit34,44,54,64each, which are further coupled with each other. For example, the one or more interfaces32,42,52,62may correspond to one or more inputs or outputs for receiving or providing information or signals, such as in digital bit values, voltages, currents or electromagnetic waves, for example based on a code, within a module, between modules, or between modules of different entities. In this respect, the one or more interfaces32,42,52,62are suitable for exchanging signals or information on the data bus70or between monitoring entities30,60and communication components40,50, i.e. for transmitting and/or receiving. Hereby, further components may exist or be connected between the one or more interfaces32,42,52,62; examples are amplifiers, filters, diplexers, duplexers, mixers, phase shifters, low noise amplifiers (LNA), plugs, sockets etc. In embodiments, the control unit34,44,54,64may correspond to any controller or processor or to a programmable hardware component. For example, a control unit34,44,54,64may also be realized as software which is programmed for a corresponding hardware component. In this respect a control unit34,44,54,64may be implemented as programmable hardware with accordingly adapted software. Here, any processors may be used, such as digital signal processors (DSPs). Embodiments are not restricted to a certain type of processor here. Any processor, or also several processors or microcontrollers, is conceivable for implementing the control unit. Implementations in integrated form with other control units are also conceivable, for example in a control unit for a vehicle, which additionally includes one or more other functions. In embodiments, the method steps described herein may be executed by the control units34,44,54,64and/or by the respective one or more interfaces32,42,52,62. 
In this respect, the described method steps may be carried out by the device components. With the double-sided arrows,FIG.3further illustrates that a corresponding communication between the components or between components and data bus may take place. In embodiments, a first communication component40may be, for example, a controller for a vehicle motion manager (VMM). This controller may offer corresponding client functions. The clients, for example also controllers other than communication component50, which manage a corresponding interface to the driver, may make requests that may be valid for several seconds (e.g. trajectory) to theoretically minutes or even hours (e.g. target speed). However, if the client function (and/or the communication) fails, a replacement reaction has to take place promptly (less than one second). In this case, this function is performed by monitoring entities30,60, which are directly assigned to communication components40,50and monitor them locally, for example via cyclical communication between the monitoring entity30,60and communication component40,50via a connection that differs from the data bus70. The monitoring entities30,60may, for example, communicate cyclically with each other via the data bus70. If the data bus fails, this cyclic communication fails and the malfunction of the data bus70may be detected promptly. If one of the communication components40,50malfunctions, this may be detected by the monitoring entities30,60via the local interfaces and communicated with each other via the data bus70. In a further embodiment, warning lights (communication component50) are activated and deactivated by a client function (communication component40) on an event-based basis, whereby the monitoring entities30,60detect any errors promptly in accordance with the above description. 
This may enable a reduction of the communication on the data bus, as the cyclic communication may be limited to the monitoring entities30,60and these may also monitor several communication components simultaneously without putting more load on the data bus70. This may result in a lower message density or message size on the data bus70than would be the case if each associated client function were to actively suppress a warning lamp (e.g. a brake warning lamp) via constant individual messages between the respective communication components, with the warning lamp being activated when the client function fails. FIG.3thus shows an embodiment of a monitoring entity30for at least one communication component40, which is configured for communication with one or more other communication components50via a data bus70. The monitoring entity30includes one or more interfaces32for communication via the data bus70and for communication with the communication component40. The monitoring entity30includes a control unit34, configured to control the one or more interfaces32and to communicate cyclically with the communication component40and with at least one other monitoring entity60of another communication component50. As already described above, the control unit34of the monitoring entity30may be configured to communicate with the communication component40via a connection which differs from the data bus70, for example via a local interface32. Furthermore, the control unit34is configured to transmit an error indication to at least one other monitoring entity60, when a malfunction or a communication error of the communication component40is detected. In addition,FIG.3illustrates an embodiment of a communication component40to communicate with one or more other communication components50via a data bus70. The communication component40includes one or more interfaces42for communication via the data bus70and for communication with a monitoring entity30. 
The communication component40further includes a control unit44, configured to control the one or more interfaces42, to communicate with the one or more other communication components50in an event-based manner and to communicate cyclically with the monitoring entity30. The embodiment of a system80inFIG.3includes at least two monitoring entities30,60as described above and at least two communication components40,50associated with the monitoring entities30,60. A further embodiment is a vehicle with such a system80. FIG.4shows an architecture of a monitoring service in an embodiment. In the following, the monitoring service will also be called watchdog service, analogously the monitoring entity30,60will also be called watchdog. On the left side,FIG.4shows a first controller “SG 1” (from German “Steuergerät 1”) on which a client “foo” is implemented as communication component40, and a “Watchdog 1” is implemented as monitoring entity30. On the right side, a second controller “SG 2” (from German “Steuergerät 2”) is shown, on which a second communication component50is implemented as (service) “bar” and a second monitoring entity60is implemented as “Watchdog 2”. “foo” and “bar” are placeholders for any “clients” and services (services, client functions). AsFIG.4further shows, a number of other controllers may be present, wherein the monitoring entities30,60communicate cyclically with each other via the dotted arrows. Communication between monitoring entities30,60and the associated communication components40,50takes place locally (double-dotted arrows). Here, the clients (communication components40,50) are monitored cyclically on a local processor, also “localhost” (SG 1, SG 2), so that the bus load does not play a role. The monitoring service (watchdog service) for the two communication components40,50is implemented here via the two monitoring entities30,60. 
On each “SG 1, SG 2” involved in event-based communication, a local instance of the watchdog service (method10, monitoring entities30,60) is implemented. The local instances of the watchdog service30,60synchronize themselves via bus70. A “client”, here a communication component40,50, may register with the watchdog service30,60. In case of a communication error, the watchdog service30,60informs the communication partner of the “client”40,50as a substitute reaction. FIG.5illustrates a failure of a communication component40in the embodiment, which has been explained usingFIG.4. The client “foo”40fails in this embodiment, which is indicated by the lightning bolt. The monitoring entity30“Watchdog 1” notices this due to the local cyclic communication with “Client foo”, which may also fail, for example, or via which “Client foo” may report an error to the watchdog. “Watchdog 1”30may then transmit a corresponding message via the cyclic communication with “Watchdog 2”60, which may then be passed on to the “Service bar”50by “Watchdog 2”. In this way, “Watchdog 1”30may inform the “Service bar”50about the error. In this embodiment, the method10for monitoring entity30includes transmitting an error indication to at least the one other monitoring entity60when a malfunction or a communication error of the communication component40is detected. In further embodiments, a transmission to several further monitoring entities60may also take place. FIG.6shows a failure of a data bus70in one embodiment. The scenario already explained usingFIGS.4and5is assumed again. The lightning bolt inFIG.6indicates that a fault or failure happens on data bus70. This also interrupts the cyclic communication between the monitoring entities30,60, so that both sides notice the error and may communicate it to their communication components40,50. The delay with which such an error may be detected and reported depends on the frequency of the cyclic communication on the data bus70. 
In addition, synchronization of the individual monitoring entities30,60may help avoid additional delays. Method10, which is performed in a monitoring entity30,60, may therefore include synchronization with at least one other monitoring entity60via data bus70. If the monitoring entities30,60, which are cyclically active on the data bus70, are synchronized with each other, i.e. have a common time base or clock, the communication on the data bus70may run more efficiently and/or the transmission capacities of the data bus may be better utilized. The control unit34of the monitoring entity30may be configured to synchronize with at least one other monitoring entity60, and/or vice versa, via the data bus70. This may also be done by efficiently managing the capacities of the monitoring entity30,60itself. In order to achieve this, method10may provide for registration of the communication component40,50with the monitoring entity30,60. Correspondingly, method20for communication component40,50may also provide registration with the monitoring entity30,60. The control unit34,64of the monitoring entity30,60may be configured to perform a registration of a communication component40,50. The control unit44,54of the communication component may be configured to perform a registration with the monitoring entity30,60. FIG.7shows a sequence of exchanged messages in an embodiment, whereinFIG.7shows the normal sequence of the watchdog service, in which no errors occur. On the left side,FIG.7shows the “Client foo”40as described above. The monitoring service “Watchdog”, which is implemented by the monitoring entities30,60described above, is illustrated in the middle. Both communication components40,50register for the watchdog service. 
In this embodiment, the “Client foo”40may request a notification service with a time limit (time-out), whereby the time limit specifies the time after which a message about an error should be distributed at the latest after the error has occurred. Here, the communication direction is important, or even necessary, for at least some bus-monitoring services, as will be explained in the following. In the embodiment shown, the cyclic communication between the “client foo”40and the watchdog service30,60is realized in such a way that the “client foo” makes requests to the watchdog30at regular time intervals (also known as a “keep-alive request”), which are then answered by the watchdog30. AsFIG.7shows in the following, the service may also be removed. FIG.8shows a sequence of exchanged messages in a further embodiment.FIG.8shows the scenario fromFIG.7and the exchanged messages at the error pattern according toFIG.5. Accordingly, the lightning bolt again indicates the error at “Client foo”40. This error occurs in the scenario already described inFIG.7, if the “Client foo”40does not make a request to the watchdog service within the set time limit. The watchdog service30,60then multicasts a corresponding message “Time-Out foo” (“foo” exceeding the time limit). In the following embodiment, in case of a bus error, the watchdog service reports information about which communication connections of the registered clients are affected. In other words, the watchdog service has knowledge about which communication paths the registered clients have with their used services. If a bus error then occurs, the watchdog service may communicate via event on both “sides” of the interruption which communication paths between clients and services have been disrupted.FIG.9shows a sequence of exchanged messages in a further embodiment with a data bus error according toFIG.6. 
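A minimal sketch of this keep-alive protocol, in which a client registers with a time-out and the watchdog reports any client whose requests stop arriving, could look as follows. Class and method names are illustrative assumptions; the actual service would additionally multicast the resulting “Time-Out” message to the affected communication partners.

```python
# Hypothetical sketch of the watchdog keep-alive service of FIGS. 7-8.
# All names and the time representation (seconds) are assumptions.

class WatchdogService:
    def __init__(self):
        self.timeouts = {}   # client name -> allowed silence (seconds)
        self.last_seen = {}  # client name -> time of last keep-alive request

    def register(self, client, timeout_s, now):
        """A client registers for monitoring with its time limit."""
        self.timeouts[client] = timeout_s
        self.last_seen[client] = now

    def keep_alive(self, client, now):
        """Client request at a regular interval; the watchdog answers it."""
        self.last_seen[client] = now

    def expired(self, now):
        """Clients whose keep-alive interval has been exceeded; for each,
        a 'Time-Out <client>' message would be multicast."""
        return [c for c, t in self.timeouts.items()
                if now - self.last_seen[c] > t]

wd = WatchdogService()
wd.register("foo", timeout_s=1.0, now=0.0)
wd.keep_alive("foo", now=0.9)   # request arrives within the limit
print(wd.expired(now=1.5))      # []
print(wd.expired(now=2.5))      # ['foo'] -> multicast "Time-Out foo"
```

A bus failure makes all keep-alive traffic between the watchdog instances stop at once, which is why both sides can report time-outs for the peers behind the interruption, as in FIG. 9.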
After detection of the error (indicated by the lightning bolt) the watchdog service30,60distributes a corresponding message “Time-Out foo” (“foo” exceeding the time limit) to “Service bar”50and a message “Time-Out bar” to “Client foo”40, following the error. In embodiments, the methods10,20disclosed herein may be used to monitor communication components40,50and a data bus70in a vehicle. Correspondingly, the monitoring entities30,60may be configured to monitor communication components40,50and a data bus70in a vehicle. Embodiments may be used in passenger cars, trucks, trains, aircraft or vehicles in general, for example. Embodiments may, for example, use central services for monitoring the communication buses on a top communication layer for that purpose. Further embodiments are computer programs for performing one of the methods described herein, when the computer program is executed on a computer, a processor, or a programmable hardware component. Depending on certain implementation requirements, embodiments of the invention may be implemented in hardware or in software. The implementation may be performed using a digital storage medium, for example a floppy disk, a DVD, a Blu-Ray-Disc, a CD, a ROM, a PROM, an EPROM, an EEPROM or a FLASH memory, a hard disc or another magnetic or optical memory having electronically readable control signals stored thereon, which cooperate or are capable of cooperating with a programmable hardware component in such a way that the respective method is performed. A programmable hardware component may be formed by a processor, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a computer, a computer system, an Application-Specific Integrated Circuit (ASIC), an Integrated Circuit (IC), a System on Chip (SOC), a programmable logics element or a Field Programmable Gate Array (FPGA) comprising a microprocessor. Therefore, the digital storage medium may be machine or computer readable. 
Some embodiments also include a data carrier comprising electronically readable control signals which are capable of cooperating with a programmable computer system or a programmable hardware component such that one of the methods described herein is performed. One embodiment is thus a data carrier (or a digital storage medium or a computer-readable medium) on which the program for executing one of the methods described herein is stored. Generally speaking, embodiments of the present invention may be implemented as a program, firmware, a computer program or a computer program product having a program code or as data, wherein the program code or the data is effective to execute one of the methods when the program is executed on a processor, or a programmable hardware component. The program code or the data may, for example, also be stored on a machine-readable carrier or data carrier. The program code or the data may, among others, be present as a source code, machine code or byte code or any other intermediate code. The embodiments described above are merely an illustration of the principles of the present invention. It is understood that modifications and variations of the arrangements and the details described herein will be apparent to others skilled in the art. Therefore, it is intended that the invention is limited only by the scope of protection of the claims below and not by the specific details presented in the description and explanation of the embodiments herein.
LIST OF REFERENCE NUMBERS
10 Method for monitoring entity
12 regular check
14, 24 cyclic communication
20 Method for communication component
22 event-based communication
30, 60 monitoring entity
32, 42, 52, 62 one or more interfaces
34, 44, 54, 64 control unit
40, 50 communication component
70 data bus
80 system
The figures depict, and the detailed description describes, various non-limiting embodiments for purposes of illustration only. DETAILED DESCRIPTION Reference will now be made in detail to embodiments, examples of which are illustrated in the accompanying drawings. In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the various described embodiments. However, the described embodiments may be practiced without these specific details. In other instances, well-known methods, procedures, components, circuits, and networks have not been described in detail so as not to unnecessarily obscure aspects of the embodiments. Embodiments relate to including information in a data packet transmitted by a transmitting integrated circuit (e.g., SOC) to account for a time delay associated with an unsuccessful arbitration attempt to send the data packet over a multi-drop bus. The unsuccessful arbitration attempt by the integrated circuit may delay the transmission of the data packet until the multi-drop bus becomes available for the integrated circuit to send the data packet. The data packet includes a data field to include time delay information caused by the unsuccessful arbitration attempt. A receiving integrated circuit may determine the time that the data packet would have been sent out from the transmitting integrated circuit absent the unsuccessful arbitration attempt based on the delay information. Embodiments also relate to a synchronization generator circuit in an integrated circuit that generates timing signals indicating times at which periodic events occur at another integrated circuit. The synchronization generator circuit receives event timing information derived from data packets that indicate when periodic events occurred at the other integrated circuit. The synchronization generator circuit is set to generate the timing signals based on the event timing information. 
A component that receives the event timing information separately from the synchronization generator circuit may send an adjustment request to update the setting of the synchronization generator circuit so that the deviation between the timing signals and the actual times at which the periodic events occur does not exceed a threshold.

Example Electronic Device

Embodiments of electronic devices, user interfaces for such devices, and associated processes for using such devices are described. In some embodiments, the device is a portable communications device, such as a mobile telephone, that also contains other functions, such as personal digital assistant (PDA) and/or music player functions. Exemplary embodiments of portable multifunction devices include, without limitation, the iPhone®, iPod Touch®, Apple Watch®, and iPad® devices from Apple Inc. of Cupertino, California. Other portable electronic devices, such as wearables, laptops, or tablet computers, are optionally used. In some embodiments, the device is not a portable communications device, but is a desktop computer or other computing device that is not designed for portable use. In some embodiments, the disclosed electronic device may include a touch-sensitive surface (e.g., a touch screen display and/or a touch pad). An example electronic device described below in conjunction with FIG. 1 (e.g., device 100) may include a touch-sensitive surface for receiving user input. The electronic device may also include one or more other physical user-interface devices, such as a physical keyboard, a mouse, and/or a joystick.

FIG. 1 is a high-level diagram of an electronic device 100, according to one embodiment. Device 100 may include one or more physical buttons, such as a "home" or menu button 104. Menu button 104 is, for example, used to navigate to any application in a set of applications that are executed on device 100. In some embodiments, menu button 104 includes a fingerprint sensor that identifies a fingerprint on menu button 104.
The fingerprint sensor may be used to determine whether a finger on menu button 104 has a fingerprint that matches a fingerprint stored for unlocking device 100. Alternatively, in some embodiments, menu button 104 is implemented as a soft key in a graphical user interface (GUI) displayed on a touch screen. In some embodiments, device 100 includes touch screen 150, menu button 104, push button 106 for powering the device on/off and locking the device, volume adjustment buttons 108, Subscriber Identity Module (SIM) card slot 110, headset jack 112, and docking/charging external port 124. Push button 106 may be used to turn the power on/off on the device by depressing the button and holding the button in the depressed state for a predefined time interval; to lock the device by depressing the button and releasing the button before the predefined time interval has elapsed; and/or to unlock the device or initiate an unlock process. In an alternative embodiment, device 100 also accepts verbal input for activation or deactivation of some functions through microphone 113. Device 100 includes various components including, but not limited to, a memory (which may include one or more computer-readable storage mediums), a memory controller, one or more central processing units (CPUs), a peripherals interface, RF circuitry, audio circuitry, speaker 111, microphone 113, an input/output (I/O) subsystem, and other input or control devices. Device 100 may include one or more image sensors 164, one or more proximity sensors 166, and one or more accelerometers 168. Device 100 may include more than one type of image sensor 164, and each type may include more than one image sensor 164. For example, one type of image sensor 164 may be cameras and another type may be infrared sensors used for face recognition. Additionally or alternatively, the image sensors 164 may be associated with different lens configurations.
For example, device 100 may include rear image sensors, one with a wide-angle lens and another with a telephoto lens. Device 100 may include components not shown in FIG. 1 such as an ambient light sensor, a dot projector, and a flood illuminator. Device 100 is only one example of an electronic device, and device 100 may have more or fewer components than listed above, some of which may be combined into a single component or have a different configuration or arrangement. The various components of device 100 listed above are embodied in hardware, software, firmware, or a combination thereof, including one or more signal processing and/or application-specific integrated circuits (ASICs). While the components in FIG. 1 are shown as generally located on the same side as touch screen 150, one or more components may also be located on an opposite side of device 100. For example, the front side of device 100 may include an infrared image sensor 164 for face recognition and another image sensor 164 as the front camera of device 100. The back side of device 100 may also include additional image sensors 164 as the rear cameras of device 100.

Example Communication System in Electronic Device

FIG. 2 is a block diagram illustrating components of electronic device 100 communicating over multi-drop bus 220, according to one embodiment. Electronic device 100 may include, among other components, an application processor 208 (also referred to as "a central processor" herein), a coexistence hub device 212, SOCs 234A through 234N (collectively referred to as "SOCs 234" herein), a multi-drop bus 220, and fabrics 222A through 222N. The components illustrated in FIG. 2 may be part of a communication system in electronic device 100. Electronic device 100 may include additional components (e.g., user interfaces) not illustrated in FIG. 2. Application processor 208 is a processing circuit in electronic device 100 for executing various operations.
Application processor 208 may include one or more processing cores for executing various software programs as well as dedicated hardware circuits for performing specialized functions such as processing images, performing security operations, performing machine learning operations, and processing audio signals. Application processor 208 may also execute operations to coordinate the operations of other components in electronic device 100, including coexistence hub device 212 and SOCs 234. Application processor 208 can operate in multiple power modes, including a low power mode where application processor 208 turns off most of its components to save power, and a high power mode where most of its components are active. Application processor 208 may also incorporate one or more communication components (e.g., a cellular modem) that may also be embodied as a separate SOC. In one or more embodiments, application processor 208, in the low power mode, relays data between components connected over multi-drop bus 220. For this purpose, application processor 208 may (i) receive a signal from a device (e.g., SOCs 234 and coexistence hub device 212) over multi-drop bus 220, (ii) modify or copy the received signal according to a predetermined rule, and (iii) send the modified signal to another device (e.g., SOCs 234 and coexistence hub device 212) over multi-drop bus 220 to enable the SOCs 234 to communicate effectively. Coexistence hub device 212 is a circuit, or a combination of a circuit and software, that coordinates the operations of the communication system (including, e.g., coexistence hub device 212 and SOCs 234) and related components in electronic device 100. For this purpose, coexistence hub device 212 stores and executes an operation policy for defining and/or coordinating the operations of the communication system and the related components. Coexistence hub device 212 may operate based on the operation policy without further intervention, or with reduced intervention, by application processor 208.
The operation policy may, for example, determine real-time operations of components in the communication system based on factors such as operating conditions of the communication system, the length of time a communication subsystem remained in a waiting state, the power consumption of each communication subsystem, and the conditions of channels used by the communication subsystems. Based on the operation policy, coexistence hub device 212 performs operations in advance to set up or prepare communication subsystems to activate or deactivate, so that activation or deactivation of communication subsystems occurs without error. The SOCs in an aggressor-victim pairing benefit from knowing when events are due to occur and from being able to observe how long the events are likely to last, because a victim SOC can plan ahead for the events. Coexistence hub device 212 may also include one or more communication subsystems that perform communication operations over various physical interfaces. By locally performing such coexistence operations at the communication subsystem, application processor 208 may be retained in the low power mode for a longer time despite activities in the communication subsystem, which also frees the resources of application processor 208 during its high power mode. The details of coexistence hub device 212 are described below with reference to FIGS. 3 and 4. Coexistence hub device 212 may also perform functions other than coordinating the operations that were performed by application processor 208. Each of SOCs 234 is a circuit, by itself or in conjunction with software or firmware, that performs operations for communicating with one or more external networks or devices using communication protocols or security protocols. Each of SOCs 234 and coexistence hub device 212 may handle different communication protocols and/or be associated with different wireless bands.
For example, SOC 234A may perform processing for long-range communication (e.g., cellular communication) while SOC 234B or coexistence hub device 212 handles short-range communication (e.g., Bluetooth communication). The operations of the SOCs 234 are at least partially controlled by coexistence hub device 212. An example of SOC 234B is described below in detail with reference to FIG. 5. Fabrics 222 are communication channels enabling components in the communication system to communicate with application processor 208. One or more of fabrics 222 may be embodied as point-to-point connections such as Peripheral Component Interconnect Express (PCIe), I2C, or Serial Peripheral Interface (SPI). As illustrated in FIG. 2, SOC 234A, coexistence hub device 212, and SOCs 234B through 234N communicate with application processor 208 via corresponding fabrics 222A through 222N. One or more of fabrics 222 may have high bandwidth and low latency compared to multi-drop bus 220. Fabrics 222 illustrated in FIG. 2 may be physically separate communication channels or one or more shared physical channels with multiple logical sub-channels. Multi-drop bus 220 is a communication channel that enables multiple components to communicate over a shared connection. Multi-drop bus 220 may be used primarily to transmit various messages including, but not limited to, data packets, timing packets, and coexistence messages between components in the communication system. The data packets described herein refer to messages that include data for processing by devices connected to multi-drop bus 220, such as SOCs 234 and coexistence hub device 212. The timing packets described herein refer to messages that indicate times when periodic events occur at one of SOCs 234 or coexistence hub device 212. The coexistence messages refer to messages for coordinating operations between SOCs 234 and coexistence hub device 212. In one or more embodiments, System Power Management Interface (SPMI) is used to embody multi-drop bus 220.
Other serial bus interfaces such as I2C may be used instead of SPMI to embody multi-drop bus 220. Although only a single multi-drop bus 220 is illustrated in FIG. 2, two or more multi-drop buses may be used. Although not illustrated in FIG. 2, coexistence hub device 212 may also control the operation of, or access to, one or more antennas (not shown) associated with the communication system.

Example Architecture of Coexistence Hub Device

FIG. 3 is a block diagram illustrating coexistence hub device 212, according to one embodiment. Coexistence hub device 212 is the part of the communication system that coordinates operations of components in the communication system. Coexistence hub device 212 may also handle communication over protocols that are distinct from, or partly overlap with, communication performed by SOCs 234. For this purpose, coexistence hub device 212 may include, among other components, processor 304, coexistence control circuit 314, fabric interface 310, multi-drop interface 340, communication subsystems 336A through 336Z (collectively referred to as "communication subsystems 336"), internal fabric 342, local clock 360, and synchronization generator 350. Coexistence hub device 212 may include additional components not illustrated in FIG. 3 or may omit components illustrated in FIG. 3 (e.g., one or more of communication subsystems 336). Processor 304 is a circuit, by itself or in conjunction with software or firmware, that controls the overall operation of coexistence hub device 212 as well as coordinating operations of other SOCs 234 using coexistence messages. Processor 304 may include memory to store operation policy 352 for controlling the operations. The operation policy 352 may be received from application processor 208 via fabric 222B, fabric interface 310, and internal fabric 342.
After receiving the operation policy 352, processor 304 may decode the operation policy 352 and program other components in coexistence hub device 212 (e.g., coexistence control circuit 314), if applicable, to enforce the operation policy 352. Additional information related to the operation policy 352 may also be received from application processor 208. Such additional information may be stored or processed at processor 304 to affect how the operation policy 352 is implemented. Furthermore, processor 304 may send a portion of the operation policy 352 relevant to other SOCs 234, via multi-drop bus 220, to program SOCs 234 to operate according to the operation policy 352. Processor 304 may make coexistence decisions according to the operation policy 352 by analyzing coexistence messages (e.g., state information or requests) received via interface 340 from SOCs 234 and communication subsystems 336. Processor 304 may store current states 354 of communication subsystems 336 in coexistence hub device 212 and the other SOCs 234. Current states 354 may include, for example, the radio frequency (RF) bands/channels and bandwidths of those channels in active use by SOCs 234 and coexistence hub device 212, the transmission power of radio signals, and the exact frequencies and bandwidths being used for the transmitted signals. Such information may also be sent to application processor 208 or other SOCs 234 to enable real-time adjustment of operations in other SOCs 234. Processor 304 may delegate some coordination operations (e.g., coordination for communication subsystems 336) to arbiter 322. The operation policy as described herein refers to scenarios of operating combinations in the communication system that may be problematic, or combinations of components having interworking issues, as well as a set of rules that define the operations to be taken by SOCs 234 and coexistence hub device 212 to resolve or cope with such problematic scenarios.
In other embodiments, the operation policy may include firmware code and enable a dynamic response to maintain balanced operation between multiple communication subsystems. Each of communication subsystems 336 includes a circuit to process signals received from, or for sending to, corresponding physical layer interfaces 308A through 308Z (collectively referred to as "physical layer interfaces 308") external to coexistence hub device 212. Such circuits may include local processors 378A through 378Z (collectively referred to as "local processors 378") that perform one or more of the following operations: (i) execute commands associated with certain communication protocols, (ii) process received input communication signals according to a corresponding protocol to decode the input radio signals and respond by encoding certain responses within required time budgets on the RF link, (iii) control an associated radio frequency (RF) path to adjust transmit power or receive gain control, and (iv) configure, disable, or enable components in the communication subsystem 336 based on the operation policy. All local processors 378, or at least a subset of them, may be initialized (e.g., by application processor 208 or automatically) when coexistence hub device 212 is initialized. Among other things, the local processors 378 are programmed with the portion of the operation policy relevant to the operations of their communication subsystems 336. The operation policy downloaded to a local processor 378 of a communication subsystem 336 may define how the communication subsystem 336 should operate (e.g., the data rate of the communication subsystem, turning components in the communication subsystem 336 on or off, and changing the number of active transmitters). Alternatively, the relevant portion of the operation policy may be sequentially downloaded and programmed directly by application processor 208 through fabric 222B or processor 304 as each of communication subsystems 336 is turned on.
One or more of communication subsystems 336 may communicate with physical layer interfaces (e.g., RF devices) via, for example, the Radio Frequency Front-End Control Interface (RFFE). In some embodiments, physical layer interfaces 308 may be merged into a reduced set where a local processor 378 supports more than one communication protocol or switches between different communication protocols over time. A local processor 378 may control a fixed set of radio paths, or only front-end switches, LNAs, or PAs may be controlled by physical layer interfaces 308. Interface 340 is a circuit, or a combination of circuits and software, for communication with multi-drop bus 220. In one or more embodiments, interface 340 includes circuit components for processing data into outbound packets for sending over multi-drop bus 220, and for unpacking inbound packets received from multi-drop bus 220 into data for processing in the coexistence hub device, as described below in detail with reference to FIG. 4B. Interface 340 is connected to processor 304 and coexistence control circuit 314 via connection 328. Fabric interface 310 is a circuit, or a combination of a circuit and software, for enabling coexistence hub device 212 to communicate with application processor 208 over fabric 222B. Fabric interface 310 is also referred to as an internal communication channel herein. In one or more embodiments, fabric interface 310 performs operations such as buffering, segmenting/combining data, serializing/deserializing, and packaging/unpacking of data for communication over a point-to-point communication channel (e.g., PCIe). As illustrated in FIG. 3, fabric interface 310 is connected to internal fabric 342 to enable components in coexistence hub device 212 to communicate with application processor 208. Local clock 360 is hardware, or a combination of hardware and software, for generating local clock signal 472 for tracking time within coexistence hub device 212.
The clock signal may oscillate between a high state and a low state, and is used for coordinating the timing of actions/events within coexistence hub device 212. Coexistence hub device 212 may also receive global clock signal 474 from a global clock outside coexistence hub device 212 via fabric 222B or multi-drop bus 220. Global clock signal 474 is a signal that is used across different components in electronic device 100 to coordinate the timing of actions/events of the different components. Synchronization generator 350 is hardware, or a combination of hardware and software, for generating timing signals that indicate times at which periodic events or non-periodic events occur at one or more SOCs 234 outside coexistence hub device 212. Synchronization generator 350 may send out the timing signals over internal fabric 342 to other components of coexistence hub device 212. The timing signals are used, for example, to coordinate the timing of events/actions at communication subsystems 336 according to the events at one or more SOCs external to synchronization generator 350. The timing signals may synchronize a global time across one or more SOCs 234 and coexistence hub device 212 so that their operations can be coordinated using the global time. Details of synchronization generator 350 are described below with reference to FIG. 4D. Coexistence control circuit 314 is a circuit, by itself or in conjunction with software, that processes coexistence messages transmitted over multi-drop bus 220. Coexistence control circuit 314 is programmed by processor 304 to enforce the operation policy 352 by making real-time decisions on coexistence events, distributing inbound coexistence messages to relevant communication subsystems 336, sharing real-time coexistence messages among communication subsystems 336, and sending outbound coexistence messages to other SOCs 234.
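The synchronization generator's behavior, and the adjustment-request mechanism described earlier, can be modeled with a minimal sketch. This assumes strictly periodic events and integer clock ticks; the class and method names are invented for illustration and do not appear in the specification.

```python
class SyncGenerator:
    """Toy model: predicts when periodic events occur at a remote SOC,
    given an anchor event time and period learned from timing packets."""

    def __init__(self, last_event_time: int, period: int):
        self.last_event_time = last_event_time
        self.period = period

    def next_event_after(self, now: int) -> int:
        """Earliest predicted event time strictly after `now`."""
        elapsed = now - self.last_event_time
        cycles = elapsed // self.period + 1
        return self.last_event_time + cycles * self.period

    def adjust(self, observed_event_time: int, threshold: int) -> bool:
        """Re-anchor the generator if the predicted and observed event times
        deviate by more than `threshold` ticks (the adjustment request)."""
        predicted = self.next_event_after(observed_event_time - self.period)
        if abs(predicted - observed_event_time) > threshold:
            self.last_event_time = observed_event_time
            return True
        return False
```

In this model, a component observing actual event times calls `adjust`, mirroring the adjustment request sent when the deviation between timing signals and actual event times exceeds a threshold.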
The coexistence event described herein refers to a condition or occurrence, defined by the operation policy, that would prompt coordination of operations in components of electronic device 100. Specifically, coexistence control circuit 314 may include, among other components, dispatcher 312, memory 316, arbiter 322, and billboard 326. Dispatcher 312 is a programmable circuit, or a circuit in combination with software or firmware, for filtering and sending messages for each communication subsystem 336 to memory 316. The details of dispatcher 312 and its functions are described below with reference to FIG. 4A. Memory 316 has multiple buffers 318A through 318Z (collectively referred to as "buffers 318") where each buffer corresponds to one of communication subsystems 336. Each of buffers 318 receives and stores inbound messages (received from components outside coexistence hub device 212 via multi-drop bus 220) relevant to the corresponding communication subsystem 336. The stored inbound coexistence messages in a buffer 318 may be sent to the corresponding communication subsystem 336 (as indicated by arrow 372) based on priority (e.g., time-sensitive data has a higher priority relative to time-insensitive data) via internal fabric 342. If one or more communication subsystems 336 are inactive, buffers 318 store the messages until the communication subsystems 336 are turned on and become available to receive the messages. In one or more embodiments, different buffers 318 may be associated with different priorities. When a buffer assigned a high priority is filled with a message, a communication subsystem 336 may wake up to service it, ensuring that the message is handled in a timely manner. Each of buffers 318 also stores outbound messages 348 (received from the corresponding communication subsystem 336 via internal fabric 342).
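The priority-ordered buffering described above might look like the following sketch, with a plain priority queue standing in for a buffer 318. The priority encoding (lower number = more time-sensitive) is an assumption for illustration.

```python
import heapq

class SubsystemBuffer:
    """Toy per-subsystem buffer: messages queue up while the subsystem is
    inactive and drain highest-priority-first once it becomes available."""

    def __init__(self):
        self._heap = []
        self._seq = 0  # tiebreaker preserving FIFO order among equal priorities

    def push(self, priority: int, message: str) -> None:
        # Lower number = higher priority (e.g., 0 for time-sensitive data).
        heapq.heappush(self._heap, (priority, self._seq, message))
        self._seq += 1

    def drain(self):
        """Deliver stored messages in priority order when the subsystem wakes."""
        while self._heap:
            yield heapq.heappop(self._heap)[2]
```

The FIFO tiebreaker matters: without it, two messages of equal priority could be delivered out of arrival order.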
The outbound messages are retrieved by dispatcher 312 and sent out over multi-drop bus 220 to components outside coexistence hub device 212, also based on priority (e.g., time-sensitive data has a higher priority relative to time-insensitive data). Memory 316 also includes shared memory section 320 that may be accessed by arbiter 322 to resolve conflicting use of resources, and by different local processors 378 to exchange time-sensitive messages among communication subsystems 336. Communication subsystems 336 may submit their tasks, along with requests from other SOCs 234, to memory queues to be serviced by arbiter 322. Billboard 326 is a circuit, by itself or in conjunction with software or firmware, that stores state information of communication subsystems 336. The state information 346 is received from communication subsystems 336 and stored for access. Billboard 326 enables a communication subsystem in coexistence hub device 212, or an external component, to accurately determine the operating context of another system by accessing the state information in billboard 326. In one or more embodiments, other SOCs 234 may also include billboards that enable SOCs 234 to advertise their context concurrently. The billboard may include a memory region (e.g., a subdivision of the memory) so that multiple SOCs can each use their own subset of the memory to post context information. An incoming message into the memory region of the billboard may trigger a communication subsystem to respond within a predetermined time via an interrupt when the message transaction is complete. In one or more embodiments, billboard 326 may also be used as a ping-pong buffer for exchanging signals or data between SOCs 234 over multi-drop bus 220 if SOCs 234 cannot perform direct messaging among themselves for some reason.
Arbiter 322 is a circuit, by itself or in conjunction with software or firmware, that makes decisions on real-time coordination of operations of communication subsystems 336 and sends out the decisions to the communication subsystems 336 over internal fabric 342 and memory 316. Such decisions may include resolving competing needs for common resources by multiple communication subsystems 336 or requests for incompatible resources by different communication subsystems 336. Arbiter 322 makes the decisions in real time, and they may remain effective for a shorter time period compared to decisions made at processor 304 to implement the operation policy 352. In addition, arbiter 322 may resolve requests for use of resources by external communication subsystems that compete with the local communication subsystems 336 for use of the same resource. For this purpose, arbiter 322 may access current states 354 of communication subsystems 336 and the other SOCs 234 stored in processor 304, as well as use information about the priority of the different competing operations. The algorithm for resolving the resource conflicts at arbiter 322 may be adjusted based on the operation policy 352 executed by processor 304. Arbiter 322 may be programmed by processor 304 or application processor 208. The decisions made by arbiter 322 may include controlling RFFE transactions associated with communication subsystems 336, for example, to change the settings of an external RF device. Such an operation may include blanking a power amplifier transmission of the corresponding communication subsystem 336. Because the real-time decisions are sent out over shared internal fabric 342, a communication subsystem (e.g., communication subsystem 336A) may receive the decisions intended for another communication subsystem (e.g., communication subsystem 336B) and adjust its operations accordingly. Arbiter 322 may include processor 323 to control the overall operation of arbiter 322.
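A strongly simplified model of the arbitration step above: grant each contested resource to the highest-priority requester. This is an assumption about one plausible policy, not the patented algorithm, and the names (`arbitrate`, subsystem/resource labels) are hypothetical.

```python
def arbitrate(requests):
    """requests: list of (subsystem, resource, priority) tuples,
    lower priority number = more urgent. Returns {resource: winner}."""
    best_priority = {}
    winners = {}
    for subsystem, resource, priority in requests:
        # First request for a resource, or a more urgent one, wins.
        if resource not in best_priority or priority < best_priority[resource]:
            best_priority[resource] = priority
            winners[resource] = subsystem
    return winners
```

Losing requesters would wait and re-request, or be notified over the shared fabric so they can adjust their own operation.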
In one or more embodiments, processor 304 determines a larger-scale coordination operation based on its operation policy 352, and configures components of coexistence control circuit 314, communication subsystems 336, and possibly SOCs 234 to enforce the operation policy 352. Arbiter 322, on the other hand, coordinates smaller-scale, real-time coexistence operations that are consistent with the larger-scale coordination operation as defined by operation policy 352. The components of coexistence hub device 212 illustrated in FIG. 3 are merely illustrative. Coexistence hub device 212 may include fewer components (e.g., lack memory 316 or a separate processor 304) or include additional components (e.g., general-purpose input/output) not illustrated in FIG. 3.

Example Architecture of Dispatcher

FIG. 4A is a block diagram of dispatcher 312 in the coexistence hub device of FIG. 3, according to one embodiment. Dispatcher 312 is a circuit, or a combination of circuit, software, and/or firmware, for processing messages. Dispatcher 312 determines when outbound messages from communication subsystems 336 should be sent to processor 304 or SOCs 234 and, when the time arrives, forwards the outbound messages to interface 340 for sending over multi-drop bus 220. The times for sending the outbound messages are determined based on the priority of the outbound messages, whether other messages remain in memory 316 for sending over multi-drop bus 220, and when arbitration for using multi-drop bus 220 for transmitting data is successful. Dispatcher 312 also receives messages from SOCs 234 over multi-drop bus 220 and forwards them to the communication subsystems 336 over internal fabric 342. The dispatcher may forward these messages to pertinent communication subsystems 336 based on a predefined set of rules. Further, dispatcher 312 may also filter out some messages that are not marked as being of interest to any of the active communication subsystems 336.
Dispatcher 312 may include, among other components, processor 436, interrupt manager 428, time stamper 440, and message filter 432. One or more of interrupt manager 428, time stamper 440, and message filter 432 may be embodied as firmware or software executed by processor 436. Also, additional components may be added to dispatcher 312. Processor 436 is a circuit that may perform various operations in dispatcher 312 such as (i) managing contending resources within each communication subsystem 336, (ii) controlling external RF control blocks outside of coexistence hub device 212, (iii) supporting the functions and operations of arbiter 322, and (iv) coordinating reporting of the results from arbiter 322 to components on multi-drop bus 220. Processor 436 may be a part of processor 304 or may be a standalone processor. Processor 436 may also update the operations of other components in dispatcher 312 over time or depending on the activities in electronic device 100. Message filter 432 is hardware, software, firmware, or a combination thereof that receives inbound messages 422 from multi-drop bus 220 via interface 340, filters inbound messages 422 for relevancy to communication subsystems 336, and sends the filtered inbound messages 454 to the appropriate buffers 318 and/or shared section 320 of memory 316. Message filter 432 may also redirect the inbound messages 454 to buffers associated with communication subsystems 336 other than a default communication subsystem 336 to ensure that the active communication subsystems 336 receive all relevant inbound messages. By configuring message filter 432, a communication subsystem (e.g., 336A) may also receive an inbound message intended for another communication subsystem (e.g., 336B) and take such inbound message into account in its operation. If an inbound message includes an interrupt, message filter 432 sends the corresponding coexistence message 442 to interrupt manager 428. Interrupt manager 428 is hardware, software, firmware, or a combination thereof that manages interrupts.
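The filtering-and-fan-out behavior of the message filter can be sketched as a topic-based router. The topic/interest representation is an assumption made for illustration; real filtering would likely key on message type fields rather than string topics.

```python
def filter_inbound(messages, interests):
    """messages: list of (topic, payload) pairs arriving from the bus.
    interests: {subsystem_name: set of topics it registered for}.
    Routes each message to every interested subsystem's buffer;
    messages matching no registered interest are dropped."""
    routed = {name: [] for name in interests}
    for topic, payload in messages:
        for name, topics in interests.items():
            if topic in topics:
                routed[name].append(payload)
    return routed
```

Note that one inbound message may land in several buffers, matching the redirect behavior where a subsystem also receives messages intended for another subsystem.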
When interrupt manager 428 receives the coexistence message 442 including an interrupt, interrupt manager 428 extracts the interrupt and sends out an interrupt signal 414 to the corresponding communication subsystem 336. The interrupt signal 414 can cause the corresponding communication subsystem 336 to shut down, power down a subset of its components, wake up from a power-down mode, or indicate the real-time state of components on multi-drop bus 220 (e.g., SOCs 234). These interrupt signals may involve only a simple decoder and no microprocessor, which enables low-cost components to send interrupt signals for communicating a simple message over multi-drop bus 220. These interrupts can also be used as system status indicators for external SOCs or components. One of the characteristics of the interrupt signals is that they are sticky, meaning that even if an SOC (e.g., SOC 234B) is asleep when coexistence hub device 212 sends an interrupt signal, the SOC will respond to the interrupt signal after it wakes up at a later time. These interrupt signals can also be used to guarantee that an external SOC (e.g., SOC 234B) may abruptly go to an inactive/sleep state without requiring other components (e.g., SOC 234A) to stay awake long enough to complete handshake operations with the SOC. By using always-on interrupt signals, the burden on the originating message source may be reduced. Message filter 432 may also receive interrupt signal 450 from communication subsystems 336. If the interrupt signal 450 is intended for SOCs 234, message filter 432 sends the interrupt signal 450 as an outbound coexistence message 418 to interface 340 for sending out via multi-drop bus 220. An interrupt signal between the communication subsystems 336 is transmitted over internal fabric 342 without intervention of coexistence control circuit 314. Time stamper 440 is a circuit that keeps track of time for incoming and outgoing messages on multi-drop bus 220.
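The "sticky" property amounts to a latch: the pending flag persists across the target's sleep and is consumed when serviced. The following toy latch (names invented) illustrates why the sender need not wait for a handshake.

```python
class StickyInterrupt:
    """Toy sticky-interrupt latch: the pending flag survives until the target
    SOC wakes and services it, even if the SOC was asleep when it was raised."""

    def __init__(self):
        self.pending = False

    def raise_irq(self) -> None:
        self.pending = True  # latched regardless of the target's power state

    def service(self) -> bool:
        """Called when the target SOC wakes; clears and reports the latch."""
        was_pending = self.pending
        self.pending = False
        return was_pending
```

Because the flag is latched, the originating component can raise the interrupt and immediately go inactive; delivery is decoupled from the target being awake.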
Time stamper440tracks the actual time the messages are sent or received to account for arbitration delays, for example, using local clock signal472and/or global clock signal474. Example Architecture of Interface340 FIG.4Bis a block diagram of interface340, according to one embodiment. Interface340generates outbound data packets459from outbound data468received from dispatcher312as well as depacketizes inbound data packets461to generate inbound data463. For this purpose, interface340may include, among other components, delay calculator circuit456, packet assembly circuit458, physical layer circuit460, and inbound packet processor462. Interface340may include further components not illustrated inFIG.4Bsuch as buffers. Delay calculator circuit456is a circuit or a combination of a circuit and software that determines delay time457indicating the amount of time by which sending of outbound data packet459is delayed due to the arbitration process associated with the use of multi-drop bus220to transmit outbound data packets459. The delay time may be a difference between (i) an earliest possible transmission time when the outbound data packet459could have been transmitted over multi-drop bus220with an earliest arbitration attempt to transmit over multi-drop bus220being successful, and (ii) an actual time when outbound data packet459is actually sent out over multi-drop bus220with at least one failed arbitration attempt and a subsequent successful arbitration attempt to transmit outbound data packet459. To determine the delay time, delay calculator circuit456receives local clock signal472, outbound data468from dispatcher312, and arbitration result465from physical layer circuit460. Delay calculator circuit456also determines an identification of outbound data468so that delay time457may be applied to the correct outbound data packet459associated with outbound data468.
If the first arbitration attempt to transmit an outbound data packet459associated with outbound data468is successful, a delay time value of zero or another value indicating no delay is output as delay time457by delay calculator circuit456. If the first arbitration attempt is unsuccessful but a subsequent arbitration is successful, a delay time value corresponding to the delayed arbitration success is output as delay time457by delay calculator circuit456. The delay time457may be represented in terms of a local clock time or a global clock time derived from the local clock time. Delay calculator circuit456sends delay time457to packet assembly circuit458. The global clock time may be derived at delay calculator circuit456by using relationships between the global clock time and the local clock time stored in delay calculator circuit456. By including a data field including the delay time in outbound data packet459, a SOC234receiving outbound data packet459may determine when outbound data packet459would have been sent out from interface340if there was no delay associated with arbitration. Hence, despite the delay in receiving outbound data packet459due to arbitration over the use of multi-drop bus220, SOC234may perform desired actions or operations while compensating for or taking into account the delay time associated with the arbitration. Another advantage of embodiments is that the recipient SOC can reconstruct a timeline for when events of interest may occur on another SOC and can take actions ahead of time when these events are likely to occur. In this way, the recipient SOC may proactively plan for future events. This mechanism can be used, for example, to supplement a message that was sent ahead of time to the recipient SOC; by tracking the times at which the events occurred, the recipient SOC can more accurately revise its internal state in anticipation of future unavailability of shared resources.
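The delay-time computation performed by delay calculator circuit456can be illustrated as follows (a simplified Python sketch under the assumption that arbitration attempt times are available as plain numbers in local clock units; the function name is illustrative):

```python
def arbitration_delay(earliest_possible_time, attempt_times):
    """Return the delay-field value for an outbound packet.

    earliest_possible_time: when the packet would have gone out had the
        first arbitration attempt succeeded.
    attempt_times: times of the successive arbitration attempts; the last
        entry is the successful attempt (the actual send time).
    """
    actual_send_time = attempt_times[-1]
    if len(attempt_times) == 1:
        return 0   # first attempt succeeded: no delay is reported
    # At least one failed attempt preceded the successful one.
    return actual_send_time - earliest_possible_time
```

A recipient holding the delay value can then recover the undelayed send time by subtracting the delay from the actual send time, which is the compensation described above.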
Packet assembly circuit458is a circuit that performs packetizing of outbound data468from dispatcher312. In one embodiment, packet assembly circuit458starts the process of packetizing outbound data468as soon as the outbound data468is received from dispatcher312. The packetizing includes the process of segmenting outbound data468into multiple parts as payloads and adding relevant header information to outbound data packets. One of the header fields in the outbound data packets is a time delay field indicating delay time457for outbound data468received from delay calculator circuit456. Physical layer circuit460is a circuit or a combination of a circuit and software that performs various operations of transmitting outbound data packets and/or receiving bitstreams of input data packets over a physical data link. Operations performed by physical layer circuit460may include, among others, arbitrating the use of multi-drop bus220for transmitting the output data packets from coexistence hub device212, converting the outbound data packet459into outbound bitstreams, and converting the received bitstreams over multi-drop bus220into inbound data packet461. Inbound packet processor462is a circuit or a combination of a circuit and software that converts inbound data packet461into inbound data463for transmission to other components of coexistence hub device212. Operations performed by inbound packet processor462may include, among others, extracting from inbound data packet461the delay time associated with failure of the source of the inbound data bitstream to arbitrate the use of multi-drop bus220to transmit the inbound data bitstream over multi-drop bus220. The extracted delay time and inbound data463may be sent to various components of coexistence hub device212for further operations or actions.
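Packetizing with a time delay header field, as performed by packet assembly circuit458and undone by inbound packet processor462, might look like the following (a hypothetical Python sketch; the wire format, field widths, and names are invented for illustration and are not specified by the disclosure):

```python
import struct

# Illustrative wire format (an assumption, not from the disclosure): a 6-byte
# big-endian header carrying a sequence number, a 16-bit delay field in local
# clock ticks, and the payload length, followed by the payload bytes.
HEADER = struct.Struct(">HHH")   # seq, delay_ticks, payload_len


def assemble_packet(seq, delay_ticks, payload):
    """Prepend a header (including the delay field) to a payload segment."""
    return HEADER.pack(seq, delay_ticks, len(payload)) + payload


def parse_packet(packet):
    """Extract the delay field and payload from an inbound packet."""
    seq, delay_ticks, length = HEADER.unpack_from(packet)
    payload = packet[HEADER.size:HEADER.size + length]
    return seq, delay_ticks, payload
```

The assemble side corresponds to packet assembly circuit458adding the time delay field; the parse side corresponds to inbound packet processor462extracting that field for downstream compensation.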
By extracting and identifying the delay time in inbound data packet461, coexistence hub device212may determine when inbound data packet461would have been sent out from a source SOC absent a delay due to arbitration for transmitting data packets over multi-drop bus220. Hence, components in coexistence hub device212can compensate for the delay time and perform appropriate operations. The structure of interface340is merely illustrative. In other embodiments, interface340may have additional components or include fewer components. For example, interface340may process only transmittal of outbound data packet459and not include inbound packet processor462. Further, one or more components of interface340may be combined into fewer components or split up into more components than what is described inFIG.4B. FIG.4Cis a flowchart illustrating a process of assembling an output data packet at interface340, according to one embodiment. Delay calculator circuit456determines802the delay time between a first time at which an outbound data packet would have been sent out from the source SOC absent delay due to arbitration and a second time at which the outbound data packet is actually sent out with the arbitration delay. Packet assembly circuit458assembles806the outbound packet by at least adding a field in the outbound packet indicating the delay time as determined by delay calculator circuit456. Physical layer circuit460sends810the assembled outbound packet over multi-drop bus220. Example Architecture of Synchronization Generator FIG.4Dis a block diagram of synchronization generator350, according to one embodiment. Synchronization generator350receives event timing information476indicating certain periodic events from one or more SOCs234, and generates timing signals478A through478M (hereinafter collectively referred to as “timing signals478”) that are sent to other components of coexistence hub device212for coordinating various operations or actions.
The timing signals478represent timing at which other periodic events occur at the one or more SOCs external to coexistence hub device212. For this purpose, synchronization generator350may include, among other components, a counter programmer480and a plurality of programmable counters470A through470M (hereinafter collectively referred to as “programmable counters470”). Counter programmer480is logic, either in the form of a circuit or a combination of a circuit and software, that programs programmable counters470by sending programming signal482. Counter programmer480receives local clock signal472from local clock360, global clock signal474from a global clock source (e.g., application processor208) through fabric222and fabric interface310, and event timing information476from one or more SOCs234via multi-drop bus220and interface340. Event timing information476is derived from inbound timing packets and indicates when periodic events occur at the external SOC. In one embodiment, event timing information476is determined by interface340from a transmittal time at which an inbound data packet is transmitted from the SOC and then adjusting the transmittal time according to the delay time as indicated by the inbound timing packets. Programmable counters470are circuits or combinations of circuits and software that are programmed to periodically generate timing signals478. In one or more embodiments, the timing signals478are in the form of interrupts that are sent to other components of coexistence hub device212over internal fabric342. Each of programmable counters470may count cycles in local clock signal472to determine whether a certain amount of time has elapsed before sending out its timing signal478. All of programmable counters470may track times associated with the same events from the same SOC. Alternatively, one or more of programmable counters470may track distinct and independent events from the same or different SOCs.
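A programmable counter that counts local clock cycles and emits a periodic timing signal, as described above, can be sketched as follows (illustrative Python; the names and the cycle-driven interface are assumptions):

```python
class ProgrammableCounter:
    """Counts local-clock cycles and emits a timing signal every
    `period` cycles; `phase` lets the first signal align with an event."""

    def __init__(self, period, phase=0):
        self.period = period
        self.count = phase
        self.fired = []   # cycle indices at which a timing signal fired

    def tick(self, cycle):
        """Advance one local clock cycle; fire when the period elapses."""
        self.count += 1
        if self.count >= self.period:
            self.count = 0
            self.fired.append(cycle)
```

In hardware the "fire" action would raise an interrupt over internal fabric342; here the fired-cycle list simply records when the signal would have been asserted.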
Counter programmer480may receive adjustment requests490A through490I (hereinafter collectively referred to as “adjustment requests490”) from other components in coexistence hub device212. Each of adjustment requests490may be generated by a component in coexistence hub device212to indicate that one or more timing signals478have deviated from accurate timing beyond a predetermined threshold and that corresponding programmable counters470should be adjusted to correct the deviation. Such adjustment request490may be prompted by a component in coexistence hub device212that tracks timing of the periodic events independent of the timing signals478and has more accurate event timing information than synchronization generator350. For example, the component may communicate with an external SOC associated with the periodic events via fabric222B or general purpose input/output (GPIO) (not shown). In response to receiving adjustment request490, counter programmer480generates programming signal482for updating a corresponding programmable counter470, and advances or delays the time at which the corresponding programmable counter470generates subsequent timing signals, as described below in detail with reference toFIG.7B. The adjustment request490may indicate the amount of time to be advanced or delayed in terms of local clock cycles or global time cycles. In one or more embodiments, synchronization generator350programs its programmable counters to generate timing signals478indicating fractions of periods indicated by event timing information476. For example, synchronization generator350may generate multiple timing signals of equal interval between two other timing signals, as described below in detail with reference toFIG.7A. In this way, the number of inbound data packets indicating the timing of events at a SOC can be reduced while still providing all timing signals478that are relevant to timing the operations of the components of coexistence hub device212.
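Generating multiple equally spaced timing signals between two event times, as described above for fractions of a period, can be sketched as follows (a simplified Python sketch; the function name and interface are illustrative):

```python
def subdivide(frame_starts, fractions):
    """Given start times of successive frames, emit each frame-start tick plus
    `fractions - 1` equally spaced sub-ticks between each adjacent pair.

    frame_starts: sorted start times reported by the external SOC.
    fractions: number of equal sub-intervals per frame (e.g., 10 for
        1 ms subframes within a 10 ms frame).
    """
    ticks = []
    for a, b in zip(frame_starts, frame_starts[1:]):
        step = (b - a) / fractions
        ticks.extend(a + i * step for i in range(fractions))
    ticks.append(frame_starts[-1])   # final frame-start tick
    return ticks
```

Only the frame starts need to arrive as inbound timing packets; the sub-ticks are synthesized locally, which is how the number of inbound data packets can be reduced while still providing all relevant timing signals.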
Example Architecture of SOC FIG.5is a block diagram of SOC234B, according to one embodiment. Although SOC234B is illustrated inFIG.5as an example, other SOCs234A,234C through234N may have the same or similar architecture as SOC234B. SOC234B may send messages to coexistence hub device212or other SOCs and/or receive messages from coexistence hub device212or other SOCs over multi-drop bus220. Alternatively or in addition, SOC234B may send messages including event timing information476to coexistence hub device212or other SOCs over multi-drop bus220. SOC234B is part of the communication system in electronic device100and can execute one or more communication protocols using its communication subsystems536A,536B (collectively referred to as “communication subsystems536”). Although only two communication subsystems536A,536B are illustrated inFIG.5, more than two communication subsystems or only a single communication subsystem may be included in SOC234B. Each of communication subsystems536A,536B may be associated with different communication protocols, or both may be associated with the same communication protocol. Communication subsystems536are substantially identical to communication subsystems336of coexistence hub device212except that messages associated with communication subsystems536are processed by processor512instead of coexistence control circuit314. Communication subsystems536can send messages over multi-drop bus220to coexistence hub device212to coexist with communication subsystems in coexistence hub device and/or other SOCs. Inbound messages to SOC234B are processed locally by processor512and sent to corresponding communication subsystems536. Other detailed explanation on communication subsystems536is omitted herein for the sake of brevity. 
In addition to communication subsystems536, SOC234B may further include, among other components, fabric interface502, bus interface504, processor512, local clock544, synchronization generator542, and an internal bus540for connecting these components. SOC234B may include further components such as memory for buffering coexistence messages associated with each communication subsystem536. Bus interface504is a circuit, by itself or in conjunction with software or hardware, that enables components of SOC234B to communicate with coexistence hub device212and other SOCs over multi-drop bus220. Bus interface504may perform the same function and have the same structure as interface340described above with reference toFIG.4B. Fabric interface502is a circuit, by itself or in conjunction with software or hardware, that enables components of SOC234B to communicate with application processor208over fabric222C. The communication over fabric interface502is capable of transmitting data at a faster speed and higher bandwidth than the communication over bus interface504. Processor512manages overall operation of SOC234B. Processor512may include, among other components, interrupt manager516and message filter518as software or hardware components. The functions and operations of interrupt manager516and message filter518are substantially the same as those of interrupt manager428and message filter432, and therefore, detailed explanation of these components is omitted herein for the sake of brevity. Processor512and/or communication subsystems536may be programmed by processor304of coexistence hub device212or application processor208to implement operation policy352. In one embodiment, such programming may be performed when SOC234B is turned on.
Local clock544and synchronization generator542may perform the same functions and have the same structures as local clock360and synchronization generator350as described above with reference toFIGS.3and4D, except that local clock544and synchronization generator542are used in SOC234instead of coexistence hub device212. Example Delay Time for Arbitration FIGS.6A and6Bare timing diagrams illustrating delay time TD associated with transmitting a data packet (e.g., timing packet or data packet) over multi-drop bus220, according to one embodiment. The example ofFIGS.6A and6Buses SPMI as multi-drop bus220, where the identification of a message transmitter device (e.g., SOC234B) is sent out at the first low-to-high transition of SPMI clock630followed by the transmission of data (or command) frames608when the attempt602for arbitration to use SPMI data bus634is successful. In the timing diagram ofFIG.6A, the SPMI bus is not busy and the transmitter SOC (e.g., SOC234A) successfully arbitrates the use of the SPMI bus for transmitting its data packet or timing packet to a destination SOC (e.g., coexistence hub device212). After an amount of time TA between the start of the arbitration attempt and the first transition of the SPMI clock, the identification of the message transmitter (e.g., USID) is sent over the SPMI data bus634. If another multi-drop protocol bus is used, then a similar consistent marker point can be used. Then data (or command) frames608are transmitted followed by another arbitration attempt610for transmitting subsequent data packets. Because the first arbitration attempt was successful, the delay time due to arbitration is zero. Therefore, the header in the data (or command) frames will indicate a delay time value of zero. In contrast, the SPMI bus is busy and unavailable inFIG.6B, and therefore, the same transmitter SOC's first attempt to arbitrate for the use of the SPMI bus is unsuccessful.
In this case, the transmitter SOC's use of the SPMI bus for transmitting its data packets or timing packets is delayed by the amount of time614consumed by transmission of data by other SOCs and a subsequent arbitration attempt602. The amount of time delayed due to the arbitration is TD. Hence, the header of the data (or command) frames includes the delay time that corresponds to TD. AlthoughFIGS.6A and6Buse the SPMI bus as an example of multi-drop bus220, the same principle and mechanism can be applied to other types of multi-drop buses. Example Operation of Synchronization Generator FIG.7Ais a timing diagram illustrating timing signals702,704and706generated by synchronization generator350, according to one embodiment. Each of solid arrows702A,702B,702C inFIG.7Amay indicate a starting time of a frame in wireless Long-Term Evolution Time-Division Duplex (LTE-TDD) or Long-Term Evolution Frequency-Division Duplex (LTE-FDD), which has a period of 10 ms. Dashed arrows704A through704I and706A through706I may indicate the starting times of subframes in each LTE-TDD or LTE-FDD frame (collectively referred to as “LTE frame” herein), which have a period of 1 ms. Referring toFIG.4D, synchronization generator350receives event timing information476indicating the starting time of two or more frames (indicated by solid arrows702). Counter programmer480sets one or more of programmable counters470to generate timing signals478at starting times702A through702C of LTE frames. Further, counter programmer480programs one or more of programmable counters470to generate timing signals478at times704,706when subframes of each frame start. For example, one or more of the programmable counters470may generate 9 timing signals at times704A through704I at equal intervals between starting times702A,702B of adjacent LTE frames.
Similarly, one or more of the programmable counters470may generate 9 timing signals at times706A through706I at equal intervals between starting times702B,702C of LTE frames. Timing signals478generated by the synchronization generator350may be sent to one or more communication subsystems336(e.g., communication subsystem336A) so that the communication subsystems336may take certain actions or operations in anticipation of frame transmittal by a SOC (e.g., SOC234A) that is responsible for LTE communication. Such actions or operations may be clearing out buffers in the communication subsystem336to free up the channel for use by other communication subsystems, or stopping communication operation in anticipation of interference from the LTE SOC. FIG.7Bis a timing diagram illustrating an adjusting operation of synchronization generator350, according to one embodiment. The times at which synchronization generator350generates timing signals478may deviate from the accurate event times of the external SOC as time progresses. Counter programmer480may adjust the deviation of times at which the timing signals are generated using subsequent event timing information476received from the external SOC. Alternatively or in addition, counter programmer480may use adjustment request490received from other components of coexistence hub device212that have accurate event timing information. For example, a communication subsystem (e.g., communication subsystem336A) may communicate directly with the external SOC via a GPIO or fabric222. Such components of coexistence hub device212may determine deviation time Dt corresponding to a difference between time702B at which a timing signal478is generated and the actual time708at which an event occurs at the external SOC. If the deviation time Dt is above a threshold, the component may send adjustment request490to synchronization generator350to adjust subsequent timing signals.
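The threshold-based adjustment described above, in which a counter's programming is retained when deviation time Dt is within a threshold and advanced or delayed otherwise, can be sketched as follows (illustrative Python; the names and the phase-offset representation are assumptions):

```python
def maybe_adjust(counter_phase, signal_time, actual_event_time, threshold):
    """Return the (possibly corrected) phase offset for a programmable counter.

    counter_phase: current phase offset of the counter, in clock units.
    signal_time: time at which the counter generated its timing signal.
    actual_event_time: observed time of the corresponding event at the
        external SOC.
    threshold: maximum tolerated deviation before reprogramming.
    """
    deviation = actual_event_time - signal_time   # Dt in the description above
    if abs(deviation) <= threshold:
        return counter_phase                      # retain current programming
    # Advance (negative deviation) or delay (positive deviation) future signals.
    return counter_phase + deviation
```

A positive return-value change delays subsequent timing signals and a negative change advances them, matching the advance-or-delay adjustment made by counter programmer480.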
After receiving the adjustment request490, counter programmer480sends programming signal482to update one or more of the programmable counters470. As a result, the time702C at which the subsequent timing signal is generated is adjusted relative to the time710at which the timing signal would have been generated without the adjustment. Although the above example describes using timing signals478in the context of LTE frames and subframes, the same principle and mechanism can be applied to tracking and taking actions based on other periodic events. Such periodic events include a Bluetooth tick or the start of an agreed interval of cooperation between subsystems; other examples include a continuous DRX sleep timer interval count. Example Process of Operating Synchronization Generator FIG.8is a flowchart illustrating the process of operating synchronization generator350, according to one embodiment. An interface of a first SOC (e.g., coexistence hub device212) receives802timing packets including the event timing information from a second SOC (e.g., SOC234A). The interface of the first SOC determines806times at which the timing packets would have been sent out by the second SOC absent delay due to unsuccessful arbitration attempts to use a multi-drop bus for sending the timing packets. Based on the times at which the timing packets would have been sent out, one or more programmable counters in a synchronization generator are set810. Using the programmed counters, the synchronization generator generates814first timing signals (e.g., timing signals478). In one or more embodiments, a component of the first SOC receives a second timing signal from the second SOC. Using the second timing signal, the first SOC determines816a time difference between a first time at which the first timing signal is received from the synchronization generator and a second time at which the second timing signal is received from the second SOC by the component of the first SOC.
It is then determined818if the time difference is above a threshold. If the time difference is not above the threshold, the setting of the programmable counters is retained820. Conversely, if the time difference is above the threshold, the setting of the programmable counters is updated824to account for the time difference. The steps and the sequence described above with reference toFIG.8are merely illustrative. One or more steps inFIG.8may be omitted or the sequence of steps may be changed. While particular embodiments and applications have been illustrated and described, it is to be understood that the invention is not limited to the precise construction and components disclosed herein and that various modifications, changes and variations which will be apparent to those skilled in the art may be made in the arrangement, operation and details of the method and apparatus disclosed herein without departing from the spirit and scope of the present disclosure.
11863347

The details of various embodiments of the methods and systems are set forth in the accompanying drawings and the description below. DETAILED DESCRIPTION For purposes of reading the description of the various embodiments below, the following descriptions of the sections of the specification and their respective contents can be helpful: Section A describes a network environment and computing environment which can be useful for practicing embodiments described herein; and Section B describes embodiments of systems and methods for inter-device networking using intra-device protocols, according to one or more embodiments of the solution. A. Computing and Network Environment Prior to discussing specific embodiments of the present solution, it can be helpful to describe aspects of the operating environment as well as associated system components (e.g., hardware elements) in connection with the methods and systems described herein. Referring toFIG.1A, an embodiment of a network environment is depicted. In brief overview, the network environment includes a wireless communication system that includes one or more access points (APs)106, one or more wireless communication devices102and a network hardware component192. The wireless communication devices102can, for example, include laptop computers102, tablets102, personal computers102, Internet of Things (IoT) devices102, and/or cellular telephone devices102. The details of an embodiment of each wireless communication device102and/or AP106are described in greater detail with reference toFIGS.1B and1C. The network environment can be an ad hoc network environment, an infrastructure wireless network environment, a subnet environment, etc. in one embodiment. The APs106can be operably coupled to the network hardware component192via local area network connections.
The network hardware component192, which can include a router, gateway, switch, bridge, modem, system controller, appliance, etc., can provide a local area network connection for the communication system. Each of the APs106can have an associated antenna or an antenna array to communicate with the wireless communication devices in its area. The wireless communication devices102can register with a particular AP106to receive services from the communication system (e.g., via a SU-MIMO or MU-MIMO configuration). For direct connections (e.g., point-to-point communications), some wireless communication devices can communicate directly via an allocated channel and communications protocol. Some of the wireless communication devices102can be mobile or relatively static with respect to AP106. In some embodiments an AP106includes a device or module (including a combination of hardware and software) that allows wireless communication devices102to connect to a wired network using wireless-fidelity (WiFi), or other standards. An AP106can sometimes be referred to as a wireless access point (WAP). An AP106can be implemented (e.g., configured, designed and/or built) for operating in a wireless local area network (WLAN). An AP106can connect to a router (e.g., via a wired network) as a standalone device in some embodiments. In other embodiments, an AP106can be a component of a router. An AP106can provide multiple devices access to a network. An AP106can, for example, connect to a wired Ethernet connection and provide wireless connections using radio frequency links for other devices102to utilize that wired connection. An AP106can be implemented to support a standard for sending and receiving data using one or more radio frequencies. For example, the AP106may support bandwidth between 5 Gbps and 500 Gbps. Those standards, and the frequencies they use, can be defined by the IEEE (e.g., IEEE 802.11 standards like 802.11aa 60 GHz).
An AP106can be configured and/or used to support public Internet hotspots, and/or on a network to extend the network's Wi-Fi signal range. In some embodiments, the APs106can be used for (e.g., in-home or in-building) wireless networks (e.g., IEEE 802.11aa, any other type of radio frequency based network protocol and/or variations thereof). Each of the wireless communication devices102can include a built-in radio and/or are coupled to a radio. Such wireless communication devices102and/or APs106can operate in accordance with the various aspects of the disclosure as presented herein to enhance performance, reduce costs and/or size, and/or enhance broadband applications. Each wireless communication device102can have the capacity to function as a client node seeking access to resources (e.g., data, and connection to networked nodes such as servers) via one or more APs106. The network connections can include any type and/or form of network and can include any of the following: a point-to-point network, a broadcast network, a telecommunications network, a data communication network, a computer network. The topology of the network can be a bus, star, or ring network topology. The network can be of any such network topology as known to those ordinarily skilled in the art capable of supporting the operations described herein. In some embodiments, different types of data can be transmitted via different protocols. In other embodiments, the same types of data can be transmitted via different protocols. The communications device(s)102and access point(s)106can be deployed as and/or executed on any type and form of computing device, such as a computer, network device or appliance capable of communicating on any type and form of network and performing the operations described herein.FIGS.1B and1Cdepict block diagrams of a computing device100useful for practicing an embodiment of the wireless communication devices102or AP106. 
As shown inFIGS.1B and1C, each computing device100includes a central processing unit121, and a main memory unit122. As shown inFIG.1B, a computing device100can include a storage device128, an installation device116, a network interface118, an I/O controller123, display devices124a-124n, a keyboard126and a pointing device127, such as a mouse. The storage device128can include an operating system and/or software. As shown inFIG.1C, each computing device100can also include additional optional elements, such as a memory port103, a bridge170, one or more input/output devices130a-130n, and a cache memory140in communication with the central processing unit121. The central processing unit121is any logic circuitry that responds to and processes instructions fetched from the main memory unit122. In many embodiments, the central processing unit121is provided by a microprocessor unit, such as: those manufactured by Intel Corporation of Santa Clara, California; those manufactured by International Business Machines of White Plains, New York; or those manufactured by Advanced Micro Devices of Sunnyvale, California. The computing device100can be based on any of these processors, or any other processor (e.g., integrated digital signal processor (DSP)) capable of operating as described herein. For example, in many implementations, a processor may be an application-specific integrated circuit (ASIC), field programmable gate array (FPGA), or any other type and form of dedicated silicon logic or processing circuitry. Main memory unit122can be one or more memory chips capable of storing data and allowing any storage location to be directly accessed by the microprocessor or central processing unit121, such as any type or variant of Static random access memory (SRAM), Dynamic random access memory (DRAM), Ferroelectric RAM (FRAM), NAND Flash, NOR Flash and Solid State Drives (SSD). 
The main memory unit122can be based on any of the above described memory chips, or any other available memory chips capable of operating as described herein. In the embodiment shown inFIG.1B, the processor or central processing unit121communicates with main memory unit122via a system bus150(described in more detail below).FIG.1Cdepicts an embodiment of a computing device100in which the processor communicates directly with main memory unit122via a memory port103. For example, inFIG.1Cthe main memory unit122can be DRDRAM. FIG.1Cdepicts an embodiment in which the main processor or central processing unit121communicates directly with cache memory140via a secondary bus, sometimes referred to as a backside bus. In other embodiments, the main processor or central processing unit121communicates with cache memory140using the system bus150. Cache memory140typically has a faster response time than main memory unit122and is provided by, for example, SRAM, BSRAM, or EDRAM. In the embodiment shown inFIG.1C, the processor or central processing unit121communicates with various I/O devices130via a local system bus150. Various buses can be used to connect the central processing unit121to any of the I/O devices130, for example, a VESA VL bus, an ISA bus, an EISA bus, a MicroChannel Architecture (MCA) bus, a PCI bus, a PCI-X bus, a PCI-Express bus, or a NuBus. 
For embodiments in which the I/O device is a video display 124, the processor or central processing unit 121 can use an Advanced Graphics Port (AGP) to communicate with the display 124. FIG. 1C depicts an embodiment of a computing device 100 in which the main processor or central processing unit 121 can communicate directly with I/O device 130b, for example via HYPERTRANSPORT, RAPIDIO, or INFINIBAND communications technology. FIG. 1C also depicts an embodiment in which local busses and direct communication are mixed: the processor or central processing unit 121 communicates with I/O device 130a using a local interconnect bus while communicating with I/O device 130b directly. A wide variety of I/O devices 130a-130n can be present in the computing device 100. Input devices include keyboards, mice, trackpads, trackballs, microphones, dials, touch pads, touch screens, and drawing tablets. Output devices include video displays, speakers, inkjet printers, laser printers, projectors, and dye-sublimation printers. The I/O devices can be controlled by an I/O controller 123 as shown in FIG. 1B. The I/O controller can control one or more I/O devices, such as a keyboard 126 and a pointing device 127, e.g., a mouse or optical pen. Furthermore, an I/O device can also provide storage and/or an installation medium or installation device 116 for the computing device 100. In still other embodiments, the computing device 100 can provide USB connections (not shown) to receive handheld USB storage devices such as the USB Flash Drive line of devices manufactured by Twintech Industry, Inc. of Los Alamitos, California. Referring again to FIG. 1B, the computing device 100 can support any suitable installation device 116, such as a disk drive, a CD-ROM drive, a CD-R/RW drive, a DVD-ROM drive, a flash memory drive, tape drives of various formats, a USB device, a hard drive, a network interface, or any other device suitable for installing software and programs.
The computing device 100 can further include a storage device, such as one or more hard disk drives or redundant arrays of independent disks, for storing an operating system and other related software, and for storing application software programs such as any program or software 120 for implementing (e.g., configured and/or designed for) the systems and methods described herein. Optionally, any of the installation devices 116 could also be used as the storage device. Additionally, the operating system and the software can be run from a bootable medium. Furthermore, the computing device 100 can include a network interface 118 to interface to the network 104 through a variety of connections including, but not limited to, standard telephone lines, LAN or WAN links (e.g., 802.11, 56 kb, X.25, SNA, DECNET), broadband connections (e.g., Gigabit Ethernet, Ethernet-over-SONET), wireless connections, or some combination of any or all of the above. Connections can be established using a variety of communication protocols (e.g., TCP/IP, Ethernet, SONET, SDH, RS232, IEEE 802.11, IEEE 802.11a, IEEE 802.11b, IEEE 802.11g, IEEE 802.11n, IEEE 802.11ac, IEEE 802.11ad, IEEE 802.11ax, CDMA, GSM, WiMax, and direct asynchronous connections). In one embodiment, the computing device 100 communicates with other computing devices 100 via any type and/or form of gateway or tunneling protocol, such as Secure Sockets Layer (SSL) or Transport Layer Security (TLS). The network interface 118 can include a built-in network adapter, network interface card, PCMCIA network card, CardBus network adapter, wireless network adapter, USB network adapter, modem, or any other device suitable for interfacing the computing device 100 to any type of network capable of communication and performing the operations described herein. In some embodiments, the computing device 100 can include or be connected to one or more display devices 124a-124n.
As such, any of the I/O devices 130a-130n and/or the I/O controller 123 can include any type and/or form of suitable hardware, software, or combination of hardware and software to support, enable, or provide for the connection and use of the display device(s) 124a-124n by the computing device 100. For example, the computing device 100 can include any type and/or form of video adapter, video card, driver, and/or library to interface, communicate, connect, or otherwise use the display device(s) 124a-124n. In one embodiment, a video adapter can include multiple connectors to interface to the display device(s) 124a-124n. In other embodiments, the computing device 100 can include multiple video adapters, with each video adapter connected to the display device(s) 124a-124n. In some embodiments, any portion of the operating system of the computing device 100 can be configured for using multiple displays 124a-124n. In further embodiments, an I/O device 130 can be a bridge between the system bus 150 and an external communication bus, such as a USB bus, an Apple Desktop Bus, an RS-232 serial connection, a SCSI bus, a FireWire bus, a FireWire 800 bus, an Ethernet bus, an AppleTalk bus, a Gigabit Ethernet bus, an Asynchronous Transfer Mode bus, a FibreChannel bus, a Serial Attached SCSI bus, a USB connection, or an HDMI bus. A computing device 100 of the sort depicted in FIGS. 1B and 1C can operate under the control of an operating system, which controls scheduling of tasks and access to system resources.
The computing device 100 can be running any operating system, such as any of the versions of the MICROSOFT WINDOWS operating systems, the different releases of the Unix and Linux operating systems, any version of the MAC OS for Macintosh computers, any embedded operating system, any real-time operating system, any open source operating system, any proprietary operating system, any operating system for mobile computing devices, or any other operating system capable of running on the computing device and performing the operations described herein. Typical operating systems include, but are not limited to: Android, produced by Google Inc.; WINDOWS 7 and 8, produced by Microsoft Corporation of Redmond, Washington; MAC OS, produced by Apple Computer of Cupertino, California; WebOS, produced by Research In Motion (RIM); OS/2, produced by International Business Machines of Armonk, New York; and Linux, a freely-available operating system distributed by Caldera Corp. of Salt Lake City, Utah, or any type and/or form of a Unix operating system, among others. The computing device 100 can be any workstation, telephone, desktop computer, laptop or notebook computer, server, handheld computer, mobile telephone or other portable telecommunications device, media playing device, gaming system, mobile computing device, or any other type and/or form of computing, telecommunications, or media device that is capable of communication. In some embodiments, the computing device 100 can have different processors, operating systems, and input devices consistent with the device. For example, in one embodiment, the computing device 100 is a smartphone, mobile device, tablet, or personal digital assistant.
Moreover, the computing device 100 can be any workstation, desktop computer, laptop or notebook computer, server, handheld computer, mobile telephone, any other computer, or other form of computing or telecommunications device that is capable of communication and that has sufficient processor power and memory capacity to perform the operations described herein. Aspects of the operating environments and components described above will become apparent in the context of the systems and methods disclosed herein.

B. Systems and Methods for Inter-Device Networking Using Intra-Device Protocols

Protocols generally used between device components, such as the PCIe standard (e.g., PCIe 5.0), support high speed transfer rates. Although not strictly limited to communication within a device, and frequently used for communication between devices in a local data center, such high-speed and typically short-range protocols are referred to herein generally as intra-device protocols. For example, PCIe 5.0 allows up to 32 GT/s transfer rates for a single lane connecting a device component to one or more peripheral device components. Lanes may be combined (e.g., up to 16 lanes) for 512 GT/s. PCIe transfers are limited, however, if the number of devices becomes too large and/or if applications have significant shared memory resources. For example, in data centers, multiple devices share vast memory pools that stretch PCIe to its functional limits. Another intra-device protocol, the CXL standard (e.g., CXL 2.0), facilitates communication between multiple peer processors (and other devices and processors) and shared memory using the PCIe physical layer. Data may be copied from a host processor memory to another device's memory. However, when a device updates a memory location (e.g., writing to the memory location), all copies of the memory in memory locations and/or caches are marked invalid, and processors must refetch data from the host memory.
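The per-lane and aggregate rates quoted above can be sketched in a short helper. This is an illustrative calculation only; the function names are assumptions, and the 128b/130b encoding factor reflects the line coding used by PCIe 3.0 and later, which makes usable bandwidth slightly below the raw transfer rate.

```python
def aggregate_rate_gts(per_lane_gts: float, lanes: int) -> float:
    """Raw transfer rate of a link built from `lanes` lanes."""
    return per_lane_gts * lanes

def usable_gbps(per_lane_gts: float, lanes: int) -> float:
    """Approximate usable bandwidth in Gb/s after 128b/130b encoding."""
    return per_lane_gts * lanes * 128 / 130

# A x16 PCIe 5.0 link: 32 GT/s per lane, 16 lanes -> 512 GT/s raw.
print(aggregate_rate_gts(32, 16))  # -> 512
```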
CXL is able to achieve low latency communication using transaction protocols, a new handshake, and auto-negotiation to replace and improve upon PCIe protocols. Referring to FIG. 2, depicted is a block diagram of an example system 200 connecting devices using a PCIe switch. As shown, a CPU 202 connects to I/O devices 210a and 210b (collectively referred to as I/O devices 210) using a PCIe switch 206. The PCIe switch 206 contains an upstream port 204 connecting to the CPU 202 and downstream ports 208a and 208b connecting to I/O devices 210a and 210b, respectively. The PCIe switch 206 connects the CPU 202 to devices 210 within a single network. While intra-device protocols such as CXL and PCIe facilitate communication within a single network, CXL and PCIe are limited to a single network and lack features of inter-device protocols for traversing wide area networks such as the Internet. Previous attempts at CXL and/or PCIe network deployment are limited to Ethernet layer 2 (e.g., within a single rack or limited to a single switch). Communication across racks requires layer 3 forwarding. To address these and other problems, the intra-device protocols CXL and PCIe may be employed over inter-device protocols such as IP to allow CXL or PCIe devices to go across an Ethernet layer 2 boundary and connect devices together. For example, layer 3 forwarding allows CXL and/or PCIe communication across a network while utilizing existing network infrastructure to enable larger distributed computing systems. For example, PCIe and CXL networks may be extended across a data center network to connect devices such as accelerators together across an existing IP network (e.g., IPv4 or IPv6). Referring to FIG. 3A, depicted is a block diagram of an example system 300a connecting devices across an IP network. As shown, I/O device 302 connects to a switch 308 in the same (or a different) network using Ethernet and/or IP.
Port 304 (an upstream port or a downstream port) interfaces with I/O device 302 using PCIe and/or CXL. If the traffic is traveling to the switch 308 from I/O device 302, a shim header may be placed (e.g., at Ethernet IP transport shim layer 306) between the Ethernet and IP headers before the traffic from the I/O device 302 is transmitted to the switch 308. Referring to FIG. 3B, depicted is another block diagram of an example system 300b connecting devices across an IP network. As shown, CPU 320 interfaces with upstream port 322 using PCIe and/or CXL. An Ethernet IP transport shim layer 326 (or Multiprotocol Label Switching (MPLS) header) may apply a shim header to improve data packet delivery by separating service levels, improving traffic flows through virtual private networks (VPNs), and/or traffic engineering (TE). The MPLS protocol forwards packets on the switching layer (layer 2) instead of operating at the routing layer (layer 3). The TSHIM header is applied before the traffic is transmitted to (or received from) the same network or a different network over Ethernet IP 318. Ethernet IP 318 may be any IP network (e.g., IPv4 or IPv6). The traffic may be directed to I/O devices 332a and/or 332b (collectively referred to as I/O devices 332) and received by I/O devices 332 via downstream ports 330a and/or 330b (collectively referred to as downstream ports 330) using PCIe and/or CXL. If the I/O devices 332 transmit traffic directed to CPU 320, a shim header may be placed (e.g., at Ethernet IP transport shim layers 328a and 328b, respectively) between the Ethernet and IP headers before the traffic from the I/O devices 332 is transmitted to the CPU 320. Referring to FIG. 4, depicted is a flowchart of an embodiment of a method 400 of transmitting intra-device protocols, such as PCIe and/or CXL, over inter-device protocols, such as IP. The functionalities of the method may be implemented using, or performed by, the components detailed herein in connection with FIGS. 1A-1C.
In brief overview, traffic to be transmitted may be received in step 402. A computing device may map packets to be transmitted (e.g., transaction layer packets (TLPs)) to a destination IP address in step 404. In step 406, the TLPs may be encapsulated into IP packets, and in step 408 the IP packets may be transmitted to a destination IP address via an Ethernet Media Access Control (MAC). In step 402, traffic to be transmitted may be received. For example, a CXL and/or PCIe core may receive traffic such as a request to read memory and/or write to memory (or a response to a request). The traffic may be transaction layer packets (TLPs) including a header and an optional data payload. The TLP packet header may include the transaction type (e.g., a request or a response to a request), a priority, a source and/or destination address, and routing rules, among other packet characteristics. The destination IP address may be any IP address (e.g., IPv4 or IPv6). Referring briefly ahead to FIG. 9, depicted is a block diagram of example TLP fields. PCIe and CXL TLPs may be forwarded based on Requestor ID fields 902 (or Memory Address fields). There may be a Type field indicating the type of field present in the packet (e.g., Requestor ID field or Memory Address field) because switching PCIe/CXL may be based on ID routing (in which case the Requestor ID field is present) or Memory Address routing (in which case the Memory Address field is present). The Requestor ID field 902 may be 16 bits long (e.g., bits 15:0). The Bus Number 904 may be 8 bits long (e.g., bits 15:8) and identify a number assigned to a PCIe logical bus. The Device Number (Dev. Num. 906) may be 5 bits long (e.g., bits 7:3) and identify each device on a PCIe logical bus. The Function Number (Fn 908) may be 3 bits long (e.g., bits 2:0) and identify a function on a device (where each PCIe device may have up to 8 logically independent functions).
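The bus/device/function layout of the 16-bit Requestor ID described above can be made concrete with a small pack/unpack sketch. The function names are illustrative; the bit positions follow the conventional PCIe layout (bus in bits 15:8, device in bits 7:3, function in bits 2:0).

```python
def pack_requestor_id(bus: int, dev: int, fn: int) -> int:
    """Pack bus/device/function into a 16-bit Requestor ID.
    Bus occupies bits 15:8, device bits 7:3, function bits 2:0."""
    assert 0 <= bus < 256 and 0 <= dev < 32 and 0 <= fn < 8
    return (bus << 8) | (dev << 3) | fn

def unpack_requestor_id(rid: int) -> tuple:
    """Recover (bus, device, function) from a 16-bit Requestor ID."""
    return (rid >> 8) & 0xFF, (rid >> 3) & 0x1F, rid & 0x7

rid = pack_requestor_id(bus=3, dev=2, fn=1)
print(hex(rid))                   # -> 0x311
print(unpack_requestor_id(rid))   # -> (3, 2, 1)
```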
Referring back to FIG. 4, in step 404, the traffic may be mapped to a destination IP address. A lookup table may be used to map PCIe addresses (or CXL addresses) to IP addresses. For example, the lookup table may identify (and map) Memory Addresses and/or Requestor IDs to IP sources and destinations. The lookup table may also include UDP and/or TCP source and destination port numbers. The lookup table may be statically configured. For example, a text file may be used to configure the lookup table. The lookup table may be copied to various devices upon the devices being connected to a network. For example, the table may be distributed to the nodes in a network using, for example, Secure Copy Protocol (SCP), Network Configuration Protocol (NETCONF), Yet Another Next Generation (YANG) models, or a Representational State Transfer Application Programming Interface (REST API). The CXL/PCIe-to-IP function may be implemented in a standalone device, or may be integrated into an Ethernet/IP switch device. In step 406, the TLPs may be encapsulated into IP packets. Encapsulating CXL and/or PCIe packets into IP packets allows resources across the network to be combined into systems. Encapsulating the TLPs may include wrapping the layer 2 information into a payload and adding a layer 3 header. In step 408, the encapsulated IP packets may be transmitted. For example, an Ethernet MAC implementing a data-link layer may be used to transmit the IP packets to a destination IP address. The packets may be transmitted wirelessly or over wired connections across one or more networks using various routing protocols to reach the destination IP address. Reliable transport protocols such as Transmission Control Protocol (TCP) may be utilized to transmit the IP packets to the destination IP address. The transport protocol may be selected based on various networking requirements.
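Steps 404 and 406 can be sketched together: a statically configured table maps a TLP's Requestor ID to an IP destination, and the raw TLP bytes become an opaque payload behind a small shim header. Everything here is an assumption for illustration, including the table entries, the UDP port, and the shim layout (the disclosure does not fix field sizes).

```python
import struct

# Statically configured table, e.g. parsed from a text file and distributed
# to nodes: Requestor ID -> (destination IP, destination UDP port).
RID_TO_DEST = {
    0x0311: ("10.0.0.2", 4791),
    0x0402: ("10.0.0.3", 4791),
}

def encapsulate_tlp(tlp: bytes, requestor_id: int) -> tuple:
    """Look up the IP destination for a TLP and prepend an assumed shim
    header (16-bit sequence number + 16-bit Requestor ID)."""
    dst_ip, dst_port = RID_TO_DEST[requestor_id]
    shim = struct.pack("!HH", 0, requestor_id)
    return dst_ip, dst_port, shim + tlp  # payload for a UDP/IP packet

dst_ip, dst_port, payload = encapsulate_tlp(b"\x00" * 16, 0x0311)
print(dst_ip, dst_port, len(payload))  # -> 10.0.0.2 4791 20
```

A real implementation would hand the resulting payload to an Ethernet MAC for transmission, as step 408 describes.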
For example, while TCP may be a commonly used reliable transport protocol, TCP may not be optimal for latency-sensitive processing because of the timing associated with the retransmission of lost TCP packets. However, the latency associated with TCP may be improved, for example, using a hardware implementation of TCP. Additionally, unreliable transport protocols such as User Datagram Protocol (UDP), Remote Direct Memory Access (RDMA) over Converged Ethernet (RoCE), Scalable Reliable Datagram (SRD), and/or New Datacenter Protocol (NDP) may be utilized to transmit IP packets to the destination IP address. The transport protocol may be selected based on networking requirements. For example, RoCE protocols may be employed for high-throughput, low-latency communications. However, RoCE may not be optimal in large-scale networks because priority flow control creates head-of-line blocking, congestion spreading, and deadlocks. Further, SRD may load balance and recover quickly from link failures and packet drops. However, SRD may deliver packets out of order, leaving packet restoration to higher layers. Similarly, NDP may provide low latency for short data transfers at the expense of reordering packets. UDP-based protocols may be made reliable by layering on top of them mechanisms to support ordering and retransmission of lost packets. For example, the Quick UDP Internet Connections (QUIC) protocol, a low latency transport protocol over UDP, may be used to transport PCIe and CXL traffic. Alternatively, various retransmission controls may be employed to guarantee (or improve the likelihood of) delivery of the transmitted IP packets. In a simple example, an alternating bit protocol (ABP) may be utilized to alternate numbering packets '0' and '1' such that a receiver may acknowledge a numbered packet by transmitting an acknowledgement back to the sender with the same numbered packet.
If the wrong numbered packet is acknowledged, or the acknowledgement is not received within a predetermined time window, the packet may be retransmitted. Referring to FIG. 5, depicted is a flowchart of an embodiment of a method 500 of receiving intra-device protocols, such as PCIe and/or CXL, over inter-device protocols, such as IP. The functionalities of the method may be implemented using, or performed by, the components detailed herein in connection with FIGS. 1A-1C. In brief overview, traffic is received from a source IP address in step 502. A computing device may map the source IP address to TLPs in step 504. In step 506, the IP packets may be decapsulated into TLPs, and in step 508 the TLPs may be routed. In step 502, traffic may be received. For example, IP packets may be routed from across various networks (or the same network) to a computing device. Traffic may include requests to read memory and/or to write memory (or a response to a request). TLPs that are not order sensitive may be sent out of order, while TLPs that are order sensitive may be buffered and sent in order. In some implementations, packet reordering may be performed. For example, out of order packets may be placed in a buffer, dropped, or forwarded for layer 3 processing, depending on the transport protocol used to transport the packets. For example, TCP corrects out of order packets by requesting retransmission of packets. In step 504, the traffic may be mapped to TLPs. The traffic may be IP packets (e.g., standard IPv4 or IPv6 packets). For example, a lookup table may be used to map IP addresses to PCIe (or CXL) addresses. For example, the lookup table may identify (and map) Memory Addresses and/or Requestor IDs to IP sources and destinations. The lookup table may also include UDP and/or TCP source and destination port numbers. The lookup table may be statically configured. For example, a text file may be used to configure the lookup table.
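The alternating bit protocol described above can be sketched in a few lines: the sender tags each packet 0 or 1 and retransmits until an acknowledgement carrying the same bit arrives. The `send`/`recv_ack` hooks and the lossy demo channel are assumptions for illustration, not part of the disclosure.

```python
def abp_send(packets, send, recv_ack, max_tries=5):
    """Alternating bit sender: retransmit each packet until its bit is ACKed."""
    bit = 0
    for pkt in packets:
        for _ in range(max_tries):
            send(bit, pkt)
            ack = recv_ack()       # None models a lost ACK / timeout
            if ack == bit:
                break              # delivered; alternate the bit
        else:
            raise TimeoutError("no ACK for packet")
        bit ^= 1

# Lossy in-memory channel for demonstration: the first ACK for each
# packet is "lost", forcing exactly one retransmission.
sent, acks = [], []
def send(bit, pkt): sent.append((bit, pkt))
def recv_ack():
    bit, _ = sent[-1]
    acks.append(bit)
    return None if len(acks) % 2 == 1 else bit

abp_send(["hello", "world"], send, recv_ack)
print(len(sent))  # -> 4 (each packet transmitted twice on this channel)
```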
The lookup table may be copied to various devices upon the devices being connected to a network. For example, the table may be distributed to the nodes in a network using, for example, Secure Copy Protocol (SCP), Network Configuration Protocol (NETCONF), Yet Another Next Generation (YANG) models, or a Representational State Transfer Application Programming Interface (REST API). The CXL/PCIe-to-IP function may be implemented in a standalone device, or may be integrated into an Ethernet/IP switch device. In step 506, the IP packets may be decapsulated into TLP packets. Decapsulation may be the process of unwrapping (or opening) data as the data moves up the protocol stack. For example, unwrapping layer 3 networking information may reveal layer 2 forwarding information (e.g., TLP packets). In step 508, the TLP packets may be routed. TLP packets (including PCIe and CXL TLPs) are forwarded using, for example, a CXL and/or PCIe core. The TLP packets are routed to the PCIe address and/or CXL address such that memory resources may be shared and/or updated. Referring to FIG. 6, depicted are example packet formats 600a-600d for example transport protocols. Similar to standard packet formats, packet formats 600a-600d may include layer 2 information 602. For example, the Ethernet frame may include sender/receiver MAC addresses. Packet formats 600a-600d may also include layer 3 information 604 (e.g., sender/receiver IP addresses). Packet formats 600a-600d may also include layer 4 information 606. Layer 4 information 606 may differentiate between the packet formats depending on the transport protocol. For example, as shown, packet format 600d uses TCP (as identified by the TCP header 606) as the transport protocol, while packet formats 600a-600c use UDP-based protocols. As discussed herein, unreliable protocols such as UDP may be used, but reliability mechanisms may be added to support PCIe and/or CXL. Accordingly, particular transport protocols 608 may be identified in the packet format.
For example, if using the RoCE protocol, as identified by packet format 600a, a base transport header (BTH) 608a may be identified in the packet format. If using the SRD protocol, as identified by packet format 600b, the SRD protocol 608b may be identified in the packet format. If using the NDP protocol, as identified by packet format 600c, the NDP protocol 608c may be identified in the packet format. The packet formats 600a-600d may also include a shim header 610 that contains sequence numbers and PCIe and/or CXL address information to correctly order the frames at the egress. The message may be contained in the CXL and/or PCIe TLP packet 612. Referring to FIG. 7, depicted is a block diagram of an example system 700 connecting devices across an IP network using CXL. As shown, CPU 720 interfaces with upstream port 722 using CXL and PCIe. The message 721 transmitted (or received) contains CXL data. An Ethernet IP transport shim layer 726 may apply a shim header before the traffic is transmitted within the same network or to a different network over Ethernet IP 718. When the traffic is transmitted through Ethernet IP 718, the message may be encapsulated in packet format 729. Ethernet IP 718 may be any IP network (e.g., IPv4 or IPv6). The traffic may be directed to I/O devices 732a and/or 732b (collectively referred to as I/O devices 732). When the traffic is received by I/O devices 732, the traffic has the packet format 729. The traffic is received by I/O devices 732 via downstream ports 730a and/or 730b (collectively referred to as downstream ports 730) using CXL and PCIe. Decapsulation occurs such that the traffic 729 is unwrapped and the CXL message 721 is received by the destination I/O device 732 (e.g., I/O device 732a or 732b, respectively). Referring to FIG. 8, depicted is a block diagram of an example system 800 connecting devices across an IP network using PCIe. As shown, CPU 820 interfaces with upstream port 822 using PCIe. The message 821 transmitted (or received) contains PCIe data.
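The layered frame layout of FIG. 6 (L2 | L3 | L4 | shim | TLP) can be mocked up with fixed-size placeholder headers. The shim field sizes below are assumptions, not taken from the disclosure; the point is only that a sequence number and the PCIe/CXL addressing ride between the transport header and the TLP payload so the egress can restore ordering.

```python
import struct

# Assumed shim layout: 32-bit sequence number, 16-bit Requestor ID, 8-bit flags.
SHIM_FMT = "!IHB"

def build_frame(eth_hdr: bytes, ip_hdr: bytes, l4_hdr: bytes,
                seq: int, requestor_id: int, tlp: bytes) -> bytes:
    """Assemble a frame in the layer order shown in the figure."""
    shim = struct.pack(SHIM_FMT, seq, requestor_id, 0)
    return eth_hdr + ip_hdr + l4_hdr + shim + tlp

def parse_shim(frame: bytes, offset: int) -> tuple:
    """Recover (sequence number, Requestor ID) at the shim's known offset."""
    seq, rid, _flags = struct.unpack_from(SHIM_FMT, frame, offset)
    return seq, rid

# Placeholder 14-byte Ethernet, 20-byte IPv4, and 8-byte UDP headers.
frame = build_frame(b"E" * 14, b"I" * 20, b"U" * 8,
                    seq=7, requestor_id=0x311, tlp=b"\x00" * 16)
print(parse_shim(frame, 14 + 20 + 8))  # -> (7, 785)  (0x311 == 785)
```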
An Ethernet IP transport shim layer 826 may apply a shim header before the traffic is transmitted within the same network or to a different network over Ethernet IP 818. When the traffic is transmitted through Ethernet IP 818, the message may be encapsulated in packet format 829. Ethernet IP 818 may be any IP network (e.g., IPv4 or IPv6). The traffic may be directed to I/O devices 832a and/or 832b (collectively referred to as I/O devices 832). When the traffic is received by I/O devices 832, the traffic has the packet format 829. The traffic is received by I/O devices 832 via downstream ports 830a and/or 830b (collectively referred to as downstream ports 830) using PCIe. Decapsulation occurs such that the traffic 829 is unwrapped and the PCIe message 821 is received by the destination I/O device 832 (e.g., I/O device 832a or 832b, respectively). Although primarily discussed in terms of PCIe and CXL, as discussed above, any intra-device protocol may be utilized with the systems and methods discussed herein. Similarly, although primarily discussed in terms of network protocols such as IP and TCP, any inter-device protocol or combination of inter-device protocols may be utilized with these systems and methods. It should be noted that certain passages of this disclosure can reference terms such as "first" and "second" in connection with subsets of transmit spatial streams, sounding frames, responses, and devices, for purposes of identifying or differentiating one from another or from others. These terms are not intended to merely relate entities (e.g., a first device and a second device) temporally or according to a sequence, although in some cases, these entities can include such a relationship. Nor do these terms limit the number of possible entities that can operate within a system or environment.
It should be understood that the systems described above can provide multiple ones of any or each of those components and these components can be provided on either a standalone machine or, in some embodiments, on multiple machines in a distributed system. In addition, the systems and methods described above can be provided as one or more computer-readable programs or executable instructions embodied on or in one or more articles of manufacture, e.g., a floppy disk, a hard disk, a CD-ROM, a flash memory card, a PROM, a RAM, a ROM, or a magnetic tape. The programs can be implemented in any programming language, such as LISP, PERL, C, C++, C#, or in any byte code language such as JAVA. The software programs or executable instructions can be stored on or in one or more articles of manufacture as object code. While the foregoing written description of the methods and systems enables one of ordinary skill to make and use embodiments thereof, those of ordinary skill will understand and appreciate the existence of variations, combinations, and equivalents of the specific embodiment, method, and examples herein. The present methods and systems should therefore not be limited by the above described embodiments, methods, and examples, but by all embodiments and methods within the scope and spirit of the disclosure.
11863348

To facilitate understanding, identical reference numerals have been used, where possible, to designate identical elements that are common to the figures. It is contemplated that elements disclosed in one embodiment may be beneficially used in other embodiments without specific recitation.

DESCRIPTION OF EXAMPLE EMBODIMENTS

Overview

According to an embodiment, a method includes receiving, at a home controller of a home domain and from a first device in the home domain, a first message concerning a user device that is anchored to the home domain and that has roamed from the home domain to a visitor domain. The method also includes, in response to determining that the first device is a router, opening a tunnel between the home controller and a visitor controller of the visitor domain and communicating the first message to the user device through the tunnel. The method further includes receiving, at the home controller and from a second device in the home domain, a second message concerning the user device and, in response to determining that the second device is not a router, generating a response to the second message and communicating, to the second device, the response to the second message as a proxy for the user device. Other embodiments include an apparatus that performs this method. According to another embodiment, a method includes, in a first device roaming in a visited domain, receiving a message from a second device in a home domain and communicating, to the second device, a response to the message. The message and the response are communicated through a tunnel between a controller in the visited domain and a controller in the home domain. Other embodiments include an apparatus that performs this method.
Example Embodiments

This disclosure describes a message handling scheme that allows devices that have roamed to a different level 3 domain (e.g., a visited domain) to discover other devices and services in that domain while remaining anchored to an original level 3 domain (e.g., a home domain). Generally, a controller in the home domain (e.g., a home controller) generates proxy responses to defend the link local address of the device so that the device's link local address is not overwritten in the home domain. The device also defends its link local address in the visited domain so that the device retains its link local address in the visited domain. The controllers in the home domain and the visited domain retain messages with link local addresses in their respective domains except when the messages originated from or are intended for certain networking services (e.g., routers and dynamic host configuration protocol (DHCP) relays). For example, if the home controller receives a message for the device from a router in the home domain when the device is in the visited domain, the controller opens a tunnel to the visited domain and communicates the message to the device through the tunnel. The device can then respond to that message through the tunnel. As another example, if a printer in the visited domain communicates a message with the link local address of the device, the controller in the visited domain (e.g., a visited controller) retains that message in the visited domain. In this manner, the device can discover services and devices in the visited domain but continue to interact with networking services (e.g., routers and DHCP) in the home domain. FIG. 1 illustrates an example system 100. As seen in FIG. 1, the system 100 includes one or more devices 104, one or more devices 106, a home controller 108, a visited controller 110, and a router 112.
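The dispatch rule above — tunnel router traffic to the roamed device, but answer everything else locally as a proxy — can be sketched as follows. All class, field, and callback names here are assumptions for illustration; the disclosure does not prescribe an implementation.

```python
from types import SimpleNamespace

class HomeController:
    """Minimal sketch of the home controller's message handling."""
    def __init__(self, open_tunnel, proxy_reply):
        self.open_tunnel = open_tunnel   # domain -> callable that sends a message
        self.proxy_reply = proxy_reply   # builds a response on the device's behalf
        self.tunnels = {}                # visited domain -> tunnel sender

    def handle(self, sender, message, device):
        if sender.is_router:
            # Networking services must still reach the roamed device itself,
            # so forward through a tunnel to the visited domain's controller.
            tun = self.tunnels.setdefault(
                device.visited_domain, self.open_tunnel(device.visited_domain))
            tun(message)
            return "tunneled"
        # Otherwise, defend the device's link local address in the home
        # domain by answering as a proxy.
        return self.proxy_reply(device, message)

# Demonstration with stand-in domain objects.
tunneled = []
hc = HomeController(
    open_tunnel=lambda dom: (lambda msg: tunneled.append((dom, msg))),
    proxy_reply=lambda dev, msg: f"proxy-ack:{msg}")

dev = SimpleNamespace(visited_domain="floor-2")
router = SimpleNamespace(is_router=True)
printer = SimpleNamespace(is_router=False)

print(hc.handle(router, "router-advert", dev))    # -> tunneled
print(hc.handle(printer, "neighbor-query", dev))  # -> proxy-ack:neighbor-query
```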
Generally, the system 100 is divided into different level 3 domains that are serviced by the home controller 108 and the visited controller 110. The home controller 108 services a home domain and the visited controller 110 services a visited domain. The router 112 services the home controller 108 and the visited controller 110 and forms a level 2 boundary that divides the two domains. Each of the devices 104 and 106 may connect to the home controller 108 or the visited controller 110 to receive access to a network. In particular embodiments, the home controller 108 and the visited controller 110 allow a device 104 to discover devices and services in one domain while remaining anchored to the other domain. Each of the domains in the system 100 may cover a different location. For example, the system 100 may be implemented in a large office building. The home controller 108 and the visited controller 110 may be positioned on different floors of the office building and provide service to those floors. As a result, the home domain and the visited domain may cover different floors of the office building. As another example, the system 100 may be implemented in a large auditorium or conference space. The home controller 108 and the visited controller 110 may be positioned in different areas of the space. The home domain and the visited domain may then cover different areas of the space. A user 102 may use the device 104 to connect to the home controller 108 and/or the visited controller 110 to gain access to a network. In the example of FIG. 1, the system 100 includes a device 104A, a device 104B, and a device 104C. The device 104A and the device 104B are connected to the home controller 108. The device 104C is connected to the visited controller 110. The devices 104A and 104B may be positioned in an area that is serviced by the home controller 108. Thus, the devices 104A and 104B are in the home domain. The device 104C may be in an area that is serviced by the visited controller 110. Thus, the device 104C is in the visited domain.
Each of the devices104may be anchored to the controller108or110or domain to which the device104first connects. If the device104then moves or roams to another domain, the device104remains anchored to the domain to which the device104first connected. Each of the devices104includes a processor114and a memory116, which are configured to perform any of the functions or actions of the device104described herein. The device104A includes a processor114A and a memory116A. The device104B includes a processor114B and a memory116B. The device104C includes a processor114C and a memory116C. The device104is any suitable device for communicating with components of the system100. As an example and not by way of limitation, the device104may be a computer, a laptop, a wireless or cellular telephone, an electronic notebook, a personal digital assistant, a tablet, or any other device capable of receiving, processing, storing, or communicating information with other components of the system100. The device104may be a wearable device such as a virtual reality or augmented reality headset, a smart watch, or smart glasses. The device104may also include a user interface, such as a display, a microphone, keypad, or other appropriate terminal equipment usable by the user102. The device104may include a hardware processor, memory, or circuitry configured to perform any of the functions or actions of the device104described herein. For example, a software application designed using software code may be stored in the memory and executed by the processor to perform the functions of the device104. The processor114is any electronic circuitry, including, but not limited to one or a combination of microprocessors, microcontrollers, application specific integrated circuits (ASIC), application specific instruction set processor (ASIP), and/or state machines, that communicatively couples to memory116and controls the operation of the device104. 
The processor114may be 8-bit, 16-bit, 32-bit, 64-bit or of any other suitable architecture. The processor114may include an arithmetic logic unit (ALU) for performing arithmetic and logic operations, processor registers that supply operands to the ALU and store the results of ALU operations, and a control unit that fetches instructions from memory and executes them by directing the coordinated operations of the ALU, registers and other components. The processor114may include other hardware that operates software to control and process information. The processor114executes software stored on the memory116to perform any of the functions described herein. The processor114controls the operation and administration of the device104by processing information (e.g., information received from the devices106, controllers108or110, and memory116). The processor114is not limited to a single processing device and may encompass multiple processing devices. The memory116may store, either permanently or temporarily, data, operational software, or other information for the processor114. The memory116may include any one or a combination of volatile or non-volatile local or remote devices suitable for storing information. For example, the memory116may include random access memory (RAM), read only memory (ROM), magnetic storage devices, optical storage devices, or any other suitable information storage device or a combination of these devices. The software represents any suitable set of instructions, logic, or code embodied in a computer-readable storage medium. For example, the software may be embodied in the memory116, a disk, a CD, or a flash drive. In particular embodiments, the software may include an application executable by the processor114to perform one or more of the functions described herein. The devices106are any suitable device that can connect to the home controller108or the visited controller110. 
For example, the devices106may be printers, routers, servers, relays, gateways, scanners, or any other network device. The devices106connect to the home controller108and/or the visited controller110to provide services to other devices104in the system100. In the example ofFIG.1, the device106A is connected to the home controller108, and the device106B is connected to the visited controller110. Thus, the device106A is in the home domain and the device106B is in the visited domain. The devices104A and104B may transmit messages to and/or receive messages from the device106A to receive services from the device106A. The device104C may transmit messages to or receive messages from the device106B to receive services from the device106B. For example, if the device106is a printer, then the devices104may communicate messages to and from the device106to print a document using the device106. As another example, if the device106is a router, then the devices104may respond to requests from the device106to prevent the device106from assigning an address of the device104to another device. The home controller108is a wireless controller (e.g., WLAN controller) that services the home domain. The home controller108may service any suitable number of devices104or106in the home domain. As seen inFIG.1, the home controller108includes a processor118A and a memory120A, which are configured to perform any of the functions or actions of the home controller108described herein. The visited controller110is a wireless controller that services the visited domain. Any suitable number of devices104or106may connect to the visited controller110. As seen inFIG.1, the visited controller110includes a processor118B and a memory120B, which are configured to perform any of the functions or actions of the visited controller110described herein. 
The processor118is any electronic circuitry, including, but not limited to one or a combination of microprocessors, microcontrollers, application specific integrated circuits (ASIC), application specific instruction set processor (ASIP), and/or state machines, that communicatively couples to memory120and controls the operation of the controller108or110. The processor118may be 8-bit, 16-bit, 32-bit, 64-bit or of any other suitable architecture. The processor118may include an arithmetic logic unit (ALU) for performing arithmetic and logic operations, processor registers that supply operands to the ALU and store the results of ALU operations, and a control unit that fetches instructions from memory and executes them by directing the coordinated operations of the ALU, registers and other components. The processor118may include other hardware that operates software to control and process information. The processor118executes software stored on the memory120to perform any of the functions described herein. The processor118controls the operation and administration of the controller108or110by processing information (e.g., information received from the devices104, devices106, and memory120). The processor118is not limited to a single processing device and may encompass multiple processing devices. The memory120may store, either permanently or temporarily, data, operational software, or other information for the processor118. The memory120may include any one or a combination of volatile or non-volatile local or remote devices suitable for storing information. For example, the memory120may include random access memory (RAM), read only memory (ROM), magnetic storage devices, optical storage devices, or any other suitable information storage device or a combination of these devices. The software represents any suitable set of instructions, logic, or code embodied in a computer-readable storage medium. 
For example, the software may be embodied in the memory120, a disk, a CD, or a flash drive. In particular embodiments, the software may include an application executable by the processor118to perform one or more of the functions described herein. The router112services the home controller108and the visited controller110and allows the home controller108and the visited controller110to communicate with each other through the router112(e.g., by opening a communication tunnel). The router112represents the level 2 boundary between the home domain and the visited domain. The router112may maintain certain state information about the home domain, the home controller108, the visited domain, and the visited controller110. The router112may not be the same as the devices106that are assigned to the home domain or the visited domain, even though the devices106may themselves be routers. In the example ofFIG.1, the user102A using the device104A moves from an area serviced by the home controller108to an area serviced by the visited controller110. For example, the user102A may have moved to a different floor in an office building or to a different area of a conference space. As a result, the user102A and the device104A move from the home domain to the visited domain. The device104A shifts its connection from the home controller108to the visited controller110. A primary controller (not illustrated) may track the domain and/or controller to which the device104A is connected. Even though the device104A moves from the home domain to the visited domain, the device104A remains anchored to the home domain. For example, the home controller108may maintain a table122that stores addresses of the device104A, such as a link local address of the device104A in the home domain (e.g., a home domain address) and/or a global address of the device104A. The global address of the device104is unique to the device104within the system100. 
The global address may be used to identify or locate the device104regardless of the domain in which the device104is located. As a result, even though the device104A shifts its connection from the home controller108to the visited controller110, the device104A is still considered anchored to the home controller108and the home domain. The home controller108and the visited controller110implement a message handling scheme that protects the link local addresses of devices104that are anchored to the home domain and the visited domain, respectively. In this manner, even if a device104roams to another domain, the device104may maintain its link local address. Additionally, the home controller108and the visited controller110retain messages with link local addresses in their respective domains, except messages that are intended for or are communicated by certain networking services (e.g., routers and DHCP). In this manner, a device104that has roamed to a visited domain can discover other devices106and services in the visited domain while continuing to interact with networking services in the home domain. FIG.2illustrates an example message flow in the system100ofFIG.1. As seen inFIG.2, the device106A and the home controller108are in the home domain, and the visited controller110and the device104A are in the visited domain. As discussed previously, the device104A has roamed to the visited domain, but remains anchored to the home domain. The device106A communicates a message to the home controller108. The message concerns the device104A. For example, the message may be an NS Lookup command that is broadcasted by the device106A. The NS Lookup command may include the home domain address of the device104A. The device106A may have broadcasted the NS Lookup command to determine whether any devices in the home domain are using the home domain address of the device104A.
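The anchoring lookup that the home controller108performs against the table122can be illustrated with a minimal sketch. This is illustrative Python; the class names, field names, and example addresses are assumptions for illustration and do not appear in this disclosure:

```python
# Minimal sketch of the home controller's address table (table 122).
# Class, field, and address values are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class AnchorEntry:
    device_id: str       # e.g., "104A"
    link_local: str      # home domain (link local) address
    global_addr: str     # globally unique address within the system
    current_domain: str  # "home" or "visited", tracked via a primary controller

class AnchorTable:
    """Indexes anchored devices by both link local and global address."""
    def __init__(self):
        self._by_link_local = {}
        self._by_global = {}

    def add(self, entry: AnchorEntry) -> None:
        self._by_link_local[entry.link_local] = entry
        self._by_global[entry.global_addr] = entry

    def lookup_link_local(self, addr: str):
        return self._by_link_local.get(addr)

    def lookup_global(self, addr: str):
        return self._by_global.get(addr)

table = AnchorTable()
table.add(AnchorEntry("104A", "fe80::1", "2001:db8::1", "visited"))
entry = table.lookup_link_local("fe80::1")  # anchored at home, currently roamed
```

A real controller would populate such a table when a device first attaches and would consult the primary controller for the device's current domain.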
If the device104A is using the home domain address, then the device106A expects a response from the device104A to the NS Lookup command. As another example, the message may be a Neighbor Unreachability Detection (NS NUD) command that is communicated by the device106A to determine whether the device104A is still reachable within the home domain. The NS NUD may include the home domain address of the device104A. If the device104A is still reachable within the home domain, the device106A expects a response from the device104A to the NS NUD. As another example, the message may be a Duplicate Address Detection (NS DAD) command communicated by the device106A. The NS DAD may include the home domain address of the device104A. The device106A communicates the NS DAD to determine whether the home domain address may be assigned to another device. If the home domain address is currently being used, the device106A expects a response from the device104A indicating that the home domain address is being used and should not be assigned to another device. Because the device104A has roamed to the visited domain, the device104A may not be able to respond to the message from the device106A, which may result in the home domain address of the device104A being assigned to another device. The home controller108receives the message from the device106A and determines that the message concerns the device104A. For example, the home controller108may use the home domain address in the message to reference the table122. The home controller108may determine from the table122that the home domain address is assigned to the device104A, which is anchored to the home domain. In response to determining that the device104A uses the home domain address and is anchored to the home domain, the home controller108generates a proxy response and communicates the proxy response to the device106A in place of the device104A.
When the device106A receives the proxy response, the device106A considers the proxy response as a response from the device104A to the message. For example, if the message is an NS Lookup command, the proxy response may be an NA (non-override) message, and the device106A considers the proxy response as a response from the device104A indicating that the device104A is using the home domain address in the NS Lookup command. As another example, if the message is an NS NUD, the proxy response may be an NA (non-override) message, and the device106A considers the proxy response as a response from the device104A indicating that the device104A is still reachable within the home domain. As another example, if the message is an NS DAD, the proxy response may be an NA (non-override) message, and the device106A considers the proxy response as a response from the device104A indicating that the home domain address should not be assigned to another device. In this manner, the home controller108preserves the home domain address of the device104A within the home domain. The device104A defends its own home domain address in the visited domain. For example, if a device106B in the visited domain communicates an NS DAD that includes the home domain address of the device104A, the device104A may receive the NS DAD from the visited controller110. The device104A then communicates an NA (non-override) message to the device106B to indicate that the home domain address should not be assigned to another device in the visited domain. As another example, if the device106B broadcasts an NS Lookup command that includes the home domain address of the device104A, the device104A may communicate an NA (non-override) message to the device106B that indicates that the device104A is using the home domain address.
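The proxy-response behavior described above can be sketched as follows. This is illustrative Python; the message-type strings and the dictionary encoding of an NA (non-override) message are assumptions, not part of this disclosure:

```python
# Sketch of proxy-response generation by the home controller.
# Message-type strings and the NA message encoding are illustrative assumptions.
ADDRESS_QUERIES = {"NS_LOOKUP", "NS_NUD", "NS_DAD"}

def proxy_response(message_type, target_addr, anchored_addrs):
    """Answer with an NA (non-override) on behalf of an anchored, roamed device."""
    if message_type in ADDRESS_QUERIES and target_addr in anchored_addrs:
        return {"type": "NA", "override": False, "target": target_addr}
    return None  # not a query the controller answers by proxy

# The controller answers an NS DAD for an address it knows is anchored at home.
reply = proxy_response("NS_DAD", "fe80::1", {"fe80::1"})
```

The same helper covers all three query types, since in each case the controller's answer is the same NA (non-override) message defending the home domain address.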
As another example, if the device106B communicates an NS NUD that includes the home domain address of the device104A, the device104A communicates an NA (non-override) message that indicates that the device104A is reachable in the visited domain. In this manner, the device104A defends its own home domain address within the visited domain. In certain embodiments, the device104A communicates a message to discover other (non-networking) devices or services in the visited domain, such as printers. The message may include the device104A's home domain address (e.g., the link local address of the device104A in the home domain). The visited controller110receives the message and retains the message in the visited domain by communicating the message to a device106B in the visited domain. The device106B may be a printer. The device106B responds to the message, and the visited controller110communicates the response to the device104A. In this manner, the device104A discovers other devices or services using link local messages in the visited domain. The home controller108and the visited controller110handle messages from certain predetermined network services (e.g., routers and DHCP relays) differently.FIG.3illustrates an example message flow in the system100ofFIG.1. In the example ofFIG.3, the device106A is a router or a relay. The device106A communicates a message to the home controller108. For example, the message may be an NS Lookup command that includes the home domain address of the device104A. As another example, the message may be an NS NUD or NS DAD that includes the home domain address of the device104A. The home controller108receives the message from the device106A and determines that the device106A is a router or relay. For example, the message may include information that identifies the device106A as a router or a relay. As another example, the home controller108may reference the table122to determine if the device106A is a router or relay.
In response to determining that the device106A is a router or relay and that the device104A has roamed to the visited domain (e.g., by using information from a primary controller), the home controller108opens a tunnel302between the home controller108and the visited controller110. The home controller108then communicates the message to the visited controller110through the tunnel302. In this manner, the home controller108communicates the message across the level 2 boundary. The visited controller110receives the message and communicates the message to the device104A. The device104A then generates a response and communicates the response to the visited controller110. For example, the device104A may generate an NA (non-override) message and communicate the NA (non-override) message to the visited controller110. The visited controller110then communicates the response back to the home controller108through the tunnel302. The home controller108then communicates the response to the device106A. In this manner, the device104A responds to the message communicated by the device106A (e.g., to indicate to the device106A that the device104A is using the home domain address and/or that the device104A is reachable). FIG.4illustrates an example message flow in the system100ofFIG.1. In the example ofFIG.4, the device106A is a router or relay. The device104A in the visited domain communicates a message intended for the device106A. For example, the message may be a router solicitation (RS) message. As another example, the message may be an NS Lookup command that includes the home domain address of the device106A. The visited controller110receives the message from the device104A and determines that the intended recipient of the message is a router or relay. For example, the message may include information that identifies the device106A as a router or a relay. As another example, the visited controller110may reference the table122to determine if the device106A is a router or relay.
The visited controller110opens a tunnel402between the visited controller110and the home controller108. The visited controller110then communicates the message through the tunnel402to the home controller108. The home controller108then communicates the message to the device106A. The device106A generates a response to the message and communicates the response to the home controller108. For example, if the message was an RS message, the device106A may generate a router advertisement (RA) message and communicate the RA message to the home controller108. As another example, if the message is an NS Lookup command, the device106A generates an NA (non-override) message and communicates the NA (non-override) command to the home controller108. The home controller108communicates the response back to the visited controller110through the tunnel402. The visited controller110then communicates the response to the device104A. In certain embodiments, the device104A communicates a message that is intended for a device106B in the visited domain. The device106B may be a relay or a router. The message may include a global address of the device106B. When the visited controller110receives the message from the device104A, the visited controller110may determine that the message includes a global address and may open the tunnel402to the home controller108. The visited controller110then communicates the message to the home controller108through the tunnel402. The home controller108then determines that the message includes a global address and should be sent to the router112. The home controller108communicates the message to the router112. The router112then determines that the device106B is in the visited domain and communicates the message to the visited controller110. The visited controller110then communicates the message to the device106B in the visited domain. In some embodiments, the home controller108communicates a message with the global address of the device104A through the tunnel402. 
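The tunneling decision in the FIG.3and FIG.4flows (forward router or relay traffic through the tunnel, retain ordinary link local traffic locally) can be sketched as follows. This is illustrative Python; the class, function, and address names are assumptions, not part of this disclosure:

```python
# Sketch of the visited controller's forwarding decision (FIG. 4 direction).
# All class, function, and address names are illustrative assumptions.
class Tunnel:
    """Records messages carried between the two controllers."""
    def __init__(self):
        self.log = []

    def send(self, direction, message):
        self.log.append((direction, message))

def forward_from_visited(message, is_router_or_relay, tunnel):
    """Tunnel router/relay traffic to the home domain; keep the rest local."""
    if is_router_or_relay(message["dst"]):
        tunnel.send("visited->home", message)
        return "tunneled"
    return "retained"  # ordinary link local traffic stays in the visited domain

routers = {"fe80::r1"}
tunnel = Tunnel()
# An RS message aimed at a router in the home domain is carried over the tunnel.
status = forward_from_visited({"dst": "fe80::r1", "type": "RS"},
                              lambda d: d in routers, tunnel)
```

The home controller's forwarding in the FIG.3direction is symmetric: the same check on the sender, with the tunnel carrying the message toward the visited domain instead.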
For example, if the home controller108receives a message that includes the global address of the device104A, the home controller108may determine from the table122that the global address belongs to the device104A, which is in the visited domain. The home controller108opens the tunnel402to the visited controller110and communicates the message through the tunnel402to the visited controller110. The visited controller110then communicates the message to the device104A. FIG.5illustrates an example message flow in the system100ofFIG.1. In the example ofFIG.5, the device106A is in the home domain and communicates a message with the home domain address of the device104A. The device104A is in the visited domain. The home controller108receives the message and determines that the message does not concern the overriding or assignment of the home domain address of the device104A. For example, the home controller108may determine that the message is not an NS Lookup command, an NS NUD, or an NS DAD. In response, the home controller108ignores the message. The device106A may wait for the response to the message until a timeout is reached. FIG.6illustrates an example message flow in the system100ofFIG.1. As seen inFIG.6, the device104C that was in the visited domain has roamed to the home domain and is connected to the home controller108. The device106A is a router or relay. The device106A communicates a message that includes a visited domain address of the device104C (e.g., a link local address of the device104C in the visited domain). The home controller108receives the message and determines that the message includes the visited domain address of the device104C. In response, the home controller108blocks the message from being communicated to the device104C. Alternatively or additionally, if the device104C communicates a message that includes the home domain address of the device106A, the home controller108also blocks that message. 
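The filtering behaviors of FIG.5(ignore messages for a roamed device that do not concern its address) and FIG.6(block messages tied to a visiting device's foreign link local address) can be sketched together. This is illustrative Python; the message encoding and parameter names are assumptions, not part of this disclosure:

```python
# Combined sketch of the FIG. 5 and FIG. 6 filtering at the home controller.
# Message encoding and parameter names are illustrative assumptions.
ND_TYPES = frozenset({"NS_LOOKUP", "NS_NUD", "NS_DAD"})

def handle_at_home(message, foreign_addrs=frozenset()):
    """Classify a message received by the home controller."""
    if message["target"] in foreign_addrs:
        return "blocked"   # FIG. 6: address anchored to a different domain
    if message["type"] not in ND_TYPES:
        return "ignored"   # FIG. 5: the sender waits until a timeout is reached
    return "proxied"       # answered on behalf of the roamed device
```

The hypothetical `foreign_addrs` set here stands in for the controller's knowledge (e.g., from the table122or a primary controller) of link local addresses that belong to devices anchored in another domain.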
In this manner, the home controller108prevents the device104C that is anchored in a different domain from interacting with certain predetermined network services (e.g., routers and DHCP relays) in the home domain when connected to the home domain. FIG.7is a flowchart of an example method700performed in the system100ofFIG.1. The home controller108may perform the method700. In particular embodiments, by performing the method700, the home controller108prevents a home domain address of a device104from being overwritten, even after the device104has roamed to a visited domain. In block702, the home controller108receives, from a device106, a message for the device104that has roamed to another domain, such as the visited domain. The message may concern the home domain address of the device104. For example, the message may be an NS Lookup command, an NS NUD or an NS DAD that includes the home domain address of the device104. If a response is not provided to the message, the home domain address of the device104may be overwritten or assigned to another device. In block704, the home controller108determines whether the device106is a router. In some embodiments, the home controller108determines whether the device106is a relay. If the message is from a router, the home controller108proceeds to the block706to open a tunnel to the visited domain. The home controller108then communicates the message through the tunnel in block708. When the message reaches the visited domain, a visited controller110in that domain communicates the message to the device104. The device104may then generate a response and communicate the response to the home controller108through the tunnel. The home controller108then communicates the response to the router. If the device106is not a router, the home controller108proceeds to block710to generate a proxy response. The proxy response is a response that the home controller108generates in place of the device104. The response may be an NA (non-override) message. 
The home controller108communicates the proxy response to the device106in block712. By the home controller108communicating the proxy response in place of the device104, the home controller108indicates to the device106that the home domain address of the device104is in use and/or that the device104is reachable within the home domain. As a result, the home controller108defends the home domain address of the device104even if the device104has roamed to another domain. FIG.8is a flowchart of an example method800performed in the system100ofFIG.1. A device104performs the method800. In particular embodiments, by performing the method800, the device104communicates with a router in a home domain even after the device has roamed to a visited domain. The method800may be performed after the blocks706and708inFIG.7in which the home controller108opens a communication tunnel to the visited domain and communicates a first message from a device106(e.g., a router or relay) through the tunnel to a device104in the visited domain. A visited controller110in the visited domain receives the first message through the tunnel and communicates the first message to the device104. In block802, the device104in the visited domain receives the first message from the visited controller110. In block804, the device104communicates a response to the first message to the device106in the home domain. The device104communicates the response to the visited controller110. The visited controller110then communicates the response to the home controller108through the tunnel. The home controller108then communicates the response to the device106. In block806, the device104communicates a second message to the device106in the home domain. The device104communicates the second message to the visited controller110. The visited controller110then communicates the second message to the home controller108through the tunnel. The home controller108then communicates the second message to the device106.
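The decision flow of the method700can be sketched as follows. This is illustrative Python; the function and parameter names are assumptions, not from the disclosure:

```python
# Sketch of the method 700 decision flow (FIG. 7). Function and parameter
# names are illustrative assumptions, not from the disclosure.
def handle_message_700(message, sender_is_router, tunnel, proxy_out):
    """Home controller handling of a message aimed at a roamed device."""
    if sender_is_router(message["src"]):   # block 704
        tunnel.append(message)             # blocks 706-708: open tunnel, forward
        return "tunneled"
    # Blocks 710-712: generate an NA (non-override) proxy response instead.
    proxy_out.append({"type": "NA", "override": False, "to": message["src"]})
    return "proxied"

tunnel, proxies = [], []
# A router's query is forwarded through the tunnel to the visited domain.
result = handle_message_700({"src": "router-112", "target": "fe80::1"},
                            lambda s: s.startswith("router"), tunnel, proxies)
```

In this sketch the tunnel is modeled as a simple list; in the system100it would be a communication channel opened through the router112toward the visited controller110.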
The device106receives the second message and generates a response to the second message. The device106then communicates the response to the second message to the home controller108. The home controller108then communicates the response to the second message through the tunnel to the visited controller110. The visited controller110then communicates the response to the second message to the device104. In block808, the device104receives the response to the second message from the visited controller110. In this manner, the device104communicates with a device106(e.g., a router or relay) in the home domain even though the device104is in a visited domain. In the current disclosure, reference is made to various embodiments. However, the scope of the present disclosure is not limited to specific described embodiments. Instead, any combination of the described features and elements, whether related to different embodiments or not, is contemplated to implement and practice contemplated embodiments. Additionally, when elements of the embodiments are described in the form of “at least one of A and B,” it will be understood that embodiments including element A exclusively, including element B exclusively, and including element A and B are each contemplated. Furthermore, although some embodiments disclosed herein may achieve advantages over other possible solutions or over the prior art, whether or not a particular advantage is achieved by a given embodiment is not limiting of the scope of the present disclosure. Thus, the aspects, features, embodiments and advantages disclosed herein are merely illustrative and are not considered elements or limitations of the appended claims except where explicitly recited in a claim(s). Likewise, reference to “the invention” shall not be construed as a generalization of any inventive subject matter disclosed herein and shall not be considered to be an element or limitation of the appended claims except where explicitly recited in a claim(s). 
As will be appreciated by one skilled in the art, the embodiments disclosed herein may be embodied as a system, method or computer program product. Accordingly, embodiments may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” Furthermore, embodiments may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing. Computer program code for carrying out operations for embodiments of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). 
Aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatuses (systems), and computer program products according to embodiments presented in this disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the block(s) of the flowchart illustrations and/or block diagrams. These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other device to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the block(s) of the flowchart illustrations and/or block diagrams. The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process such that the instructions which execute on the computer, other programmable data processing apparatus, or other device provide processes for implementing the functions/acts specified in the block(s) of the flowchart illustrations and/or block diagrams. 
The flowchart illustrations and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments. In this regard, each block in the flowchart illustrations or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the Figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions. In view of the foregoing, the scope of the present disclosure is determined by the claims that follow.
11863349

Throughout the description, similar reference numbers may be used to identify similar elements.

DETAILED DESCRIPTION

It will be readily understood that the components of the embodiments as generally described herein and illustrated in the appended figures could be arranged and designed in a wide variety of different configurations. Thus, the following more detailed description of various embodiments, as represented in the figures, is not intended to limit the scope of the present disclosure, but is merely representative of various embodiments. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated. The present invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described embodiments are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is, therefore, indicated by the appended claims rather than by this detailed description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope. Reference throughout this specification to features, advantages, or similar language does not imply that all of the features and advantages that may be realized with the present invention should be or are in any single embodiment of the invention. Rather, language referring to the features and advantages is understood to mean that a specific feature, advantage, or characteristic described in connection with an embodiment is included in at least one embodiment of the present invention. Thus, discussions of the features and advantages, and similar language, throughout this specification may, but do not necessarily, refer to the same embodiment. Furthermore, the described features, advantages, and characteristics of the invention may be combined in any suitable manner in one or more embodiments.
One skilled in the relevant art will recognize, in light of the description herein, that the invention can be practiced without one or more of the specific features or advantages of a particular embodiment. In other instances, additional features and advantages may be recognized in certain embodiments that may not be present in all embodiments of the invention. Reference throughout this specification to “one embodiment”, “an embodiment”, or similar language means that a particular feature, structure, or characteristic described in connection with the indicated embodiment is included in at least one embodiment of the present invention. Thus, the phrases “in one embodiment”, “in an embodiment”, and similar language throughout this specification may, but do not necessarily, all refer to the same embodiment. FIG. 1 depicts a communications system 100 in accordance with an embodiment of the invention. In the embodiment depicted in FIG. 1, the communications system includes a cloud server 102 and a deployed network 150 within a customer site 114. The cloud server and/or the network may be implemented in hardware (e.g., circuits), software, firmware, or a combination thereof. Although the illustrated communications system 100 is shown with certain components and described with certain functionality herein, other embodiments of the communications system may include fewer or more components to implement the same, less, or more functionality. For example, in some embodiments, the communications system includes more than one cloud server, more than one deployed network, and/or more than one customer site. In another example, although the cloud server and the deployed network are shown in FIG. 1 as being connected in a certain topology, the network topology of the communications system 100 is not limited to the topology shown in FIG. 1. The cloud server 102 can be used to provide at least one service to a customer site (e.g., to the deployed network 150 located at the customer site 114).
The cloud server may be configured to facilitate or perform a network management service (e.g., a network segmentation service) for network devices (e.g., the deployed network 150) at the customer site. In some embodiments, the cloud server is configured to divide the deployed network 150 into multiple segments or subnets, e.g., to improve network performance and/or enhance network security. Each segment or subnet of the deployed network may act as its own small network, which allows the flow of traffic between subnets to be controlled based on one or more network segmentation policies or rules. Because the cloud server can facilitate or perform a network segmentation service or operation for network devices at the customer site, network segmentation efficiency can be improved. In some embodiments, the cloud server is configured to generate a user interface to obtain input information, for example, a floor plan of a customer site. In some embodiments, the user interface includes a graphical user interface. The cloud server may be implemented in hardware (e.g., circuits), software, firmware, or a combination thereof. In some embodiments, the cloud server is hosted or executed in a public cloud computing environment such as Amazon Web Services (AWS), and/or a private cloud computing environment such as an enterprise cloud server. In some embodiments, the cloud server is implemented on a server grade hardware platform, such as an x86 architecture platform. For example, the hardware platform of the cloud server may include conventional components of a computing device, such as one or more processors (e.g., central processing units (CPUs)), system memory, a network interface, storage system, and other Input/Output (I/O) devices such as, for example, a mouse and a keyboard (not shown).
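As a minimal sketch of the policy-controlled, default-deny traffic flow between segments described above (the segment names and the policy table below are invented for illustration, not taken from the disclosure):

```python
from typing import Dict, Tuple

# Each (source segment, destination segment) pair maps to an allow/deny rule.
POLICY: Dict[Tuple[str, str], bool] = {
    ("cameras", "nvr-storage"): True,       # cameras may send video to storage
    ("guest-wifi", "corp-servers"): False,  # guests are isolated from servers
}

def traffic_allowed(src_segment: str, dst_segment: str) -> bool:
    """Return True if traffic may flow between the two segments.

    Traffic within a segment is always allowed; cross-segment traffic is
    denied unless a policy rule explicitly permits it (default deny).
    """
    if src_segment == dst_segment:
        return True
    return POLICY.get((src_segment, dst_segment), False)
```

Under this sketch, each subnet behaves as its own small network, and only explicitly permitted cross-segment flows pass.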
In some embodiments, the processor is configured to execute instructions such as, for example, executable instructions that may be used to perform one or more operations described herein and may be stored in the memory and the storage system. In some embodiments, the memory is volatile memory used for retrieving programs and processing data. The memory may include, for example, one or more random access memory (RAM) modules. In some embodiments, the network interface is configured to enable the cloud server to communicate with another device via a communication medium. The network interface may be one or more network adapters, also referred to as a Network Interface Card (NIC). In some embodiments, the cloud server includes local storage devices (e.g., one or more hard disks, flash memory modules, solid state disks and optical disks) and/or a storage interface that enables the host to communicate with one or more network data storage systems, which are used to store information, such as executable instructions, cryptographic keys, virtual disks, configurations, and other data. In the embodiment depicted in FIG. 1, the cloud server 102 includes a network management module (NMM) 110, a customer information portal 108 connected to the NMM module 110, and an NMM database 112 configured to store NMM data. The NMM module, the customer information portal, and/or the NMM database may be implemented in hardware (e.g., circuits), software, firmware, or a combination thereof. Although the illustrated cloud server is shown with certain components and described with certain functionality herein, other embodiments of the cloud server may include fewer or more components to implement the same, less, or more functionality. For example, in some embodiments, the cloud server includes more than one NMM module, more than one customer information portal, and/or more than one NMM database.
In another example, although the NMM module, the customer information portal, and the NMM database are shown in FIG. 1 as being connected in a certain topology, the network topology of the cloud server is not limited to the topology shown in FIG. 1. In addition, although the customer information portal 108 is shown in FIG. 1 as being a component of the cloud server 102, in other embodiments, the customer information portal may be implemented outside of the cloud server. In some embodiments, the NMM module 110 is configured to facilitate or perform an NMM service (e.g., a network segmentation service) for network devices (e.g., the deployed network 150) at the customer site 114, for example, using an NMM rule set 130. The NMM rule set 130 may include one or more NMM rules (e.g., network segmentation rules) for network devices at the customer site 114, for example, for performing an NMM service (e.g., network segmentation) for network devices at the customer site 114. In some embodiments, the NMM module 110 is configured to divide the deployed network 150 into multiple segments or subnets, e.g., to improve network performance and/or enhance network security. Each segment or subnet of the deployed network may act as its own small network, which allows the flow of traffic between subnets to be controlled based on one or more network segmentation policies or rules. In some embodiments, the NMM module 110 is configured to generate and/or transmit at least one alert (e.g., a network segmentation alert or error) regarding a network deployed and/or to be deployed at the customer site or a network operator site, for example, to an administrator or a user or customer (e.g., a layperson such as a worker on-site or an end-user such as an employee) at the customer site 114. In some embodiments, the NMM database 112 is configured to store NMM data for a network deployed and/or to be deployed at the customer site (e.g., a list of network devices deployed or to be deployed at the customer site).
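One possible shape for rule evaluation and alert generation by an NMM module, sketched under the assumption that each rule is a predicate over a device (the `NmmRule`/`NmmModule` names and the alert format are illustrative, not from the disclosure):

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Device:
    name: str
    segment: str

@dataclass
class NmmRule:
    description: str
    check: Callable[[Device], bool]  # True means the device passes the rule

@dataclass
class NmmModule:
    rules: List[NmmRule]
    alerts: List[str] = field(default_factory=list)

    def evaluate(self, devices: List[Device]) -> List[str]:
        """Apply every rule to every device, collecting one alert per failure."""
        for device in devices:
            for rule in self.rules:
                if not rule.check(device):
                    self.alerts.append(
                        f"{device.name}: failed rule '{rule.description}'"
                    )
        return self.alerts
```

For example, a rule requiring every deployed device to have a non-empty segment assignment would flag only the unassigned devices.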
In some embodiments, the NMM database 112 is configured to store the at least one NMM alert. Because the NMM module can facilitate or perform network segmentation for network devices at the customer site, network segmentation efficiency can be improved. In addition, because the NMM deployment module can facilitate or perform a network segmentation service or operation for network devices at the customer site, an administrator or a customer can be notified of network conditions. Consequently, network outage or low performance time can be shortened. The customer information portal 108 is configured to receive user input 128. In some embodiments, the customer information portal is configured to include or generate a user interface that allows a customer to input information related to the customer site 114 (e.g., the floor plan of the customer site 114) and/or information associated with an NMM service (e.g., a network segmentation service) for the customer site 114, such as one or more specific requirements or restrictions. In the communications system 100 depicted in FIG. 1, the customer site 114 may include one or more buildings, and each building may include one or more floors. Network devices that can be deployed at the customer site may include any type of suitable network devices. For example, network devices may be designated to be deployed to a specific building, a specific floor within a building, and/or a specific location on a floor of a building. A network device that can be deployed at the customer site may be fully or partially implemented as an Integrated Circuit (IC) device. In the embodiment depicted in FIG. 1, the network 150 includes one or more network devices 104-1, . . . , 104-N, where N is a positive integer. In some embodiments, at least one of the one or more network devices 104-1, . . .
, 104-N is a wired and/or wireless communications device that includes at least one processor (e.g., a microcontroller, a digital signal processor (DSP), and/or a CPU), at least one wired or wireless communications transceiver implemented in one or more logical circuits and/or one or more analog circuits, at least one wired or wireless communications interface that supports at least one wired or wireless communications protocol, and/or at least one antenna. For example, at least one of the one or more network devices 104-1, . . . , 104-N is compatible with the Institute of Electrical and Electronics Engineers (IEEE) 802.3 protocol and/or one or more wireless local area network (WLAN) communications protocols, such as the IEEE 802.11 protocol. In some embodiments, at least one of the one or more network devices 104-1, . . . , 104-N is a wired communications device that is compatible with at least one wired local area network (LAN) communications protocol, such as a wired router (e.g., an Ethernet router), a wired switch, a wired hub, or a wired bridge device (e.g., an Ethernet bridge). In some embodiments, at least one of the one or more network devices 104-1, . . . , 104-N is a wireless access point (AP) that connects to a local area network (e.g., a LAN) and/or to a backbone network (e.g., the Internet) through a wired connection and that wirelessly connects to wireless stations (STAs), for example, through one or more WLAN communications protocols, such as an IEEE 802.11 protocol. In some embodiments, the network 150 includes at least one distribution switch (DS) or distribution layer switch that functions as a bridge between a core layer switch and an access layer switch, at least one head end (HE) or gateway, at least one access switch (AS) that can directly interact with a lower-level device (e.g., a wireless AP), at least one wireless AP, and/or at least one wireless sensor that wirelessly connects to a wireless AP.
In some embodiments, at least one of the one or more network devices 104-1, . . . , 104-N is a wireless station (STA) that wirelessly connects to a wireless AP. For example, at least one of the one or more network devices 104-1, . . . , 104-N may be a laptop, a desktop personal computer (PC), a mobile phone, or another wireless device that supports at least one WLAN communications protocol (e.g., an IEEE 802.11 protocol). FIG. 2 depicts an embodiment of a network device 204 of the communications system depicted in FIG. 1. The network device 204 may be an embodiment of a network device that is included in the deployed network 150 depicted in FIG. 1. However, network devices that can be included in the deployed network 150 depicted in FIG. 1 are not limited to the embodiment depicted in FIG. 2. The network device 204 may be any suitable type of network device. For example, the network device 204 may be a distribution switch, a gateway, an access switch, a wireless access point, or a sensor, described in more detail with reference to FIG. 3. In some embodiments, the network device 204 is a wired device. In some embodiments, the network device 204 is a wireless device. In some embodiments, the network device 204 is a wired device with wireless capability, for example, a wireless access point. In the embodiment depicted in FIG. 2, a network device 204 includes a wired and/or wireless transceiver 232, a controller 234 operably connected to the transceiver 232, at least one optional antenna 236 operably connected to the transceiver 232, and at least one network port 238 operably connected to the transceiver 232. In some embodiments, the transceiver 232 includes a physical layer (PHY) device. In some embodiments, the at least one network port 238 is optional and is not included. The transceiver 232 may be any suitable type of transceiver.
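The components of FIG. 2 can be mirrored in a small illustrative model (the field names are assumptions; the optional antenna and optional network port follow the description above, where a wired-only device may lack an antenna and a port may be omitted):

```python
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class Transceiver:
    kind: str  # e.g. "wlan" or "bluetooth" (illustrative labels)

@dataclass
class Controller:
    # The controller may store network information such as a routing table.
    routing_table: Dict[str, str] = field(default_factory=dict)

    def learn_route(self, destination: str, next_hop: str) -> None:
        self.routing_table[destination] = next_hop

@dataclass
class NetworkDevice:
    transceiver: Transceiver
    controller: Controller
    antenna: Optional[str] = None       # wireless devices only (optional)
    network_port: Optional[str] = None  # e.g. "ethernet"; optional per FIG. 2
```

A wireless AP, for instance, would carry an antenna and possibly a wired port, while its controller accumulates routing information.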
For example, the transceiver 232 may be a short-range communications transceiver (e.g., a Bluetooth transceiver) or a WLAN transceiver (e.g., a transceiver compatible with an IEEE 802.11 protocol). In some embodiments, the network device 204 includes multiple transceivers, for example, a short-range communications transceiver (e.g., a Bluetooth transceiver) and a WLAN transceiver (e.g., a transceiver compatible with an IEEE 802.11 protocol). In some embodiments, the network device (e.g., a wireless AP) includes multiple antennas and multiple wireless transceivers that share the antennas. In some embodiments, the controller 234 is configured to control the transceiver 232 to process packets received through the antenna 236 and/or the network port 238 and/or to generate outgoing packets to be transmitted through the antenna 236 and/or the network port 238. In some embodiments, the controller 234 is configured to obtain and/or store network information relevant to the network device 204. For example, the controller 234 may be configured to obtain and/or store network information (e.g., routing information such as a routing table) relevant to the network device 204. The antenna 236 may be any suitable type of antenna. For example, the antenna 236 may be an induction type antenna such as a loop antenna or any other suitable type of induction type antenna. However, the antenna 236 is not limited to an induction type antenna. The network port 238 may be any suitable type of port. For example, the network port 238 may be a local area network (LAN) network port such as an Ethernet port. However, the network port 238 is not limited to LAN network ports. In some embodiments, the network device 204 is a DS, an HE or gateway, an AS, a wireless AP, or a wireless sensor that wirelessly connects to a wireless AP. FIG. 3 depicts an embodiment of a network 350 that can be deployed at the customer site 114.
The network 350 depicted in FIG. 3 is one possible embodiment of the deployed network 150 at the customer site 114 depicted in FIG. 1. However, the deployed network 150 at the customer site 114 depicted in FIG. 1 is not limited to the embodiment shown in FIG. 3. In some embodiments, the network 350 is a basic building block for providing connectivity as a service and is a replicable block that can be scaled (e.g., expanded) to meet any deployment need. In the embodiment depicted in FIG. 3, the network 350 includes a pair of distribution switches (DSs) or distribution layer switches 352-1, 352-2 that are aggregation switches functioning as a bridge between core layer switches and access layer switches, a pair of head ends (HEs) or gateways 354-1, 354-2, a number of optional access switches (ASs) 356-1, 356-2, 356-3, 356-4, 356-5, 356-6, 356-7, 356-8 connected in rings 358-1, 358-2 that can interact with lower level devices (e.g., wireless APs), a number of wireless APs 360-1, 360-2, 360-3, 360-4, 360-5, 360-6 connected to the ASs, and a number of wireless sensors 362-1, 362-2, 362-3 that wirelessly connect to the wireless APs and are configured to measure and monitor network information at the customer site 114. In some embodiments, the network 350 does not include access switches and the wireless APs are directly connected to the DS 352-1 and/or the DS 352-2. In some embodiments, at least one of the DSs 352-1, 352-2, the HEs 354-1, 354-2, the ASs 356-1, 356-2, 356-3, 356-4, 356-5, 356-6, 356-7, 356-8, the wireless APs 360-1, 360-2, 360-3, 360-4, 360-5, 360-6, and the wireless sensors 362-1, 362-2, 362-3 depicted in FIG. 3 is implemented as the network device 204 depicted in FIG. 2. In some embodiments, at least one additional network device, such as a laptop, a desktop PC, or a mobile phone, that can be used by at least one user (e.g., an employee, a guest, or a partner), is included in the network 350.
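As a rough illustration only, the layering of the FIG. 3 building block (distribution switches bridging to head ends, access switches in a ring below them, APs on the access layer, and sensors on the APs) can be modeled as a small undirected graph. The node names and the reduced device counts below are invented; the figure shows more switches and APs:

```python
from collections import defaultdict
from typing import Dict, List, Set, Tuple

def build_block() -> Dict[str, Set[str]]:
    """Return an undirected adjacency map for a reduced FIG. 3-style block."""
    edges: List[Tuple[str, str]] = [
        ("ds-1", "he-1"), ("ds-2", "he-2"),                    # DS <-> head end
        ("ds-1", "as-1"), ("as-1", "as-2"), ("as-2", "ds-2"),  # access ring
        ("as-1", "ap-1"), ("as-2", "ap-2"),                    # APs on ASs
        ("ap-1", "sensor-1"),                                  # sensor on an AP
    ]
    graph: Dict[str, Set[str]] = defaultdict(set)
    for a, b in edges:
        graph[a].add(b)
        graph[b].add(a)
    return dict(graph)
```

Because the block is replicable, scaling a deployment amounts to instantiating more such graphs and joining them at the distribution layer.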
FIG. 4 depicts the network 350 depicted in FIG. 3 connected to other network elements, such as an authentication server (e.g., a Remote Authentication Dial-In User Service (RADIUS) server) 440, a Dynamic Host Configuration Protocol (DHCP) server 442, switches 444-1, 444-2, a firewall 446, and a wide area network (WAN) 448. In the embodiment depicted in FIG. 4, the DSs 352-1, 352-2 of the network 350 are connected to the switches 444-1, 444-2, which are connected to the authentication server 440 or the DHCP server 442, the firewall 446, and the WAN 448. The firewall 446 may be connected to a public network, e.g., the Internet. In some embodiments, to perform network segmentation of a network deployed at a customer site, a tunnel is established between a network device of the network deployed at the customer site and a network port of a switch of the network deployed at the customer site, and when a wired device is plugged into the network port of the switch, network traffic is transmitted between the wired device and the network device through the tunnel. A security operation regarding the wired device is performed, for example, through the network device, and based on a result of the security operation, a network segmentation operation regarding the wired device is facilitated, for example, using the network device. Examples of the security operation include, without being limited to, an authentication operation and a verification operation, for example, by checking or matching the wired device with entries in a network segmentation database. In some embodiments, at least one of the security operation and the network segmentation operation is transmitted through the tunnel. The tunnel may include a Generic Routing Encapsulation (GRE) tunnel and/or a Virtual Extensible Local Area Network (VXLAN). In some embodiments, multiple tunnels are established between the network device and network ports of the switch, where the tunnels are separate from each other.
In some embodiments, no tunnel is shared by multiple ports of the switch. Although GRE tunnels and VXLAN tunnels are described as two types of tunnels, other types of tunnels and/or tunneling protocols, including for example Network Virtualization using GRE (NVGRE) and IP Security (IPSec), may be used. In some embodiments, a system for network segmentation of a network deployed at a customer site includes memory and one or more processors configured to establish a tunnel between a network device of the network deployed at the customer site and a network port of a switch of the network deployed at the customer site, when a wired device is plugged into the network port of the switch, transmit network traffic between the wired device and the network device through the tunnel, facilitate a security operation (e.g., an authentication operation or a verification operation, for example, by checking or matching the wired device with entries in a network segmentation database) regarding the wired device, and based on a result of the security operation, perform a network segmentation operation regarding the wired device. In some embodiments, at least one of the security operation and the network segmentation operation is transmitted through the tunnel. In some embodiments, the tunnel includes a GRE tunnel and/or a VXLAN. In some embodiments, the system includes an HE or a gateway device, for example, one or more of the HEs 354-1, 354-2 depicted in FIG. 3. In some embodiments, the one or more processors are configured to establish multiple tunnels between the network device and network ports of the switch, where the tunnels are separate from each other. In some embodiments, no tunnel is shared by multiple ports of the switch.
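One way to sketch the one-tunnel-per-port constraint described above (the `TunnelManager` class and the tunnel-naming scheme are assumptions for illustration, not part of the disclosure):

```python
from typing import Dict

class TunnelManager:
    """Tracks one dedicated tunnel per switch port toward a head end."""

    def __init__(self, head_end: str):
        self.head_end = head_end
        self._tunnels: Dict[int, str] = {}  # switch port -> tunnel identifier

    def establish(self, port: int) -> str:
        """Create (or return) the dedicated tunnel for a switch port.

        Because the map is keyed by port, no two ports ever share a tunnel.
        """
        if port not in self._tunnels:
            self._tunnels[port] = f"gre-{self.head_end}-port{port}"
        return self._tunnels[port]

    def tunnel_for(self, port: int) -> str:
        return self._tunnels[port]
```

Plugging a device into port 4 and another into port 28 would thus yield two separate tunnels to the same head end, mirroring the separate tunnels described above.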
In some embodiments, the wired device supports an authentication protocol or standard, and the one or more processors are configured to, when the authentication server rejects an authentication request of the wired device, not allow the wired device to join the network and receive a network segmentation configuration. In some embodiments, the one or more processors are configured to, when the authentication server does not reject an authentication request of the wired device, allow the wired device to join the network and receive a network segmentation configuration. In some embodiments, the one or more processors are configured to, when the authentication server does not reject an authentication request of the wired device, determine whether or not the authentication server sends a network segment name for the wired device. In some embodiments, the one or more processors are configured to, when it is determined that the authentication server does not send the network segment name for the wired device, not allow the wired device to join the network and receive a network segmentation configuration. In some embodiments, the one or more processors are configured to, when it is determined that the authentication server sends the network segment name for the wired device, determine whether or not the network segment name for the wired device is valid. In some embodiments, the one or more processors are configured to, when it is determined that the network segment name for the wired device is valid, assign the wired device to a network segment of the network deployed at the customer site that corresponds to the network segment name. In some embodiments, the wired device does not support any authentication protocol or standard, and the one or more processors are configured to check the wired device against a network segmentation database.
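The authentication-driven decision sequence above can be condensed into a single illustrative function (the set of valid segment names is a placeholder; the disclosure does not prescribe a specific list or return convention):

```python
from typing import Optional, Set

VALID_SEGMENTS: Set[str] = {"cameras", "phones", "printers"}  # assumed list

def assign_segment(auth_accepted: bool,
                   segment_name: Optional[str]) -> Optional[str]:
    """Return the assigned segment, or None if the device is not admitted."""
    if not auth_accepted:
        return None      # rejected: device may not join or be configured
    if segment_name is None:
        return None      # server sent no segment name: deny
    if segment_name not in VALID_SEGMENTS:
        return None      # invalid segment name: deny
    return segment_name  # valid: assign the device to this segment
```

Each early return corresponds to one of the denial branches described in the paragraph above; only the final return admits the device.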
In some embodiments, the one or more processors are configured to, when it is determined that the wired device matches an entry within the network segmentation database, assign the wired device to a corresponding network segment of the network deployed at the customer site, and when it is determined that the wired device does not match any entry within the network segmentation database, not allow the wired device to join the network and receive a network segmentation configuration. FIG. 5 depicts an interaction of a switch 556 with at least one DS 552, at least one HE 554, a network element (e.g., a router 544), an authentication server (e.g., a RADIUS server) 540, a DHCP server 542, and/or a cloud server 502 to perform network segmentation. The switch 556, the at least one DS 552, and the at least one HE 554 depicted in FIG. 5 may be similar to or the same as the ASs 356-1, 356-2, 356-3, 356-4, 356-5, 356-6, 356-7, 356-8, the DSs 352-1, 352-2, and the HEs 354-1, 354-2 depicted in FIG. 3, respectively. The authentication server 540 and the DHCP server 542 depicted in FIG. 5 may be similar to or the same as the authentication server 440 and the DHCP server 442 depicted in FIG. 4, respectively. The cloud server 502 depicted in FIG. 5 may be similar to or the same as the cloud server 102 depicted in FIG. 1. In some embodiments, at least one of the authentication server 540 and the DHCP server 542 is implemented within the cloud server 502. As depicted in FIG. 5, the switch 556 is connected to network devices 504-1, 504-2, 504-3 through wired connections, for example, at port 4, port 28, and port 44 of the switch 556, respectively. The network devices 504-1, 504-2, 504-3 may be implemented as a camera, a network telephone, and a network printer, respectively. However, the network devices 504-1, 504-2, 504-3 are not limited to the examples above. In addition, the number of network devices that are connected to the switch is not limited to the example shown in FIG. 5.
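For devices that do not support an authentication protocol, the database check described above reduces, in sketch form, to a table lookup (the MAC addresses and segment names below are made up for illustration):

```python
from typing import Dict, Optional

# Stand-in for the network segmentation database, keyed by MAC address.
SEGMENTATION_DB: Dict[str, str] = {
    "aa:bb:cc:00:00:01": "cameras",
    "aa:bb:cc:00:00:02": "printers",
}

def segment_for_unauthenticated(mac: str) -> Optional[str]:
    """Match the wired device against the database; None means deny."""
    return SEGMENTATION_DB.get(mac.lower())
```

A matching entry yields the device's segment; a miss leaves the device unable to join the network or receive a segmentation configuration.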
For example, the number of network devices that are connected to the switch may be less than three or greater than three (but equal to or smaller than the total number of ports (e.g., forty-eight in the example shown in FIG. 5)). Although the switch 556 is shown in FIG. 5 as having forty-eight ports for connecting to downstream devices (e.g., cameras, network telephones, and/or network printers), in other embodiments, the switch 556 may have fewer than forty-eight downstream ports or more than forty-eight downstream ports. As depicted in FIG. 5, the switch 556 is connected to the at least one DS 552 through a wired connection, for example, at small form-factor pluggable (SFP) port SFP-1 of the switch 556. However, the upstream device to which the switch 556 is connected is not limited to the DS 552. In addition, the number of upstream network devices that may be connected to the switch is not limited to the example shown in FIG. 5. Although the switch 556 is shown in FIG. 5 as having four ports for connecting to upstream devices, in other embodiments, the switch 556 may have fewer than four upstream ports or more than four upstream ports. In the embodiment depicted in FIG. 5, network devices (e.g., the network devices 504-1, 504-2, 504-3) can be plugged into any port (e.g., port 1 to port 48) of the switch 556. The switch 556 does not need or have a specific port configuration for different devices. In some embodiments, network traffic from a wired device (e.g., the network device 504-1, 504-2, or 504-3) is tunneled to the at least one HE 554. In some embodiments, at least one of a security operation and a network segmentation operation is conducted through one or more tunnels. In some embodiments, a tunnel (e.g., a Generic Routing Encapsulation (GRE) tunnel and/or a VXLAN) is created from each port of the switch 556. In some embodiments, a specific tunnel (e.g., a GRE tunnel and/or a VXLAN) is created between each port of the switch 556 and the at least one HE 554.
In an embodiment, no tunnel is shared by multiple (two or more) ports of the switch 556. For example, a first tunnel (e.g., a GRE tunnel and/or a VXLAN) 570-1 is established between port 4 of the switch 556 and the at least one HE 554, a second tunnel (e.g., a GRE tunnel and/or a VXLAN) 570-2 is established between port 28 of the switch 556 and the at least one HE 554, and a third tunnel (e.g., a GRE tunnel and/or a VXLAN) 570-3 is established between port 44 of the switch 556 and the at least one HE 554. In some embodiments, at least one of a security operation (e.g., a device authentication operation or a verification operation, for example, by checking or matching the wired device with entries in a network segmentation database) and a network segmentation operation is conducted through the tunnels 570-1, 570-2, 570-3. In some embodiments, device authentication is performed, e.g., by the authentication server (e.g., a RADIUS server) 540 and/or the cloud server 502. In some embodiments, when a wired device (e.g., the network device 504-1, 504-2, or 504-3) supports an authentication protocol or standard (e.g., an IEEE 802.1X protocol or standard), the at least one HE 554 communicates with the authentication server 540 (e.g., a RADIUS server) for device authentication when the wired device is connected to a port (e.g., port 4, port 28, or port 44) of the switch 556. In an embodiment, if device authentication is successful (e.g., the authentication server 540 (e.g., a RADIUS server) determines that the wired device has a corresponding access privilege, e.g., in response to an authentication request from the wired device and/or the at least one HE 554), the authentication server 540 (e.g., a RADIUS server) provides a network segment name parameter, which may be included in a vendor specific attribute, for example, to the at least one HE 554.
In some embodiments, if the segment name matches a previously configured network segment name (e.g., for the wired device or a user of the wired device), the wired device or a user of the wired device is assigned to the corresponding network segment having the previously configured network segment name, for example, by the at least one HE 554. The at least one HE 554 may act as a DHCP relay to relay data packets to the DHCP server 542. In some embodiments, when a wired device (e.g., the network device 504-1, 504-2, or 504-3) does not support an authentication protocol or standard (e.g., an IEEE 802.1X protocol or standard), the at least one HE 554 communicates with the cloud server 502 to determine whether the wired device has a corresponding access privilege when the wired device is connected to a port (e.g., port 4, port 28, or port 44) of the switch 556. In some embodiments, the at least one HE 554 determines whether a network administrator (e.g., a human operator or a computer) authorizes a network address (e.g., the Media Access Control (MAC) address) of the wired device and has previously assigned a network segment to the wired device. If a network administrator (e.g., a human operator or a computer) authorizes a network address (e.g., the MAC address) of the wired device and has previously assigned a network segment to the wired device, the wired device is placed into that network segment, for example, by the at least one HE 554. In some embodiments, a human operator enters the MAC address or organizationally unique identifier (OUI) of the wired device, defines a network segment name, and allows or does not allow the wired device to be placed into a corresponding network segment. Compared with network segmentation techniques in which a physical switch is configured with virtual local area networks (VLANs) on ports of the physical switch, the switch 556 does not need or have a specific port configuration for different network devices.
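The administrator-entered table of MAC addresses or OUIs can be sketched as follows; the matching precedence (a full MAC entry before an OUI entry) and the explicit allow flag are assumptions for illustration, and every address and segment name below is invented:

```python
from typing import Dict, Optional, Tuple

# key (full MAC or 3-octet OUI) -> (segment name, allowed?)
ADMIN_TABLE: Dict[str, Tuple[str, bool]] = {
    "aa:bb:cc:00:00:01": ("cameras", True),  # exact MAC entry
    "aa:bb:cc": ("printers", True),          # OUI entry for a vendor
    "dd:ee:ff": ("quarantine", False),       # vendor explicitly not allowed
}

def lookup(mac: str) -> Optional[str]:
    """Return the segment for a device, or None if it may not be placed."""
    mac = mac.lower()
    oui = ":".join(mac.split(":")[:3])  # first three octets
    entry = ADMIN_TABLE.get(mac) or ADMIN_TABLE.get(oui)
    if entry is None:
        return None
    segment, allowed = entry
    return segment if allowed else None
```

An exact MAC entry pins one device to a segment, while an OUI entry covers every device from that vendor; the allow flag lets the operator block placement outright.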
In network segmentation techniques in which a physical switch is configured with VLANs on ports of the physical switch, network devices have to be plugged into the right wired ports of the physical switch that are mapped to the appropriate VLANs. In the embodiment depicted in FIG. 5, network devices (e.g., the network devices 504-1, 504-2, 504-3) can be plugged into any port (e.g., port 1 to port 48) of the switch 556 and nevertheless receive a corresponding network configuration (e.g., a network segmentation configuration (i.e., successfully be placed into a corresponding network segment)). Because network traffic can be tunneled between network devices and the at least one HE 554, network ports of the switch 556 do not need to be configured for corresponding VLANs. Consequently, there is no restriction as to which network port a network device must be plugged into in order to receive a corresponding network segmentation configuration. Because network devices can be plugged into any available network port of the switch 556 while still being placed into a corresponding network segment, network deployment efficiency can be improved and network deployment mistakes can be reduced. FIG. 6 is a flow chart that illustrates an exemplary network segmentation operation that can be performed in the communications system 100 depicted in FIG. 1. In the exemplary network segmentation operation, a network segmentation algorithm is implemented to place a wired device (e.g., the network device 504-1, 504-2, or 504-3) into a corresponding network segment of a network (e.g., the network 150 depicted in FIG. 1) and is executable in the communications system 100. At step 602, a wired device (e.g., the network device 504-1, 504-2, or 504-3) is connected to a network (e.g., the network 150 depicted in FIG. 1), for example, by plugging the wired device into a port of a switch in the network 150 depicted in FIG. 1 (e.g., the ASs 356-1, 356-2, 356-3, 356-4, 356-5, 356-6, 356-7, 356-8 depicted in FIG. 3 and/or the switch 556 depicted in FIG. 5).
At step 604, a determination regarding whether the wired device supports an authentication protocol or standard (e.g., an IEEE 802.1X protocol or standard) is made, for example, by the cloud server 102 depicted in FIG. 1. If/when it is determined that the wired device supports an authentication protocol or standard (e.g., an IEEE 802.1X protocol or standard), the wired device is authenticated by an authentication server (e.g., a RADIUS server) at step 606, for example, by sending an authentication request to the authentication server. The authentication server may be the same as or similar to the cloud server 102 depicted in FIG. 1 and/or the authentication server 540 (e.g., a RADIUS server) depicted in FIG. 5. The authentication server (e.g., a RADIUS server) may reject or allow the authentication request of the wired device at step 608. If/when the authentication server (e.g., a RADIUS server) rejects the authentication request of the wired device at step 608, the wired device is not allowed (e.g., by the at least one HE 554 depicted in FIG. 5) to join the network 150 and receive a network segmentation configuration. If/when the authentication server (e.g., a RADIUS server) does not reject the authentication request of the wired device at step 608, it is determined (e.g., by the at least one HE 554 depicted in FIG. 5) whether or not the authentication server (e.g., a RADIUS server) sends a network segment name for the wired device at step 610. If/when it is determined that the authentication server (e.g., a RADIUS server) does not send a network segment name for the wired device, the wired device is not allowed to join the network and receive a network segmentation configuration at step 618. If/when it is determined (e.g., by the at least one HE 554 depicted in FIG. 5) that the authentication server (e.g., a RADIUS server) sends a network segment name for the wired device, it is determined whether or not the received network segment name for the wired device is valid at step 612.
If/when it is determined (e.g., by the at least one HE 554 depicted in FIG. 5) that the received network segment name for the wired device is valid, the wired device is assigned to a corresponding network segment of the received network segment name at step 614 and the wired device is considered to be online at step 616. If/when it is determined that the received network segment name for the wired device is not valid, the wired device may not be allowed to join the network and receive a network segmentation configuration at step 618. Returning to step 604, if/when it is determined that the wired device does not support an authentication protocol or standard (e.g., an IEEE 802.1X protocol or standard), the wired device is authenticated (e.g., by the cloud server 102 depicted in FIG. 1) at step 620, for example, by checking a network segmentation database. If/when it is determined that the wired device matches an entry within the network segmentation database, the wired device is assigned to a corresponding network segment of the network at step 614 and the wired device is considered to be online at step 616. If/when it is determined that the wired device does not match any entry within the network segmentation database, the wired device is not allowed to join the network and receive a network segmentation configuration, for example, by the authentication server (e.g., a RADIUS server) at step 618. FIG. 7 depicts some exemplary network segments 780-1, 780-2, 780-3 of a network 750, which may result from the network segmentation operation depicted in FIG. 6. In the network 750 depicted in FIG. 7, the network segment 780-1 includes network devices 704-1, 704-2, the network segment 780-2 includes network devices 704-3, 704-4, 704-5, and the network segment 780-3 includes network devices 704-6, 704-7. The network segments 780-1, 780-2, 780-3 can be used to improve network performance and/or enhance network security.
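The decision flow of FIG. 6 (steps 604 through 620) can be condensed into a single function. This is a hedged sketch of the logic only; the parameters stand in for the authentication server's responses and the segmentation database, and their names are assumptions rather than part of any real implementation:

```python
# Hypothetical sketch of the FIG. 6 decision flow. The arguments stand in for
# the authentication server's answers and the segmentation database lookup.
def segment_wired_device(supports_8021x, auth_result, segment_name,
                         valid_names, in_database):
    """Return the assigned segment name, or None if the device is rejected."""
    if supports_8021x:
        if not auth_result:                      # step 608: request rejected
            return None
        if segment_name is None:                 # step 610: no name sent
            return None
        if segment_name not in valid_names:      # step 612: name invalid
            return None
        return segment_name                      # steps 614/616: assigned, online
    # Non-802.1X path (step 620): check the network segmentation database;
    # in_database is the segment name from a matching entry, or None.
    return in_database
```

Every rejecting branch returns `None`, matching step 618 (not allowed to join the network and receive a segmentation configuration).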
In some embodiments, each of the network segments 780-1, 780-2, 780-3 is a different subnet of the network 750 having a unique subnet mask. Each segment or subnet 780-1, 780-2, or 780-3 of the network 750 can act as its own small network, which allows the flow of traffic between subnets to be controlled based on granular policies or rules. However, the number of network segments that can be included in a network is not limited to the examples shown in FIG. 7. In addition, the number of network devices that can be included in a network segment is not limited to the examples shown in FIG. 7. FIG. 8 is a process flow diagram of a method for network segmentation of a network deployed at a customer site according to an embodiment of the invention. According to the method, at block 802, a tunnel is established between a network device of the network deployed at the customer site and a network port of a switch of the network deployed at the customer site. At block 804, when a wired device is plugged into the network port of the switch, network traffic is transmitted between the wired device and the network device through the tunnel. At block 806, a security operation regarding the wired device is facilitated. At block 808, based on a result of the security operation, a network segmentation operation regarding the wired device is performed. In some embodiments, at least one of the security operation and the network segmentation operation is conducted through the tunnel. In some embodiments, the tunnel includes a GRE tunnel and/or a VXLAN. In some embodiments, establishing the tunnel between the network device of the network deployed at the customer site and the network port of the switch of the network deployed at the customer site includes establishing tunnels between the network device and network ports of the switch, where the tunnels are separate from each other. In some embodiments, no tunnel is shared by multiple ports of the switch.
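Since each segment can be a distinct subnet with its own mask, the segment-boundary check can be sketched with Python's standard ipaddress module. The segment names and the /16 site prefix below are illustrative assumptions:

```python
# Hypothetical sketch: each network segment is a distinct subnet of the site
# network, so traffic crossing a segment boundary can be policed by policy.
import ipaddress

site = ipaddress.ip_network("10.0.0.0/16")
# Carve three /24 segments out of the /16 (names and sizes are illustrative).
segments = dict(zip(["seg-1", "seg-2", "seg-3"], site.subnets(new_prefix=24)))

def same_segment(ip_a: str, ip_b: str) -> bool:
    """True if both addresses fall inside the same segment subnet."""
    a, b = ipaddress.ip_address(ip_a), ipaddress.ip_address(ip_b)
    return any(a in net and b in net for net in segments.values())
```

Traffic between hosts for which `same_segment` is False would cross a subnet boundary and could be allowed or denied by granular rules, as described above.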
In some embodiments, facilitating the security operation regarding the wired device includes facilitating an authentication operation regarding the wired device through the network device. In some embodiments, facilitating the authentication operation regarding the wired device through the network device includes authenticating the wired device with an authentication server through the network device. In some embodiments, based on the result of the security operation, performing the network segmentation operation regarding the wired device includes when the authentication server rejects an authentication request of the wired device, not allowing the wired device to join the network and receive a network segmentation configuration. In some embodiments, based on the result of the security operation, performing the network segmentation operation regarding the wired device includes when the authentication server does not reject an authentication request of the wired device, allowing the wired device to join the network and receive a network segmentation configuration. In some embodiments, facilitating the authentication operation regarding the wired device through the network device further includes when the authentication server does not reject an authentication request of the wired device, determining whether or not the authentication server sends a network segment name for the wired device. In some embodiments, based on the result of the security operation, performing the network segmentation operation regarding the wired device includes when it is determined that the authentication server does not send the network segment name for the wired device, not allowing the wired device to join the network and receive a network segmentation configuration. 
In some embodiments, performing the authentication operation regarding the wired device through the network device further includes, when it is determined that the authentication server sends the network segment name for the wired device, determining whether or not the network segment name for the wired device is valid. In some embodiments, based on the result of the security operation, performing the network segmentation operation regarding the wired device includes, when it is determined that the network segment name for the wired device is valid, assigning the wired device to a network segment of the network deployed at the customer site that corresponds to the network segment name. In some embodiments, the wired device supports an authentication protocol or standard. In some embodiments, facilitating the authentication operation regarding the wired device through the network device includes checking the wired device against a network segmentation database. In some embodiments, based on the result of the security operation, performing the network segmentation operation regarding the wired device includes, when it is determined that the wired device matches an entry within the network segmentation database, assigning the wired device to a corresponding network segment of the network deployed at the customer site, and when it is determined that the wired device does not match any entry within the network segmentation database, not allowing the wired device to join the network and receive a network segmentation configuration. In some embodiments, the wired device does not support any authentication protocol or standard. In some embodiments, the network device includes an HE or a gateway device. The network device may be similar to, the same as, or a component of the HEs 354-1, 354-2 depicted in FIG. 3 and/or the at least one HE 554 depicted in FIG. 5. The wired device may be similar to, the same as, or a component of the network devices 104-1, . . . , 104-N depicted in FIG. 1, the network device 204 depicted in FIG. 2, the network devices 504-1, 504-2, 504-3 depicted in FIG. 5, and/or the network devices 704-1, 704-2, 704-3, 704-4, 704-5, 704-6, 704-7 depicted in FIG. 7. The network may be similar to, the same as, or a component of the network 150 depicted in FIG. 1 and/or the network 750 depicted in FIG. 7. The customer site may be similar to, the same as, or a component of the customer site 114 depicted in FIG. 1. FIG. 9 is a process flow diagram of a method for network segmentation of a network deployed at a customer site according to another embodiment of the invention. According to the method, at block 902, GRE tunnels are established between a gateway device of the network deployed at the customer site and network ports of a switch of the network deployed at the customer site. At block 904, when wired devices are plugged into the network ports of the switch, network traffic is transmitted between the wired devices and the gateway device through the GRE tunnels. At block 906, a security operation regarding the wired devices is facilitated through the gateway device. At block 908, based on a result of the security operation, a network segmentation operation regarding the wired devices is performed using the gateway device. In some embodiments, at least one of the security operation and the network segmentation operation is conducted through the GRE tunnels. The network device may be similar to, the same as, or a component of the HEs 354-1, 354-2 depicted in FIG. 3 and/or the at least one HE 554 depicted in FIG. 5. The wired device may be similar to, the same as, or a component of the network devices 104-1, . . . , 104-N depicted in FIG. 1, the network device 204 depicted in FIG. 2, the network devices 504-1, 504-2, 504-3 depicted in FIG. 5, and/or the network devices 704-1, 704-2, 704-3, 704-4, 704-5, 704-6, 704-7 depicted in FIG. 7. The network may be similar to, the same as, or a component of the network 150 depicted in FIG. 1 and/or the network 750 depicted in FIG. 7.
The customer site may be similar to, the same as, or a component of the customer site 114 depicted in FIG. 1. Although the operations of the method(s) herein are shown and described in a particular order, the order of the operations of each method may be altered so that certain operations may be performed in an inverse order or so that certain operations may be performed, at least in part, concurrently with other operations. In another embodiment, instructions or sub-operations of distinct operations may be implemented in an intermittent and/or alternating manner. It should also be noted that at least some of the operations for the methods described herein may be implemented using software instructions stored on a computer-useable storage medium for execution by a computer. As an example, an embodiment of a computer program product includes a computer-useable storage medium to store a computer-readable program. The computer-useable or computer-readable storage medium can be an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system (or apparatus or device). Examples of non-transitory computer-useable and computer-readable storage media include a semiconductor or solid-state memory, magnetic tape, a removable computer diskette, a random-access memory (RAM), a read-only memory (ROM), a rigid magnetic disk, and an optical disk. Current examples of optical disks include a compact disk with read-only memory (CD-ROM), a compact disk with read/write (CD-R/W), and a digital video disk (DVD). Alternatively, embodiments of the invention may be implemented entirely in hardware or in an implementation containing both hardware and software elements. In embodiments which use software, the software may include but is not limited to firmware, resident software, microcode, etc.
Although specific embodiments of the invention have been described and illustrated, the invention is not to be limited to the specific forms or arrangements of parts so described and illustrated. The scope of the invention is to be defined by the claims appended hereto and their equivalents.
11863350 | DETAILED DESCRIPTION OF THE DISCLOSURE Again, the present disclosure relates to systems and methods for fast convergence of E-Tree (Ethernet Tree) with a dual-homed root node. In the configuration described herein, the present disclosure proposes a new port between the cluster of two PEs, namely an inter-chassis port. The E-Tree instance, in the cluster of two PEs, changes dynamically and instantly in the data plane between root and leaf, or vice versa, based on the root port connected to a root CE node changing state. For a given PE in the cluster, if the PE root port to the root CE node is up, then the inter-chassis port on this PE will be acting as a leaf, and if the PE root port is down, then the inter-chassis port will be acting as a root. The reversing of roles (leaf or root) on the two PEs will happen independently and faster than a control-plane-driven mechanism. A benefit of this approach includes operation with minimal or no routing control plane such as EVPN. Another benefit includes the use of existing Layer 2 (L2) data plane legacy transport and data plane mechanisms to tie the dual-homed PEs' inter-chassis link root/leaf designation with the status of the access link of the root node to achieve Single-Active and Active-Active with milliseconds convergence on link or node failures.
Acronyms The following acronyms, abbreviations, and definitions are utilized herein:

A/A: Active/Active; used synonymously with all-active when a CE is multi-homed to two or more PEs
A/S: Active/Standby; used synonymously with single-active when a CE is multi-homed to two or more PEs
AC: Attachment Circuit
ARP: Address Resolution Protocol
BGP: Border Gateway Protocol
BUM: Broadcast, Unknown, and Multicast
CE: Customer Edge
DF: Designated Forwarder; the DF algorithm is used on MH (PE) peers to elect a DF for each VLAN
DMAC: Destination MAC
DH: Dual-Home
DP: Data Plane
ES: Ethernet Segment; when a CE is MH to PEs via a LAG, MH (PE) peers identify the LAG interface as an Ethernet Segment
E-Tree: Ethernet Tree
EVPN: Ethernet VPN
EVI: Ethernet VPN Instance
ICCP: Inter-Control Center Communications Protocol
IMET: Inclusive Multicast Ethernet Tag
IGP: Interior Gateway Protocol
IP: Internet Protocol
LAG: Link Aggregation Group
LAN: Local Area Network
MAC: Media Access Control
MH: Multi-home
MPLS: Multiprotocol Label Switching
PE: Provider Edge
PW: Pseudowire
RT: Route Target; EVPN uses BGP RTs with import/export policy to form EVI member groups
SID: Segment Identifier
SMAC: Source MAC
UNI: User-Network Interface
VLAN: Virtual Local Area Network
VPLS: Virtual Private LAN Service
VPN: Virtual Private Network
VPWS: Virtual Private Wire Service
Leaf: A node in an E-Tree that is allowed to communicate only to Root nodes
Root: A node in an E-Tree that is allowed to communicate to other Root and Leaf nodes

Network Configuration FIGS. 1 and 2 are network diagrams of a network 10 that is an E-Tree with one root CE node 12 dual-homed to two PE nodes 14A, 14B, which in turn connect to various CE leaf nodes 16A, 16B, 16C. FIG. 1 illustrates unicast and BUM traffic from a root, and FIG. 2 illustrates unicast and BUM traffic from a leaf. FIGS. 1 and 2 show a non-fault situation on dual-homed links 18A, 18B between the CE root node 12 and the PE nodes 14A, 14B. In FIGS. 1 and 2, known unicast traffic, multicast and broadcast traffic, and unknown unicast traffic are shown.
In the working state with no fault, traffic flows on the link 18A. FIG. 3 is a network diagram of the network 10 illustrating a fault 20 affecting the link 18A. Of note, the various connections in FIGS. 1-3 may include intermediate devices which are omitted for illustration purposes. As described herein, "connected to" may or may not be a direct connection. In the conventional approach, following the fault 20, the control plane in EVPN exchanges route type messages for reconfiguration. Again, as noted herein, this process is slow. The present disclosure includes new ports between the two PE nodes 14A, 14B connected as a connection 30 that is part of the E-Tree instance. The connection 30 can be an Inclusive Multicast Ethernet Tag (IMET) route/tunnel. The connection 30 includes a port on each of the PE nodes 14A, 14B that is either a root or leaf in the E-Tree. Physically, the connection 30 is a connection between the PE nodes 14A, 14B, and it can be referred to as an inter-chassis link/inter-chassis port. This inter-chassis port associated with the E-Tree instance can be a VLAN interface on a physical link. The key is to have this inter-chassis port between the two PE nodes 14A, 14B, in a cluster and in the E-Tree instance, change dynamically and instantly in the data plane from root to leaf (or vice versa) based upon the state of the links 18A, 18B. If the PE root port to the root CE node 12 is up, then the inter-chassis port on this PE node will be acting as a leaf, as illustrated in FIGS. 1 and 2, and if the PE root port is down, then the inter-chassis port will be acting as a root, as illustrated in FIG. 3, due to the fault 20. The role change from root to leaf on the inter-chassis port associated with the E-Tree instance will happen dynamically in the data plane with no control plane involvement as soon as the single root port on that PE changes state, leading to the fastest convergence possible.
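The data-plane role flip can be sketched as a state derived directly from the root port, with no control-plane message exchange. The class and attribute names below are assumptions for illustration only:

```python
# Hypothetical sketch of the data-plane role flip on a dual-homed PE: the
# inter-chassis port's role is a pure function of the PE's root port state,
# so no control-plane exchange is needed when the link fails or recovers.
class PeNode:
    def __init__(self):
        self.root_port_up = True     # state of the link to the root CE

    @property
    def inter_chassis_role(self) -> str:
        # Root port up   -> inter-chassis port acts as a leaf.
        # Root port down -> inter-chassis port acts as a root.
        return "leaf" if self.root_port_up else "root"

pe_a = PeNode()
pe_a.root_port_up = False            # e.g., a fault on the access link
```

Because the role is computed from link state rather than negotiated, both PEs can reverse roles independently, which is what gives the milliseconds-scale convergence claimed above.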
Again, the current EVPN control plane mechanism would require control plane message exchange to set up the E-Tree, would require an MPLS transport between the PE nodes 14A, 14B as well, would require egress filtering for BUM traffic, and would require control plane MAC learning for the E-Tree instance. This is all removed by tying the inter-chassis port role as leaf or root within the E-Tree instance and changing it dynamically with no control plane involvement. Benefits Use of the inter-chassis port achieves convergence on the order of milliseconds for E-Tree network topologies on link failure with Active/Standby redundancy. This is also a simpler approach to support A/A and A/S redundancy for E-Tree with no need for heavy control planes like EVPN. Further, this can work with legacy Layer 2 (L2) transport for Active/Standby redundancy, i.e., it does not require any EVPN control plane, which is a huge benefit; even for Active/Active redundancy, EVPN is needed only for DF election, to send BUM traffic to only one of the active ports connected to one of the dual-homed PEs. Unique to this approach, the data plane changes the role of the inter-chassis port from leaf to root (or vice versa) based on the state of the customer root port. This is unique from EVPN, ICCP, etc., which do not allow the data plane to perform a control plane role to switch a port designation from root to leaf or vice versa. The benefit here is no control plane involvement at all after setting up the data plane, leading to no control plane involvement at all for switchover on failure or for recovery from failure. Process FIG. 4 is a flowchart of a process 50 for fast convergence of an E-Tree (Ethernet Tree) with a dual-homed root node.
The process 50 can be implemented as a method with steps, via one (or both) of the PE nodes 14A, 14B, which include circuitry or at least one processor configured to implement the steps, and as instructions stored in a non-transitory computer-readable medium for the steps. The process 50 is implemented in a Provider Edge (PE) node having a plurality of ports including an inter-chassis port to a second PE node, a port connected to a root node, and one or more ports connected to leaf nodes, wherein the plurality of ports are in an Ethernet Tree (E-Tree), and wherein the root node is dual-homed to the PE node and the second PE node. The steps in the process 50 include designating the inter-chassis port as one of a leaf node and a root node in the E-Tree instance (step 52); and managing a designation of the inter-chassis port based on a status of the port connected to the root node (step 54). The designation is one of a root or a leaf in the E-Tree. The designation is changed in a data plane instead of in a control plane. Responsive to a failure on the port connected to the root node, the designation of the inter-chassis port is changed. The dual-homing can be Active/Standby or Active/Active. The inter-chassis port can be a Virtual Local Area Network (VLAN) interface. The inter-chassis port can utilize an Inclusive Multicast Ethernet Tag (IMET).
CONCLUSION It will be appreciated that some embodiments described herein may include or utilize one or more generic or specialized processors (“one or more processors”) such as microprocessors; Central Processing Units (CPUs); Digital Signal Processors (DSPs); customized processors such as Network Processors (NPs) or Network Processing Units (NPUs), Graphics Processing Units (GPUs), or the like; Field-Programmable Gate Arrays (FPGAs); and the like along with unique stored program instructions (including both software and firmware) for control thereof to implement, in conjunction with certain non-processor circuits, some, most, or all of the functions of the methods and/or systems described herein. Alternatively, some or all functions may be implemented by a state machine that has no stored program instructions, or in one or more Application-Specific Integrated Circuits (ASICs), in which each function or some combinations of certain of the functions are implemented as custom logic or circuitry. Of course, a combination of the aforementioned approaches may be used. For some of the embodiments described herein, a corresponding device in hardware and optionally with software, firmware, and a combination thereof can be referred to as “circuitry configured to,” “logic configured to,” etc. perform a set of operations, steps, methods, processes, algorithms, functions, techniques, etc. on digital and/or analog signals as described herein for the various embodiments. Moreover, some embodiments may include a non-transitory computer-readable medium having instructions stored thereon for programming a computer, server, appliance, device, at least one processor, circuit/circuitry, etc. to perform functions as described and claimed herein.
Examples of such non-transitory computer-readable medium include, but are not limited to, a hard disk, an optical storage device, a magnetic storage device, a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically EPROM (EEPROM), Flash memory, and the like. When stored in the non-transitory computer-readable medium, software can include instructions executable by one or more processors (e.g., any type of programmable circuitry or logic) that, in response to such execution, cause the one or more processors to perform a set of operations, steps, methods, processes, algorithms, functions, techniques, etc. as described herein for the various embodiments. Although the present disclosure has been illustrated and described herein with reference to preferred embodiments and specific examples thereof, it will be readily apparent to those of ordinary skill in the art that other embodiments and examples may perform similar functions and/or achieve like results. All such equivalent embodiments and examples are within the spirit and scope of the present disclosure, are contemplated thereby, and are intended to be covered by the following claims. Moreover, it is noted that the various elements, operations, steps, methods, processes, algorithms, functions, techniques, etc. described herein can be used in any and all combinations with each other.
11863351 | DETAILED DESCRIPTION It will be readily understood that the components of the invention, as generally described and illustrated in the Figures herein, could be arranged and designed in a wide variety of different configurations. Thus, the following more detailed description of the embodiments of the invention, as represented in the Figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of certain examples of presently contemplated embodiments in accordance with the invention. The presently described embodiments will be best understood by reference to the drawings, wherein like parts are designated by like numerals throughout. Embodiments in accordance with the invention may be embodied as an apparatus, method, or computer program product. Accordingly, the invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.), or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “module” or “system.” Furthermore, the invention may take the form of a computer program product embodied in any tangible medium of expression having computer-usable program code embodied in the medium. Any combination of one or more computer-usable or computer-readable media may be utilized. For example, a computer-readable medium may include one or more of a portable computer diskette, a hard disk, a random access memory (RAM) device, a read-only memory (ROM) device, an erasable programmable read-only memory (EPROM or Flash memory) device, a portable compact disc read-only memory (CDROM), an optical storage device, and a magnetic storage device. In selected embodiments, a computer-readable medium may comprise any non-transitory medium that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. 
Computer program code for carrying out operations of the invention may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk, C++, or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages, and may also use descriptive or markup languages such as HTML, XML, JSON, and the like. The program code may execute entirely on a computer system as a stand-alone software package, on a stand-alone hardware unit, partly on a remote computer spaced some distance from the computer, or entirely on a remote computer or server. In the latter scenario, the remote computer may be connected to the computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). The invention is described below with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions or code. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. 
These computer program instructions may also be stored in a non-transitory computer-readable medium that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable medium produce an article of manufacture including instruction means which implement the function/act specified in the flowchart and/or block diagram block or blocks. The computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. The systems and methods disclosed herein relate to logical routers for computer data routing systems. Specifically, the systems and methods described herein relate to a logical router “chassis” that is formed from a set of disaggregated network elements that are not necessarily in the same chassis or coupled to the same backplane of a chassis. The logical router may include a single logical point of management and control, with a distributed data plane. The logical router also includes a control plane offloaded to an external computing system in order to reduce network topology size. This also allows the control plane to be migrated to a different computer system to take advantage of newer generations of central processing units (CPUs). The disaggregated network elements comprising the logical router may be implemented using dedicated network components incorporated into the systems and methods disclosed herein. In the embodiments disclosed below, the network elements include silicon devices such as the JERICHO 2 and the RAMON developed by BROADCOM. 
These are exemplary only and other network elements providing the basic network routing function of these devices may also be used in a like manner. FIG.1illustrates an example architecture of a logical router100. As shown inFIG.1, the logical router100is comprised of multiple spine elements102, multiple leaf elements104, and fabric interfaces106that couple each spine element102to one or more leaf elements104. In the examples below, the spine elements102are RAMON-class silicon devices and the leaf elements104are a set of multiple JERICHO 2-class silicon devices. The fabric interfaces106of the devices102,104may be coupled to one another by means of network cables, such as 10G or 100G ethernet cables, fiber optic cables, or other types of network connection. In the logical router100, each spine element102functions as a fabric element of a self-routing fabric. This self-routing fabric implements all associated routing protocols in silicon, including handling link failures without requiring any software assistance. Each fabric element in the logical router is interfaced with one or more leaf elements104via fabric interfaces, as shown inFIG.1. A collection of leaf elements104may be used to implement a cell-based fabric in which the collection of leaf elements104splits data packets into cells. These cells are distributed across the cell-based fabric and reassembled on egress from one of the leaf elements104. This implementation allows for more efficient utilization of the fabric. Each leaf element104may also be configured with a network interface108that allows the leaf element104to communicate with a network. FIG.2illustrates an example method200that may be implemented using the logical router100. In particular, the method200illustrates an implementation of end-to-end packet scheduling using the logical router100. 
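The cell-splitting behavior described above can be sketched as a toy model: an ingress leaf splits a packet into fixed-size cells, sprays them round-robin across the spine elements, and the egress leaf reassembles them by sequence number. The cell size and header layout here are illustrative assumptions, not details of the JERICHO 2 / RAMON silicon.

```python
# Toy model of the cell-based fabric: split, spray, reassemble.
CELL_SIZE = 64  # payload bytes per cell (assumed)

def split_into_cells(packet_id, payload, num_spines):
    """Split a payload into (spine, header, chunk) cells, round-robin."""
    chunks = [payload[i:i + CELL_SIZE] for i in range(0, len(payload), CELL_SIZE)]
    cells = []
    for seq, chunk in enumerate(chunks):
        header = {"packet_id": packet_id, "seq": seq, "total": len(chunks)}
        cells.append((seq % num_spines, header, chunk))  # spray across fabric
    return cells

def reassemble(cells):
    """Egress-side reassembly: order the cells by sequence number."""
    ordered = sorted(cells, key=lambda c: c[1]["seq"])
    if not ordered or ordered[0][1]["total"] != len(ordered):
        raise ValueError("missing cells")
    return b"".join(chunk for _, _, chunk in ordered)
```

Because consecutive cells of one packet take different spine elements, no single fabric link carries the whole packet, which models the more efficient fabric utilization noted above.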
The method200may be implemented by an external controller (see discussion of control element300below) or by code executing on a leaf element104, such as the leaf element104whose ingress port receives the packet being processed according to the method200. The method200may include queuing202, by the logical router100, a data packet on an ingress associated with the logical router100, such as on one of the leaf elements104on whose port the packet was received. Next, the ingress sends204a queue request to the logical router100, such as to a second leaf element104corresponding to the destination address of the data packet. An egress (e.g., the second leaf element104) associated with the logical router100responds with a credit grant. Finally, the ingress sends the packet to the egress, such as over the fabric implemented by the spine elements102. Referring toFIG.3, the logical router100as disclosed herein provides desirable performance with respect to the following design considerations:
System throughput
Logical chassis provisioning
Chassis bootstrapping
Chassis scaling
System state scaling
Debugging and troubleshooting
Resiliency to account for fabric failure, software failure, and component failure
In the embodiment ofFIG.3, the spine elements102are coupled to the leaf elements104to implement a one-stage Clos fabric. In particular, each leaf element104may be coupled to each spine element102. The system ofFIG.3provides a 48 leaf element104interface scale with 480×400G or 1920×100G ports implemented by the leaf units104, which may be JERICHO 2 silicon devices. In an alternative scale, there may be 24 leaf elements providing 240×400G ports or 960×100G ports. For purposes of this disclosure, the notation “A×B” indicates A ports with a throughput of B. The configuration ofFIG.3is for illustrative purposes and other configurations of other devices may be used in a similar manner. In the illustrated embodiment, there are 13 spine elements102. 
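The queue-request / credit-grant handshake of method200can be sketched as follows: the ingress queues the packet (202), sends a queue request to the egress (204), and transmits only after the egress responds with a credit grant. The cell-count accounting below is an illustrative assumption.

```python
# Minimal model of credit-based end-to-end scheduling per method 200.
from collections import deque

class Egress:
    def __init__(self, buffer_cells):
        self.free_cells = buffer_cells
        self.received = []

    def request_credit(self, cells):
        """Respond to a queue request: grant only if buffer space exists."""
        if self.free_cells >= cells:
            self.free_cells -= cells
            return True
        return False

    def drain(self):
        """Forward buffered packets out the egress port, freeing credits."""
        while self.received:
            self.free_cells += self.received.pop(0)["cells"]

class Ingress:
    def __init__(self):
        self.queue = deque()

    def enqueue(self, packet):      # step 202: queue on the ingress
        self.queue.append(packet)

    def service(self, egress):
        """Step 204 onward: request credit, send on grant, else wait."""
        sent = 0
        while self.queue and egress.request_credit(self.queue[0]["cells"]):
            egress.received.append(self.queue.popleft())  # send over fabric
            sent += 1
        return sent
```

The design choice modeled here is that the egress, not the ingress, paces transmission, so the fabric is never asked to carry a packet the egress cannot buffer.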
The spine elements102in the logical router architecture ofFIG.3may each include one or multiple elements, such as one or multiple RAMON-class elements. In some implementations, a spine profile (i.e., a composition of a spine element102) may include a single 24-port RAMON-class element or two 48-port RAMON-class elements. The logical router100ofFIG.3also includes 48 leaf elements. Each spine element102may be interfaced with each leaf element104using communication links implementing 400G QSFP-DD (quad small form-factor pluggable double density) optical connectors and 400G protocols. However, other connector types and protocols may be used. In some implementations, each leaf element104is comprised of a single J2-class silicon device including 10×400G or 40×100G interfaces, a BROADWELL (8 core) CPU, and 32 GB of RAM (random access memory). Each leaf element104may be configured with 40×100G interfaces for communicating with external networks. In some implementations, the logical router100may be managed by one or more control plane elements300that are implemented using computing systems (see, e.g., the example computing system ofFIG.19described below). The control plane elements are computer systems that are external to the logical router (i.e., the leaf elements104, spine elements102, and interconnecting fabric among these components of the logical router100). Each control plane element300may be interfaced with one or more leaf elements104using, for example, 10G communication links. A control plane element300may function as a configuration agent that performs router state management in order to implement a chassis abstraction model with the logical router100such that the separate elements102,104of the logical router function as a single router as if in a common chassis and coupled to a common backplane. Referring toFIG.4, the logical router100may be managed by a single point of management and control. 
A management LAN (local area network) switch400performs all the management and control functions for the logical router100and the associated control plane elements300. The logical router100, comprising the plurality of spine elements102interfaced with a plurality of leaf elements104that are, in turn, interfaced with the control plane elements300, may be managed by the management LAN switch400. The management LAN switch400may be interfaced with each of the spine elements102, leaf elements104, and control plane elements300. Referring toFIG.5, the LAN switch400may be interfaced with elements of the logical router100in the illustrated manner. For example, a leaf element104aand a leaf element104bmay each be independently interfaced with a control plane element300. Each of the leaf elements104a,104band the control plane element300is independently interfaced with the management LAN switch400. In some realizations, each of the interfaces with the management LAN switch is implemented via a 2×10G link, though other connection types may also be used. The interface between each leaf element104a,104band the control plane element300may be associated with an in-band network500and a host packet path. On the other hand, each interface with the management LAN switch400may be associated with an out-of-band (OOB) network502. The management LAN switch400may communicate over the OOB network502with the elements104a,104b,300to perform functions such as bootstrap/image download, system state distribution, and gathering system statistics and similar data. Referring toFIG.6, the software associated with the logical router100may include route processor software600, a router state database602, and linecard software604(also referred to herein as linecard software module604). In some implementations of the logical router100, all software is deployed and managed as containers. 
The route processor software600may program the device on which it is loaded to bidirectionally share data about the system state and statistics with the router state database602. The router state database602may be programmed to bidirectionally share data about the system state and statistics with the linecard software604. In some implementations, the route processor software600implements the following functions or data structures:
System-wide interface control (across the elements102,104of the logical router100)
Routing protocols, ARP (address resolution protocol), IPv6 ND (internet protocol v6 neighbor discovery)
Routing Information Base (RIB)
North bound APIs (application programming interfaces)
Configuration management
Datastore
Linux host path
Telemetry
Features—ACL (access control list), QoS (quality of service), CoPP (control plane policing)
Virtual chassis management
In some realizations, the router state database602includes the following functions or data structures:
Router state
Statistics
Sharded
Replicated
Clustered
In some realizations, the linecard software604implements the following functions or data structures:
ASIC (application specific integrated circuit)/SDK (software development kit) programming
Stats
Linecard offload (BFD (bidirectional forwarding detection), LLDP (link layer discovery protocol), SFlow (sampled flow), etc.)
FIG.7depicts how the three software building blocks600,602,604are implemented in an actual logical router realization. As shown inFIG.7, a separate linecard software module604(i.e., an instance of linecard software604) may be implemented in each spine element102and each leaf element104. Each of the linecard software modules604communicates with a router state database602in a primary control plane element300a(“router state DB602a”). This primary control plane element300amay also execute an instance of the route processor software600(also referred to herein as the route processor module600). 
The primary control plane element300ashares data with a first secondary control plane element300bas shown inFIG.7. The first secondary control plane element300bshares data with a second secondary control plane element300cas shown inFIG.7. Each of the first secondary control plane element300band the second secondary control plane element300cincludes a router state database602b,602c, respectively, to implement functions such as data redundancy. The first secondary control plane element300band second secondary control plane element300cmay each serve as backups in the event of a failure of the primary control plane element300a, as discussed herein. The logical router100together with the control elements300and management LAN switch400as described above with respect toFIGS.1through7may be used in various operational scenarios described below. FIG.8illustrates a scenario by which the logical router100generates interfaces. As seen inFIG.8, a control plane element300running on a LINUX computing system includes an element state database800and a route processor600. Although LINUX computing systems are described throughout, other operating systems may also be used, such as other variations of UNIX, MACOS, MICROSOFT WINDOWS, or other operating systems known in the art. The element state database800, which may be part of or equivalent to the router state database602, may be coupled to each spine element102and leaf element104forming part of the logical router100. The element state database800may store data associated with each spine element102and leaf element104, such as its configuration (ports, connections of ports to other elements102,104,300, addresses of elements102,104,300, etc.). This information may be discovered by the control plane element300using any of the fabric discovery techniques disclosed herein (e.g., LSoE, LLDP). The element state database800provides this data to the route processor. 
For each interface on each spine element102and leaf element104, the route processor600creates a unique interface (swp1/1 . . . swp1/40, swp2/1 . . . swp2/40 . . . swp48/1 . . . swp48/40 inFIG.8) on the route processor600itself, where the notation swpA/B indicates the interface on port B of element A (i.e., spine element102or leaf element104). The unique interface may be a Linux interface. Where another type of operating system is used, a network interface according to that operating system may be created. The route processor may create all interface states for all of the disaggregated elements102,104of the logical router100. A flow diagram illustrating the creation of the unique interfaces is shown inFIG.9. Referring toFIG.9, a control plane element300may execute the illustrated method900. The method900includes the element state database800of a control plane element300receiving902data associated with each spine element102and leaf element104of a logical router100. The element state database800notifies904the route processor600executing on the control plane element300about the data received at step902. The route processor then creates906a unique interface, such as a LINUX interface, for each spine element102and leaf element104referenced in the data received at step902. Once the interfaces have been created inside a LINUX (or other operating system) instance on the control element300executing the route processor600, the actual interfaces on the front panel of the individual leaf elements104may then be ‘stitched’ to the created interfaces corresponding to them. One way to do this is to allocate a unique VLAN (virtual LAN) tag to each front panel interface on each of the leaf elements104, each VLAN tag being further mapped to one of the interfaces created on the control element300. FIG.10illustrates an example of data packet routing using interfaces created according to the method900and associated with interfaces of leaf elements104. 
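The interface-creation step of method900can be sketched as follows: the route processor walks the element state database and creates one named interface per front-panel port, following the swpA/B convention ofFIG.8. The database layout here is an illustrative assumption, and a real implementation would create operating-system network interfaces rather than name strings.

```python
# Sketch of step 906 of method 900: derive swpA/B interface names from
# an (assumed) element state database of {element_id: {"ports": count}}.
def create_interfaces(element_state_db):
    """Return the interface names the route processor would create."""
    names = []
    for element_id in sorted(element_state_db):          # each element
        for port in range(1, element_state_db[element_id]["ports"] + 1):
            names.append(f"swp{element_id}/{port}")      # port B of element A
    return names

# The FIG. 3 scale: 48 leaf elements with 40 interfaces each.
db = {leaf: {"ports": 40} for leaf in range(1, 49)}
interfaces = create_interfaces(db)
```

At the FIG.3 scale this yields 1920 interfaces, swp1/1 through swp48/40, matching the range shown inFIG.8.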
The software running on the leaf element104areceives a packet1000and programs a rule in the data path that looks up the ingress interface corresponding to the destination of the packet1000, adds the corresponding VLAN tag to the packet to obtain a packet1002, and forwards the packet1002to a leaf element104bconnected to the control plane element300along with a destination identifier identifying the egress port of the leaf element104b. The packet1002may be sent to the leaf element104bwithout performing a TTL (time to live) decrement. The packet1002is sent to the egress leaf element104bby way of one or more spine elements102. As is apparent inFIG.10, the packet1002may include information for routing the packet1002through the fabric106, e.g., “BCM Fabric Header, dest=2005” (BCM=BROADCOM). The egress leaf104bforwards the packet1002to the control plane element300upon receipt. The LINUX instance executing on the control plane element300then identifies the interface1004referenced by the VLAN tag of the packet1002, strips out the VLAN tag, and injects the stripped packet1006into the corresponding interface1004. From there on, the packet1006flows through the Linux data path as usual and the applications, such as the border gateway protocol (BGP) module1008, see that packet as coming in on the interface1004. FIG.11shows transit in the reverse direction relative to that shown inFIG.10. The application1008injects a packet1100into the appropriate interface1004according to the destination of the packet and routing defined by the routing database602. A data path, such as a LINUX data path, may have been programmed to map each interface to a VLAN tag that uniquely identifies the egress front panel interface for the destination address of the packet. In particular, the ingress leaf104b(connected to the control plane element300) receives the packet1100from the application1008and looks up the VLAN tag for the appropriate egress leaf104a, i.e. 
the egress leaf to which the packet should be routed according to the programming of the routing database602as described above. The ingress leaf104btags the packet1100with the VLAN tag and forwards the tagged packet1102to the egress leaf104athrough the elements102,104of the logical router100(see packet1104). The egress leaf104astrips off the VLAN tag and forwards the stripped packet1106out of the correct front panel port, i.e., the front panel port associated with the VLAN tag and corresponding to the destination of the packet and the programming of the routing database602. Referring toFIGS.12,13, and14, the logical router100and control plane elements300may be programmed to implement some or all of the following functions:
Process-level restart
Route processor redundancy
Route state database redundancy
Fabric element, link failure
The examples ofFIGS.12,13, and14and their corresponding discussion illustrate how an implementation including multiple control plane elements300may be used to provide a logical router100that is robust to failures. FIG.12illustrates configurations of control plane elements300for implementing a high-availability logical router100. A three-node control plane element cluster includes control plane elements300a,300b,300cas shown inFIG.12. Control plane element300ais a primary control plane element that runs an instance600aof the route processor600that is designated as a primary route processor600a. Control plane element300bexecutes an instance600bof the route processor600that is designated as a secondary route processor600b. Control plane element300cdoes not execute an instance of the route processor600in this example. Each control plane element300a,300b,300cmay include an individual router state database602a,602b,602c, respectively. Each of route processors600a,600bruns health check diagnostics on the other route processor600b,600a(600bchecks600a,600achecks600b). 
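The VLAN 'stitching' ofFIGS.10and11can be sketched with two operations: each front-panel interface is allocated a unique VLAN tag; one side of the path adds the tag, and the other side maps the tag back to an interface and strips it. The tag values and the dict-based packet representation below are illustrative assumptions.

```python
# One unique VLAN tag per front-panel interface (allocation is assumed).
VLAN_BY_INTERFACE = {"swp1/1": 100, "swp1/2": 101, "swp2/1": 102}
INTERFACE_BY_VLAN = {v: k for k, v in VLAN_BY_INTERFACE.items()}

def add_tag(packet, interface):
    """Data-path rule: look up the interface's VLAN tag and add it."""
    return {**packet, "vlan": VLAN_BY_INTERFACE[interface]}

def strip_and_deliver(tagged):
    """Map the VLAN tag back to an interface, strip the tag, deliver."""
    interface = INTERFACE_BY_VLAN[tagged["vlan"]]
    packet = {k: v for k, v in tagged.items() if k != "vlan"}
    return interface, packet
```

The same two steps model both directions: inFIG.10the ingress leaf tags and the control plane element strips and injects into the created interface, while inFIG.11the control-side data path tags and the egress leaf strips and forwards out the front-panel port.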
The primary route processor600amay be interfaced with each router state database602a,602b,602cin each of the control plane elements300a,300b,300cas shown inFIG.12. The router state database602ain the control plane element300ashares health check replication data with the router state database in the control plane element300b. The router state database602bshares health check replication data with the router state database602cin the control plane element300c. In this way, data associated with the health of the primary and secondary route processors600a,600bis redundantly stored over multiple databases602a,602b,602c. In some implementations, the primary route processor600acheckpoints a required state in the router state databases602a,602b,602c. The router state databases602a,602b,602cmay be spawned on all cluster nodes, as illustrated inFIG.12. Furthermore, data shards of the router state databases602a,602b,602cmay be replicated internally for redundancy, and each route processor600a,600bmay perform internal health checks to detect failovers. In the event that a health check on the primary route processor600afails, the secondary route processor600bcan become the primary route processor and take over the functions of the primary route processor600a, as shown inFIG.13. FIG.13illustrates the failure of the primary route processor600aand transfer of primary status to the secondary route processor600b. As shown, the secondary route processor600bestablishes connections with each of the router state databases602a,602b, and602c, and reads checkpointed data to restore the system state (e.g., the state of the secondary route processor600bper the checkpoint and/or states of the elements102,104of the logical router100). The secondary route processor600bthus takes over the role of the primary route processor600a. In this way, connections with neighboring control plane elements300a,300cmay be reestablished, and a graceful restart may be initiated. 
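The checkpoint-and-promote failover ofFIGS.12and13can be sketched as follows: the primary checkpoints its state into every router state database, and the secondary, on a failed health check, restores that state from a surviving database and assumes the primary role. The class layout and checkpoint format are illustrative assumptions.

```python
# Minimal model of route processor redundancy per FIGS. 12 and 13.
class RouteProcessor:
    def __init__(self, name, role):
        self.name, self.role = name, role
        self.state, self.healthy = {}, True

    def checkpoint(self, databases):
        """Primary: checkpoint required state into every state database."""
        for db in databases:
            db["checkpoint"] = dict(self.state)

    def promote(self, databases):
        """Secondary: restore system state from a healthy database."""
        for db in databases:
            if "checkpoint" in db:
                self.state = dict(db["checkpoint"])
                break
        self.role = "primary"

def health_check(primary, secondary, databases):
    """Secondary's periodic check on the primary; promote on failure."""
    if not primary.healthy and secondary.role == "secondary":
        secondary.promote(databases)
    return secondary.role
```

Because the checkpoint is replicated across all three databases, promotion succeeds even if the database co-located with the failed primary is also lost.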
For example, the function of the new primary route processor600bmay continue as described above with respect to the function of the route processor600aonce the system state has been restored. Referring toFIG.14, some implementations may also include a provision to account for a failure of a primary control plane element300a. An example scenario where the master control plane element fails is shown inFIG.14. In the case of failure of the primary control plane element300a, the control plane element300bhosting the secondary route processor600bmay assume the role of the master control plane element in response to detecting failure during one of its health checks on the primary route processor600a. The route processor600bwill then assume the role of the primary route processor and establish connections with the healthy router state databases602b,602cas shown inFIG.14. The router state databases602b,602cmay be configured to internally handle any shard failovers associated with the failure of the primary control plane element300a. The embodiment described above with respect toFIGS.1through14may provide the following functions and benefits:
A Clos-based fabric based on existing silicon networking devices, such as JERICHO 2 and RAMON-class devices
Self-routing fabric
Cell-based, efficient load balancing
End-to-end scheduling
Control plane runs on external server
Logical chassis management
Single-box look and feel
Scalable, redundant route state database
Resiliency at all levels
FIGS.15through18illustrate an alternative approach for implementing a logical router100. The alternative approach includes a routed backplane fabric that uses standalone switches as spine units for the backplane. The backplane itself is based on a Clos fabric stitched via front-panel ports. 
A routed backplane fabric is realized using the following main components:
Layer 3 (L3) fabric ports
LSoE (link state over ethernet) for fabric neighbor discovery
Border Gateway Protocol shortest path first (BGP-SPF) control plane for inter-unit IP reachability
BGP-SPF extensions for “switch-port” discovery
Multiprotocol Label Switching (MPLS) tunnels set up to/from remote “switch-ports”
Note that LSoE and BGP-SPF are standardized protocols leveraged in this design to build a routed backplane for a disaggregated chassis-based logical router100. The design for such a routed backplane is discussed in more detail below. FIG.15illustrates the physical connectivity of a logical router100implemented using a standalone backplane structure. In this implementation, a centralized controller1500is interfaced with N spine units1502(SU-1to SU-N). Each of the front panel ports of each spine unit1502may be designated as a fabric port. The system also includes M line units1504(LU-N+1 to LU-N+M). The back panel ports of the line units1504may also be designated as fabric ports. The controller may likewise implement fabric ports coupled to the spine units1502. Each of the N spine units1502may be interfaced with each of the M line units1504using the fabric ports of the spine units1502and the fabric ports of the line units1504. Furthermore, each of the M line units1504may include X front panel ports, each of which is designated as a switch port. InFIG.15, the following notation is used:
LU: line unit
SU: spine unit
N: number of spine units
M: number of line units
X: number of switch ports on each line unit
swpA/B: switch port number B on line unit A
fpA/B: fabric port number B on unit A (controller, spine unit, or line unit number from 0 to N+M)
The embodiment ofFIG.15may use the same Clos connectivity that is described above with respect toFIGS.1through14. 
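The wiring ofFIG.15can be sketched with the notation just defined: every spine unit connects one fabric port to every line unit, and the line units expose X external switch ports. Which fabric-port number pairs with which peer is an illustrative assumption, not something the figure specifies.

```python
# Generate the full-mesh spine/line-unit wiring of FIG. 15 using the
# fpA/B and swpA/B notation; spine units are 1..N, line units N+1..N+M.
def clos_fabric_links(n_spines, m_line_units):
    """Return (spine fabric port, line-unit fabric port) pairs."""
    return [(f"fp{su}/{lu - n_spines}", f"fp{lu}/{su}")
            for su in range(1, n_spines + 1)
            for lu in range(n_spines + 1, n_spines + m_line_units + 1)]

def switch_ports(n_spines, m_line_units, x_ports):
    """External switch ports swpA/B exposed by the line units."""
    return [f"swp{lu}/{p}"
            for lu in range(n_spines + 1, n_spines + m_line_units + 1)
            for p in range(1, x_ports + 1)]
```

This yields N×M fabric links and M×X switch ports, which matches the scaling relationship implied by the figure.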
The Clos connectivity may be used to distribute internal switch state resulting from user configuration and a routing control plane, as well as runtime operational data that needs to flow across the units1502,1504in order to implement the standalone backplane structure. The backplane fabric implemented by the interconnections between the fabric ports of the spine units1502and the line units1504provides data traffic packet transport across all line units1504and controllers1500. An MPLS routed fabric may be used as a transport underlay across all line unit1504and controller fabric ports. The fabric may have some or all of the following properties:
Each line unit1504fabric-port is auto-configured as a layer-3 routed port in an internal fabric-VRF (virtual routing and forwarding) with a private IP (internet protocol) address.
BGP-SPF is used as the internal fabric routing protocol to establish layer 3 reachability across all fabric ports within the fabric-VRF.
Each line unit1504, spine unit1502, and controller node1500runs an instance of the BGP-SPF routing protocol on its local fabric ports.
LSoE is used as the discovery protocol to discover layer-3 fabric neighbors and corresponding encapsulations.
LSoE-learned neighbors are pushed into BGP to bring up BGP-SPF sessions over directly connected layer-2 fabric ports.
BGP-SPF peering is established on each leaf-spine connection in the fabric as a result.
Fabric topology is learned on each node and fabric-VRF IP reachability is established to each routed fabric-port via BGP-SPF computation.
An MPLS transport is further set up and is described in more detail later in this document.
Most external-facing control planes for the logical router100, including external BGP peerings, IGP (interior gateway protocol) routing protocols, ARP, and ND (neighbor discovery), may be hosted on the controller node1500. 
In other words, besides the backplane fabric control plane that is distributed across all nodes1500,1502,1504, most logical router control plane functions may be centralized on the controller node1500. The illustrated architecture will, however, allow specific functions (such as BFD (bidirectional forwarding detection), LLDP (link layer discovery protocol), VRRP (virtual router redundancy protocol), and LSoE) to be distributed across the line units1504as needed. Data paths of the units1502,1504may be accordingly programmed to send locally bound packets either to the local CPU (for distributed functions) or to the controller node1500(to implement the centralized control plane). The centralized logical router control plane running on the controller node1500drives programming of a data plane that is distributed across the line units1504. A one-stage forwarding model is defined as one in which (a) all layer 3 route look-ups are done on the ingress line units1504and (b) the resulting rewrites and egress port are resolved on the ingress line unit1504. All resulting encapsulation rewrites are put on the packet and the packet is sent to the egress line unit1504over the backplane transport fabric with the resulting egress port information. All packet editing happens on the ingress line unit1504. The egress line unit1504simply forwards the packet on the egress port. A one-stage forwarding model, as defined above, is simulated across the standalone line units1504in this logical router100to accomplish layer-3 forwarding across line units:
L1 rewrites are resolved and written on the ingress line unit (LU)1504
Packets are tunneled to the egress LU1504over an MPLS tunnel
The MPLS label resolves the egress port on the egress LU1504
In some embodiments, all line unit1504front panel ports (except for ports designated as fabric-ports) are designated as external switch-ports as noted above. Each of these switch-ports would be represented as an interface in the logical router100. 
All logical router interfaces would be represented in a data plane, a control plane, and a management plane on the controller1500, as well as in a data plane on all line units1504. For example, an interface “swp3/2” representing port 2 on line-unit 3 would be programmed in the data plane on all the line units1504. It would also be visible in the management plane hosted on the controller node1500and in the routing control plane hosted on the controller1500. In some embodiments, all router interfaces, including ones on remote line units1504, are programmed in the data plane on each line unit1504in order to accomplish one-stage forwarding across line units1504as defined above. A local interface on a line unit1504simply resolves to a local port. However, a remote interface on a line unit1504is programmed in the data plane such that a packet egressing this remote interface is sent to the remote line unit1504to be egressed out of the corresponding router port on the remote line unit1504. An underlay fabric transport tunnel is set up to stitch the data path to the egress line unit1504for this purpose, and an overlay encapsulation may be used to identify the router port on the egress line unit1504. There are a couple of choices with respect to the transport tunnel and overlay encapsulation that may be used for this purpose:
A pure IP fabric transport (IP tunnel) and a VXLAN (virtual extensible LAN) overlay encapsulation (such as a virtual network identifier (VNID)) to identify the egress port
An MPLS fabric transport (such as a label switched path (LSP)) and an MPLS overlay internal label to identify the egress port
An MPLS transport and overlay may be used in this architecture. However, the overall architecture does not preclude using an IP transport with a VXLAN tunnel to accomplish the same. 
In order to improve or optimize the number of internal label encapsulations put on the packet, both the transport label and the interface label may be collapsed into a single label that both identifies a physical port and provides a transport LSP to or from the line unit1504hosting the physical interface. This overlay label identifies the egress interface for egress traffic switched towards the egress line unit1504(e.g., egress line card) and interface, as well as identifying an ingress interface for ingress traffic on the interface that needs to be punted to the controller1500that hosts routing protocols running on that interface. Two internal label allocations may be defined for this purpose:
An egress label allocated per-local-(LC, port), used to tunnel from the ingress LU to the remote egress port; identifies the egress port for switched traffic
An ingress label allocated per-(controller, port), used to tunnel from the ingress LU to the controller; identifies the ingress port for host-destined traffic
Each of the above label contexts may be globally scoped across all nodes1500,1502,1504within the logical router100and identify both the physical port as well as a directed LSP. The above label allocation scheme essentially results in two global labels being allocated for each router port within the logical router100. MPLS labels may be statically reserved and assigned for this purpose on switch-port interface discovery, and these reserved labels would not be available for external use in some embodiments. A globally scoped label (across all logical router nodes1500,1502,1504) that is allocated for each local router port of each line unit1504identifies both the egress router port as well as a transport LSP from the ingress line unit to the egress line unit that hosts the physical port. 
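The two-label scheme above can be sketched as a static allocation at switch-port discovery time: each router port receives one globally scoped egress label and one ingress label. The reserved label ranges below are illustrative assumptions; the text only requires that the labels be statically reserved and globally unique.

```python
# Assumed reserved label ranges (illustrative, not from the source).
EGRESS_BASE = 100000   # per-local-(LC, port) egress labels
INGRESS_BASE = 200000  # per-(controller, port) ingress labels

def allocate_labels(line_units, ports_per_lu):
    """Return {router_port: (egress_label, ingress_label)}: two global
    labels per router port, as in the allocation scheme above."""
    labels, index = {}, 0
    for lu in line_units:
        for port in range(1, ports_per_lu + 1):
            labels[f"swp{lu}/{port}"] = (EGRESS_BASE + index,
                                         INGRESS_BASE + index)
            index += 1
    return labels
```

Because the mapping is derived deterministically from the port identity, every node1500,1502,1504can compute the same two labels for a given router port without coordination.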
This label is programmed on the logical router nodes1500,1502,1504as follows:
On the ingress line unit1504, this label is part of the tunnel encapsulation result to be rewritten on the packet to egress out of a remote interface.
On the spine unit1502, this label switches to the egress line unit fabric-next-hop rewrite with the same egress label.
On the egress line unit, this label simply points to the egress interface (with no packet rewrite).
This process is illustrated inFIG.16. The following notation is used inFIG.16:
L(e,x,y): egress label for switch port x on LU-y
L(i,x,y): ingress label for switch port x on LU-y
MAC-x: router MAC (media access control) address of unit x
A packet may be received by an ingress line unit1504(LU−(N+M)). Upon exiting the ingress line unit LU−(N+M), the packet is labeled according to the illustrated label table1600, which includes the egress interface (“[12.1.1.2,swp(N+2)/1]→MAC-A”) as well as the transport LSP, i.e., tunnel path, to the egress interface (“MAC-A→L(e,x,y)+MAC-1, port: fp(N+M)/1→L(e,x,y)+MAC-N, port: fp(N+M)/N”). The packet is sent to a spine unit1502(SU-N). The spine unit SU-N rewrites the packet according to the label table1602that includes the fabric-next-hop rewrite (“L(e,x,y)→MAC-N+2, port:fpN/2”) and the egress label. The spine unit SU-N forwards the rewritten packet to the egress line unit1504(LU(N+2)), which transforms the label of the packet according to the table1604that simply points to the egress interface (L(e,x,y)→swp(N+2)/1). Referring toFIG.17, a globally scoped label (across all logical router nodes1500,1502,1504) may be allocated per-(controller, router-port) and identifies both the ingress router port as well as a transport LSP from the ingress line unit to the controller card. 
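The three per-node behaviors of the egress-label data path inFIG.16can be sketched as follows: the ingress line unit pushes the egress label plus a fabric next hop, the spine unit keeps the label and rewrites only the next hop, and the egress line unit resolves the label directly to its local switch port with no rewrite. The MAC strings and route entry are illustrative placeholders for the tables1600,1602, and1604.

```python
EGRESS_LABEL = 2005  # label for swp(N+2)/1 in the FIG. 16 example

def ingress_lu(packet, route):
    """Ingress LU (table 1600): L3 lookup pushes label + fabric next hop."""
    return {"label": route["label"],
            "next_hop": route["fabric_next_hop"],
            "inner": packet}

def spine_unit(frame, fabric_table):
    """Spine (table 1602): same egress label, fabric-next-hop rewrite only."""
    return {**frame, "next_hop": fabric_table[frame["label"]]}

def egress_lu(frame, port_table):
    """Egress LU (table 1604): the label points to the egress interface."""
    return port_table[frame["label"]], frame["inner"]
```

Note the design choice this models: all packet editing decisions are resolved at ingress, so the spine and egress stages are simple label lookups, matching the one-stage forwarding model defined above.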
This label is programmed on logical router nodes as follows:
On the ingress line unit1504, this label is part of the tunnel encapsulation result to be rewritten on packets punted to the controller (see table1700ofFIG.17on line unit LU−(N+M)).
On the spine unit1502, this label simply switches to the controller fabric-next-hop rewrite with the same egress label (see table1702on spine unit SU-N).
On the controller1500, this label identifies the ingress interface in the host stack (see table1704).
Punted packets need to be injected into the Linux kernel, making it look as if they arrived on the Linux interface corresponding to the front panel port the packet arrived on. On a standalone system, the host path runs in the Linux kernel running on the local CPU of the switch, i.e., line unit1504, which would be the line unit LU−(N+M) in the example ofFIG.17. An ASIC on the line unit1504adds a system header that indicates which ingress interface the packet arrived on. A BCM Knet module in the kernel then maps the hardware ingress interface to the Linux interface and injects the packet into the Linux data path. In the illustrated architecture, the host data path runs in multiple places. On the line unit1504, packets may need to be punted to the BGP LSVR (link state vector routing) instance running on that line unit1504. If the packet is destined to a control plane protocol instance running on the controller1500, then the line unit1504needs to be able to deliver the packet to the controller. Since there is no system header in this path, the ingress interface needs to be identified and encapsulated within the packet itself. As mentioned in the earlier sections, this is achieved using a unique label that identifies the ingress interface. An ACL rule can be used to match on the ingress interface and supply the corresponding label and the subsequent forwarding chain. However, this result needs to be used only when the packet really needs to be sent to the controller1500.
In other cases, the forwarding lookup should drive the encapsulations. FIG.18illustrates an approach for bringing up the standalone backplane fabric according to the approach ofFIGS.15through17. Bringing up the backplane fabric and programming happens automatically on boot-up without any explicit user configuration or intervention such that:
layer-3 (L3) backplane reachability is established across all layer-3 enabled fabric ports within a fabric-VRF
overlay transport tunnels are set up to/from all router-ports across all logical router components: line units1504, spine units1502, and controller1500.
As shown inFIG.18, a method1800for bringing up the backplane fabric may include downloading1802fabric configuration to each unit1500,1502,1504being managed. This may include IP addressing, card roles, port roles, and port-MPLS labels. The method1800further includes bringing up1804L3 addressing on the fabric ports of each unit1500,1502,1504. The method1800may further include bringing up1806LSoE on fabric ports, which includes discovering fabric neighbors and pushing each unit's1500,1502,1504neighbor database acquired in this manner to a BGP-LSVR on the controller1500. The method1800may further include performing1808, by a BGP-SPF instance on each unit1500,1502,1504: bringing up peerings, learning fabric topology, and installing fabric IP routes in the fabric VRF. Auto-bring-up of the layer-3 backplane fabric may be orchestrated according to the explanation below, in which R0 refers to the controller1500. Auto-Configure R0 with a Startup Config: Assume R0 has been imaged and management Ethernet (ma1) is up and addressed.
R0 reads a start-up configuration file (packaged with the image) that has the following:
The topology: spine-units, line-units
Private addressing for its southbound fabric interfaces
MPLS labels for overlay interface tunnels
Management IP address pool for line-unit ma1s
ZTP (zero touch provisioning)/start-up config for line-units and spine-units
Bring-Up Line-Units: R0 brings its southbound fabric interfaces up (spine units1502and line units1504in the topology ofFIGS.15through18) with addressing from the start-up configuration file. R0 runs dhcpd (dynamic host configuration protocol daemon) so line units'1504and spine units'1502management ethernets (ma1s) can get addresses from a pool given in the startup configuration file. The line card numbers for the units1502,1504are assumed to be the R0 port to which they are wired. R0 runs a ZTP service to the units1502,1504. Push Startup Configuration to Line-Units: R0 pushes startup configuration to the line units1504and spine units1502. This configuration identifies a card role for each unit1502,1504; identifies each local port as “fabric-port” or “router-port”; specifies northbound fabric interface addressing; and provides MPLS labels for router-port overlay tunnels (two labels per port). The units1502,1504then run LSoE on fabric ports to make sure they are wired as expected from the startup configuration. LSoE discovers layer-3 fabric neighbors and corresponding encapsulations. The database of information learned by LSoE is exported into BGP-SPF, as per standard LSoE function. BGP-SPF peering is established on each line unit-to-spine unit fabric link. Fabric topology is learned on each unit1502,1504and fabric-VRF IP reachability is established to each routed fabric-port via BGP-SPF computation. BGP-SPF programs each local line-unit/spine-unit RIB (routing information base) with fabric routes within the fabric-VRF. At this point, there is IP reachability across all fabric port IP addresses.
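The boot-time sequence just described — download fabric configuration, bring up L3 on fabric ports, run LSoE discovery, then let BGP-SPF compute routes — can be sketched as an ordered, zero-touch pipeline. The step names below are illustrative labels, not from the source:

```python
# Hypothetical ordering of the method 1800 steps applied to every managed
# unit (controller, spine units, line units). No user input is consumed,
# mirroring the zero-touch bring-up described above.

BRING_UP_STEPS = [
    "download-fabric-config",    # 1802: addressing, card/port roles, MPLS labels
    "bring-up-l3-fabric-ports",  # 1804: L3 addressing on fabric ports
    "run-lsoe",                  # 1806: neighbor discovery, DB pushed to BGP-LSVR
    "bgp-spf",                   # 1808: peerings, topology, fabric-VRF routes
]

def bring_up(units):
    """Apply the steps, in order, to each unit; return a per-unit log."""
    return {unit: list(BRING_UP_STEPS) for unit in units}
```

The ordering matters: LSoE must populate the neighbor database before BGP-SPF can compute the fabric topology, which is why discovery precedes route computation in the list.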
Switch-Port Discovery and Tunnel Bring-Up: Local router ports may be discovered on each line unit1504. Discovered router ports along with assigned MPLS labels are pushed into local BGP-LSVR instances on each line unit1504. BGP-SPF may be enhanced further to be able to carry ports+labels independent of IP addressing. Accordingly, BGP-SPF may be configured to compute a shortest path first (SPF) to each “switch-port” in the logical router. BGP-SPF may also incorporate these external switch-ports into its fabric-VRF topology independent of the user VRF that they are configured in. BGP on each unit1504instantiates ingress/egress overlay MPLS tunnels for each interface that resolve via fabric-VRF next-hops. Tunnel reachability may be resolved via fabric-VRF next-hops and tunnels may be programmed as described earlier with the assigned MPLS label on each unit1504. User configuration on R0 follows the bringing up of the backplane fabric and may be handled on the controller1500. Switch state computed as a result of this user configuration and control plane may be further distributed for programming across some or all of the line units1504.
Example Packet Paths
This section goes over how some common packet paths would work in the system using the data path programming of the control node1500and units1502,1504described in earlier sections.
ARP Resolution
Glean processing on a unit1502,1504is performed by an ingress L3 route lookup on a destination IP address that resolves to an incomplete next-hop or subnet (glean) route that is programmed pointing to a PUNT path. The PUNT path is pre-programmed pointing to an ingress-interface-tunnel to the controller1500. An ingress layer-2 packet is encapsulated with ingress-interface-label+rewrite to fabric-spine-next-hop. The encapsulated packet is transmitted on the fabric port to one of the spine units1502. The spine unit1502terminates the outer layer-2.
An MPLS in-label lookup on the spine unit1502points to ingress-interface-label+rewrite to fabric-controller-next-hop. This information is used to route the packet to the controller1500. The controller terminates the outer layer-2. The controller1500is programmed to perform an MPLS in-label lookup action as POP (label pop) and identifies the ingress interface context. The controller performs an L3 route lookup on the destination IP of the packet and resolves to an incomplete next-hop or subnet (glean) route. The controller1500then delivers the packet using the next-hop or subnet route for ARP resolution with the ingress interface.
ARP Request
The controller1500generates a broadcast ARP request on the ingress L3-interface. The controller L3-interface resolves to an egress-interface-tunnel port. The ARP packet of the broadcast ARP request is encapsulated with egress-interface-label+rewrite to fabric-spine-next-hop. The encapsulated packet is transmitted on the fabric port to one of the spine units1502. The spine unit1502terminates the outer layer-2. An MPLS in-label lookup on the spine unit1502points to egress-interface-label+rewrite to fabric-line-unit-next-hop. The encapsulated packet is transmitted on the fabric port to the egress line unit1504according to the MPLS in-label lookup. The egress line-unit1504terminates the outer layer-2. The egress line unit1504performs an MPLS in-label lookup, resulting in a POP and forward on an egress interface of the egress line unit identified from the MPLS in-label lookup.
ARP Reply
ARP reply packets may be programmed with a PUNT path to the controller1500. The PUNT path is pre-programmed and points to an ingress-interface-tunnel to the controller1500. An ingress L2 ARP packet from a line unit1504may be encapsulated with ingress-interface-label+rewrite to fabric-spine-next-hop according to the PUNT path. The encapsulated packet is transmitted on the fabric port to one of the spine units1502.
The spine unit1502terminates the outer layer-2. An MPLS in-label lookup on the spine unit1502points to ingress-interface-label+rewrite to fabric-controller-next-hop. This information is used to forward the ARP packet to the controller1500. The controller1500terminates the outer layer-2. The controller1500performs an MPLS in-label lookup action and is programmed as POP. The controller1500identifies the ingress interface context according to the lookup action. The inner packet encapsulated in the packet from the line unit1504is identified as an ARP packet and delivered to the ARP module executing on the controller1500, which processes the ARP reply according to the address resolution protocol (ARP).
Ingress LC→Egress LC Routed Packet Walk
The ingress line unit1504performs an ingress L3 route lookup on the destination IP of a packet and resolves to next-hop rewrite, L3-egress-interface, L2-egress-interface-tunnel-port. The packet is re-written with the next-hop rewrite result from the route lookup and VLAN editing derived from the egress L3-interface and L2-port. The resulting layer-2 packet is encapsulated with egress-interface-label+rewrite to fabric-spine-next-hop. The encapsulated packet is transmitted on the fabric port to one of the spine units1502according to the fabric-spine-next-hop. The spine unit1502receives the encapsulated packet, terminates the outer layer-2, and performs an MPLS in-label lookup that points to egress-interface-label+rewrite to fabric-egress-line-unit-next-hop. The spine unit1502transmits the encapsulated packet to the egress line unit1504referenced by the fabric-egress-line-unit-next-hop. The egress line unit1504terminates the outer layer-2, performs an MPLS in-label lookup to obtain POP, and forwards the encapsulated packet on an egress interface of the egress line unit1504referenced by the encapsulated packet.
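The three-stage routed packet walk can be sketched end to end: the ingress line unit's L3 lookup yields the egress-interface label plus a spine next-hop, the spine's in-label lookup keeps the same label while rewriting the fabric next-hop, and the egress line unit pops the label and forwards on the local port. The table contents below (address, label value, port names) are illustrative only, loosely echoing the FIG. 16 example:

```python
def routed_packet_walk(dst_ip, ingress_rib, spine_lfib, egress_lfib):
    """Trace one packet through ingress-LU -> spine -> egress-LU tables."""
    # Ingress LU: L3 route lookup -> next-hop rewrite + egress tunnel label.
    label = ingress_rib[dst_ip]["egress_label"]
    # Spine: MPLS in-label lookup keeps the label, rewrites fabric next-hop.
    assert spine_lfib[label]["action"] == "swap-same"
    # Egress LU: in-label lookup pops and points to the egress interface.
    out = egress_lfib[label]
    assert out["action"] == "pop"
    return out["out_port"]

# Illustrative per-node tables for a single destination:
ingress_rib = {"12.1.1.2": {"egress_label": 100000, "next_hop": "MAC-A"}}
spine_lfib = {100000: {"action": "swap-same", "next_hop": "fabric-LU(N+2)"}}
egress_lfib = {100000: {"action": "pop", "out_port": "swp(N+2)/1"}}
```

Note that only the spine rewrites the outer fabric next-hop; the egress label itself is carried unchanged from ingress line unit to egress line unit, which is what makes it usable as a globally scoped port identifier.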
FIG.19is a block diagram illustrating an example computing device1900which can be used to implement the system and methods disclosed herein, such as a control plane element300, controller1500, or the various elements102,104,1502,1504of the logical router100. Computing device1900may be used to perform various procedures, such as those discussed herein. Computing device1900can function as a server, a client, or any other computing entity. Computing device1900can perform various monitoring functions as discussed herein, and can execute one or more application programs, such as the application programs described herein. Computing device1900can be any of a wide variety of computing devices, such as a desktop computer, a notebook computer, a server computer, a handheld computer, a tablet computer, and the like. Computing device1900includes one or more processor(s)1902, one or more memory device(s)1904, one or more interface(s)1906, one or more mass storage device(s)1908, one or more Input/Output (I/O) device(s)1910, and a display device1930all of which are coupled to a bus1912. Processor(s)1902include one or more processors or controllers that execute instructions stored in memory device(s)1904and/or mass storage device(s)1908. Processor(s)1902may also include various types of computer-readable media, such as cache memory. Memory device(s)1904include various computer-readable media, such as volatile memory (e.g., random access memory (RAM)1914) and/or nonvolatile memory (e.g., read-only memory (ROM)1916). Memory device(s)1904may also include rewritable ROM, such as Flash memory. Mass storage device(s)1908include various computer readable media, such as magnetic tapes, magnetic disks, optical disks, solid-state memory (e.g., Flash memory), and so forth. As shown inFIG.19, a particular mass storage device is a hard disk drive1924. Various drives may also be included in mass storage device(s)1908to enable reading from and/or writing to the various computer readable media.
Mass storage device(s)1908include removable media1926and/or non-removable media. I/O device(s)1910include various devices that allow data and/or other information to be input to or retrieved from computing device1900. Example I/O device(s)1910include cursor control devices, keyboards, keypads, microphones, monitors or other display devices, speakers, printers, network interface cards, modems, lenses, CCDs or other image capture devices, and the like. Display device1930includes any type of device capable of displaying information to one or more users of computing device1900. Examples of display device1930include a monitor, display terminal, video projection device, and the like. Interface(s)1906include various interfaces that allow computing device1900to interact with other systems, devices, or computing environments. Example interface(s)1906include any number of different network interfaces1920, such as interfaces to local area networks (LANs), wide area networks (WANs), wireless networks, and the Internet. Other interface(s) include user interface1918and peripheral device interface1922. The interface(s)1906may also include one or more user interface elements1918. The interface(s)1906may also include one or more peripheral interfaces such as interfaces for printers, pointing devices (mice, track pad, etc.), keyboards, and the like. Bus1912allows processor(s)1902, memory device(s)1904, interface(s)1906, mass storage device(s)1908, and I/O device(s)1910to communicate with one another, as well as other devices or components coupled to bus1912. Bus1912represents one or more of several types of bus structures, such as a system bus, PCI bus, IEEE 1394 bus, USB bus, and so forth. 
For purposes of illustration, programs and other executable program components are shown herein as discrete blocks, although it is understood that such programs and components may reside at various times in different storage components of computing device1900, and are executed by processor(s)1902. Alternatively, the systems and procedures described herein can be implemented in hardware, or a combination of hardware, software, and/or firmware. For example, one or more application specific integrated circuits (ASICs) can be programmed to carry out one or more of the systems and procedures described herein.
11863352
DETAILED DESCRIPTION
In the following detailed description of the invention, numerous details, examples, and embodiments of the invention are set forth and described. However, it will be clear and apparent to one skilled in the art that the invention is not limited to the embodiments set forth and that the invention may be practiced without some of the specific details and examples discussed. Some embodiments of the invention provide a novel network architecture for deploying guest clusters (GCs) including workload machines for a tenant (or other entity) within an availability zone (e.g., a datacenter or set of datacenters providing a set of hardware resources). The novel network architecture includes a virtual private cloud (VPC) deployed in the availability zone (AZ) that includes a centralized VPC gateway router that provides access to an AZ gateway router, or set of gateway routing elements, of the AZ. In some embodiments, the centralized VPC gateway router provides a set of services for packets traversing a boundary of the VPC. The services, in some embodiments, include load balancing, firewall, and quality of service (QoS), and may be stateful or stateless. Guest clusters are deployed within the VPC and use the centralized VPC gateway router of the VPC to access the AZ gateway router. The deployed GCs, in some embodiments, include distributed routing elements that (1) provide access to the centralized VPC router for components of the GC and (2) execute on host computers along with workload machines of the GC. In some embodiments, automated processes are performed to define the virtual private cloud (VPC) connecting a set of machines to a logical network that segregates the set of machines from other machines in the AZ. In some embodiments, the set of machines includes virtual machines and container Pods, the VPC is defined with a supervisor cluster namespace, and the API requests are provided as YAML files.
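The YAML API requests mentioned above are, structurally, nested key/value documents. A hedged sketch of what a namespace-scoped VPC request might carry, shown as the equivalent Python structure — the `kind`, API group, and every field name here are illustrative assumptions, not the actual API schema of the described system:

```python
# Hypothetical request body; only the general Kubernetes manifest shape
# (apiVersion/kind/metadata/spec) is standard.
vpc_request = {
    "apiVersion": "example.com/v1alpha1",  # assumed API group/version
    "kind": "VirtualPrivateCloud",         # illustrative kind name
    "metadata": {"name": "tenant-a-vpc", "namespace": "tenant-a"},
    "spec": {
        "cidr": "10.0.0.0/16",      # address block segregating the tenant
        "gateway": "centralized",   # VPC gateway router for AZ access
        "guestClusters": ["gc-1"],  # GCs deployed inside this VPC
    },
}

def validate(request):
    """Check the minimal fields an automated deployer would need."""
    assert {"apiVersion", "kind", "metadata", "spec"} <= request.keys()
    return request["metadata"]["namespace"], request["spec"]["cidr"]
```

The namespace in the metadata is what ties the request to a supervisor cluster namespace, which — per the description — provides the tenancy boundary.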
In some embodiments, the Pods (container Pods) are hosted in lightweight VMs that in turn execute on a host computer. In other embodiments, the host computers (e.g., worker/master nodes) are lightweight VMs deployed to host Pods of the cluster or other cluster components. The automated processes in some embodiments use templates or preconfigured rules to identify and deploy network elements (e.g., forwarding elements) that implement the logical network without an administrator performing any action to direct the identification and deployment of the network elements after an API request is received. In some embodiments, the deployed network elements include a gateway router for the VPC (called VPC gateway router) to connect the VPC to a network of the AZ and/or to a network external to the datacenter set. The VPC gateway router in some embodiments is implemented by one physical router. In other embodiments, the VPC gateway router is a logical gateway router that is implemented by more than one physical router. For instance, in some embodiments, the logical router is implemented with two physical routers in active/active or active/standby configurations. Also, in some embodiments, the logical router includes (1) a distributed router that is implemented by several router instances on host computers and edge appliances, and (2) a service router that is implemented by one or more service router instances executing on an edge appliance. In some embodiments, the service router is only implemented by the edge appliances and not on the other host computers of the VPC. In some embodiments, the service router provides routing operations and a set of stateful services, while the distributed router provides stateless routing and, in some embodiments, stateless services. In some embodiments, the edge appliances implementing the service router are configured in active/active or active/standby configurations. 
Active/active configurations, in some embodiments, include configurations in which the edge appliances are in an active/standby configuration for each of multiple GCs within the VPC, but each physical router is assigned to be an active service router that executes a service router instance that is assigned to be the active service router for at least one GC of the multiple GCs within the VPC while being a standby for a set of other GCs in the VPC. Because the service router is only implemented on a set of edge appliances and, in some embodiments, only a single service router instance is active for a given GC, the VPC gateway router is sometimes referred to as a centralized VPC gateway router. The VPC gateway router is configured to communicate with a datacenter gateway router to connect to external networks (e.g., other VPCs, or network accessible over the Internet). In some embodiments, the VPC gateway router is configured to perform source network address translation (SNAT) operation to translate internal network addresses used within the VPC to a set of one or more external source network addresses. In some embodiments, the VPC gateway router does not perform SNAT operations for traffic exchanged between the VPC and another VPC that is deployed in the AZ, while in other embodiments it performs such SNAT operations. The VPC gateway is configured to perform load balancing operations, or to work with one or more load balancers to perform load balancing operations, on ingress and/or egress traffic entering and/or exiting the VPC. The load balancing operations in some embodiments are Layer 4 (L4) and/or Layer 7 (L7) load balancing operations. In some embodiments, at least a subset of the deployed machines is deployed through Kubernetes, and the L4/L7 load balancing operations implement the load balancing and ingress services of Kubernetes. 
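The SNAT behavior described above — internal VPC addresses rewritten to a small set of external source addresses at the gateway — can be sketched with a minimal translation table. This is an illustrative model, not the gateway's actual implementation; the external address and port range are assumptions:

```python
import itertools

class SnatTable:
    """Minimal source-NAT sketch: each internal (src_ip, src_port) pair is
    rewritten to one shared external address with a per-connection port,
    and the mapping is remembered so reply traffic can be reversed."""

    def __init__(self, external_ip, first_port=1024):
        self.external_ip = external_ip
        self.ports = itertools.count(first_port)
        self.mappings = {}  # (internal ip, port) -> (external ip, port)

    def translate(self, src_ip, src_port):
        key = (src_ip, src_port)
        if key not in self.mappings:
            self.mappings[key] = (self.external_ip, next(self.ports))
        return self.mappings[key]
```

Keeping the mapping is what makes the operation stateful, consistent with the gateway services above being "stateful or stateless"; a stateless variant would need the reverse mapping to be computable from the packet alone.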
To deploy the network elements, the network control system of some embodiments processes one or more Custom Resource Definitions (CRDs) that define attributes of custom-specified network resources that are referred to by the received API requests. When these API requests are Kubernetes API requests, the CRDs define extensions to the Kubernetes networking requirements. Some embodiments use the following CRDs: Virtual Network Interface (VIF) CRDs, Virtual Network CRDs, Endpoint Group CRDs, security CRDs, Virtual Service Object (VSO) CRDs, and Load Balancer CRDs. A VIF CRD in some embodiments is used to define a virtual interface to connect a non-Kubernetes container Pod or VM to software forwarding elements (e.g., software switches) executing on host computers on which the non-Kubernetes Pods and VMs execute. A Virtual Network CRD in some embodiments is used to define the attributes of a logical sub-network that is to connect a subset of the deployed machines. An Endpoint Group CRD is used to define attributes for grouping heterogeneous or homogeneous sets of machines (i.e., machines of the same or different types). An Endpoint Group CRD provides a simple mechanism for defining a group of machines for accessing a service or compute operation, and/or for providing a service or compute operation. Security CRDs are used to specify security policies for the VPC. For instance, some embodiments use a Security Policy CRD to define security policies for traffic between VPC network endpoints, which can be defined with Endpoint Group CRDs.
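In Kubernetes, each of the custom resource types listed above would be registered through a CustomResourceDefinition object. A hedged sketch of how a Virtual Network CRD could be declared — the group name `example.com` and the `names` are assumptions, not the actual CRD used by the described system; only the `apiextensions.k8s.io/v1` envelope is the standard Kubernetes CRD format:

```python
virtual_network_crd = {
    "apiVersion": "apiextensions.k8s.io/v1",
    "kind": "CustomResourceDefinition",
    "metadata": {"name": "virtualnetworks.example.com"},
    "spec": {
        "group": "example.com",          # assumed API group
        "scope": "Namespaced",           # scoped to the VPC's namespace
        "names": {"plural": "virtualnetworks",
                  "singular": "virtualnetwork",
                  "kind": "VirtualNetwork"},
        "versions": [{"name": "v1", "served": True, "storage": True}],
    },
}

def crd_full_name(crd):
    """Kubernetes requires a CRD's metadata.name to be '<plural>.<group>'."""
    spec = crd["spec"]
    return spec["names"]["plural"] + "." + spec["group"]
```

Once such a CRD is registered, API requests can create `VirtualNetwork` objects that the control system watches and turns into deployed sub-networks.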
Another security CRD in some embodiments is an Admin Policy CRD, which can be used to define security policies for north/south traffic between the VPC and an external network (e.g., from another VPC, from an external IP block, or from outside of the datacenter set in which the VPC is deployed). A VSO CRD is used to expose a service (e.g., a middlebox service or an application tier, such as Web server, AppServer, database server) provided inside of the VPC to machines outside of the VPC or to machines inside of the VPC. In some embodiments, an API that refers to a VSO CRD maps a set of one or more L4 ports and a protocol to an endpoint group of machines for providing the service. Some embodiments use a Load Balancer CRD to define the configuration for a load balancer service. In some embodiments, the API that refers to the VSO CRD also uses the Load Balancer CRD to specify a load balancer service to use for distributing the traffic load among the endpoint group of machines. Several more detailed examples of some embodiments will now be described. In these examples, several of the deployed logical networks are Kubernetes-based logical networks that define virtual private clouds (VPCs) for corporate entities in one or more datacenters. In some embodiments, the VPC is a “supervisor” Kubernetes cluster with a namespace that provides the tenancy boundary for the entity. These embodiments use CRDs to define additional networking constructs and policies that complement the Kubernetes native resources. In some embodiments, the APIs define a cluster of nodes (e.g., a Kubernetes worker node cluster) that includes a set of components that represent a control plane for the cluster and a set of (worker) nodes. In some embodiments, the nodes are host computers that host components of the Kubernetes clusters. The host computers of the cluster, in some embodiments, are physical machines, virtual machines, or a combination of both.
The host computers (i.e., nodes) execute a set of Pods that, in some embodiments, include a set of containers. In some embodiments, a Kubernetes worker node executes an agent that ensures that containers are running within Pods (e.g., a kubelet), a container runtime that is responsible for running containers, and a network proxy (e.g., a kube-proxy). A cluster, in some embodiments, is partitioned into a set of namespaces into which different Pods or containers are deployed. A namespace is further partitioned into separate clusters, in some embodiments, as will be described below. One of ordinary skill will realize that other embodiments define other types of networks for other types of entities, such as other business entities, non-profit organizations, educational entities, etc. In some of these other embodiments, neither Kubernetes nor Kubernetes-based Pods are used. For instance, some embodiments are used to deploy networks for only VMs and/or non-Kubernetes containers/Pods. Additional details of VPC and GC deployment using CRDs can be found in U.S. patent application Ser. No. 16/897,652 filed on Jun. 10, 2020, now published as U.S. Patent Publication 2021/0314239, which is hereby incorporated by reference. As used in this document, data messages refer to a collection of bits in a particular format sent across a network. One of ordinary skill in the art will recognize that the term data message is used in this document to refer to various formatted collections of bits that are sent across a network. The formatting of these bits can be specified by standardized protocols or non-standardized protocols. Examples of data messages following standardized protocols include Ethernet frames, IP packets, TCP segments, UDP datagrams, etc. 
Also, as used in this document, references to L2, L3, L4, and L7 layers (or layer 2, layer 3, layer 4, and layer 7) are references respectively to the second data link layer, the third network layer, the fourth transport layer, and the seventh application layer of the OSI (Open System Interconnection) layer model. Some embodiments configure the logical network for the VPC to connect the deployed set of machines to each other. For instance, in some embodiments, the logical network includes one or more logical forwarding elements, such as logical switches, routers, gateways, etc. In some embodiments, a logical forwarding element (LFE) is defined by configuring several physical forwarding elements (PFEs), some or all of which execute on host computers along with the deployed machines (e.g., VMs and Pods). The PFEs, in some embodiments, are configured to implement two or more LFEs to connect two or more different subsets of deployed machines. In some embodiments, two or more sub-networks are configured for the logical networks. In some embodiments, each sub-network has one or more segments (with each segment implemented by a logical switch), connects a different subset of deployed machines, and provides a set of network elements that satisfy a unique set of connectivity requirements for that subset of machines. For instance, in some embodiments, a first sub-network (e.g., a first logical switch) connects the Kubernetes Pods, while a second sub-network (e.g., a second logical switch) connects VMs and/or non-Kubernetes Pods. Another example is having one sub-network for machines (e.g., VMs, Pods, etc.) that need high-bandwidth, and another sub-network for machines that can tolerate less bandwidth. To deploy some or all of the unique sub-networks, some embodiments use CRDs to define the attributes of the sub-networks, so that these sub-networks can be referred to by the API requests. These CRDs are referred to, in some embodiments, as virtual network CRDs. 
An API that refers to a virtual-network CRD in some embodiments includes a network type value that can be used to define different types of virtual networks. FIG.1illustrates an exemplary virtual private cloud (VPC)110(e.g., a virtual hybrid cloud) configured to include a set of guest clusters (GCs)105that each use a set of service nodes145(e.g., VMs, appliances, containers, etc.) that provide a set of services for machines (master node142and worker nodes144) of the VPC and the set of GCs. The nodes of the VPC, in some embodiments, are connected by a VPC node segment146. Like different VPCs that can be defined for the same entity or different entities (different tenants) in an availability zone, different guest clusters can be defined for a VPC. The different guest clusters in some embodiments include different types of workloads (e.g., compute nodes, containers, etc.). As shown, the set of guest clusters105includes several Kubernetes nodes (e.g., host computers that are part of the guest cluster) on which Pods (not shown) for the cluster execute. The set of nodes includes a set of master nodes120and a set of worker nodes124. In some embodiments, the set of master nodes120includes a Kubernetes API server executing on each master node120to deploy Pods in the guest cluster. In this example, each guest cluster105includes a logical network (i.e., GC node segment126) for connecting the Kubernetes nodes. In some embodiments, the logical network includes multiple network segments defined by a logical switch. The logical network of each guest cluster105connects to the logical VPC gateway router140that connects to the logical (or physical) gateway router150of the availability zone. In some embodiments, the logical VPC gateway router140of the VPC110is similar to the gateway router1282ofFIG.12discussed below. As such, it includes distributed and centralized (service) routing components, with at least two redundant pairs of centralized routing components. 
In some embodiments, the nodes (e.g., host computers) executing machines of each guest cluster105implement the distributed router of logical VPC gateway router140. The VPC110includes a logical network with one or more logical sub-networks each of which has one or more network segments with each network segment defined by a logical switch. In some embodiments, the GC logical network is a sub-network of the VPC logical network. The networks and machines (e.g., VMs, Pods, etc.) of the GC, in some embodiments, use NSX-T native networking. In such embodiments, Pods are placed on NSX-T segments in the GC network. NSX-T container network interfaces (CNIs) are used, in such embodiments, to connect the Pods to the NSX-T native network. In the NSX-T native network, the machines (e.g., Pods and VMs) of the GCs can reach each other through NSX-T distributed switching and routing and GC machines can reach the machines of the VPC network through the NSX-T distributed switching and routing and, in some embodiments, through the centralized routing element of the VPC. GC subnets, in some embodiments, are not exposed outside the VPC. In some embodiments, all traffic forwarding, networking, and security services are implemented by an NSX-T dataplane in hypervisors of host computers hosting machines of the VPC and GC. The Kubernetes network policy, in some embodiments, is implemented by the NSX-T distributed firewall. FIG.2illustrates a guest cluster using NSX-T CNIs to connect service Pods228for a service executing in a set of worker nodes224to a network segment (SDN-created Pod segment232) either known to, or created by, an SDN manager. The SDN-created Pod segment232and the network addresses of the service Pods on the segment232are known to the SDN manager cluster (e.g., an NSX-T management cluster), which allows individual Pods to be directly addressed by a VPC load balancer245.
Accordingly,FIG.2illustrates that packets of a set of packet flows270destined for the load balanced Pods (e.g., servers A) are processed by a load balancer (e.g., a service node)245of the VPC, with different subsets of packet flows (illustrated using different line styles) in the set of packet flows270distributed among any of the Pods228(i.e., Servers A1-An) using logical routing and forwarding operations that, in some embodiments, include logical processing through the VPC T1 router240, the GC node segment226, and the SDN-created Pod segment232. In some embodiments, when non-NSX-T CNIs are used to connect Pods over a virtual network implemented inside a set of worker nodes on which the service Pods (e.g., servers A1-n328) execute, the virtual network (e.g., non-native Pod segment332) will not be known to NSX-T.FIG.3illustrates a guest cluster using such non-NSX-T CNIs. Because the virtual network connecting the Pods (and the network addresses of the Pods on the virtual network) is unknown, some embodiments in which the SDN manager cluster (e.g., NSX-T network manager) populates the load balancer with information regarding load balanced instances identify the worker nodes224(e.g., by using network addresses of the worker nodes) hosting service Pods328as the load-balanced service instances. However, because different worker nodes224, in some embodiments, host different numbers of service Pods for a particular service, load balancing over the worker nodes does not spread the traffic evenly (or with any other desired distribution function). Accordingly, the supervisor namespace (VPC) NCP, in some embodiments, configures the worker nodes224to implement load balancing at the worker nodes. In some embodiments, the VPC NCP configures the worker nodes to implement load balancing using service iptables created, in some embodiments, by a kube-proxy in the worker node to forward the traffic to a particular backend Pod.
The service iptables, or any other configured forwarding/load balancing component, is represented by load balancer336. Load balancer336, in some embodiments, is effectively a distributed load balancer that applies the same rules at each instance of the load balancer336. In other embodiments, different load balancers336executing in different worker nodes224are programmed with different policies or rules for load balancing. A set of packet flows370destined for the load balanced Pods (e.g., servers A) are processed by a load balancer (e.g., a service node)245of the VPC which performs a first load balancing operation to produce subsets of the packet flows371that are directed to the individual worker nodes (e.g., using the IP address of the worker node on the GC node segment226). Once the packets arrive at the worker nodes, the load balancer336(e.g., service iptables) performs a second load balancing operation to distribute the subset of packets received from the load balancer245among the individual service Pods328(e.g., as groups of packets372) based on their network addresses on the non-native Pod segment332that are known to the worker nodes. A load balancing operation performed by one load balancer336is shown for clarity; however, one of ordinary skill in the art will appreciate that each load balancer performs a similar load balancing operation. In some embodiments, a set of service nodes (e.g., service nodes145(e.g., VMs, appliances, containers, etc.)) are a resource shared by the VPC and the GCs within the VPC. In some embodiments, the service nodes are instances of virtual service objects (VSOs) that provide a set of services to the machines of the VPC and are inherited by GCs deployed in the VPC such that the machines of the GCs also receive the set of services from the service nodes145. In some embodiments, the VSOs are associated with endpoint groups for which they provide a service.
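The two-tier distribution described above can be sketched as follows, assuming made-up worker-node and Pod names: the first stage models the VPC load balancer, which only knows worker-node addresses, and the second stage models the node-local service iptables, which knows the local Pod addresses:

```python
import hashlib

def pick(flow_key: str, targets: list) -> str:
    """Deterministically map a flow to one target by hashing the flow key."""
    digest = int(hashlib.sha256(flow_key.encode()).hexdigest(), 16)
    return targets[digest % len(targets)]

def two_stage_lb(flow_key: str, pods_by_node: dict) -> str:
    # First stage: the VPC load balancer only knows worker-node addresses.
    node = pick(flow_key, sorted(pods_by_node))
    # Second stage: the node-local balancer picks among its local Pods.
    return pick(flow_key, sorted(pods_by_node[node]))
```

Because the first stage spreads flows over nodes rather than Pods, a node hosting many Pods and a node hosting one Pod receive similar shares of traffic; the node-local second stage is what ultimately reaches an individual backend Pod.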
Different service nodes are deployed or assigned, in some embodiments, to provide a service or set of services for a particular GC within the VPC. Details of deploying a VSO can be found in U.S. patent application Ser. No. 16/897,652 filed on Jun. 10, 2020. In addition to inheriting the physical resources allocated to the VPC, in some embodiments, the guest clusters also inherit network policies and service definitions. The VPC110also includes a cluster of master nodes142, each of which is similar to the Kubernetes master node1135ofFIG.11. Referring to elements ofFIG.9, in some embodiments, a master node142connects through one of its VNICs to a management network960to communicate with a set of SDN managers962, which in turn communicates with a set of SDN controllers964. The SDN managers/controllers are for deploying and configuring forwarding and service elements for the VPC. Compute elements (e.g., VMs and non-Kubernetes Pods) are deployed through compute managers/controllers966. The NCP for a guest cluster, in some embodiments, creates a port for each pod on an NSX-T segment (i.e., a segment of the GC that uses NSX-T native networking) and reports all Kubernetes contexts (Namespace, Pod name, Namespace labels, Pod labels, Services) of the Pod to a management cluster of NSX-T. From NSX-T API/UI, any NSX-T feature could be enabled and Pod traffic statistics could be viewed on the segment port. More importantly an NSX-T administrator can create dynamic NSGroups (e.g., namespace group) or endpoint groups using the Kubernetes contexts and define security policies between NSGroups (or endpoint groups), and apply other services (IPFix, service insertion, etc.) to the NSGroup. The Kubernetes abstractions and contexts are also exposed to NSX-T Intelligence and Ops UI/API, which provide powerful networking visibility, troubleshooting, and analytic functionalities. 
FIG.4conceptually illustrates a process400for deploying a guest cluster in a virtual private cloud (VPC) namespace. In some embodiments, the process400is performed by a network management system including a compute manager/controller (e.g., compute manager/controller966), a software defined network controller (e.g., SDN controller964), and a software defined network manager (e.g., SDN manager962). The process400begins by deploying (at405) a VPC namespace (e.g., a namespace mapped to a virtual private cloud or a virtual hybrid cloud) in which NSX-T objects will be created. Deploying the VPC includes deploying at least one centralized routing element (e.g., a VPC gateway router) that provides access to a gateway routing element of an availability zone (e.g., a datacenter gateway router). In some embodiments, each centralized routing element includes a centralized service routing element (e.g., a service router) that is implemented at a limited number of centralized gateway routing elements and a distributed routing component that is implemented at each centralized routing element and additional forwarding elements on host computers hosting machines of the VPC (or a guest cluster within the VPC as discussed below). The centralized service routing component, in some embodiments, provides stateful services (e.g., firewall, load balancing, quality of service (QoS), etc.) and is implemented in an active/standby or active/active configuration that ensures that each data message belonging to a particular data message flow is always processed by a same centralized service routing component (service router) instance that stores the state for the particular data message flow. In some embodiments, the centralized service routing component connects to service machines (e.g., service nodes145) that provide a stateful service and directs data messages that require the service (e.g., based on network policies specified for the VPC) to the service machines.
The distributed routing component of the VPC, in some embodiments, performs a set of stateless routing operations. The set of stateless routing operations performed by the distributed routing component, in some embodiments, includes a distributed firewall operation that applies stateless firewall rules to data messages processed by the distributed routing element. The distributed routing element, in some embodiments, executes (is implemented) on each host computer that hosts a machine of the VPC namespace including any guest clusters within the VPC namespace. The firewall rules in some embodiments are defined by a security CRD as described above and in more detail in U.S. patent application Ser. No. 16/897,652. After deploying the namespace, the process400receives (at410) an instruction to deploy a guest cluster (e.g., guest cluster105) within the VPC namespace (e.g., supervisor namespace110). The instruction, in some embodiments, is received at a network manager cluster (e.g., SDN manager962) from a network control system such as the one described below in relation toFIG.11. In some embodiments, the instruction is received as an API request as described below. The API request, in some embodiments, is a portion of a hierarchical API request that included instructions to deploy the VPC namespace and then to deploy the guest cluster (or guest clusters) within the VPC namespace. The instruction to deploy the guest cluster, in some embodiments, includes instructions to deploy components of the guest cluster (e.g., network segments, service Pods, node virtual machines, other Pods, etc.) and to enable a set of services for the guest cluster such as a firewall or load balancer for the service Pods. After the instruction to deploy the guest cluster is received (at410) the process400selects (at415) resources of the VPC namespace to assign to the guest cluster. 
The resources assigned to the guest cluster, in some embodiments, include all or some of IP addresses, service machines, physical compute resources, network (e.g., bandwidth) resources, VPC gateway routing elements, etc. For example, in some embodiments, a particular centralized routing element is selected to be the active centralized routing element for a particular deployed guest cluster. Additionally, or alternatively, a particular set of load balancers or other service machines is selected, in some embodiments, to provide load balancing or other services to a particular deployed guest cluster. By selecting different centralized routing elements (e.g., VPC gateway routers) and sets of service machines for each guest cluster, the load from each guest cluster can be distributed among existing instances of the centralized routing elements and service machines without having to deploy a new centralized routing element and set of service machines each time a guest cluster is deployed. FIGS.5-7illustrate guest clusters using services of the VPC selected in operation415.FIG.5illustrates a supervisor namespace110including a set of guest clusters105. The guest clusters105include sets of worker nodes that host service Pods (not shown) that are serviced by service nodes145(e.g., load balancers, SNAT, Firewalls, etc.) of the VPC gateway router (centralized routing element). The guest clusters105each implement at least one instance of the distributed routing component596(e.g., one DR instance on each host computer hosting a machine of the guest cluster). The worker nodes (e.g., host computers), in some embodiments, also implement sets of logical switches (e.g., network segments) for different groups of machines (e.g., Pods, VMs, etc.)
that connect to the DR component596executing in the same guest cluster which, in turn, connects to the logical switch594connecting the distributed routing component596of the VPC gateway router140to the centralized routing component597of the VPC gateway router140. The guest clusters thus inherit north-south firewall rules that are applied at the centralized routing component of the VPC gateway router140and the distributed firewall applied at the DR596. FIG.6illustrates a VPC610that includes a set of guest clusters605a-605mthat are each assigned a particular service machine (e.g., load balancers645aand645j) in a service machine cluster645. Machines in the VPC are not shown for clarity. Each guest cluster605a-605mofFIG.6accesses availability zone gateway router650through VPC gateway router640. Guest clusters605a-605mconnect to components of the other guest clusters and the VPC through the distributed router of the VPC. In some embodiments, each set of service Pods (e.g.,628) in a GC605has a particular load balancer645selected to load balance for the set of service Pods. InFIG.6, load balancer645ais selected for a set of service Pods (i.e., servers A1-A3628) in guest cluster605aand load balancer645jis selected for a set of service Pods (i.e., servers B1-B3629) in guest cluster605mand a set of service nodes in the VPC. One of ordinary skill in the art will appreciate that, in some embodiments, a set of multiple service machines in the service machine cluster645is selected for at least one GC605and that different sets of service machines in the service machine cluster645are selected for different GCs605. 
A service machine cluster for only one service (i.e., load balancing645) is illustrated for clarity, but one of ordinary skill in the art will appreciate that multiple such service machine clusters may exist in some embodiments and that the selection of a particular service machine in each service machine cluster, in some embodiments, is independent of the selection of a particular service machine in a different service machine cluster. FIG.7illustrates a VPC710that includes a set of multiple VPC gateway routers740a-740kthat are configured in active/standby configuration for each guest cluster705a-705msuch that the set of VPC gateway routers740a-740kis effectively configured in an active/active configuration. VPC gateway router740ais selected for guest cluster705aand the VPC gateway router740kis selected for guest cluster705m. In some embodiments, each VPC gateway router740connects to a same set of service machines, while in other embodiments, each VPC gateway router connects to a different set of service machines. The set of service machines for each VPC gateway router, in some embodiments, is based on the services required for guest clusters for which the VPC gateway router has been selected. FIG.8illustrates a VPC710that includes a set of multiple VPC gateway routers840a-840kthat perform gateway routing for the set of guest clusters705a-705mand the VPC710. For each guest cluster705, a set of VPC gateway routers840is selected and configured in active/active configuration. VPC gateway routers840aand840bare selected as the active/active gateway routers840for guest cluster705a, and the VPC gateway routers840b,840j, and840kare selected as the active/active gateway routers840for guest cluster705m.
In some embodiments, gateway routers840configured as active/active gateway routers exchange any of (1) state information related to stateful services provided at the gateway routers840or (2) information allowing a particular gateway router (e.g.,840b) that receives a packet to identify the gateway router that maintains the state information needed to process the packet. For example, in some embodiments, a consistent hash of header values that are constant for the life of a packet flow is used to identify a (backup) gateway router that stores state information. In other embodiments, a stateful service is provided by a same service node called by each gateway router840for a particular guest cluster; the service node maintains the state information, so the gateway routers do not have to account for the location of the state information. In some embodiments, each VPC gateway router840(or set of gateway routers) connects to a same set of service machines, while in other embodiments, each VPC gateway router connects to a different set of service machines. The set of service machines for each VPC gateway router, in some embodiments, is based on the services required for guest clusters for which the VPC gateway router has been selected. In addition to selecting (at415) resources of the VPC namespace to assign to the guest cluster, the process400updates (at420) policies (e.g., security and network policies) of the VPC namespace based on the addition of the guest cluster. In some embodiments, updating the policies includes adding policies defined for the guest cluster to existing policies of the VPC namespace. For example, based on a set of service pods implemented in the guest cluster and assigned a virtual IP (VIP) address (e.g., by selecting an available VIP of the VPC namespace in operation415), a network policy requiring load balancing for data messages destined to the VIP associated with the set of service pods is added to the set of existing network policies.
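One way to realize the consistent-hash lookup mentioned above is sketched below; the gateway names and the 5-tuple normalization are illustrative. Normalizing the endpoint pairs makes the forward and reverse directions of a flow map to the same owner:

```python
import hashlib

def canonical_flow_key(src: str, dst: str, sport: int, dport: int,
                       proto: str) -> str:
    """Normalize endpoint order so both flow directions share one key."""
    a, b = sorted([(src, sport), (dst, dport)])
    return f"{a}|{b}|{proto}"

def flow_state_owner(flow_key: str, gateways: list) -> str:
    """Consistently pick the gateway that holds the flow's service state."""
    digest = int(hashlib.sha256(flow_key.encode()).hexdigest(), 16)
    return sorted(gateways)[digest % len(gateways)]
```

Because the key depends only on header values that are constant for the life of the flow, any gateway receiving a packet can compute the same owner without exchanging per-flow state.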
In addition to updating a network policy, a firewall based on a security policy may need to be updated based on the addition of the guest cluster. For example, a firewall policy that generates firewall rules for each machine in the VPC based on a source and/or destination address of a data message updates the set of firewall rules with firewall rules for the addresses of the machines in the added guest cluster. If a firewall rule specifies a group of machines, some embodiments add the machines of the guest cluster to the group definition (e.g., either a machine identifier or a VIF of the machine at which the rule should be applied). For north-south firewall rules, new rules are added, in some embodiments, based on an external IP address used by the guest cluster (e.g., based on a source network address translation operation at the edge of the guest cluster or at the centralized routing element of the VPC). Finally, the components of the VPC and the guest cluster(s) within the VPC namespace are configured (at425) to apply the updated policies. In some embodiments, configuring the VPC components includes updating a rule set or group definition as described above. Configuring the guest clusters, in some embodiments, includes identifying the host computers hosting machines of the guest cluster and updating an existing distributed routing component instance to apply the updated rules and implement the network segments of the added guest cluster. Alternatively, in some embodiments, or for host computers that previously did not host components of the VPC, configuring components of the VPC to apply the updated policies includes configuring a forwarding element of a host computer on which a machine of the guest cluster executes to implement the network segments to which the guest cluster machines connect as well as the distributed routing component which applies a set of updated distributed firewall rules. 
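The policy-update and configuration steps (at420and425) can be illustrated with a toy group-based firewall model, in which adding a guest cluster means folding its machine addresses into an existing group definition so that rules referencing the group cover the new machines; the group and rule shapes here are assumptions:

```python
def add_guest_cluster_to_group(groups: dict, group_name: str,
                               gc_addresses: set) -> dict:
    """Return updated group definitions with the cluster's addresses folded in."""
    updated = {name: set(members) for name, members in groups.items()}
    updated.setdefault(group_name, set()).update(gc_addresses)
    return updated

def rule_matches(rule: dict, groups: dict, src_ip: str, dst_ip: str) -> bool:
    """A rule applies when both endpoints fall in its referenced groups."""
    return (src_ip in groups.get(rule["src_group"], set())
            and dst_ip in groups.get(rule["dst_group"], set()))
```

Keeping rules expressed over groups means the rule set itself is unchanged when a guest cluster is added; only the group definitions pushed to the distributed firewall instances need updating.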
Additional details of deploying VPC namespaces and guest clusters are discussed below. In some embodiments, the supervisor cluster (VPC) resources (e.g., network and Kubernetes services, Pods, VMs, worker nodes, etc.) are accessible by the guest cluster machines (e.g., VMs and Pods). This is because the IP addresses of the VPC machines are reachable from the machines of the guest clusters. In some embodiments, the guest cluster network is opaque to the supervisor cluster (VPC) such that the VPC machines cannot address the machines in the GC networks.FIG.9illustrates a more complete logical view of the supervisor namespace (VPC)910and guest clusters905. The VPC910includes a logical VPC gateway router940and a set of service nodes945that provide edge services for VPC910and guest clusters905. The logical gateway router940, in some embodiments, is implemented by multiple physical routing elements as discussed above in relation toFIGS.7and8and service nodes945represent different sets of service nodes945that provide different services. VPC910also includes multiple network segments (e.g., logical switches)947and946that may be scaled out (e.g., by an auto-scaling operation performed by an NCP of a master node942) based on the availability of addresses in the network segment. In some embodiments, multiple different segments are deployed to logically separate machines (Pods, VMs, etc.) with different functions or that belong to different entities of a tenant for which the VPC910is deployed. Each network segment of the VPC910is logically connected to logical gateway router940. The master node942, in some embodiments, is connected to a management network to communicate with the compute manager/controller966to deploy machines and to communicate with the SDN manager962to identify machines in the VPC910(or guest cluster905) network that need to be connected to the SDN network (e.g., an NSX-T network). 
The SDN manager962can communicate with the SDN controller964as described in more detail below in regard toFIG.11. Each guest cluster905includes at least one network segment that connects to the logical gateway router940. As with the VPC network segments946and947, the network segments of the guest cluster may be scaled out (e.g., by an auto-scaling operation performed by an NCP of a master node942) based on the availability of addresses in the network segment. In some embodiments, multiple different segments are deployed to logically separate machines (Pods, VMs, etc.) with different functions or that belong to different entities of a tenant for which the guest cluster905is deployed. FIG.10illustrates a set of physical host computers1015A-E on which machines (e.g., VMs1021and Pods1022) of a VPC1010and machines (VMs1031,1041, and1051and Pods1032,1042, and1052) of GC1-GC3execute. The host computers1015A-E each execute a managed forwarding element (MFE1025A-E) that implements logical switches for logical networks (segments) that span the host computer and executes the distributed router1096. The MFE1025A is the only MFE that executes the centralized routing component in the illustrated embodiment. As can be seen, different sets of host computers1015execute machines (VMs and Pods) of different guest clusters (GC1-GC3) and of different segments (1046,1047,1026a-c,1027a-c, and1028c) of the guest clusters. One of ordinary skill in the art will understand thatFIG.10is merely for illustrative purposes and that many more host computers with more complicated configurations are used in some embodiments. Additionally, although service nodes have been omitted fromFIG.10, they are understood to execute on a set of host computers or appliances and are omitted only for clarity. FIG.11illustrates an example of a control system1100of some embodiments of the invention.
This system1100processes APIs that use the Kubernetes-based declarative model to describe the desired state of (1) the machines to deploy, and (2) the connectivity, security and service operations that are to be performed for the deployed machines (e.g., private and public IP addresses connectivity, load balancing, security policies, etc.). To process these APIs, the control system1100uses one or more CRDs to define some of the resources referenced in the APIs. The system1100performs automated processes to deploy a logical network that connects the deployed machines and segregates these machines from other machines in the datacenter set. The machines are connected to the deployed logical network of a VPC in some embodiments. As shown, the control system1100includes an API processing cluster1105, a software defined network (SDN) manager cluster1110, an SDN controller cluster1115, and compute managers and controllers1117. The API processing cluster1105includes two or more API processing nodes1135, with each node comprising an API processing server1140and a network controller plugin (NCP)1145. The API processing server receives intent-based API calls and parses these calls. In some embodiments, the received API calls are in a declarative, hierarchical Kubernetes format, and may contain multiple different requests. The API processing server1140parses each received intent-based API request into one or more individual requests. When the requests relate to the deployment of machines, the API server provides these requests directly to compute managers and controllers1117, or indirectly provides these requests to the compute managers and controllers1117through an agent running on the Kubernetes master node1135. The compute managers and controllers1117then deploy VMs and/or Pods on host computers in the availability zone. The API calls can also include requests that require network elements to be deployed.
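A minimal sketch of parsing a hierarchical, declarative request into individual requests and routing them either toward the compute managers/controllers or toward the network path might look like this; the request structure and kind names are assumed for illustration:

```python
# Kinds treated as compute requests; everything else follows the network path.
COMPUTE_KINDS = {"VirtualMachine", "Pod"}

def flatten(request: dict) -> list:
    """Depth-first walk of a hierarchical request, yielding (kind, spec) pairs."""
    out = [(request["kind"], request.get("spec", {}))]
    for child in request.get("children", []):
        out.extend(flatten(child))
    return out

def route(requests: list) -> dict:
    """Split individual requests between the compute and network paths."""
    routed = {"compute": [], "network": []}
    for kind, _spec in requests:
        routed["compute" if kind in COMPUTE_KINDS else "network"].append(kind)
    return routed
```

The depth-first flattening mirrors how a single hierarchical API call can carry a namespace, the guest clusters inside it, and the machines inside those clusters, each becoming an individual request once parsed.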
In some embodiments, these requests explicitly identify the network elements to deploy, while in other embodiments the requests can also implicitly identify these network elements by requesting the deployment of compute constructs (e.g., compute clusters, containers, etc.) for which network elements have to be defined by default. As further described below, the control system1100uses the NCP1145to identify the network elements that need to be deployed, and to direct the deployment of these network elements. In some embodiments, the API calls refer to extended resources that are not defined per se by Kubernetes. For these references, the API processing server1140uses one or more CRDs1120to interpret the references in the API calls to the extended resources. As mentioned above, the CRDs in some embodiments include the VIF, Virtual Network, Endpoint Group, Security Policy, Admin Policy, and Load Balancer and VSO CRDs. In some embodiments, the CRDs are provided to the API processing server in one stream with the API calls. NCP1145is the interface between the API server1140and the SDN manager cluster1110that manages the network elements that serve as the forwarding elements (e.g., switches, routers, bridges, etc.) and service elements (e.g., firewalls, load balancers, etc.) in an availability zone. The SDN manager cluster1110directs the SDN controller cluster1115to configure the network elements to implement the desired forwarding elements and/or service elements (e.g., logical forwarding elements and logical service elements) of one or more logical networks. As further described below, the SDN controller cluster interacts with local controllers on host computers and edge gateways to configure the network elements in some embodiments. 
In some embodiments, NCP1145registers for event notifications with the API server1140, e.g., sets up a long-pull session with the API server to receive all CRUD (Create, Read, Update and Delete) events for various CRDs that are defined for networking. In some embodiments, the API server1140is a Kubernetes master node, and the NCP1145runs in this node as a Pod. NCP1145in some embodiments collects realization data from the SDN resources for the CRDs and provides this realization data as it relates to the CRD status. In some embodiments, NCP1145processes the parsed API requests relating to VIFs, virtual networks, load balancers, endpoint groups, security policies, and VSOs, to direct the SDN manager cluster1110to implement (1) the VIFs needed to connect VMs and Pods to forwarding elements on host computers, (2) virtual networks to implement different segments of a logical network of the VPC (or of GCs within the VPC), (3) load balancers to distribute the traffic load to endpoint machines, (4) firewalls to implement security and admin policies, and (5) exposed ports to access services provided by a set of machines in the VPC to machines outside and inside of the VPC. The API server provides the CRDs that have been defined for these extended network constructs to the NCP for it to process the APIs that refer to the corresponding network constructs. The API server also provides configuration data from the configuration storage1125to the NCP1145. The configuration data in some embodiments include parameters that adjust the pre-defined template rules that the NCP follows to perform its automated processes. The NCP performs these automated processes to execute the received API requests in order to direct the SDN manager cluster1110to deploy the network elements for the VPC. For a received API, the control system1100performs one or more automated processes to identify and deploy one or more network elements that are used to implement the logical network for a VPC.
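The NCP's event-driven behavior can be sketched as a small dispatcher that routes CRUD events from a watch stream to per-kind handlers; the event shape and kind names are assumptions, and a real NCP would translate each event into SDN manager API calls rather than simply invoking callbacks:

```python
class EventDispatcher:
    """Route CRUD events from a watch stream to handlers registered per kind."""

    def __init__(self):
        self.handlers = {}

    def register(self, kind: str, handler) -> None:
        self.handlers.setdefault(kind, []).append(handler)

    def dispatch(self, event: dict) -> list:
        # Each handler receives the operation (e.g., "CREATE") and the object.
        return [h(event["op"], event["object"])
                for h in self.handlers.get(event["kind"], [])]
```

Registering one handler per CRD kind keeps the processing of VIFs, virtual networks, load balancers, and the other extended resources independent of one another.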
The control system performs these automated processes without an administrator performing any action to direct the identification and deployment of the network elements after an API request is received. The SDN managers1110and controllers1115can be any SDN managers and controllers available today. In some embodiments, these managers and controllers are the NSX-T managers and controllers licensed by VMware Inc. In such embodiments, NCP1145detects network events by processing the data supplied by its corresponding API server1140, and uses NSX-T APIs to direct the NSX-T manager1110to deploy and/or modify NSX-T network constructs needed to implement the network state expressed by the API calls. The communication between the NCP and NSX-T manager1110is asynchronous communication, in which NCP provides the desired state to NSX-T managers, which then relay the desired state to the NSX-T controllers to compute and disseminate the state asynchronously to the host computer, forwarding elements and service nodes in the availability zone (i.e., to the SDDC set controlled by the controllers1115). After receiving the APIs from the NCPs1145, the SDN managers1110in some embodiments direct the SDN controllers1115to configure the network elements to implement the network state expressed by the API calls. In some embodiments, the SDN controllers serve as the central control plane (CCP) of the control system1100.FIG.12depicts the SDN controllers1115acting as the CCP computing high level configuration data (e.g., port configuration, policies, forwarding tables, service tables, etc.). In such capacity, the SDN controllers1115push the high-level configuration data to the local control plane (LCP) agents1220on host computers1205, LCP agents1225on edge appliances1210and TOR (top-of-rack) agents1230of TOR switches1215. 
Based on the received configuration data, the LCP agents1220on the host computers1205configure one or more software switches1250and software routers1255to implement distributed logical switches, routers, bridges and/or service nodes (e.g., service VMs or hypervisor service engines) of one or more logical networks with the corresponding switches and routers on other host computers1205, edge appliances1210, and TOR switches1215. On the edge appliances, the LCP agents1225configure packet processing stages1270of these appliances to implement the logical switches, routers, bridges and/or service nodes of one or more logical networks along with the corresponding switches and routers on other host computers1205, edge appliances1210, and TOR switches1215. For the TORs1215, the TOR agents1230configure one or more configuration tables1275of TOR switches1215through an OVSdb server1240. The data in the configuration tables then is used to configure the hardware ASIC packet-processing pipelines1280to perform the desired forwarding operations to implement the desired logical switching, routing, bridging and service operations. U.S. Pat. Nos. 10,554,484, 10,250,553, 9,847,938, and 9,178,833 describe CCPs, LCPs and TOR agents in more detail, and are incorporated herein by reference. After the host computers1205are configured along with the edge appliances1210and/or TOR switches1215, they can implement one or more logical networks, with each logical network segregating the machines and network traffic of the entity for which it is deployed from the machines and network traffic of other entities in the same availability zone.FIG.12illustrates an example of a logical network1295that defines a VPC for one entity, such as one corporation in a multi-tenant public datacenter, or one department of one corporation in a private datacenter. 
As shown, the logical network1295includes multiple logical switches1284with each logical switch connecting different sets of machines and serving as a different network segment. In some embodiments, the different logical switches belong to different guest clusters. Each logical switch has a port1252that connects with (i.e., is associated with) a virtual interface1265of a machine1260. The machines1260in some embodiments include VMs and Pods, with each Pod having one or more containers. The logical network1295also includes a logical router1282that connects the different network segments defined by the different logical switches1284. In some embodiments, the logical router1282serves as a gateway for the deployed VPC inFIG.12. In some embodiments, the logical router1282includes distributed routing components1296and centralized routing components1297. The distributed routing components in some embodiments are implemented by the routing instances that execute on the host computers and edge appliances, while the central routing components1297are implemented by the edge appliances1210. Each centralized routing component performs one or more services1291or is associated with one or more middlebox service nodes that perform one or more services. As such, the centralized routing components are referred to as service routers in some embodiments. In some embodiments, the centralized and distributed routing components connect through a logical switch1294defined on the host computers1205and the edge appliances1210. Also, in some embodiments, the logical router is implemented by a pair of logical nodes1299, with each node having centralized and distributed components. The pair of nodes can be configured to perform in active/active or active/standby modes in some embodiments. U.S. Pat. No. 9,787,605 describes the gateway implementation of some embodiments in more detail and is incorporated herein by reference.
FIG. 13 conceptually illustrates a process 1300 for deploying a VPC for an entity. In some embodiments, the NCP 1145 directs the SDN managers and controllers to perform this process. In some embodiments, the process 1300 starts when the NCP 1145 receives an API request that requires a new VPC to be deployed. Such an API request in some embodiments might be a request to create a new logical network for a new or existing entity in an availability zone. As shown, the process 1300 initially allocates (at 1305) an IP subnet for the VPC. In some embodiments, the VPC is part of a supervisor cluster (or namespace) that is a single routing domain with a corresponding IP CIDR (Classless Inter-Domain Routing) block that specifies a range of IP addresses internal to the availability zone. The allocated IP subnet in some embodiments is a subnet from this IP CIDR. In conjunction with the allocated IP addresses, the process in some embodiments allocates MAC addresses for virtual interfaces of the VPC. In some embodiments, the VPC is a virtual hybrid cloud (VHC) implemented in a single namespace in the supervisor cluster. Next, at 1310, the process defines a gateway router for the VPC, and associates this gateway router with one or more of the allocated internal IP addresses. These associated addresses are addresses used by the VPC switches and routers to reach the gateway. FIG. 14 illustrates an example of a VPC 1400 with a gateway router 1282. In some embodiments, the gateway router 1282 is a logical router that has distributed and centralized components, and/or is implemented as a pair of active/active or active/standby routers, as described above. For example, the VPC gateway router 1282, in some embodiments, is an NSX-T Tier 1 (T1) router that provides centralized SNAT and load balancing services, and a north-south firewall service.
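The subnet-allocation step (at 1305) can be sketched with Python's `ipaddress` module. The /16 supervisor-cluster CIDR and the /24 per-VPC prefix below are illustrative assumptions:

```python
import ipaddress

# Hypothetical supervisor-cluster CIDR; each new VPC receives the next
# free /24 subnet carved out of this single routing domain.
CLUSTER_CIDR = ipaddress.ip_network("10.244.0.0/16")

class SubnetAllocator:
    def __init__(self, cidr, prefix=24):
        # Generator over all subnets of the requested prefix length.
        self._pool = cidr.subnets(new_prefix=prefix)

    def allocate(self):
        """Return the next unused subnet for a newly deployed VPC."""
        return next(self._pool)

allocator = SubnetAllocator(CLUSTER_CIDR)
vpc1_subnet = allocator.allocate()  # first VPC's subnet
vpc2_subnet = allocator.allocate()  # second VPC's subnet
```

Every allocated subnet lies inside the cluster CIDR, which keeps the supervisor cluster a single routing domain as the text describes.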
In some embodiments, the VPC gateway router 1282 is configured to connect the VPC with one or more gateway routers 1405 of the availability zone (i.e., of the SDDC set that contains the VPC), in order to connect to a network external to the availability zone. Also, in some embodiments, the VPC gateway router 1282 is configured to communicate with a datacenter gateway router 1405 to connect the VPC gateway 1282 to another VPC gateway of another VPC in order to connect the two VPCs to each other. In some embodiments, the VPC gateway router 1282 is configured to forward packets directly to the gateway routers (not shown) of the other VPCs. In some embodiments, the VPC gateway router 1282 is traversed for cross-namespace traffic, and firewall rules (including admin policies and Kubernetes network policies on the namespace) are applied to the cross-namespace traffic. However, since Kubernetes expects a single routing domain for the whole cluster (supervisor namespace, or VPC), SNAT will not be applied to cross-namespace traffic, but only to the traffic to the external network. At 1315, the process defines a segment of a logical network that it defines for the VPC and allocates a range of IP addresses to this segment. In some embodiments, this allocated range is a contiguous range, while in other embodiments it is not (i.e., the allocated IP addresses in these embodiments are not necessarily sequential). In some embodiments, the defined logical network segment includes a logical switch that is defined to connect a particular set of machines (e.g., VMs and/or Pods). FIG. 14 illustrates an example of a logical switch 1284 that belongs to one logical network segment. As mentioned above, the VPC logical network in some embodiments includes one or more logical forwarding elements, such as logical switches, routers, gateways, etc.
In some embodiments, the SDN controller 1115 implements the logical network by configuring several physical forwarding elements (such as software and hardware switches, routers, bridges, etc.) on host computers, edge appliances, and TOR switches to implement one or more logical forwarding elements (LFEs). As further described below, the control system in some embodiments configures the PFEs to implement two or more LFEs to connect two or more different subsets of deployed machines that are in two or more sub-networks of the logical networks. In some embodiments, each sub-network can have one or more segments (with each segment implemented by a logical switch), connect a different subset of deployed machines, and provide a set of network elements that satisfies a unique set of connectivity requirements for that subset of machines. For instance, in some embodiments, a first sub-network (e.g., a first logical switch) connects the Kubernetes Pods, while a second sub-network (e.g., a second logical switch) connects VMs. In other embodiments, one sub-network is for VMs needing high bandwidth, while another sub-network is for regular VMs. Additional examples are provided in U.S. patent application Ser. No. 16/897,652 filed on Jun. 10, 2020. Some sub-networks of a VPC's logical network in some embodiments can have their own sub-network gateway router. If the sub-network for the segment defined at 1315 has such a sub-network router, the process 1300 defines (at 1320) the sub-network router for the logical network segment. As further described below, the sub-network routers in some embodiments can be configured to forward packets to the VPC gateway router (e.g., router 1282) or the availability-zone router (e.g., router 1405). FIG. 14 illustrates an example of a sub-network router 1410 with which the logical switch 1284 and the VPC gateway router 1282 are configured to communicate.
In some embodiments, the sub-network router 1410 is a distributed router implemented by software routers 1255 executed on host computers. FIG. 14 uses dashed lines to illustrate the sub-network router 1410 and its connections to the logical switch 1284 and the VPC gateway 1282, in order to signify that the sub-network router 1410 might not be deployed for each sub-network of the VPC logical network. This point is further described in U.S. patent application Ser. No. 16/897,652 filed on Jun. 10, 2020. When a sub-network router is used for a sub-network, all logical switches within the sub-network are connected to the sub-network router (e.g., router 1410) and not the VPC router (e.g., router 1282) in some embodiments. At 1325, the process 1300 configures the VPC gateway to connect to the availability-zone gateway and to perform source network address translation (SNAT) operations. For instance, in some embodiments, the process configures the VPC gateway 1282 with forwarding rules for the gateway to use to forward certain data message flows to the availability-zone gateway 1405. Also, in some embodiments, the VPC gateway router 1282 is configured to perform SNAT operations to translate internal network addresses used within the VPC to a set of one or more external source network addresses, and to perform the reverse SNAT operations. The external source network addresses in some embodiments are addresses within the availability zone. In some embodiments, the VPC gateway router 1282 does not perform SNAT operations for traffic exchanged between its VPC and another VPC that is deployed in the same availability zone, while in other embodiments, it performs such SNAT operations for some or all of the other VPCs. In some embodiments, the VPC gateway 1282 is configured to perform other service operations or to use service engines/appliances to perform such other service operations.
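The SNAT behavior configured at 1325 — translating only traffic bound for the external network while leaving intra-zone (e.g., cross-namespace) traffic untranslated — can be sketched as a simple decision function. All addresses below are hypothetical:

```python
import ipaddress

# Hypothetical address spaces for illustration only.
ZONE_CIDR = ipaddress.ip_network("10.244.0.0/16")       # whole availability zone
EXTERNAL_SNAT_IP = ipaddress.ip_address("203.0.113.7")  # external source address

def egress_source(src_ip, dst_ip):
    """Return the source address the VPC gateway uses for an egress packet.

    Traffic that stays inside the availability zone (cross-VPC /
    cross-namespace flows) keeps its internal source address; only
    traffic bound for the external network is source-translated.
    """
    src = ipaddress.ip_address(src_ip)
    dst = ipaddress.ip_address(dst_ip)
    if dst in ZONE_CIDR:        # intra-zone: no SNAT applied
        return src
    return EXTERNAL_SNAT_IP     # external: translate to the external address
```

A production gateway would of course track per-flow state so the reverse SNAT operation can restore the internal address on return traffic.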
For such embodiments, the process 1300 configures (at 1330) the VPC gateway to perform other service operations (e.g., load balancing operations, firewall operations, etc.) or to forward data messages to service engines/appliances to perform such other service operations. In some embodiments, the VPC gateway is configured to perform service operations and/or forward data messages to service engines/appliances to perform such service operations, but this configuration is not part of the process 1300 when the VPC gateway is deployed, and instead is part of another process that is performed subsequently (e.g., upon deployment of machines in the VPC that perform certain services or applications). In FIG. 14, the VPC gateway 1282 is configured to forward data message flows to a cluster of one or more load balancers 1415 to perform load balancing operations on ingress and/or egress traffic entering and/or exiting the VPC. The load balancing operations in some embodiments are L4 and/or L7 load balancing operations. In some embodiments, at least a subset of the deployed machines is deployed through Kubernetes, and the L4/L7 load balancing operations implement the load balancing and ingress services of Kubernetes. The VPC gateway in some embodiments performs some or all of such load balancing operations itself. Examples of gateways with load balancing ability are described in U.S. Pat. Nos. 9,787,605 and 10,084,726, which are incorporated herein by reference. The process 1300 ends after 1330. Resources allocated to the VPC, in some embodiments, are inherited by the guest clusters such that the guest clusters use the resources allocated to the VPC. In some embodiments, the resources include processing resources, storage resources, and network resources (e.g., IP addresses assigned to the VPC, bandwidth allocated to the centralized routing element of the VPC, etc.).
Sharing resources, in some embodiments, allows for more efficient use of the allocated resources of the VPC and the GCs within the VPC by avoiding overallocation of resources to the individual GCs or the VPC. Resources can be allocated based on an average utilization of the set of VPC and GC resources: the variability of the resource needs is reduced by the greater number of clusters, such that the total load is more likely to fall within a smaller range of the average and, accordingly, a smaller percentage of overallocation is expected to provide sufficient resources for most situations. Additionally, the automated deployment described herein and in U.S. patent application Ser. No. 16/897,652 simplifies the work of a system administrator, who does not need to allocate resources to each workload machine or guest cluster separately. FIG. 15 illustrates an example of firewall rules 1505 and load balancing rules 1510 that are defined in terms of endpoint groups. These rules are processed by a firewall engine 1520 and a load balancing engine 1525 executing on a host computer and/or edge appliance. In this example, the endpoint groups are used to define one or more match classification attributes of some or all of the firewall rules 1505 (e.g., the destination IP field of the firewall rule). As further described in U.S. patent application Ser. No. 16/897,652, some embodiments define each member of an endpoint group in terms of a port address as well as an IP address. In such embodiments, the endpoint group's associated IP and port addresses can be used to define source and/or destination IP and port values of service rules (e.g., firewall rules or other middlebox service rules) that are processed by middlebox service engines to perform middlebox service operations.
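The endpoint-group-based rules of FIG. 15 can be sketched as a lookup from group membership (IP and port pairs) to a rule action. The group names, members, and actions below are illustrative assumptions:

```python
# Hypothetical endpoint groups: sets of (IP, port) members, as described
# for embodiments that define members by both IP and port address.
endpoint_groups = {
    "gc1-web": {("10.244.1.10", 443), ("10.244.1.11", 443)},
    "gc2-db":  {("10.244.2.20", 5432)},
}

# Firewall rules name an endpoint group as their destination match attribute.
firewall_rules = [
    {"dst_group": "gc1-web", "action": "allow"},
    {"dst_group": "gc2-db", "action": "drop"},
]

def evaluate(dst_ip, dst_port, default="drop"):
    """Return the action of the first rule whose endpoint group matches."""
    for rule in firewall_rules:
        if (dst_ip, dst_port) in endpoint_groups[rule["dst_group"]]:
            return rule["action"]
    return default

# Adding a new guest-cluster machine is just a group-membership update;
# the rules themselves stay unchanged.
endpoint_groups["gc1-web"].add(("10.244.3.30", 443))
```

This is why adding guest-cluster machines to endpoint groups, as described below, takes effect without rewriting any firewall rules.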
As new guest clusters are added to a VPC, some embodiments add guest cluster machines as members of the endpoint groups (e.g., add the IP addresses of the GC machines to the endpoint group definition) based on the security or network policies defined for the VPC, the guest cluster, or both the VPC and the guest cluster. Many of the above-described features and applications are implemented as software processes that are specified as a set of instructions recorded on a computer readable storage medium (also referred to as computer readable medium). When these instructions are executed by one or more processing unit(s) (e.g., one or more processors, cores of processors, or other processing units), they cause the processing unit(s) to perform the actions indicated in the instructions. Examples of computer readable media include, but are not limited to, CD-ROMs, flash drives, RAM chips, hard drives, EPROMs, etc. The computer readable media does not include carrier waves and electronic signals passing wirelessly or over wired connections. In this specification, the term "software" is meant to include firmware residing in read-only memory or applications stored in magnetic storage, which can be read into memory for processing by a processor. Also, in some embodiments, multiple software inventions can be implemented as sub-parts of a larger program while remaining distinct software inventions. In some embodiments, multiple software inventions can also be implemented as separate programs. Finally, any combination of separate programs that together implement a software invention described here is within the scope of the invention. In some embodiments, the software programs, when installed to operate on one or more electronic systems, define one or more specific machine implementations that execute and perform the operations of the software programs. FIG. 16 conceptually illustrates a computer system 1600 with which some embodiments of the invention are implemented.
The computer system 1600 can be used to implement any of the above-described hosts, controllers, and managers. As such, it can be used to execute any of the above described processes. This computer system includes various types of non-transitory machine readable media and interfaces for various other types of machine readable media. Computer system 1600 includes a bus 1605, processing unit(s) 1610, a system memory 1625, a read-only memory 1630, a permanent storage device 1635, input devices 1640, and output devices 1645. The bus 1605 collectively represents all system, peripheral, and chipset buses that communicatively connect the numerous internal devices of the computer system 1600. For instance, the bus 1605 communicatively connects the processing unit(s) 1610 with the read-only memory 1630, the system memory 1625, and the permanent storage device 1635. From these various memory units, the processing unit(s) 1610 retrieve instructions to execute and data to process in order to execute the processes of the invention. The processing unit(s) may be a single processor or a multi-core processor in different embodiments. The read-only memory (ROM) 1630 stores static data and instructions that are needed by the processing unit(s) 1610 and other modules of the computer system. The permanent storage device 1635, on the other hand, is a read-and-write memory device. This device is a non-volatile memory unit that stores instructions and data even when the computer system 1600 is off. Some embodiments of the invention use a mass-storage device (such as a magnetic or optical disk and its corresponding disk drive) as the permanent storage device 1635. Other embodiments use a removable storage device (such as a floppy disk, flash drive, etc.) as the permanent storage device. Like the permanent storage device 1635, the system memory 1625 is a read-and-write memory device. However, unlike storage device 1635, the system memory is a volatile read-and-write memory, such as random access memory.
The system memory stores some of the instructions and data that the processor needs at runtime. In some embodiments, the invention's processes are stored in the system memory 1625, the permanent storage device 1635, and/or the read-only memory 1630. From these various memory units, the processing unit(s) 1610 retrieve instructions to execute and data to process in order to execute the processes of some embodiments. The bus 1605 also connects to the input and output devices 1640 and 1645. The input devices enable the user to communicate information and select requests to the computer system. The input devices 1640 include alphanumeric keyboards and pointing devices (also called "cursor control devices"). The output devices 1645 display images generated by the computer system. The output devices include printers and display devices, such as cathode ray tubes (CRT) or liquid crystal displays (LCD). Some embodiments include devices such as touchscreens that function as both input and output devices. Finally, as shown in FIG. 16, bus 1605 also couples computer system 1600 to a network 1665 through a network adapter (not shown). In this manner, the computer can be a part of a network of computers (such as a local area network ("LAN"), a wide area network ("WAN"), or an Intranet), or a network of networks (such as the Internet). Any or all components of computer system 1600 may be used in conjunction with the invention. Some embodiments include electronic components, such as microprocessors, that store computer program instructions in a machine-readable or computer-readable medium (alternatively referred to as computer-readable storage media, machine-readable media, or machine-readable storage media).
Some examples of such computer-readable media include RAM, ROM, read-only compact discs (CD-ROM), recordable compact discs (CD-R), rewritable compact discs (CD-RW), read-only digital versatile discs (e.g., DVD-ROM, dual-layer DVD-ROM), a variety of recordable/rewritable DVDs (e.g., DVD-RAM, DVD−RW, DVD+RW, etc.), flash memory (e.g., SD cards, mini-SD cards, micro-SD cards, etc.), magnetic and/or solid state hard drives, read-only and recordable Blu-Ray® discs, ultra-density optical discs, any other optical or magnetic media, and floppy disks. The computer-readable media may store a computer program that is executable by at least one processing unit and includes sets of instructions for performing various operations. Examples of computer programs or computer code include machine code, such as is produced by a compiler, and files including higher-level code that are executed by a computer, an electronic component, or a microprocessor using an interpreter. While the above discussion primarily refers to microprocessor or multi-core processors that execute software, some embodiments are performed by one or more integrated circuits, such as application specific integrated circuits (ASICs) or field programmable gate arrays (FPGAs). In some embodiments, such integrated circuits execute instructions that are stored on the circuit itself. As used in this specification, the terms “computer”, “server”, “processor”, and “memory” all refer to electronic or other technological devices. These terms exclude people or groups of people. For the purposes of the specification, the terms “display” or “displaying” mean displaying on an electronic device. As used in this specification, the terms “computer readable medium,” “computer readable media,” and “machine readable medium” are entirely restricted to tangible, physical objects that store information in a form that is readable by a computer. 
These terms exclude any wireless signals, wired download signals, and any other ephemeral or transitory signals. While the invention has been described with reference to numerous specific details, one of ordinary skill in the art will recognize that the invention can be embodied in other specific forms without departing from the spirit of the invention. Several embodiments were described above that use certain CRDs. One of ordinary skill will realize that other embodiments use other types of CRDs. For instance, some embodiments use an LB monitor CRD so that load balancing monitors can be created through APIs that refer to such a CRD. LB monitors in some embodiments provide statistics to reflect the usage and overall health of the load balancers. Also, while several examples above refer to container Pods, other embodiments use containers outside of Pods. Thus, one of ordinary skill in the art would understand that the invention is not to be limited by the foregoing illustrative details, but rather is to be defined by the appended claims.
Throughout the drawings, the same or similar reference numerals represent the same or similar elements. DETAILED DESCRIPTION Principles of the present disclosure will now be described with reference to some example embodiments. It is to be understood that these embodiments are described only for the purpose of illustration and to help those skilled in the art understand and implement the present disclosure, without suggesting any limitation as to the scope of the disclosure. The disclosure described herein can be implemented in various manners other than the ones described below. In the following description and claims, unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs. As used herein, the term "transmitter" refers to a device capable of transmitting a signal. As used herein, the term "receiver" refers to a device capable of receiving a signal. The transmitter or receiver may be implemented by or as a part of any suitable device, including, for example, a network device or a terminal device. As used herein, the term "network device" refers to any suitable device at a network side of a communication network. The network device may include any suitable device in an access network of the communication network, for example, including a base station (BS), a relay, an access point (AP), a node B (NodeB or NB), an evolved NodeB (eNodeB or eNB), a gigabit NodeB (gNB), a remote radio unit (RRU), a radio head (RH), a remote radio head (RRH), a low power node such as a femto, a pico, and the like. As used herein, the term "terminal device" refers to a device capable of, configured for, arranged for, and/or operable for communications with a network device or a further terminal device in a communication network.
The communications may involve transmitting and/or receiving wireless signals using electromagnetic signals, radio waves, infrared signals, and/or other types of signals suitable for conveying information over air. In some embodiments, the terminal device may be configured to transmit and/or receive information without direct human interaction. For example, the terminal device may transmit information to the network device on predetermined schedules, when triggered by an internal or external event, or in response to requests from the network side. Examples of the terminal device include, but are not limited to, user equipment (UE) such as smart phones, wireless-enabled tablet computers, laptop-embedded equipment (LEE), laptop-mounted equipment (LME), and/or wireless customer-premises equipment (CPE). For the purpose of discussion, some embodiments will be described with reference to UEs as examples of the terminal devices, and the terms "terminal device" and "user equipment" (UE) may be used interchangeably in the context of the present disclosure. As used herein, the term "circuitry" may refer to one or more or all of the following: (a) hardware-only circuit implementations (such as implementations in only analog and/or digital circuitry); (b) combinations of hardware circuits and software, such as (as applicable): (i) a combination of analog and/or digital hardware circuit(s) with software/firmware and (ii) any portions of hardware processor(s) with software (including digital signal processor(s)), software, and memory(ies) that work together to cause an apparatus, such as a mobile phone or server, to perform various functions; and (c) hardware circuit(s) and/or processor(s), such as a microprocessor(s) or a portion of a microprocessor(s), that requires software (e.g., firmware) for operation, but the software may not be present when it is not needed for operation.
This definition of circuitry applies to all uses of this term in this application, including in any claims. As a further example, as used in this application, the term circuitry also covers an implementation of merely a hardware circuit or processor (or multiple processors) or a portion of a hardware circuit or processor and its (or their) accompanying software and/or firmware. The term circuitry also covers, for example and if applicable to the particular claim element, a baseband integrated circuit or processor integrated circuit for a mobile device, or a similar integrated circuit in a server, a cellular network device, or other computing or network device. As used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. The term "includes" and its variants are to be read as open terms that mean "includes, but is not limited to". The term "based on" is to be read as "based at least in part on". The terms "one embodiment" and "an embodiment" are to be read as "at least one embodiment". The term "another embodiment" is to be read as "at least one other embodiment". Other definitions, explicit and implicit, may be included below. Although the terms "first", "second" and the like may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first element could be termed a second element, and similarly, a second element could be termed a first element, without departing from the scope of example embodiments. As used herein, the term "and/or" includes any and all combinations of one or more of the listed terms. As described above, in mMIMO systems, the accuracy of the channel estimation becomes more important. A very long pilot sequence may be required to improve the quality of channel estimates, such as estimates of the channel coefficients.
Semi-blind channel estimation may be used to improve the accuracy of the channel estimation while achieving high spectral efficiency, thereby allowing 5G critical use cases. In semi-blind channel estimation, in addition to the pilot symbols, unknown data symbols may also be used. As such, the number of required pilot symbols may be reduced while the accuracy can still be higher. As a result, the accuracy of the channel estimation and the length of the pilot sequence can be balanced. An example process of the semi-blind channel estimation will be discussed below. In MIMO or mMIMO systems, the channel estimation may be described in general using equation (1) as follows:

y = Hx + z (1)

where:
- y represents the received symbols on each receive antenna, which is a complex matrix with a size (Nr, Num), where Nr represents the number of receive antennas and Num represents the number of received signals (or the number of transmitted symbols) in the time domain.
- H represents the channel estimation matrix, which is a complex matrix with a size (Nr, Nt), where Nt represents the number of transmit antennas.
- x represents the transmitted symbols on each transmit antenna, which is a complex matrix with a size (Nt, Num).
- z represents the received noise on each receive antenna, which is a complex matrix with a size (Nr, Num).

In semi-blind channel estimation, Num = pNum + dNum, where pNum represents the number of pilot symbols, which are known by the receiver, and dNum represents the number of data symbols, which are unknown by the receiver. Supposing that y = [y_p y_d] and x = [x_p x_d] (i.e., the received signals and transmitted symbols are each split into their pilot and data portions), equation (1) can be rewritten as:

[y_p y_d] = H [x_p x_d] + z (2)

Accordingly, the channel estimation matrix H is estimated by using the known y and x_p. Compared with pure pilot-based estimation, a major concern is that the unknown x_d needs to be estimated to achieve a relatively accurate channel estimation matrix and reduce the required pilot symbol length.
Conventionally, an Expectation-Maximization (EM) process may be used for the semi-blind channel estimation in mMIMO systems. The EM process is an iterative process where each iteration involves two steps, respectively referred to as Step E and Step M. In Step E, the expectation of the data symbols is calculated. In Step M, the estimated values of the data symbols are used to update the channel estimation matrix. An example EM process will be discussed below.

Input:
- y: received signals, which has a size (Nr, Num), including (Nr, pNum) pilot signals and (Nr, dNum) data signals.
- x: transmitted pilot and data symbols, which has a size (Nt, Num), including (Nt, pNum) pilot symbols and (Nt, dNum) data symbols.
- σ²: noise power, where the pilot and data symbols are normalized as a Gaussian random variable N(0, 1).

Output:
- H: the estimated channel estimation matrix, which has a size (Nr, Nt).

Initialization:
- ite: the iteration number, for example, ite = 4.
- H: the channel estimation matrix H is initialized by the minimum mean square error (MMSE) algorithm based on the pilot symbol information.

EM process main routine — for i = 0 to ite:

Step E: calculate the expectation u_d and covariance values Σ of the data symbols using equations (3) and (4):

u_d = (H^H H + σ² I)^(−1) H^H y_d (3)
Σ = σ² (H^H H + σ² I)^(−1) (4)

Step M: calculate the channel estimation matrix using equation (5):

H = (y_p x_p^H + y_d u_d^H)(x_p x_p^H + u_d u_d^H + dNum·Σ)^(−1) (5)

Output: H. (End of the EM main routine.)

In equations (3) and (4), H represents the channel estimation matrix derived in the previous iteration, and I represents the identity matrix. The EM process involves only a frequency-domain process of the channel estimation. However, the accuracy of the channel estimation is not good enough for mMIMO systems.
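The EM main routine above maps directly onto a few lines of linear algebra. The following NumPy sketch implements equations (3)-(5) under the stated assumptions (unit-variance symbols, a regularized pilot-based initialization standing in for the MMSE initialization); it is an illustration, not the patented method itself:

```python
import numpy as np

def em_channel_estimate(y_p, y_d, x_p, sigma2, n_iter=4):
    """Semi-blind EM channel estimation sketch (equations (3)-(5)).

    y_p: (Nr, pNum) received pilot signals; y_d: (Nr, dNum) received data
    signals; x_p: (Nt, pNum) known pilot symbols; sigma2: noise power.
    Returns the (Nr, Nt) channel estimate H.
    """
    Nt = x_p.shape[0]
    dNum = y_d.shape[1]
    # Initialization: regularized least-squares estimate from pilots only.
    H = y_p @ x_p.conj().T @ np.linalg.inv(
        x_p @ x_p.conj().T + sigma2 * np.eye(Nt))
    for _ in range(n_iter):
        # Step E: expectation and covariance of the unknown data symbols.
        A = np.linalg.inv(H.conj().T @ H + sigma2 * np.eye(Nt))
        u_d = A @ H.conj().T @ y_d          # eq. (3)
        Sigma = sigma2 * A                  # eq. (4)
        # Step M: update H using pilots and the estimated data symbols.
        H = (y_p @ x_p.conj().T + y_d @ u_d.conj().T) @ np.linalg.inv(
            x_p @ x_p.conj().T + u_d @ u_d.conj().T + dNum * Sigma)  # eq. (5)
    return H
```

Note how Step E treats the data symbols as continuous Gaussian variables; that assumption is exactly what the anchor process described below is designed to relax.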
Especially in the case of using high order modulation, such as 16 Quadrature Amplitude Modulation (16-QAM) and 64-QAM, the gain achieved by the EM-based channel estimation is smaller compared with the MMSE algorithm based on only the pilot symbols. In addition, the EM process is a closed-form based solution, and this process is based on the assumption that a data symbol is a continuous Gaussian random variable. In reality, however, a data symbol can only be assigned to one of several discrete modulation values. This inappropriate assumption induces unavoidable estimation errors in the EM process. Moreover, the EM process would become very complex if a data symbol were treated as a discrete value. Embodiments of the present disclosure provide an anchor process of data symbols in the semi-blind channel estimation. This anchor process is a post-process for detected symbols. With this process, in a MIMO or Multi-User MIMO (MU-MIMO) system, a receiver configured with a plurality of receiving antennas receives signals from one or more transmitters each configured with a plurality of transmitting antennas, and then a plurality of data symbols are detected from these signals based on channel estimation. The data symbols are adjusted based on a set of constellation points for a modulation mode or scheme. The modulation mode or scheme is used at the transmitter and associated with the received signals. For example, if a detected data symbol is already close enough to one constellation value associated with the modulation mode, this constellation value may be assigned to that detected data symbol. Accordingly, the channel estimation can be updated based on the adjusted data symbols. This anchor scheme allows a data symbol with a discrete value to be used in the semi-blind channel estimation, rather than being limited to a Gaussian random variable as used by the conventional EM process. Based on the adjusted data symbols, the accuracy of the channel estimation may be improved.
FIG. 1 shows an example environment 100 in which embodiments of the present disclosure can be implemented. The environment 100, which is a part of a MIMO or mMIMO system, includes a transmitter 110 and a receiver 120. It is to be understood that one transmitter and one receiver are shown only for the purpose of illustration without suggesting any limitation to the scope of the present disclosure. The environment 100 may include any suitable number of transmitters and receivers adapted for implementing embodiments of the present disclosure. The transmitter 110 and the receiver 120 can be implemented by or as a part of any suitable device. In some embodiments, the transmitter 110 may be implemented at a network device, and the receiver 120 may be implemented at a terminal device. In some embodiments, the environment 100 is a part of a relay communication network. In this example, the transmitter 110 may be implemented at a network device, and the receiver 120 may be implemented at a relay, and vice versa. In some other embodiments, the transmitter 110 and the receiver 120 may both be implemented at terminal devices in device-to-device (D2D) communications, which may be alternatively referred to as sidelink, or vehicle-to-everything (V2X).
The communication between the transmitter 110 and the receiver 120 may follow any suitable communication standards or protocols, such as Universal Mobile Telecommunications System (UMTS), Long Term Evolution (LTE), LTE-Advanced (LTE-A), fifth generation (5G) NR, Wireless Fidelity (Wi-Fi) and Worldwide Interoperability for Microwave Access (WiMAX) standards, and may employ any suitable communication technologies, including, for example, Multiple-Input Multiple-Output (MIMO), Orthogonal Frequency Division Multiplexing (OFDM), time division multiplexing (TDM), frequency division multiplexing (FDM), code division multiplexing (CDM), Bluetooth, ZigBee, machine type communication (MTC), enhanced mobile broadband (eMBB), massive machine type communication (mMTC) and ultra-reliable low latency communication (URLLC) technologies. In some embodiments, MIMO technology, such as single-user MIMO (SU-MIMO) and multi-user MIMO (MU-MIMO) technologies, may be used for the communications in the environment 100. The transmitter 110 is provided with a plurality of transmitting antennas. The receiver 120 is provided with a plurality of receiving antennas. The receiver 120 can perform channel estimation based on signals received by the plurality of receiving antennas from one or more transmitters each configured with a plurality of transmitting antennas. In various example embodiments of the present disclosure, a plurality of data symbols are detected by the receiver 120 from the received signals based on channel estimation. For one of the detected data symbols, the receiver 120 adjusts the detected data symbol based on a set of constellation points for a modulation mode that is used at the transmitter 110 and associated with the received signals. Then, the channel estimation is updated based on the adjusted data symbols. FIG. 2 shows a flowchart of an example method 200 in accordance with some embodiments of the present disclosure.
The method 200 can be implemented at the receiver 120 as shown in FIG. 1. For the purpose of discussion, the method 200 will be described with reference to FIG. 1. At block 205, the receiver 120 detects, from a plurality of signals received by the plurality of receiving antennas from the plurality of transmitting antennas of the transmitter 110, a plurality of data symbols based on channel estimation. The channel estimation may be a current channel estimation obtained using any suitable approach. The detection can utilize any suitable approach for detecting data symbols that already exists or will be developed in the future. The scope of the present disclosure is not limited in this regard. As an example, the transmitter 110 may transmit the plurality of data symbols with the plurality of transmitting antennas at the same time. Accordingly, the receiver 120 may use the MMSE algorithm or other algorithms to detect these data symbols. At block 210, the receiver 120 adjusts the detected data symbols based on a set of constellation points for a modulation mode that is used by the transmitter 110 and associated with the received signals. In some embodiments, the data symbol may be implemented by an OFDM symbol. The receiver 120 may be made aware of the modulation mode by an explicit or implicit indication transmitted by the transmitter 110. For example, the transmitter 110 may inform the receiver 120 of the modulation mode that is used for the transmitted signals. The modulation mode may also be defined in advance and known to both the transmitter 110 and the receiver 120. In some example embodiments, the receiver 120 may adjust the data symbols one by one. For example, in the case where Quadrature Phase Shift Keying (QPSK) modulation is used by the transmitter 110, each constellation point may be represented by a complex value with a real part and an imaginary part, both selected from a value set {−1, 1}.
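Since block 205 leaves the detector open, the following sketch uses a standard linear MMSE equalizer for illustration. The matrix sizes, the noise variance `sigma2`, and the noiseless received signal are assumptions of this example, not details taken from the embodiments.

```python
import numpy as np

def mmse_detect(y, H, sigma2):
    """Linear MMSE detection: x_hat = (H^H H + sigma2*I)^(-1) H^H y."""
    Nt = H.shape[1]
    return np.linalg.solve(H.conj().T @ H + sigma2 * np.eye(Nt), H.conj().T @ y)

rng = np.random.default_rng(0)
Nr, Nt = 8, 4
H = (rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))) / np.sqrt(2 * Nt)
x = rng.choice([-1.0, 1.0], size=Nt) + 1j * rng.choice([-1.0, 1.0], size=Nt)  # QPSK symbols
y = H @ x  # noiseless toy case for illustration
x_hat = mmse_detect(y, H, 1e-6)
print(np.round(x_hat, 3))
```

With noise present, the detected symbols deviate from the constellation points, which is what the adjustment at block 210 compensates for.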
Each of the detected data symbols may be adjusted based on at least one comparison with a constellation point in the set of constellation points. In some example embodiments, the detected data symbol may be adjusted based on a nearby constellation point of the modulation mode. For example, when the detected data symbol is mapped onto the constellation of the modulation mode and is very close to a constellation point, the detected data symbol may be adjusted to correspond to that constellation point. In some example embodiments, both the detected data symbol and the constellation point may comprise real and imaginary parts. In the context of the present disclosure, a symbol assigned a real value may be considered to have an imaginary part that is zero. For the purpose of discussion, the real part of the constellation point will be referred to as a reference real part, and the imaginary part of the constellation point will be referred to as a reference imaginary part. The data symbol may be adjusted based on at least one of a distance (referred to as a first distance) between the real part of the detected data symbol and the reference real part of the constellation point and a distance (referred to as a second distance) between the imaginary part of the detected data symbol and the reference imaginary part of the constellation point. In some example embodiments, the detected data symbol may be adjusted based on a comparison of the first and/or second distances with a threshold distance. For example, if it is determined that the first or second distance is below the threshold distance, the real or imaginary part of the detected data symbol may be adjusted to be the reference real or imaginary part. In some example embodiments, the detected data symbol may be adjusted to be limited within a predetermined range.
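The per-part comparison just described reduces to a one-line rule, sketched here with an assumed threshold distance of 0.5 (the function name and values are illustrative):

```python
def anchor_component(value, reference, threshold):
    """Snap the real (or imaginary) part of a detected symbol to the
    reference part of a constellation point when the distance between
    them is below the threshold distance."""
    return reference if abs(value - reference) < threshold else value

# 16-QAM parts take values in {-3, -1, 1, 3}; threshold assumed to be 0.5
print(anchor_component(0.9, 1, 0.5))   # distance 0.1 < 0.5: adjusted to 1
print(anchor_component(1.8, 1, 0.5))   # distance 0.8 >= 0.5: left as 1.8
```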
For example, the value of the detected data symbol may be limited to between the maximum value (or a larger value) and the minimum value (or a smaller value) of the set of constellation points. In some example embodiments, if the reference real part of the constellation point has a value (referred to as a first reference value) greater than a threshold value, it is determined whether a value (referred to as a first value) of the real part of the detected data symbol is greater than the first reference value. If so, the first value may be adjusted to be the first reference value. If the reference imaginary part of the constellation point has a value (referred to as a second reference value) greater than the threshold value, it is determined whether a value (referred to as a second value) of the imaginary part of the detected data symbol is greater than the second reference value. If so, the second value may be adjusted to be the second reference value. In some example embodiments, if the first reference value of the reference real part of the constellation point is below a threshold value, it is determined whether the first value of the real part of the detected data symbol is below the first reference value. If so, the first value may be adjusted to be the first reference value. If the second reference value of the reference imaginary part of the constellation point is below the threshold value, it is determined whether the second value of the imaginary part of the detected data symbol is below the second reference value. If so, the second value may be adjusted to be the second reference value. It is to be understood that the threshold values for selecting a larger value of real or imaginary parts of the set of constellation points and a smaller value of real or imaginary parts of the set of constellation points may be determined according to actual needs. Based on the plurality of adjusted data symbols, the receiver 120 updates the channel estimation at block 215.
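The range limiting described above amounts to clamping each part of a detected symbol to the extreme values of the constellation set, as in this sketch (the names and example values are illustrative assumptions):

```python
import numpy as np

def clamp_to_constellation(xd, smv):
    """Limit the real and imaginary parts of detected symbols to the
    minimum and maximum values of the constellation value set SMV."""
    lo, hi = min(smv), max(smv)
    return np.clip(xd.real, lo, hi) + 1j * np.clip(xd.imag, lo, hi)

smv_16qam = [-3, -1, 1, 3]
xd = np.array([3.7 - 0.2j, -4.1 + 2.5j])
print(clamp_to_constellation(xd, smv_16qam))
```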
The channel estimation may be implemented in any suitable approach that utilizes the data symbols. For example, semi-blind channel estimation using the EM algorithm may be used. In order to further improve the accuracy of the channel estimation, in some example embodiments, the detection of the data symbols at block 205, the adjustment of the data symbols at block 210 and the updating of the channel estimation at block 215 may be performed iteratively. For the first iteration, the data symbols may be detected using a channel estimation initialized in any suitable way. The anchor process according to embodiments of the present disclosure may be added into, and thereby improve, the conventional EM process as described above. An example EM process with an additional anchor process will be described below. In this example, the anchor process is performed in a complex space. The detected symbol may be represented as a complex value. Accordingly, the anchor process is applied to the real and imaginary parts separately. A value set (referred to as "SMV") of the constellation points for a modulation mode may be defined as follows, for example:

For QPSK, SMV can be: SMV = {−1, 1}   (6)

For 16QAM, SMV can be: SMV = {−3, −1, 1, 3}   (7)

The transmitted data symbols may use different modulation modes. This means that different data symbols may have different SMVs. The example anchor process is illustrated below.

Input:
  xd: the detected data symbols, which have a size (Nt, dNum).
  SMV: the value set for the constellation points.
Output:
  xd: the adjusted data symbols, which have a size (Nt, dNum).
Initialization:
  anRate: an anchor rate, for example, anRate = 0.25.
  stepvalue: the step value for SMV, which is the nearest distance between neighboring constellation points. For example, in the above equations (6) and (7), stepvalue = 2.
Anchor Process Main Routine:

For each x in xd:
  If real(x) is greater than the maximum value or less than the minimum value in SMV, then real(x) is set to that maximum or minimum value, where real(x) represents the value of the real part of x.
  If imag(x) is greater than the maximum value or less than the minimum value in SMV, then imag(x) is set to that maximum or minimum value, where imag(x) represents the value of the imaginary part of x.

For each x in xd:
  Find the nearest values in SMV for x:

    realV = argmin_{v ∈ SMV} |v − real(x)|   (8)
    imagV = argmin_{v ∈ SMV} |v − imag(x)|   (9)

  If |realV − real(x)| < anRate*stepvalue, set: real(x) = realV.
  If |imagV − imag(x)| < anRate*stepvalue, set: imag(x) = imagV.

Output: the adjusted xd
# End of Anchor Process Main Routine

In this example anchor process, anRate is assigned 0.25 only for the purpose of illustration. Other values of anRate are also possible. The EM process with the additional anchor process is referred to herein as an EAM process. The example EAM algorithm main routine is illustrated as follows:

For i = 0 to ite:
  Step E: compute the expectation and covariance of the data symbols based on equations (3) and (4).
  Step A: apply the anchor process to the detected data symbols.
  Step M: calculate the channel estimation based on equation (5).

The anchor process can improve the performance of the channel estimation compared with the EM process. FIGS. 3(a)-3(f) illustrate graphs of the performance comparisons of the EAM and EM processes in different testing cases in accordance with some example embodiments of the present disclosure. These testing cases are based on the following normalization:
- The real-value based simulation is used for the BPSK modulation cases. For complex modulation cases, the adjustment of the data symbol is decoupled into the adjustment of the real part and the adjustment of the imaginary part.
- The size of the real-value based channel estimation matrix is (2Nr, 2Nt). The size of the transmitted data symbol matrix is (2Nt, Num).
- The channel estimation matrix H is generated by the normal distribution N(0, 1), normalized by σ2 = 1/sqrt(Nt) for the BPSK modulation cases or σ2 = 1/sqrt(2Nt) for the complex modulation cases.
- The transmitted symbol x is uniformly randomly selected from SMV; the elements in SMV are normalized as N(0, 1) discrete Gaussian variables. Noise is generated by the normal distribution N(0, 1) and normalized by σ2 = 1/SNR.
- For each case, 10000 data samples are simulated.
- The channel estimation matrix is first initialized by the MMSE algorithm based on pilot information.
- For the BPSK modulation cases, a total of 4 iterations are used, and the channel estimation results after 2 and 4 iterations are compared.
- For the 16QAM modulation cases, a total of 8 iterations are used, and the channel estimation results after 4 and 8 iterations are compared.
- In the pilot symbol simulation, the matrix organized by all pilot symbols for all users (which has a size pNum*Nt) is a full-rank matrix.

The pilot symbols may be any suitable sequence that is known at both the transmitter 110 and the receiver 120. For example, demodulation reference signals (DMRSs) may be used. FIG. 3(a) shows a graph for Case 1, where Nr=20, Nt=8, Modulation=BPSK, Num=48 (pNum=8, dNum=40). FIG. 3(b) shows a graph for Case 2, where Nr=20, Nt=8, Modulation=BPSK, Num=52 (pNum=12, dNum=40). FIG. 3(c) shows a graph for Case 3, where Nr=20, Nt=8, Modulation=BPSK, Num=56 (pNum=16, dNum=40). FIG. 3(d) shows a graph for Case 4, where Nr=30, Nt=8, Modulation=16QAM, Num=56 (pNum=16, dNum=40). FIG. 3(e) shows a graph for Case 5, where Nr=30, Nt=8, Modulation=16QAM, Num=60 (pNum=20, dNum=40). FIG. 3(f) shows a graph for Case 6, where Nr=30, Nt=8, Modulation=16QAM, Num=60 (pNum=24, dNum=40). Table 1 shows MIMO detection gains of the EM and EAM processes compared with the MMSE algorithm.
TABLE 1

Case    EM gain    EAM gain
1       ~1.5 dB    >5 dB
2       ~1.5 dB    >4 dB
3       ~1 dB      ~3 dB
4       ~1 dB      ~5 dB
5       <1 dB      ~4 dB
6       <1 dB      >3 dB

As shown, the EAM process has far better detection accuracy compared with the EM process. In the higher order modulation cases, for example the 16QAM cases, the EM process cannot obtain much gain compared with the MMSE algorithm, but the EAM process can achieve a higher gain compared with the MMSE algorithm. In addition, compared with the EM process, the EAM process can achieve extra gain by running more iterations. At the same time, the anchor process does not consume significant computing resources, which means the EAM process has an equivalent level of computing complexity as the EM process. FIG. 4 shows the performance of a 5Gmax implementation of the EAM process simplified to meet product requirements. In this implementation, Verizon 28 GHz (cmWave) is used with an Extended Pedestrian A (EPA) channel at 50 km/h, and only 1 iteration is performed. When extended to more realistic channels and modulation and coding schemes (MCSs) and ranks, the overall summary of the performance of the product-simplified EAM by a 5Gmax simulator is listed in Table 2:

TABLE 2
Max possible MCS for 90% TP, ref (rank1, rank2, TxDiv) vs. EAM with 1 iteration (rank1, rank2, TxDiv), and EAM vs. ref:
TDL150   5041009   No loss or gain for all MCSs
SMI150   525949    No loss or gain for all MCSs
EPA75    101718    No loss or gain for all MCSs
EPA50    213519    No loss or gain for all MCSs

It can be seen that the EAM process may achieve significant gain in realistic moving channels in the product simulator. In some embodiments, an apparatus capable of performing the method 200 may comprise means for performing the respective steps of the method 200. The means may be implemented in any suitable form. For example, the means may be implemented in circuitry or a software module.
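The anchor process main routine described above (clamping to the SMV range followed by threshold-based snapping) maps directly to a short vectorized sketch. The use of numpy, the function names, and the example values of anRate and SMV are assumptions of this illustration, not part of the embodiments:

```python
import numpy as np

def anchor(xd, smv, an_rate=0.25):
    """Anchor process: clamp symbols to the SMV range, then snap each
    real/imaginary part to its nearest SMV value when the distance is
    below an_rate * stepvalue. xd has shape (Nt, dNum)."""
    smv = np.sort(np.asarray(smv, dtype=float))
    step = np.min(np.diff(smv))  # nearest distance between neighboring points

    def anchor_part(p):
        p = np.clip(p, smv[0], smv[-1])  # range limiting (first loop of the routine)
        nearest = smv[np.argmin(np.abs(p[..., None] - smv), axis=-1)]
        return np.where(np.abs(nearest - p) < an_rate * step, nearest, p)

    return anchor_part(xd.real) + 1j * anchor_part(xd.imag)

# 16QAM example: SMV = {-3, -1, 1, 3}, stepvalue = 2, threshold = 0.5
xd = np.array([[0.9 + 3.4j, -1.2 - 0.1j]])
print(anchor(xd, [-3, -1, 1, 3]))
```

In the EAM loop, this function would play the role of Step A between the E step and the M step; the E and M steps follow equations (3)-(5), which are outside this excerpt.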
In some example embodiments, the apparatus comprises: means for detecting, from a plurality of signals received by a plurality of receiving antennas of a receiver from a plurality of transmitting antennas of a transmitter, a plurality of data symbols based on channel estimation; means for adjusting the plurality of detected data symbols based on a set of constellation points for a modulation mode used by the transmitter and associated with the plurality of signals; and means for updating the channel estimation based on the plurality of adjusted data symbols. In some example embodiments, the means for adjusting the plurality of detected data symbols may comprise means for adjusting, for a detected data symbol of the plurality of detected data symbols, the detected data symbol based on at least one comparison of the detected data symbol and a constellation point in the set of constellation points. In some example embodiments, the detected data symbol may comprise a real part and an imaginary part, and the constellation point may comprise a reference real part and a reference imaginary part. In some example embodiments, the means for adjusting the detected data symbol based on the at least one comparison may comprise: means for determining at least one of a first distance between the real part of the detected data symbol and the reference real part of the constellation point, and a second distance between the imaginary part of the detected data symbol and the reference imaginary part of the constellation point; and means for adjusting the detected data symbol based on the at least one comparison of the at least one of the first and second distances with a threshold distance.
In some example embodiments, the means for adjusting the detected data symbol based on the at least one comparison of the at least one of the first and second distances may comprise: means for, in response to the first distance being determined, determining whether the first distance is below the threshold distance; and means for, in response to determining that the first distance is below the threshold distance, adjusting the real part of the detected data symbol to be the reference real part of the constellation point. In some example embodiments, the means for adjusting the detected data symbol based on the at least one comparison of the at least one of the first and second distances may comprise: means for, in response to the second distance being determined, determining whether the second distance is below the threshold distance; and means for, in response to determining that the second distance is below the threshold distance, adjusting the imaginary part of the detected data symbol to be the reference imaginary part of the constellation point. In some example embodiments, a first reference value of the reference real part of the constellation point may be greater than a threshold value. In these embodiments, the means for adjusting the detected data symbol based on the at least one comparison may comprise: means for determining whether a first value of the real part of the detected data symbol is greater than the first reference value; and means for, in response to determining that the first value is greater than the first reference value, adjusting the first value to be the first reference value. In some example embodiments, a second reference value of the reference imaginary part of the constellation point may be greater than a threshold value.
In these embodiments, the means for adjusting the detected data symbol based on the at least one comparison may comprise: means for determining whether a second value of the imaginary part of the detected data symbol is greater than the second reference value; and means for, in response to determining that the second value is greater than the second reference value, adjusting the second value of the imaginary part of the detected data symbol to be the second reference value. In some example embodiments, a first reference value of the reference real part of the constellation point may be below a threshold value. In these embodiments, the means for adjusting the detected data symbol based on the at least one comparison may comprise: means for determining whether a first value of the real part of the detected data symbol is below the first reference value; and means for, in response to determining that the first value is below the first reference value, adjusting the first value to be the first reference value. In some example embodiments, a second reference value of the reference imaginary part of the constellation point may be below a threshold value. In these embodiments, the means for adjusting the detected data symbol based on the at least one comparison may comprise: means for determining whether a second value of the imaginary part of the detected data symbol is below the second reference value; and means for, in response to determining that the second value is below the second reference value, adjusting the second value of the imaginary part of the detected data symbol to be the second reference value. In some example embodiments, the detecting of the plurality of data symbols, the adjusting of the plurality of detected data symbols and the updating of the channel estimation may be performed iteratively.
In some example embodiments, the apparatus may further comprise means for receiving, at the receiver, an indication of the modulation mode from the transmitter. FIG. 5 is a simplified block diagram of a device 500 that is suitable for implementing embodiments of the present disclosure. The device 500 can be implemented at the receiver 120 as shown in FIG. 1. As shown, the device 500 includes a processor 510, a memory 520 coupled to the processor 510, a communication module 530 coupled to the processor 510, and a communication interface (not shown) coupled to the communication module 530. The memory 520 stores at least a program 540. The communication module 530 is for bidirectional communications, for example, via multiple antennas. The communication interface may represent any interface that is necessary for communication. The program 540 is assumed to include program instructions that, when executed by the associated processor 510, enable the device 500 to operate in accordance with the embodiments of the present disclosure, as discussed herein with reference to FIGS. 1, 2, 3(a)-3(f) and 4. The embodiments herein may be implemented by computer software executable by the processor 510 of the device 500, or by hardware, or by a combination of software and hardware. The processor 510 may be configured to implement various embodiments of the present disclosure. The memory 520 may be of any type suitable to the local technical network and may be implemented using any suitable data storage technology, such as a non-transitory computer readable storage medium, semiconductor based memory devices, magnetic memory devices and systems, optical memory devices and systems, fixed memory and removable memory, as non-limiting examples. While only one memory 520 is shown in the device 500, there may be several physically distinct memory modules in the device 500.
The processor 510 may be of any type suitable to the local technical network, and may include one or more of general purpose computers, special purpose computers, microprocessors, digital signal processors (DSPs) and processors based on multicore processor architecture, as non-limiting examples. The device 500 may have multiple processors, such as an application specific integrated circuit chip that is slaved in time to a clock which synchronizes the main processor. All operations and features as described above with reference to FIGS. 1, 2, 3(a)-3(f) and 4 are likewise applicable to the device 500 and have similar effects. For the purpose of simplification, the details will be omitted. Generally, various embodiments of the present disclosure may be implemented in hardware or special purpose circuits, software, logic or any combination thereof. Some aspects may be implemented in hardware, while other aspects may be implemented in firmware or software which may be executed by a controller, microprocessor or other computing device. While various aspects of embodiments of the present disclosure are illustrated and described as block diagrams, flowcharts, or using some other pictorial representations, it is to be understood that the block, apparatus, system, technique or method described herein may be implemented in, as non-limiting examples, hardware, software, firmware, special purpose circuits or logic, general purpose hardware or controller or other computing devices, or some combination thereof. The present disclosure also provides at least one computer program product tangibly stored on a non-transitory computer readable storage medium. The computer program product includes computer-executable instructions, such as those included in program modules, being executed in a device on a target real or virtual processor, to carry out the method 200 as described above with reference to FIGS. 1, 2, 3(a)-3(f) and 4.
Generally, program modules include routines, programs, libraries, objects, classes, components, data structures, or the like that perform particular tasks or implement particular abstract data types. The functionality of the program modules may be combined or split between program modules as desired in various embodiments. Machine-executable instructions for program modules may be executed within a local or distributed device. In a distributed device, program modules may be located in both local and remote storage media. Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowcharts and/or block diagrams to be implemented. The program code may execute entirely on a machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine, or entirely on the remote machine or server. In the context of the present disclosure, the computer program codes or related data may be carried by any suitable carrier to enable the device, apparatus or processor to perform various processes and operations as described above. Examples of the carrier include a signal and computer readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing.
More specific examples of the computer readable storage medium would include an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are contained in the above discussions, these should not be construed as limitations on the scope of the present disclosure, but rather as descriptions of features that may be specific to particular embodiments. Certain features that are described in the context of separate embodiments may also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment may also be implemented in multiple embodiments separately or in any suitable sub-combination. Although the present disclosure has been described in language specific to structural features and/or methodological acts, it is to be understood that the present disclosure defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims. Various embodiments of the techniques have been described. In addition to or as an alternative to the above, the following examples are described.
The features described in any of the following examples may be utilized with any of the other examples described herein.
11863354

DETAILED DESCRIPTION

FIG. 1 is a block diagram of a wireless network 130 according to an example embodiment. In the wireless network 130 of FIG. 1, user devices 131, 132, 133 and 135, which may also be referred to as mobile stations (MSs) or user equipment (UEs), may be connected (and in communication) with a base station (BS) 134, which may also be referred to as an access point (AP), an enhanced Node B (eNB), a gNB or a network node. The terms user device and user equipment (UE) may be used interchangeably. A BS may also include or may be referred to as a RAN (radio access network) node, and may include a portion of a BS or a portion of a RAN node, such as a centralized unit (CU) and/or a distributed unit (DU) in the case of a split BS or split gNB. At least part of the functionalities of a BS (e.g., access point (AP), base station (BS) or (e)Node B (eNB), gNB, RAN node) may also be carried out by any node, server or host which may be operably coupled to a transceiver, such as a remote radio head. BS (or AP) 134 provides wireless coverage within a cell 136, including to user devices (or UEs) 131, 132, 133 and 135. Although only four user devices (or UEs) are shown as being connected or attached to BS 134, any number of user devices may be provided. BS 134 is also connected to a core network 150 via an S1 interface 151. This is merely one simple example of a wireless network, and others may be used. A base station (e.g., such as BS 134) is an example of a radio access network (RAN) node within a wireless network. A BS (or a RAN node) may be or may include (or may alternatively be referred to as), e.g., an access point (AP), a gNB, an eNB, or a portion thereof (such as a centralized unit (CU) and/or a distributed unit (DU) in the case of a split BS or split gNB), or other network node. According to an illustrative example, a BS node (e.g., BS, eNB, gNB, CU/DU, . . . ) or a radio access network (RAN) may be part of a mobile telecommunication system.
A RAN (radio access network) may include one or more BSs or RAN nodes that implement a radio access technology, e.g., to allow one or more UEs to have access to a network or core network. Thus, for example, the RAN (RAN nodes, such as BSs or gNBs) may reside between one or more user devices or UEs and a core network. According to an example embodiment, each RAN node (e.g., BS, eNB, gNB, CU/DU, . . . ) or BS may provide one or more wireless communication services for one or more UEs or user devices, e.g., to allow the UEs to have wireless access to a network, via the RAN node. Each RAN node or BS may perform or provide wireless communication services, e.g., such as allowing UEs or user devices to establish a wireless connection to the RAN node, and sending data to and/or receiving data from one or more of the UEs. For example, after establishing a connection to a UE, a RAN node or network node (e.g., BS, eNB, gNB, CU/DU, . . . ) may forward data to the UE that is received from a network or the core network, and/or forward data received from the UE to the network or core network. RAN nodes or network nodes (e.g., BS, eNB, gNB, CU/DU, . . . ) may perform a wide variety of other wireless functions or services, e.g., such as broadcasting control information (e.g., such as system information or on-demand system information) to UEs, paging UEs when there is data to be delivered to the UE, assisting in handover of a UE between cells, scheduling of resources for uplink data transmission from the UE(s) and downlink data transmission to UE(s), sending control information to configure one or more UEs, and the like. These are a few examples of one or more functions that a RAN node or BS may perform. A user device (user terminal, user equipment (UE), mobile terminal, handheld wireless device, etc.) 
may refer to a portable computing device that includes wireless mobile communication devices operating either with or without a subscriber identification module (SIM), including, but not limited to, the following types of devices: a mobile station (MS), a mobile phone, a cell phone, a smartphone, a personal digital assistant (PDA), a handset, a device using a wireless modem (alarm or measurement device, etc.), a laptop and/or touch screen computer, a tablet, a phablet, a game console, a notebook, a vehicle, a sensor, and a multimedia device, as examples, or any other wireless device. It should be appreciated that a user device may also be (or may include) a nearly exclusive uplink only device, of which an example is a camera or video camera loading images or video clips to a network. In LTE (as an illustrative example), core network 150 may be referred to as Evolved Packet Core (EPC), which may include a mobility management entity (MME) which may handle or assist with mobility/handover of user devices between BSs, one or more gateways that may forward data and control signals between the BSs and packet data networks or the Internet, and other control functions or blocks. Other types of wireless networks, such as 5G (which may be referred to as New Radio (NR)), may also include a core network. In addition, the techniques described herein may be applied to various types of user devices or data service types, or may apply to user devices that may have multiple applications running thereon that may be of different data service types. New Radio (5G) development may support a number of different applications or a number of different data service types, such as for example: machine type communications (MTC), enhanced machine type communication (eMTC), Internet of Things (IoT), and/or narrowband IoT user devices, enhanced mobile broadband (eMBB), and ultra-reliable and low-latency communications (URLLC).
Many of these new 5G (NR)-related applications may require generally higher performance than previous wireless networks. IoT may refer to an ever-growing group of objects that may have Internet or network connectivity, so that these objects may send information to and receive information from other network devices. For example, many sensor type applications or devices may monitor a physical condition or a status, and may send a report to a server or other network device, e.g., when an event occurs. Machine Type Communications (MTC, or Machine to Machine communications) may, for example, be characterized by fully automatic data generation, exchange, processing and actuation among intelligent machines, with or without intervention of humans. Enhanced mobile broadband (eMBB) may support much higher data rates than currently available in LTE. Ultra-reliable and low-latency communications (URLLC) is a new data service type, or new usage scenario, which may be supported for New Radio (5G) systems. This enables emerging new applications and services, such as industrial automation, autonomous driving, vehicular safety, e-health services, and so on. 3GPP targets providing connectivity with reliability corresponding to a block error rate (BLER) of 10−5 and up to 1 ms U-Plane (user/data plane) latency, by way of illustrative example. Thus, for example, URLLC user devices/UEs may require a significantly lower block error rate than other types of user devices/UEs, as well as low latency (with or without a requirement for simultaneous high reliability). Thus, for example, a URLLC UE (or URLLC application on a UE) may require much shorter latency, as compared to an eMBB UE (or an eMBB application running on a UE).
The techniques described herein may be applied to a wide variety of wireless technologies or wireless networks, such as LTE, LTE-A, 5G (New Radio (NR)), cmWave, and/or mmWave band networks, IoT, MTC, eMTC, eMBB, URLLC, etc., or any other wireless network or wireless technology. These example networks, technologies or data service types are provided only as illustrative examples. Also, for example, a UE (or user device or receiver device) may be configured to send measurement reports (e.g., channel state information measurement reports) to a BS (or other network node). For example, a BS may configure a UE to measure one or more quantities (e.g., reference signal received power (RSRP), determine channel state information (CSI), or determine or measure other information or quantity) for one or more resources or beams. Thus, the measurement report configuration may indicate the quantity or quantities to be measured and for one or more specific resources or beams. For example, a UE may be configured to measure and report one or more quantities, e.g., CSI and/or RSRP for channel state information-reference signal (CSI-RS) beams and/or synchronization signal block (SSB) beams. As an illustrative example, a UE may measure one or more signal parameters (e.g., link quality) of reference signals received from a BS, and may send a channel state information (CSI) report to the BS. An example CSI report may include, for example, one or more of: an RSRP (reference signal received power); a Rank Indicator (RI), which indicates a suitable number of transmission layers for a downlink (DL) transmission; a Precoder Matrix Indicator (PMI), which may indicate what a device (e.g., UE) estimates as a suitable precoder matrix based on the selected rank; and a Channel Quality Indication (or channel quality indicator) (CQI), which may express or indicate the BS-UE channel or link quality, as measured by the UE.
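The CSI report fields just listed (RSRP, RI, PMI, CQI) can be sketched as a simple container. This is a minimal illustration only; the field names, types and value ranges below are assumptions for readability, not the actual 3GPP report encoding.

```python
from dataclasses import dataclass

@dataclass
class CsiReport:
    """Illustrative CSI report fields; real 3GPP encodings differ."""
    rsrp_dbm: float       # reference signal received power measured by the UE
    rank_indicator: int   # RI: suitable number of transmission layers
    pmi: int              # index of the precoder matrix the UE estimates as suitable
    cqi: int              # channel quality indication (assumed 0..15 range here)

    def is_valid(self) -> bool:
        # Loose sanity checks on the assumed value ranges.
        return self.rank_indicator >= 1 and 0 <= self.cqi <= 15

# A UE-side report as it might be assembled before transmission to the BS.
report = CsiReport(rsrp_dbm=-95.0, rank_indicator=2, pmi=3, cqi=11)
```

The BS could then, for example, pick a modulation and coding scheme from `cqi` and a number of layers from `rank_indicator`.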
The CQI may indicate what the UE estimates as a suitable channel coding rate and modulation scheme (also referred to as modulation and coding scheme (MCS)) based on the selected precoder matrix. In general, precoding may include a UE (or other node) applying a set of precoding weights (each weight including amplitude and/or phase) to a signal or to an antenna (e.g., in order to change the amplitude and/or phase of a transmitted signal), for example, based on the qualities of a channel between the UE and the BS or network node. For example, the gNB or BS may then adjust its DL transmission parameters (e.g., precoder, MCS, and/or a number of transmission layers, etc.) for transmitting to the UE, based on the received CSI report. According to an example embodiment, one or more nodes (e.g., BS, gNB, eNB, RAN node, UE, user device, relay node, or other node) within a wireless network may use or employ a model, e.g., such as, for example, a neural network model (e.g., which may be referred to as a neural network, an artificial intelligence (AI) neural network, an AI neural network model, an AI model, a machine learning model or algorithm, or other term) to perform, or assist in performing, one or more functions. Other types of models may also be used. According to an example embodiment, neural networks may be or may include computational models used in machine learning, made up of nodes organized in layers. The nodes are also referred to as artificial neurons, or simply neurons, and perform a function on provided input to produce some output value. A neural network requires a training period to learn the parameters, i.e., weights, used to map the input to a desired output. The mapping occurs via the function. Thus, the weights are weights for the mapping function of the neural network. Each neural network model may be trained for a specific task.
To provide the output given the input, the neural network model must be trained, which may involve learning the proper value for a large number of parameters (e.g., weights) for the mapping function. The parameters are also commonly referred to as weights as they are used to weight terms in the mapping function. This training may be an iterative process, with the values of the weights being tweaked over many (e.g., thousands) of rounds of training until arriving at the optimal, or most accurate, values (or weights). In the context of neural networks (neural network models), the parameters may be initialized, often with random values, and a training optimizer iteratively updates the parameters (weights) of the neural network to minimize error in the mapping function. In other words, during each round, or step, of iterative training the network updates the values of the parameters so that the values of the parameters eventually converge on the optimal values. According to an example embodiment, neural network models can be trained in either a supervised or unsupervised manner. In supervised learning, training examples are provided to the neural network model or other machine learning algorithm. A training example includes the inputs and a desired or previously observed output. Training examples are also referred to as labeled data because the input is labeled with the desired or observed output. In the case of a neural network, the network learns the values for the weights used in the mapping function that most often result in the desired output when given the training inputs. In unsupervised training, the neural network model learns to identify a structure or pattern in the provided input. In other words, the model identifies implicit relationships in the data. Unsupervised learning is used in many machine learning problems and typically requires a large set of unlabeled data. 
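The iterative training described above — random initialization followed by repeated weight updates that minimize the error of the mapping function — can be shown concretely. This sketch uses a toy linear mapping rather than a full neural network, so the "optimal weights" are known in advance; everything here is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy supervised task: learn weights w mapping inputs x to outputs y = x @ w_true.
w_true = np.array([2.0, -1.0])
x = rng.normal(size=(64, 2))
y = x @ w_true

# Parameters are initialized randomly, then iteratively updated ("tweaked")
# over many rounds to minimize the error of the mapping function.
w = rng.normal(size=2)
lr = 0.1
for _ in range(500):
    grad = 2 * x.T @ (x @ w - y) / len(x)  # gradient of the mean squared error
    w -= lr * grad                          # one round of training

mse = float(np.mean((x @ w - y) ** 2))     # error after training
```

After enough rounds the weights converge on the optimal values (here, `w_true`), exactly as the iterative process in the text describes.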
According to an example embodiment, the learning or training of a neural network model may be classified into (or may include) two broad categories (supervised and unsupervised), depending on whether there is a learning “signal” or “feedback” available to a model. Thus, for example, within the field of machine learning, there may be two main types of learning or training of a model: supervised, and unsupervised. The main difference between the two types is that supervised learning is done using known or prior knowledge of what the output values for certain samples of data should be. Therefore, a goal of supervised learning may be to learn a function that, given a sample of data and desired outputs, best approximates the relationship between input and output observable in the data. Unsupervised learning, on the other hand, does not have labeled outputs, so its goal is to infer the natural structure present within a set of data points. Supervised learning: The computer is presented with example inputs and their desired outputs, and the goal may be to learn a general rule that maps inputs to outputs. Supervised learning may, for example, be performed in the context of classification, where a computer or learning algorithm attempts to map input to output labels, or regression, where the computer or algorithm may map input(s) to a continuous output(s). Common algorithms in supervised learning may include, e.g., logistic regression, naive Bayes, support vector machines, artificial neural networks, and random forests. In both regression and classification, a goal may include to find specific relationships or structure in the input data that allow us to effectively produce correct output data. As special cases, the input signal can be only partially available, or restricted to special feedback: Semi-supervised learning: the computer is given only an incomplete training signal: a training set with some (often many) of the target outputs missing. 
Active learning: the computer can only obtain training labels for a limited set of instances (based on a budget), and also has to optimize its choice of objects to acquire labels for. When used interactively, these can be presented to the user for labeling. Reinforcement learning: training data (in form of rewards and punishments) is given only as feedback to the program's actions in a dynamic environment, e.g., using live data. Unsupervised learning: No labels are given to the learning algorithm, leaving it on its own to find structure in its input. Some example tasks within unsupervised learning may include clustering, representation learning, and density estimation. In these cases, the computer or learning algorithm is attempting to learn the inherent structure of the data without using explicitly-provided labels. Some common algorithms include k-means clustering, principal component analysis, and auto-encoders. Since no labels are provided, there is no specific way to compare model performance in most unsupervised learning methods. In order to provide a CSI report to the gNB, the UE may obtain or calculate a channel estimate based on received reference signals, for example. Channel estimation techniques may recover channel parameters by solving estimation error minimization problems. Machine learning (ML) techniques have been applied to solve estimation problems in various areas from image processing to economics. According to an example embodiment, ML-based estimation techniques (or neural network models) may be applied to the channel estimation problems by using the channel observation samples as input data. Channel estimation with ML techniques (neural network models) may be useful for a variety of applications or circumstances, e.g., such as to offload the channel estimation to a neural network model. 
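Of the unsupervised algorithms named above, k-means clustering is the easiest to show end to end: the algorithm receives no labels and must find the structure in its input on its own. This is a minimal sketch for k=2 with a deterministic farthest-point initialization (an assumption made here to keep the example reproducible; real k-means typically uses random restarts).

```python
import numpy as np

rng = np.random.default_rng(1)

# Unlabeled data: two well-separated blobs; no labels are given to the algorithm.
data = np.vstack([rng.normal(0.0, 0.3, (50, 2)), rng.normal(5.0, 0.3, (50, 2))])

def kmeans_two(points, iters=10):
    """Minimal k-means for k=2 with farthest-point initialization."""
    c0 = points[0]
    c1 = points[np.linalg.norm(points - c0, axis=1).argmax()]
    centroids = np.stack([c0, c1])
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid.
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: each centroid moves to the mean of its assigned points.
        centroids = np.stack([points[labels == j].mean(axis=0) for j in range(2)])
    return centroids, labels

centroids, labels = kmeans_two(data)
```

The algorithm recovers the two blobs purely from the inherent structure of the data, which is exactly the point of unsupervised learning made in the text.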
Or, at least in some cases, a channel estimation based on a neural network model may be more accurate in a situation where there are relatively few (or relatively infrequent) reference or pilot signals received by the UE. In such a case, for example, conventional channel estimation with a sophisticated channel model including many channel parameters may be less accurate than using a trained neural network model, e.g., due to an insufficient number of reference signal samples. Other situations may exist in which more accurate channel estimation may be obtained by the UE via use of a trained neural network model. Also, in some cases, less processing overhead and/or less power consumption may be required by a UE to perform channel estimation using a neural network model (ML-based channel estimation), as compared to conventional channel estimation. Thus, according to an example embodiment, to overcome some limitations of conventional channel estimation, ML estimation techniques (neural network models) may use a neural network (NN) model as an approximator of (or to estimate) a channel or channel environment (e.g., to estimate a change in amplitude and/or phase for the UE-gNB channel, to obtain a CSI (channel state information) report, or other channel estimate). Also, the designed NN model should be efficiently trained by using the dataset of channel observation samples (e.g., based on the set of reference signals). However, the learning process to train and optimize a NN (neural network) model may require a relatively long time period. The computation time period of NN model training increases with the dimension of a NN model (e.g., number of layers), e.g., a deep NN model may be employed that includes a large or significant number of layers. Also, if a NN model is designed with a high complexity (and thus, typically providing higher accuracy/performance), the training time period is extended.
Moreover, training a NN model may require high computing power, and it may be advantageous for the system to use a dedicated NN processor or high-performance GPU (graphics processing unit) for NN model training. If the device does not have enough computing capability, training a NN on a local device may not be feasible. Furthermore, a NN model training process may cause or require large energy consumption. If the computing node, such as a UE, is relying on a limited power source (e.g., battery-powered UE), it may be challenging to effectively and/or efficiently train the NN model on the device. Thus, implementing a NN model on a UE or other limited power source device may be challenging. Therefore, according to an example embodiment, a UE may take advantage of transfer learning, wherein the UE may obtain information relating to a pre-trained neural network model, e.g., which may have been trained by another UE or by a gNB, and which then may be used (or re-used) by the UE (e.g., as a starting point) to perform channel estimation, rather than training a full NN model from scratch (or starting from a random set of weights). Transfer learning has been proposed to reuse a pretrained NN model for another ML task. Conventional ML (or a neural network model) starts the learning process (performs NN model training) from scratch (e.g., which may be a set of random weights, or an initial set of weights), thus requiring large resources of computing, time, and energy to perform NN model training. In transfer learning, reusing a pre-trained NN model can significantly reduce the training time and computing resource usage. Therefore, according to an example embodiment, to improve UE efficiency, ML techniques (NN models) may use transfer learning to take advantage of other NN models that have been previously trained (pre-trained) by another node (e.g., by another UE, gNB, or other node).
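The saving that transfer learning provides can be sketched numerically: a destination UE reuses the received shared layer as a frozen feature extractor and fits only its own small output layer on local data, instead of training all weights from scratch. The network shapes, data and layer split below are hypothetical, chosen only to make the parameter-count comparison visible.

```python
import numpy as np

rng = np.random.default_rng(2)

def relu(x):
    return np.maximum(x, 0.0)

# Weights of the shared hidden layer received from the source UE (kept frozen).
w_shared = rng.normal(size=(4, 8))

# The destination UE's local samples (hypothetical 4-feature inputs), with
# targets that are expressible through the shared features.
x = rng.normal(size=(128, 4))
w_target_head = rng.normal(size=(8, 1))
y = relu(x @ w_shared) @ w_target_head

# Transfer learning: reuse the shared layer as-is and fit only the small,
# unshared output layer on local data, instead of training the full model.
features = relu(x @ w_shared)
w_head, *_ = np.linalg.lstsq(features, y, rcond=None)

mse = float(np.mean((features @ w_head - y) ** 2))
trained_params = w_head.size                  # only 8 weights trained locally
total_params = w_shared.size + w_head.size    # 40 weights in the full model
```

Only a fraction of the model's weights are trained on the power-limited device, which is the computing/time/energy saving the text attributes to reusing a pre-trained NN model.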
For UEs that are within a cell (e.g., communicating with the same gNB), and/or within proximity or a limited distance apart, such UEs may have UE-gNB channels that are very similar, or at least somewhat similar. Thus, for example, transfer learning may allow a trained NN model that estimates a UE1-gNB channel of a first UE (UE1) to be used (e.g., at least as a starting point) by a second UE (UE2) as a NN model to estimate a UE2-gNB channel of the second UE (UE2). However, to enable ML-based channel estimation using transfer learning, the information relating to the pretrained NN models must be communicated among UEs and/or with the gNB. There are currently no techniques for communicating or indicating information relating to pre-trained NN models within a wireless network. Therefore, according to an example embodiment, transfer learning is used to allow transfer (e.g., either directly via sidelink communications and/or indirectly from or via a gNB) of information relating to a pre-trained NN model from a first UE to one or more other UEs. FIG. 2 is a diagram illustrating a transfer of a pre-trained neural network (NN) model to one or more UEs according to an example embodiment. Several UEs, including UE1, UE2 and/or UE3, may be in communication with a network node (e.g., gNB 210). UE1 may have trained, and/or at least has and/or uses, a trained NN model 220 for channel estimation. NN model 220 may include a plurality of nodes organized in layers. The nodes are also referred to as artificial neurons, or simply neurons, and perform a function on provided input to produce some output value. A neural network requires a training period to learn or adjust the parameters, i.e., weights, used to map the input to a desired output. The mapping occurs via one or more activation functions. Some example activation functions may include Sigmoid, ReLU and Tanh, as examples. Each neural network model may be trained for a specific task.
In this example, the NN model 220 is trained to estimate a channel between UE1 and gNB 210. Thus, for example, the NN model may output channel estimation information based on reference signal inputs. Thus, the NN model 220 may output channel estimation information (e.g., amplitude, phase, CSI, or other channel estimation information) based on reference signals received by UE1 from gNB 210. In some cases, a pre-trained NN model 220 used for channel estimation for the UE1-gNB channel may also be useful in at least approximating (or being used as a starting point to estimate) a channel of one or more other UEs. Therefore, various techniques are described herein that allow for transfer of information relating to a pre-trained NN model 220 to one or more other UEs, such as to UE2 and/or UE3. Therefore, at 224, information relating to the NN model 220 may be transferred to UE2 (either via gNB 210, or directly from UE1 via sidelink communications). At 226, at least a portion 222 of NN model 220 may be transferred to UE3 (either via gNB 210, or directly from UE1 via sidelink communications). An indication of availability of a pre-trained NN model that estimates a channel may be provided to one or more UEs and/or to gNB 210. For example, UE2 and/or UE3 may receive a dedicated message or system information (e.g., system information block (SIB)) from either gNB 210, or directly from UE1 via sidelink communications, that includes an indication of availability of a NN model (220) that estimates a channel between UE1 and gNB 210. Also, gNB 210 may determine that a pre-trained NN model 220 that estimates a UE-gNB channel for UE1 is available at the gNB 210 or is available at UE1.
This may include the gNB 210 either (for example): generating a pre-trained NN model (e.g., based on reference signals, such as sounding reference signals, received by the gNB 210), which estimates a channel between UE1 and gNB 210; receiving, by gNB 210 from UE1, a pre-trained NN model, which estimates the channel between UE1 and gNB 210; or receiving, by gNB 210 from UE1, an indication that UE1 has available a pre-trained NN model (220), which estimates the channel between UE1 and gNB 210. gNB 210 may then notify UE2, UE3 (and/or other UEs), e.g., via transmission or broadcasting of a system information block (SIB), or via transmission of a dedicated message, of the availability of the pre-trained NN model 220. For example, after receiving an indication of the availability of a pre-trained NN model, UE2 and/or UE3 may send a request for, and then may receive, at least a portion of the pre-trained NN model 220. For example, UE2 may send a request for the pre-trained NN model to either gNB 210, or directly to UE1 via sidelink communications. For example, the request may include a full/partial indication that indicates a request for either a full pre-trained neural network model, or a partial amount or portion of the pre-trained neural network model. Or, the request may indicate a portion, percentage, or a number of layers that is requested of a pre-trained NN model (e.g., a request for only 50% of the layers of the pre-trained NN model). The requesting UE (e.g., UE2 or UE3 in this example) may then receive information relating to the pre-trained NN model, e.g., which may include, for example, at least a portion of the pre-trained NN model 220, either from gNB 210, or directly from UE1 via sidelink communications, and/or NN model configuration information, and/or other information.
For example, UE2 may receive a NN model transfer message that includes information relating to the pre-trained NN model, e.g., which may include parameters (e.g., weights) for a plurality of (shared) layers of the pre-trained NN model, or a plurality of weights or compressed weights of the pre-trained NN model 220 (or of at least the requested portion of the NN model 220). The NN model transfer message, e.g., including information relating to the pre-trained NN model, may also include NN model configuration information, e.g., which may indicate one or more types of activation functions that are used by the pre-trained NN model, a number of layers that is provided, a location of UE1, and/or a time stamp (e.g., indicating a time of storage and/or creation) for the NN model, and/or other NN model configuration information. The time or timestamp for the NN model may indicate a time the NN model was created or stored. The NN model may be accurate (e.g., accurately models the UE-gNB channel) for a period of time, but channel conditions for the UE-BS channel may frequently change, such as based on changes in the environment, movement of the UE, etc. Thus, the time or timestamp of the NN model may be used by a UE to confirm that it is the latest NN model, and not stale or expired. Also, the location information may indicate a location of the measuring/source UE (for which the NN model models the UE-gNB channel). Any localization or positioning technique can be used to determine and/or indicate a location of a source UE (e.g., UE1), and the location may be provided in any format, such as in X, Y coordinates, GPS coordinates, etc. For example, UE2 may compare its location to the location of UE1 to determine that UE1 and UE2 are within some maximum distance, e.g., to increase the likelihood of the UE1-gNB channel being similar to the UE2-gNB channel.
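The contents of such a NN model transfer message (weights for the shared layers, activation function types, number of layers provided, source UE location, and timestamp) can be sketched as a simple container. This is an illustrative data structure only, not a defined 3GPP message format; field names are assumptions.

```python
from dataclasses import dataclass
import time

@dataclass
class NnModelTransferMessage:
    """Illustrative NN model transfer message contents (not a 3GPP format)."""
    layer_weights: list          # weights for each shared layer being transferred
    activation_functions: list   # e.g., ["relu"] per shared layer
    num_layers_provided: int     # how many layers of the pre-trained model are included
    source_location: tuple       # position of the source UE (any format, e.g. X, Y)
    timestamp: float             # creation/storage time, used to detect stale models

# A message carrying a single shared layer, as UE2 might receive it.
msg = NnModelTransferMessage(
    layer_weights=[[[0.1, -0.2], [0.3, 0.4]]],
    activation_functions=["relu"],
    num_layers_provided=1,
    source_location=(10.0, 20.0),
    timestamp=time.time(),
)
```

A receiving UE would read `num_layers_provided` and `activation_functions` to reconstruct the shared layers, and use `source_location` and `timestamp` for the proximity and freshness checks described in the text.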
UE2 may also compare the current time to the time stamp for the pre-trained NN model to determine whether the pre-trained NN model is still current, and is not stale (e.g., since channels may change relatively quickly). The shared layers of the pre-trained NN model 220 refer to layers of the NN model that are transferred to UE2 and/or UE3 (and thus, shared layers are shared with other UEs), whereas unshared layers refer to those layers or portions of the pre-trained NN model that are not provided or shared with UE2 or UE3, for example. A source UE (or source node) may refer to a UE that created or determined or trained the NN model for a channel (e.g., for its UE-gNB channel) and/or the UE that provides or forwards its pre-trained NN model (or information relating to it, such as a portion of the model) to another node. A destination UE (or destination node) is a UE that obtains (or receives) the forwarded information relating to the pre-trained NN model, such as the pre-trained NN model itself (e.g., weights and/or NN model configuration information) (e.g., either via a gNB, or directly from the source UE via sidelink communications). Thus, in this example, UE1 is a source UE (since it creates, trains, and/or provides the NN model for channel estimation), while UE2 and UE3 are destination UEs (since they receive the pre-trained NN model). Sidelink communications may refer to, or may include, communications that occur directly between two UEs (or other nodes), without the use of a gNB to forward information between the UEs. Thus, sidelink communications may refer to or may include direct UE-to-UE communication. UE2 and/or UE3 may then use the received (at least a portion of the) pre-trained NN model 220 to obtain a channel estimate for the UE2-gNB channel or UE3-gNB channel, respectively. UE2 and/or UE3 may also further train its NN model (e.g., to train its NN model, including the unshared layers of its NN model, using the shared layers as a starting point for training).
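The staleness and proximity checks a destination UE performs before adopting a pre-trained model can be sketched as one gating function. The threshold values here are pure assumptions; suitable values would depend on how quickly the channel changes and on cell geometry.

```python
import math

# Hypothetical thresholds (assumptions for illustration only).
MAX_MODEL_AGE_S = 5.0
MAX_SOURCE_DISTANCE_M = 50.0

def model_is_usable(model_timestamp, source_xy, now, my_xy,
                    max_age=MAX_MODEL_AGE_S, max_dist=MAX_SOURCE_DISTANCE_M):
    """Accept a pre-trained model only if it is fresh and the source UE is nearby."""
    age = now - model_timestamp            # staleness check against the timestamp
    dist = math.dist(source_xy, my_xy)     # proximity check against the source UE
    return age <= max_age and dist <= max_dist

# A fresh model from a source UE 30 m away is accepted; a stale one is not.
ok = model_is_usable(model_timestamp=100.0, source_xy=(0, 0), now=102.0, my_xy=(30, 0))
stale = model_is_usable(model_timestamp=100.0, source_xy=(0, 0), now=120.0, my_xy=(30, 0))
```

Only when both checks pass would the UE transmit the request for the pre-trained model described in the text.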
UE2 and/or UE3 may also transmit a report to gNB 210 indicating the channel estimation information, e.g., that was obtained as an output from its NN model (or based on the output of the NN model), which generates outputs based on inputs that may include reference signals received from the gNB. Also, UE2 and/or UE3 may receive data from the gNB 210 based on the channel estimation information (e.g., gNB 210 may select and use one or more transmission parameters for downlink transmission, e.g., such as modulation and coding scheme (MCS), rank, precoding, etc., based on the channel estimation information received from UE2 or UE3, respectively). FIG. 3 is a flow chart illustrating operation of a user device (UE) according to an example embodiment. Operation 310 includes receiving, by a first user device in a wireless network, an indication of availability of a pre-trained model that estimates a channel between a second user device and a network node. Operation 320 includes receiving, by the first user device, information relating to the pre-trained model. Operation 330 includes determining, by the first user device, channel estimation information based on at least a portion of the pre-trained model. And, operation 340 includes performing at least one of the following: transmitting, by the first user device, a report to the network node including the channel estimation information; or receiving data, by the first user device from the network node, based on the channel estimation information. The method of FIG. 3 may include one or more additional features or functions as follows: In an example embodiment, the channel estimation information may include at least one of: information estimating an amplitude and/or phase, or a change in amplitude and/or phase, associated with a channel; or channel state information (CSI), including one or more of a channel quality indicator (CQI), a precoding matrix indicator (PMI), and/or a rank indicator (RI).
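The four operations of the FIG. 3 flow chart can be sketched as one destination-UE procedure. The message contents and collaborator callables below are hypothetical stand-ins for the radio interface; the source defines only the sequence of steps.

```python
# Sketch of the FIG. 3 UE-side procedure; message contents are hypothetical.

def run_destination_ue(availability_indication, request_model, estimate_channel, send_report):
    # 310: receive an indication of availability of a pre-trained model.
    if not availability_indication.get("model_available"):
        return None
    # 320: request and receive information relating to the pre-trained model
    # (here, only a portion of the layers is requested, per the full/partial option).
    model_info = request_model(full=False)
    # 330: determine channel estimation information using the received portion.
    csi = estimate_channel(model_info)
    # 340: transmit a report including the channel estimation information.
    send_report(csi)
    return csi

# Stub collaborators standing in for the over-the-air exchanges.
sent = []
csi = run_destination_ue(
    availability_indication={"model_available": True},
    request_model=lambda full: {"layers": 2, "weights": [0.1, 0.2]},
    estimate_channel=lambda m: {"cqi": 9, "ri": 1},
    send_report=sent.append,
)
```

Operation 340's alternative branch (receiving data based on the channel estimation information) would follow the report at the network node's discretion and is omitted here.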
In an example embodiment, the receiving an indication of availability may include at least one of: receiving, by the first user device from the network node via a system information block (SIB) transmission, an indication of availability of a pre-trained neural network model; or receiving, by the first user device from the network node, a dedicated message with an indication of availability of the pre-trained neural network model. In an example embodiment, the receiving an indication of availability of a pre-trained neural network model may include: receiving, by the first user device, an indication of availability of a pre-trained neural network model that estimates a channel between the second user device and the network node, wherein the indication indicates at least one of a time of creation or a time of storage of the pre-trained neural network model and/or a location of the second user device. In an example embodiment, the receiving information relating to the pre-trained model may include at least one of: receiving at least a portion of a pre-trained neural network model from the network node; receiving at least a portion of the pre-trained neural network model via sidelink communications or higher-layer signaling from the second user device; or receiving at least a portion of the pre-trained neural network model via inter-radio access technology (inter-RAT) communications. Or, the receiving information relating to the pre-trained model may include receiving at least one of: a plurality of weights or compressed weights for at least a portion of the pre-trained neural network model; and/or neural network model configuration information, e.g., including information indicating one or more types of activation functions of the pre-trained neural network model.
In an example embodiment, the receiving information relating to the pre-trained neural network model may include: transmitting, by the first user device, to either the second user device or the network node, a request for the pre-trained neural network model; and receiving, by the first user device, information relating to the pre-trained neural network model based on transmitting the request. In an example embodiment, the transmitting a request may include transmitting, by the first user device, to either the second user device or the network node, a request for the pre-trained neural network model, wherein the request includes a full/partial indication that indicates a request for either a full pre-trained neural network model, or a partial amount or portion of the pre-trained neural network model. The request may indicate a requested portion or a number or amount of layers of the pre-trained neural network that is being requested. The receiving the information relating to the pre-trained model, may include, e.g., receiving, by the first user device, at least a portion of the pre-trained neural network model (e.g., weights of one or more layers of the pre-trained NN model) during a procedure of the first user device to establish a connection to the network node. 
Also, in an example embodiment, the receiving information relating to the pre-trained neural network model may include: determining, by the first user device, at least one of a location of the first user device, or a current time; comparing, by the first user device, the location of the first user device and/or the current time to the time and/or location associated with the pre-trained neural network model; making a determination, based on the comparing, to obtain at least a portion of the pre-trained neural network model; transmitting a request to receive the pre-trained neural network model; and receiving, by the first user device, information relating to the pre-trained neural network model or at least a portion of the pre-trained neural network model from either the network node or the second user device. FIG. 4 is a flow chart illustrating operation of a network node according to an example embodiment. Operation 410 includes determining, by a network node, that a pre-trained model, which estimates a channel between a first user device and the network node, is available at the network node or at the first user device. Operation 420 includes transmitting, by the network node, to one or more user devices, an indication of availability of a pre-trained model that estimates the channel between the first user device and the network node. Operation 430 includes receiving, by the network node from a second user device, a request for the pre-trained model. And, operation 440 includes transmitting, by the network node to the second user device, information relating to the pre-trained neural network model.
The method of FIG. 4 may include one or more additional features or functions as follows: The determining that a pre-trained model is available may include at least one of: generating, by the network node, a pre-trained neural network model, which estimates a channel between the first user device and the network node; receiving, by the network node from the first user device, information relating to a pre-trained neural network model (e.g., such as receiving at least a portion of the pre-trained NN model and/or NN model configuration information), which estimates a channel between the first user device and the network node; or receiving, by the network node from the first user device, an indication that the first user device has available a pre-trained neural network model, which estimates a channel between the first user device and the network node. The transmitting an indication of availability may include at least one of: transmitting, by the network node to one or more user devices, including at least the second user device, via a system information block (SIB) transmission, an indication of availability of a pre-trained neural network model; or transmitting, by the network node to one or more user devices including at least the second user device, a dedicated message with an indication of availability of the pre-trained neural network model. The transmitting, by the network node to the second user device, at least a portion of the pre-trained neural network model, may include transmitting a plurality of weights for at least a portion of the pre-trained neural network model, and/or neural network configuration information including information indicating one or more types of activation functions of the pre-trained neural network model.
The request for the pre-trained model may include a full/partial indication that indicates a request for either a full pre-trained neural network model, or a partial amount or portion of the pre-trained neural network model; wherein the transmitting at least a portion of the pre-trained neural network model may include at least one of: transmitting, by the network node to the second user device, only a portion of the pre-trained neural network model (e.g., including weights of only a subset of layers of the pre-trained NN model) if the full/partial indication indicates a request for a partial amount or portion of the pre-trained neural network model; or transmitting, by the network node to the second user device, the full pre-trained neural network model (e.g., including weights of all layers of the pre-trained NN model) if the full/partial indication indicates a request for a full pre-trained neural network model. FIG. 5 is a flow chart illustrating operation of a user device (UE) according to another example embodiment. Operation 510 includes transmitting, by a first user device to either a network node or a second user device, an indication of availability of a pre-trained model that estimates a channel between the first user device and the network node. Operation 520 includes receiving, by the first user device from either the network node or the second user device, a request for the pre-trained model. And, operation 530 includes transmitting, by the first user device to the network node (e.g., to be forwarded to the second user device from the network node) via uplink communication or to the second user device via sidelink communications, in response to the request, information relating to the pre-trained model. As noted, the pre-trained model may be, for example, a pre-trained NN model. Further illustrative examples are provided below.
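The source-side branching on the full/partial indication, described above, could be sketched as follows (illustrative only; representing the model as a list of per-layer weight arrays is an assumption):

```python
def select_weights(model_layers, full, num_layers=None):
    """Return all layer weights for a full request, or only the first
    num_layers layers for a partial request.

    model_layers: illustrative list of per-layer weight arrays.
    """
    if full:
        return list(model_layers)
    # Partial request: send weights of only the requested subset of layers.
    return list(model_layers[:num_layers])

layers = [[0.1, 0.2], [0.3], [0.4, 0.5]]
```

For example, `select_weights(layers, full=False, num_layers=2)` yields only the first two layers' weights.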
As noted, a model request (e.g., a NN model request) that requests a pre-trained model, may be sent by a destination UE (or destination node) to a gNB or a source UE (or a cluster head (CH) that is a source of a NN model). In some cases, the NN model request may indicate one or more parameters (or criteria) of a requested or preferred NN model, e.g., a full/partial indication to indicate whether a full or partial NN model is being requested, an amount or portion or % of the NN model that is being requested, or a number of layers of the NN model that is being requested, a location or range of locations of interest (e.g., within some maximum range around the destination UE), a time window or range of time stamps for the NN model that are acceptable, a type of activation function(s) for the NN model that are requested or preferred, a purpose or application of the NN model (e.g., indicating channel estimation in this case, in the event that NN models may be requested or obtained for other purposes, applications or radio functions), and/or other NN model criteria. Thus, in some cases, the source UE or gNB may perform some filtering or selection among one or more available NN models, e.g., to select the NN model that fits or best fits the requested criteria. The source UE and/or gNB (which forwards the requested NN model) may select a NN model that meets such requested criteria, or may determine whether such a NN model is available (e.g., which meets the criteria of the requested NN model). In the event that the requested NN model is available, the source UE and/or gNB may then send or forward the NN model (e.g., either full or partial NN model) to the destination (or may forward information relating to the pre-trained NN model, which may include NN model weights and/or NN model configuration information, or other NN model information). 
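The filtering or selection among available NN models against the requested criteria might look like this sketch (the dictionary keys, the 2-D distance metric, and the tie-break by recency are assumptions made for illustration):

```python
import math

def best_match(models, max_age_s, max_dist_m, now, ue_pos, purpose="channel_estimation"):
    """Select, among available models, the one that best fits the requested
    criteria (purpose, maximum age, maximum distance); None if none qualify."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    candidates = [
        m for m in models
        if m["purpose"] == purpose
        and (now - m["timestamp"]) <= max_age_s
        and dist(m["location"], ue_pos) <= max_dist_m
    ]
    # Among qualifying models, prefer the most recently trained one.
    return max(candidates, key=lambda m: m["timestamp"], default=None)

models = [
    {"timestamp": 50.0, "location": (0.0, 0.0), "purpose": "channel_estimation", "weights": "A"},
    {"timestamp": 95.0, "location": (5.0, 0.0), "purpose": "channel_estimation", "weights": "B"},
    {"timestamp": 99.0, "location": (500.0, 0.0), "purpose": "channel_estimation", "weights": "C"},
]
chosen = best_match(models, max_age_s=60.0, max_dist_m=50.0, now=100.0, ue_pos=(0.0, 0.0))
```

Here model C is recent but too far away, so the most recent nearby model (B) is selected; a strict implementation would return None rather than fall back to a non-matching model.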
In some cases, even though requested criteria are received in the NN model request, the source UE and/or gNB may simply forward a NN model that may or may not meet such criteria, or may forward any NN model that it has. Thus, in some cases, the criteria may be strictly followed by the source UE or gNB to select and forward the requested NN model if it fits the criteria, while in other cases the source UE and/or gNB may apply best efforts to find and send a NN model that best fits or satisfies the requested criteria (and which may or may not satisfy one or more or all of the criteria), while in still other cases the source UE and/or gNB may simply ignore any requested NN model criteria and may forward a (or any) available NN model (or information relating to the requested NN model) (which may or may not fit the requested criteria). Also, the destination UE (which will receive the requested NN model from the gNB or via direct communication from the source UE/CH via sidelink communications) may also check or confirm one or more NN model criteria (such as those listed above) to determine if the received NN model meets its required or preferred criteria (e.g., amount or portion of NN model, a number of layers, a type of activation function, a time stamp that indicates the NN model is recent and not stale, a location that indicates that the NN model is from a source UE that is nearby or within some maximum distance or range of the destination UE, and/or a purpose or application for which the NN model has been trained (e.g., channel estimation in this example)). The destination UE may then determine whether to use the received NN model (e.g., if it meets one or more or all of the criteria), or may send a request for another NN model, e.g., in the case where the received NN model is unacceptable or does not satisfy one or more criteria.
If the NN model is acceptable, the destination UE may then use the received NN model to train one or more unshared layers of its NN model (e.g., using the received partial NN model as an initial state of its NN model), determine channel estimation information based on received reference signals and the NN model, receive data from the gNB or UE based on the NN model (e.g., where one or more transmission parameters used by the transmitting UE or gNB may be selected based on the NN model, for transmission to the destination UE), and/or the destination UE may send a report to the gNB that provides the channel estimation information, e.g., a CSI report, amplitude and/or phase, or other channel state information or channel estimation information. FIG. 6 is a diagram illustrating a body, or at least a part, of a neural network model transfer message that may be used to transfer (or transmit) information relating to a pre-trained neural network model to a destination UE according to an example embodiment. As noted, a requested NN model (or information relating to a requested pre-trained NN model) may be sent or delivered to a destination UE via a neural network model transfer message 610, or other message. NN model transfer message 610 may include a plurality of weights 630 of the plurality of layers for the NN model (or partial NN model) that is being transferred or forwarded to the destination UE.
In addition, NN model transfer message 610 may include NN model configuration information 620, which may indicate, for example, one or more of: a full/partial indication indicating whether a full or partial NN model is being provided; a number or amount or percentage of layers that are being provided; a location of the source UE or source CH (e.g., at the time the NN model was last trained or updated, before being forwarded); a timestamp of the NN model (e.g., which may indicate a time associated with the NN model, such as indicating a time of either: creating, updating, most recent training, storing or forwarding of the NN model); a purpose of the NN model (e.g., indicating a purpose or application (for the NN model) of channel estimation in this example, but the destination UE may request and/or receive NN models for other radio purposes or applications, so these different purposes or applications may be indicated for the received NN model); and/or other parameters or NN model configuration information (as these are merely illustrative examples). In some cases, the NN model configuration may be preconfigured (and thus known by the destination UE in advance), and for such case, there may be no NN model configuration information 620 included or provided, for example. Table 1 illustrates a multi-bit value (which may be included in NN model configuration information 620, FIG. 6) that may be used to encode information indicating the type of activation function(s) that are used for the pre-trained NN model. For example, a value of 00 may indicate that a sigmoid activation function is used; a value of 01 may indicate that a ReLU activation function is used; and/or a value of 10 may indicate that a Tanh activation function is used for the NN model. This is merely an example, and other values and/or activation functions may be used.
TABLE 1
Activation functions used in NN model may be indicated using multi-bit value.

  Types (indication of activation
  functions in NN model):           0 (00)     1 (01)    2 (10)
  Activation functions:             Sigmoid    ReLU      Tanh

FIG. 7 is a diagram illustrating a process of pre-trained neural network model transfer according to an example embodiment. The process of FIG. 7 may be performed by the destination UE. At 710, it is determined that the destination UE needs, or would like to use, a NN model, such as for channel estimation in this case. At 712, it is determined that there is a pre-trained NN model that is available (e.g., NN model availability is true). At 714, it is determined whether the gNB/BS has a recently trained NN model. If the gNB has a recently trained NN model (for channel estimation), then at 718, the gNB/BS may forward information relating to the pre-trained NN model, such as (e.g., weights of) the pre-trained NN model (or partial NN model) to the destination UE, and may provide a full/partial indication for such forwarded NN model, to indicate whether a full or partial NN model is being provided, and/or may provide NN model configuration information. At 716, it is determined whether the source UE/cluster head (CH) will directly transfer the requested NN model to the destination UE. If direct forwarding will be performed, then at 722, the source UE directly forwards the information relating to the pre-trained NN model, such as weights and/or configuration information for the pre-trained NN model, e.g., with full/partial indication, to the destination UE via sidelink communications. If direct forwarding will not be performed, then at 720, the source UE/source CH forwards the NN model to the gNB, which then forwards such pre-trained NN model to the destination UE, e.g., with full/partial indication.
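The Table 1 encoding of activation function types can be sketched as a small lookup (treating the unused value 11 as reserved is an assumption, since Table 1 defines only three codes):

```python
# Multi-bit encoding of the activation function type, per Table 1.
ACTIVATION_CODES = {0b00: "Sigmoid", 0b01: "ReLU", 0b10: "Tanh"}

def decode_activation(bits):
    """Map a 2-bit code from NN model configuration information to an
    activation function name; undefined codes are treated as reserved."""
    if bits not in ACTIVATION_CODES:
        raise ValueError(f"reserved activation code: {bits:02b}")
    return ACTIVATION_CODES[bits]
```

So a received value of 01 decodes to ReLU, matching the table above.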
At 724, the destination UE may then use the received pre-trained NN model, which may include, e.g.: training the NN model, which may include training one or more unshared layers and/or shared layers of its NN model using the received NN model as a starting point or initial state; receiving data from a UE and/or gNB based on the NN model (e.g., where one or more transmit parameters and/or receive parameters may be configured based on the channel estimate output or provided by the pre-trained NN model); and/or determining channel estimation information based on received reference signals and based on at least the pre-trained NN model (e.g., which may be further trained by the destination UE). FIG. 8 is a diagram illustrating a process that may be performed to set a flag indicating availability of a pre-trained neural network model for channel estimation. This flag may be set by a network node, e.g., gNB or BS. At 810, the gNB may initialize, as a default, the pre-trained NN model availability to false. Thereafter, a NN model for channel estimation may be trained by a source UE or by the gNB. At 812, the gNB may determine that the gNB has received a pre-trained NN model from a UE, or that the gNB has generated or trained a NN model, or that the gNB has received an indication of availability of a NN model from a source UE. In any of such case(s) (meaning that a pre-trained NN model is available), at 816, the gNB may set the pre-trained NN model availability to True, including adding any additional information or parameters associated with the pre-trained NN model.
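The transfer-learning use at 724, where received (shared) layers serve as a frozen initial state and only the unshared layers are trained locally, can be sketched with a toy two-layer network (NumPy; the layer sizes, data, and learning rate are illustrative assumptions, not anything prescribed by the text):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical weights of the shared layer, as received in the partial
# pre-trained NN model (values are illustrative).
W_shared = rng.normal(size=(4, 8))        # frozen: used as-is
W_unshared = np.zeros((8, 1))             # unshared layer, trained locally

def forward(x):
    h = np.tanh(x @ W_shared)             # shared layer with received weights
    return h @ W_unshared                 # locally trained output layer

# Toy training data standing in for channel-estimation samples; targets are
# constructed so the unshared layer can fit them.
X = rng.normal(size=(64, 4))
y = np.tanh(X @ W_shared) @ rng.normal(size=(8, 1))

loss_before = float(np.mean((forward(X) - y) ** 2))
for _ in range(500):
    h = np.tanh(X @ W_shared)
    grad = h.T @ (h @ W_unshared - y) / len(X)   # gradient for unshared layer only
    W_unshared -= 0.1 * grad                     # shared layer is never updated
loss_after = float(np.mean((forward(X) - y) ** 2))
```

Only `W_unshared` is updated, mirroring the idea that the received portion of the model is reused rather than retrained from scratch.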
At 818, the gNB may transmit (e.g., via transmission of SIB or dedicated message(s)) an indication of availability of a pre-trained NN model for channel estimation, which may be or may include a message with the pre-trained NN model availability set to True, and may include other information or parameters associated with the available pre-trained NN model, such as location information (location of source UE where NN model was trained), a UE ID that identifies the source UE for the NN model, a timestamp for the NN model (e.g., indicating a time of training, a time of storage, and/or a time of transmission or receipt of such pre-trained NN model), a purpose or application for which the NN model was trained (e.g., indicating channel estimation in this case), etc. FIG. 9 is a diagram that illustrates a process to update and share the pre-trained neural network (NN) model availability flag with one or more UEs (e.g., possible destination UEs that may be interested in receiving the pre-trained NN model for channel estimation). As shown in FIG. 9, a source UE 912 and a destination UE 910 may be in communication with each other via sidelink communications, and may be in communication with gNB 914. Different cases and different options may be possible for generating a pre-trained NN model, and communicating the availability of such pre-trained NN model to UEs. In case 1, a trained NN model is generated (e.g., trained) by gNB 914, e.g., based on reference signals received from UE 912. Thus, at 916, the gNB 914 generates the trained NN model. In case 2, the NN model is generated by the source (or a neighbor) UE 912. Thus, at 918, the source UE 912 generates a trained NN model. In this example case, the neighboring UEs may locally form a cluster while selecting one of the UEs as the cluster head (CH). Since the CH communicates with all UEs, including the destination UE, the CH collects and stores the NNs trained by the UEs in the cluster.
Therefore, if the CH has a new pre-trained NN model, the event of NN generation is reported from the CH to the base station, and the base station sets the flag to True. And, at 920, the source UE 912 sends a message to gNB 914 indicating that a NN model is available at the source UE 912. At 922, the gNB 914 sets the "pre-trained NN model availability flag" to True. In option 1, the gNB 914 sends the NN model availability flag or indication periodically. For example, at 924, the gNB 914 transmits or broadcasts system information block (SIB) signaling including the "pre-trained NN model availability flag." In option 2, the NN model availability indication or flag may be transmitted by the gNB upon request from a UE. Thus, for example, at 926, the gNB 914 receives a request for NN model availability from a destination UE 910. And, at 928, in response to receiving the request at 926, the gNB 914 transmits to destination UE 910 (e.g., via physical downlink control channel (PDCCH) signaling or other message) the pre-trained NN model availability indication, such as the "pre-trained NN model availability flag," to inform the destination UE that a pre-trained NN model for channel estimation is available. Therefore, after the flag is set to True, the gNB 914 notifies the destination UE 910 of the availability flag change by using one of two options. In Option 1 of FIG. 9, SIB signaling is used to periodically send the updated flag to the destination UE (and possibly other UEs). Also, it is possible that the destination UE wants to check the latest availability of a NN model. In this case, Option 2 in FIG. 9 shows that the destination UE 910 sends a request message to the gNB 914. Then, the gNB 914 uses a dedicated DCI field in PDCCH signaling to send an updated flag to the destination UE 910. Therefore, in this manner, the destination UE 910 may obtain the current flag value (NN model availability indication or flag).
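The flag handling of FIG. 8, together with the two notification options of FIG. 9 (periodic SIB broadcast vs. on-request dedicated response), might be sketched as follows (the class and method names are illustrative, not 3GPP-defined signaling):

```python
class GNB:
    """Minimal sketch of availability-flag handling at the network node."""

    def __init__(self):
        # Default per FIG. 8: availability initialized to False.
        self.nn_model_available = False
        self.model_info = None

    def on_model_event(self, info):
        """A pre-trained NN model was received, generated, or announced by a
        source UE: set the flag to True and record associated parameters."""
        self.nn_model_available = True
        self.model_info = info

    def sib_broadcast(self):
        """Option 1: contents of a periodic SIB carrying the flag."""
        return {"flag": self.nn_model_available, "info": self.model_info}

    def on_availability_request(self):
        """Option 2: dedicated (e.g., PDCCH-signaled) response to a UE request."""
        return self.sib_broadcast()
```

Both options return the same flag plus the associated model information (timestamp, location, etc.); only the trigger differs.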
Additionally, in both Options 1 and 2, the gNB 914 may send to the destination UE 910 the flag with additional information associated with the NN model that is available, such as a timestamp, location information, etc. In FIG. 7, after checking the flag to determine whether a NN model is available, the destination UE may request the gNB to send a pre-trained NN model (or request that the gNB send information relating to the pre-trained NN model). Also, the request sent by the destination UE to the gNB may include a full/partial indication that indicates whether a full NN model is requested, or whether a partial NN model is being requested. It is possible that the machine learning (ML) algorithm running on the destination UE may not need a full NN model. Also, in some cases, using a full pre-trained NN model (trained at another UE) may result in over-fitting (e.g., where some of the shared layers of the full NN model may not accurately reflect the channel of the destination UE), and also sending the full NN model may consume additional resources. Thus, it may be advantageous, at least in some cases, for the destination UE to request and/or receive only a partial pre-trained NN model, rather than the full pre-trained NN model. Therefore, the destination UE may decide to download (request and receive) only a partial NN model. To do this, the destination UE may include a full/partial indication within its request for the pre-trained NN model that is sent to the gNB or source node. As shown in FIG. 10, the destination UE may communicate to the gNB an indication of whether a full or partial NN model is requested. FIG. 10 is a diagram illustrating a full/partial indication transfer or transmission according to an example embodiment. The destination UE may request a partial pre-trained NN model, which may cause the source node (gNB or source UE/CH) to transmit the partial NN model.
On the other hand, the destination UE may request a full pre-trained NN model, which causes the source node (e.g., gNB or source UE/CH) to transmit the full pre-trained NN model to the destination UE. Moreover, the destination UE may request a recent version of the pre-trained NN model. For instance, the request message of the UE can include a threshold value for the age of the pre-trained NN model (e.g., a maximum allowed age). In such case, for example, only pre-trained NN models (or information relating to such pre-trained NN models) that meet such maximum allowed age should be sent to the destination UE, e.g., to ensure that the UE receives a current or most recent pre-trained NN model, since channel conditions may change quickly within the wireless network. When the gNB receives a request from the destination UE, the gNB may check where the pre-trained NN model is stored on the network. Also, the gNB may calculate the age of the stored or available pre-trained NN model by using the timestamp of the pre-trained NN model. By comparing the calculated age of the pre-trained NN model to the maximum allowed age indicated by the destination UE, the gNB can decide whether the pre-trained NN model stored at the gNB is current (age of pre-trained NN model is within the maximum age threshold indicated by the destination UE) and thus may be transmitted to the destination UE, or is too old or stale, and thus will (or should) not be transmitted to the UE. For data transmission, the pre-trained NN model may have a large size, and thus may typically be transmitted via a user plane or data channel, such as PDSCH (physical downlink shared channel). Also, it is possible that the gNB does not have a suitable version of a pre-trained NN model. In this case, the destination UE must receive the latest pre-trained NN model stored at the CH.
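The age check described above, comparing the model age computed from its timestamp against the maximum allowed age indicated by the destination UE, reduces to a simple comparison (the time unit is an assumption; any consistent unit works):

```python
def model_is_current(model_timestamp, now, max_allowed_age):
    """Return True when the model age (now - timestamp) is within the
    maximum allowed age indicated by the destination UE's request."""
    return (now - model_timestamp) <= max_allowed_age

# A model trained 10 time units ago passes a 15-unit threshold;
# one trained 100 units ago does not.
recent_ok = model_is_current(model_timestamp=90.0, now=100.0, max_allowed_age=15.0)
stale_ok = model_is_current(model_timestamp=0.0, now=100.0, max_allowed_age=15.0)
```

When the check fails, the gNB would decline to transmit its stored copy and the UE would obtain the latest version from the CH instead.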
The CH is able to maintain the most recently updated versions of trained NNs, since UEs in the cluster regularly upload new versions of trained NNs to the CH. In general, when the pre-trained NN model is stored at the CH (or at another UE), the NN model may be sent from the CH (or source UE) to the destination UE via two paths. First, the CH can directly send the information to the requesting UE by using device-to-device (D2D) or sidelink communications. Second, the CH can upload (or send) the new or updated pre-trained NN model to the gNB. In this case, the destination UE downloads (or requests and receives) the pre-trained NN model from the gNB. When the gNB is used as a relay, the signaling overhead increases compared to the first option using D2D or sidelink communications. However, by using the gNB to relay the NN model transmission, the gNB's outdated NN model may be replaced or updated with the new pre-trained NN model. Therefore, in such case of forwarding the NN model via the gNB, this allows the gNB to receive and/or maintain the latest version of the pre-trained NN model(s). Thus, when another UE requests the same information, or requests a pre-trained NN model for channel estimation in the future, the gNB sends its updated or latest version of the pre-trained NN model to the requesting UE without (necessarily) communicating with the CH (e.g., the gNB has the current or updated version of the NN model, and does not need to request or obtain the current version of the NN model from the CH or source UE). FIGS. 11-13 illustrate some examples or use cases where a pre-trained NN model may be transferred to a destination UE. A RRCSetupRequest may typically be used by a UE to request transition to a connected state. However, the RRCSetupRequest and RRCSetup messages may, in some of these examples, instead be used to request and obtain NN model availability. FIG. 11 is a diagram illustrating transfer of a pre-trained NN model to a UE that may be in an idle or inactive state.
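The second delivery path, where the gNB relays the model and thereby refreshes its own cached copy so later requests can be served without contacting the CH, might be sketched as follows (illustrative; the cache-by-timestamp policy is an assumption):

```python
class RelayGNB:
    """Sketch of the gNB relay path: forwarding a model also updates the
    gNB's cached copy, so future requests need not reach the CH/source UE."""

    def __init__(self):
        self.cached_model = None  # (model, timestamp) or None

    def relay(self, model, timestamp):
        """Forward a model from the CH; replace the cache if this copy is newer."""
        if self.cached_model is None or timestamp > self.cached_model[1]:
            self.cached_model = (model, timestamp)
        return self.cached_model[0]

    def serve_request(self):
        """Answer a later UE request from the cache, without querying the CH."""
        return None if self.cached_model is None else self.cached_model[0]
```

This mirrors the trade-off noted above: relaying via the gNB costs extra signaling once, but keeps the gNB's copy current for subsequent requesters.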
In this case, the inactive or idle UE may use RRCSetupRequest to request transmission of a NN model availability indication, so that the idle or inactive UE can determine whether it can obtain a pre-trained NN model from the gNB. At 1110, the gNB 914 may periodically broadcast the NN model availability indication or flag, e.g., via SIB1. This may inform the idle or inactive state UE whether or not a pre-trained NN model is available at the gNB. At 1112, a UE 910 in idle or inactive state may send a RRCSetupRequest to the gNB (with a request for the pre-trained NN model). Thus, for example, this use of RRCSetupRequest may be used by an idle or inactive state UE to request and then obtain a pre-trained NN model. At 1114, the UE receives an RRCSetup message. At 1116, the UE receives the pre-trained NN model from the gNB. FIG. 12 is a diagram illustrating transfer of a pre-trained NN model to a UE according to another example embodiment. At 1210, the UE may send a RRCSetupRequest, including a request of NN model availability and/or a request for the NN model. At 1212, the gNB may send the NN model availability indication or flag within a RRCSetup message. And, at 1214, the gNB may send the requested pre-trained NN model to the UE. FIG. 13 is a diagram illustrating transfer of a pre-trained NN model to a UE according to another example embodiment. A destination UE 910, a source gNB 1310, and a target gNB 1312 are shown. In a handover procedure (to perform handover of the UE from source gNB 1310 to target gNB 1312), the RRC (radio resource control) connection is modified. The NN availability flag and NN model request can be included in RRCReconfiguration. Optionally, the UE may also signal support for NN model training as part of UE capabilities signaling to the network. Also, the gNB may signal or indicate UE ML capabilities as part of the handover request/UE context information message. At 1320, the UE may send a RRCReestablishmentRequest to target gNB 1312, requesting NN model availability.
At 1330, the source gNB 1310 may inform the target gNB 1312 about ML capabilities of UE 910 (e.g., capabilities of UE 910 to use a pre-trained NN model). Also, in some cases, UE capabilities, which may include ML/NN capabilities of the UE, may be exchanged before the RRCReestablishment message is sent. At 1340, the UE 910 may receive from target gNB 1312 a RRCReestablishment (including NN availability flag or indication). At 1350, the UE 910 may obtain (e.g., as part of a handover procedure to the target gNB) the pre-trained NN model from the target gNB 1312. Example 1. A method comprising: receiving, by a first user device in a wireless network, an indication of availability of a pre-trained model that estimates a channel between a second user device and a network node; receiving, by the first user device, information relating to the pre-trained model; determining, by the first user device, channel estimation information based at least on the information relating to the pre-trained model; and performing at least one of the following: transmitting, by the first user device, a report to the network node including the channel estimation information; or receiving data, by the first user device from the network node, based on the channel estimation information. Example 2. The method of example 1, wherein the pre-trained model comprises a pre-trained neural network model. Example 3. The method of any of examples 1-2, wherein the performing comprises: transmitting, by the first user device, a report to the network node including the channel estimation information. Example 4. The method of any of examples 1-3, wherein the performing comprises: receiving data, by the first user device from the network node, wherein one or more transmission parameters for the data are based on the channel estimation information. Example 5.
The method of any of examples 1-4, wherein the channel estimation information comprises at least one of: information estimating an amplitude and/or phase, or a change in amplitude and/or phase, associated with a channel; or a channel state information (CSI), including one or more of a channel quality indicator (CQI), a precoding matrix indicator (PMI), and/or a rank indicator (RI). Example 6. The method of any of examples 2-5, wherein the receiving an indication of availability comprises at least one of: receiving, by the first user device from the network node via a system information block (SIB) transmission, an indication of availability of a pre-trained neural network model; or receiving, by the first user device from the network node, a dedicated message with an indication of availability of the pre-trained neural network model. Example 7. The method of any of examples 1-6, wherein the receiving an indication of availability of a pre-trained model comprises: receiving, by the first user device, an indication of availability of a pre-trained neural network model that estimates a channel between the second user device and the network node, wherein the indication indicates at least one of a time of creating or a time of storage of the pre-trained neural network model and/or a location of the second user device. Example 8. The method of any of examples 1-7, wherein the receiving information relating to the pre-trained model comprises at least one of: receiving at least a portion of a pre-trained neural network model from the network node; receiving at least a portion of the pre-trained neural network model via sidelink communications or higher layer from the second user device; or receiving at least a portion of the pre-trained neural network model via inter-radio access technology (inter-RAT) communications. Example 9.
The method of any of examples 1-8, wherein the receiving information relating to the pre-trained model comprises: receiving, by the first user device, at least a plurality of weights or compressed weights for at least a portion of a pre-trained neural network model. Example 10. The method of any of examples 1-9, wherein the receiving information relating to the pre-trained model comprises: receiving, by the first user device at least: a plurality of weights or compressed weights for at least a portion of a pre-trained neural network model; and neural network model configuration information, including information indicating one or more types of activation functions of the pre-trained neural network model. Example 11. The method of any of examples 1-10, wherein the receiving information relating to the pre-trained model comprises: transmitting, by the first user device, to either the second user device or the network node, a request for a pre-trained neural network model; and receiving, by the first user device, at least a portion of the pre-trained neural network model based on transmitting the request. Example 12. The method of example 11, wherein the transmitting a request comprises: transmitting, by the first user device, to either the second user device or the network node, a request for the pre-trained neural network model, wherein the request includes a full/partial indication that indicates a request for either a full pre-trained neural network model, or a partial amount or portion of the pre-trained neural network model. Example 13. The method of example 12, wherein the request indicates a requested portion or number or amount of layers of the pre-trained neural network model that is being requested. Example 14. 
The method of any of examples 2-13, wherein the receiving the pre-trained neural network model comprises: receiving, by the first user device, at least a portion of the pre-trained neural network model during a procedure of the first user device to establish a connection to the network node. Example 15. The method of example 14, wherein the receiving information relating to the pre-trained neural network model comprises: determining, by the first user device, at least one of a location of the first user device, or a current time; comparing, by the first user device, the location of the first user device and/or the current time to the time and/or location associated with the pre-trained neural network model; making a determination, based on the comparing, to obtain at least a portion of the pre-trained neural network model or information relating to the pre-trained neural network model; transmitting a request to receive the pre-trained neural network model; and receiving, by the first user device, at least a portion of the pre-trained neural network model from either the network node or the second user device. Example 16. A non-transitory computer-readable storage medium comprising instructions stored thereon that, when executed by at least one processor, are configured to cause a computing system to perform the method of any of examples 1-15. Example 17. An apparatus comprising means for performing the method of any of examples 1-15. Example 18. An apparatus comprising: at least one processor; and at least one memory including computer program code; the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus at least to perform the method of any of examples 1-15. Example 19.
An apparatus comprising: at least one processor; and at least one memory including computer program code; the at least one memory and the computer program code configured at least to: receive, by a first user device in a wireless network, an indication of availability of a pre-trained model that estimates a channel between a second user device and a network node; receive, by the first user device, information relating to the pre-trained model; determine, by the first user device, channel estimation information based at least on the information relating to the pre-trained model; and perform at least one of the following: transmit, by the first user device, a report to the network node including the channel estimation information; or receive data, by the first user device from the network node, based on the channel estimation information. Example 20. A method comprising: determining, by a network node, that a pre-trained model, which estimates a channel between a first user device and the network node, is available at the network node or at a first user device; transmitting, by the network node, to one or more user devices, an indication of availability of the pre-trained model that estimates the channel between the first user device and the network node; receiving, by the network node from a second user device, a request for the pre-trained model; and transmitting, by the network node to the second user device, information relating to the pre-trained model. Example 21. The method of example 20, wherein the pre-trained model comprises a pre-trained neural network model. Example 22. 
The method of any of examples 20-21, wherein the determining that a pre-trained model is available comprises at least one of: generating, by the network node, a pre-trained neural network model, which estimates a channel between the first user device and the network node; receiving, by the network node from the first user device, at least a portion of a pre-trained neural network model, which estimates a channel between the first user device and the network node; or receiving, by the network node from the first user device, an indication that the first user device has available a pre-trained neural network model, which estimates a channel between the first user device and the network node. Example 23. The method of any of examples 20-22, wherein the transmitting an indication of availability comprises at least one of: transmitting, by the network node to one or more user devices, including at least the second user device, via a system information block (SIB) transmission, an indication of availability of a pre-trained neural network model; or transmitting, by the network node to one or more user devices including at least the second user device, a dedicated message with an indication of availability of the pre-trained neural network model. Example 24. The method of any of examples 20-23, wherein the transmitting, by the network node to the second user device, information relating to the pre-trained model, comprises: transmitting at least a plurality of weights for at least a portion of a pre-trained neural network model. Example 25. The method of any of examples 20-24, wherein the transmitting, by the network node to the second user device, information relating to the pre-trained model, comprises: transmitting at least a plurality of weights for at least a portion of a pre-trained neural network model, and neural network model configuration information including information indicating one or more types of activation functions of the pre-trained neural network model.
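The request-and-transfer behavior recited in the examples above (a location/time freshness check before requesting, a full/partial indication in the request, and a responder that returns either the whole model or only a portion of its layers) can be sketched as ordinary code. This is a hedged illustration only: the message field names (`full_partial`, `requested_layers`), the layer-list model representation, and the distance/age thresholds are assumptions for illustration, not part of the example claims.

```python
import math

# Illustrative sketch of the model request/transfer logic in the examples above.
# Field names, the layer-list representation, and thresholds are assumptions.

def should_request_model(ue_xy, now_s, model_xy, model_time_s,
                         max_distance_m=500.0, max_age_s=3600.0):
    """Example 15-style check: is the advertised model close and recent enough?"""
    distance_m = math.dist(ue_xy, model_xy)   # UE position vs. model's location
    age_s = now_s - model_time_s              # current time vs. model's timestamp
    return distance_m <= max_distance_m and 0 <= age_s <= max_age_s

def build_model_request(full, num_layers=None):
    """Examples 12-13 style request: a full model, or only some of its layers."""
    request = {"full_partial": "full" if full else "partial"}
    if not full and num_layers is not None:
        request["requested_layers"] = num_layers  # amount of layers requested
    return request

def serve_model(request, layers):
    """Responder side (Example 26 style): send the full model or only a portion."""
    if request["full_partial"] == "full":
        return layers
    return layers[:request.get("requested_layers", len(layers))]
```

For instance, a device that finds the advertised model nearby and fresh might send `build_model_request(False, 2)` and receive only the first two layers' weights, saving signaling overhead relative to a full transfer.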
Example 26. The method of any of examples 20-25, wherein the request for the pre-trained model includes a full/partial indication that indicates a request for either a full pre-trained neural network model, or a partial amount or portion of the pre-trained neural network model; wherein the transmitting information relating to the pre-trained model comprises at least one of: transmitting, by the network node to the second user device, only a portion of the pre-trained neural network model if the full/partial indication indicates a request for a partial amount or portion of the pre-trained neural network model; or transmitting, by the network node to the second user device, the full pre-trained neural network model if the full/partial indication indicates a request for a full pre-trained neural network model. Example 27. The method of any of examples 20-26, wherein the transmitting at least a portion of the pre-trained model to the second user device is performed during a procedure of the second user device to establish a connection to the network node. Example 28. A non-transitory computer-readable storage medium comprising instructions stored thereon that, when executed by at least one processor, are configured to cause a computing system to perform the method of any of examples 20-27. Example 29. An apparatus comprising means for performing the method of any of examples 20-27. Example 30. An apparatus comprising: at least one processor; and at least one memory including computer program code; the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus at least to perform the method of any of examples 20-27. Example 31.
An apparatus comprising: at least one processor; and at least one memory including computer program code; the at least one memory and the computer program code configured at least to: determine, by a network node, that a pre-trained model, which estimates a channel between a first user device and the network node, is available at the network node or at a first user device; transmit, by the network node, to one or more user devices, an indication of availability of the pre-trained model that estimates the channel between the first user device and the network node; receive, by the network node from a second user device, a request for the pre-trained model; and transmit, by the network node to the second user device, information relating to the pre-trained model. Example 32. A method comprising: transmitting, by a first user device to either a network node or a second user device, an indication of availability of a pre-trained model that estimates a channel between the first user device and the network node; receiving, by the first user device from either the network node or the second user device, a request for the pre-trained model; and transmitting, by the first user device to the network node via uplink communication or to the second user device via sidelink communications, in response to the request, information relating to the pre-trained model. Example 33. The method of example 32, wherein the pre-trained model comprises a pre-trained neural network model. Example 34.
An apparatus comprising: at least one processor; and at least one memory including computer program code; the at least one memory and the computer program code configured at least to: transmit, by a first user device to either a network node or a second user device, an indication of availability of a pre-trained neural network model that estimates a channel between the first user device and the network node; receive, by the first user device from either the network node or the second user device, a request for the pre-trained neural network model; and transmit, by the first user device to the network node via uplink communication or to the second user device via sidelink communications, in response to the request, information relating to the pre-trained neural network model. FIG. 14 is a block diagram of a wireless station (e.g., AP, BS or user device/UE, or other network node) 1500 according to an example embodiment. The wireless station 1500 may include, for example, one or more (e.g., two as shown in FIG. 14) RF (radio frequency) or wireless transceivers 1502A, 1502B, where each wireless transceiver includes a transmitter to transmit signals and a receiver to receive signals. The wireless station also includes a processor or control unit/entity (controller) 1504 to execute instructions or software and control transmission and receptions of signals, and a memory 1506 to store data and/or instructions. Processor 1504 may also make decisions or determinations, generate frames, packets or messages for transmission, decode received frames or messages for further processing, and other tasks or functions described herein. Processor 1504, which may be a baseband processor, for example, may generate messages, packets, frames or other signals for transmission via wireless transceiver 1502 (1502A or 1502B).
Processor 1504 may control transmission of signals or messages over a wireless network, and may control the reception of signals or messages, etc., via a wireless network (e.g., after being down-converted by wireless transceiver 1502, for example). Processor 1504 may be programmable and capable of executing software or other instructions stored in memory or on other computer media to perform the various tasks and functions described above, such as one or more of the tasks or methods described above. Processor 1504 may be (or may include), for example, hardware, programmable logic, a programmable processor that executes software or firmware, and/or any combination of these. Using other terminology, processor 1504 and transceiver 1502 together may be considered as a wireless transmitter/receiver system, for example. In addition, referring to FIG. 14, a controller (or processor) 1508 may execute software and instructions, and may provide overall control for the station 1500, and may provide control for other systems not shown in FIG. 14, such as controlling input/output devices (e.g., display, keypad), and/or may execute software for one or more applications that may be provided on wireless station 1500, such as, for example, an email program, audio/video applications, a word processor, a Voice over IP application, or other application or software. In addition, a storage medium may be provided that includes stored instructions, which when executed by a controller or processor may result in the processor 1504, or other controller or processor, performing one or more of the functions or tasks described above. According to another example embodiment, RF or wireless transceiver(s) 1502A/1502B may receive signals or data and/or transmit or send signals or data. Processor 1504 (and possibly transceivers 1502A/1502B) may control the RF or wireless transceiver 1502A or 1502B to receive, send, broadcast or transmit signals or data.
The embodiments are not, however, restricted to the system that is given as an example, but a person skilled in the art may apply the solution to other communication systems. Another example of a suitable communications system is the 5G concept. It is assumed that network architecture in 5G will be quite similar to that of the LTE-advanced. 5G is likely to use multiple input—multiple output (MIMO) antennas, many more base stations or nodes than the LTE (a so-called small cell concept), including macro sites operating in co-operation with smaller stations and perhaps also employing a variety of radio technologies for better coverage and enhanced data rates. It should be appreciated that future networks will most probably utilise network functions virtualization (NFV) which is a network architecture concept that proposes virtualizing network node functions into “building blocks” or entities that may be operationally connected or linked together to provide services. A virtualized network function (VNF) may comprise one or more virtual machines running computer program codes using standard or general type servers instead of customized hardware. Cloud computing or data storage may also be utilized. In radio communications this may mean node operations may be carried out, at least partly, in a server, host or node operationally coupled to a remote radio head. It is also possible that node operations will be distributed among a plurality of servers, nodes or hosts. It should also be understood that the distribution of labour between core network operations and base station operations may differ from that of the LTE or even be non-existent. Embodiments of the various techniques described herein may be implemented in digital electronic circuitry, or in computer hardware, firmware, software, or in combinations of them. 
Embodiments may be implemented as a computer program product, i.e., a computer program tangibly embodied in an information carrier, e.g., in a machine-readable storage device or in a propagated signal, for execution by, or to control the operation of, a data processing apparatus, e.g., a programmable processor, a computer, or multiple computers. Embodiments may also be provided on a computer readable medium or computer readable storage medium, which may be a non-transitory medium. Embodiments of the various techniques may also include embodiments provided via transitory signals or media, and/or programs and/or software embodiments that are downloadable via the Internet or other network(s), either wired networks and/or wireless networks. In addition, embodiments may be provided via machine type communications (MTC), and also via an Internet of Things (IOT). The computer program may be in source code form, object code form, or in some intermediate form, and it may be stored in some sort of carrier, distribution medium, or computer readable medium, which may be any entity or device capable of carrying the program. Such carriers include a record medium, computer memory, read-only memory, photoelectrical and/or electrical carrier signal, telecommunications signal, and software distribution package, for example. Depending on the processing power needed, the computer program may be executed in a single electronic digital computer or it may be distributed amongst a number of computers. Furthermore, embodiments of the various techniques described herein may use a cyber-physical system (CPS) (a system of collaborating computational elements controlling physical entities). CPS may enable the embodiment and exploitation of massive amounts of interconnected ICT devices (sensors, actuators, processors, microcontrollers, . . . ) embedded in physical objects at different locations.
Mobile cyber physical systems, in which the physical system in question has inherent mobility, are a subcategory of cyber-physical systems. Examples of mobile cyber-physical systems include mobile robotics and electronics transported by humans or animals. The rise in popularity of smartphones has increased interest in the area of mobile cyber-physical systems. Therefore, various embodiments of techniques described herein may be provided via one or more of these technologies. A computer program, such as the computer program(s) described above, can be written in any form of programming language, including compiled or interpreted languages, and can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program can be deployed to be executed on one computer or on multiple computers at one site, or distributed across multiple sites and interconnected by a communication network. Method steps may be performed by one or more programmable processors executing a computer program or computer program portions to perform functions by operating on input data and generating output. Method steps also may be performed by, and an apparatus may be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit). Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer, chip or chipset. Generally, a processor will receive instructions and data from a read-only memory or a random access memory or both. Elements of a computer may include at least one processor for executing instructions and one or more memory devices for storing instructions and data.
Generally, a computer also may include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks. Information carriers suitable for embodying computer program instructions and data include all forms of non-volatile memory, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The processor and the memory may be supplemented by, or incorporated in, special purpose logic circuitry. To provide for interaction with a user, embodiments may be implemented on a computer having a display device, e.g., a cathode ray tube (CRT) or liquid crystal display (LCD) monitor, for displaying information to the user and a user interface, such as a keyboard and a pointing device, e.g., a mouse or a trackball, by which the user can provide input to the computer. Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input. Embodiments may be implemented in a computing system that includes a back-end component, e.g., as a data server, or that includes a middleware component, e.g., an application server, or that includes a front-end component, e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an embodiment, or any combination of such back-end, middleware, or front-end components. Components may be interconnected by any form or medium of digital data communication, e.g., a communication network. 
Examples of communication networks include a local area network (LAN) and a wide area network (WAN), e.g., the Internet. While certain features of the described embodiments have been illustrated as described herein, many modifications, substitutions, changes and equivalents will now occur to those skilled in the art. It is, therefore, to be understood that the appended claims are intended to cover all such modifications and changes as fall within the true spirit of the various embodiments.
11863355

The detailed description explains the preferred embodiments of the invention, together with advantages and features, by way of example with reference to the drawings.

DETAILED DESCRIPTION OF THE INVENTION

Electronic systems such as networking equipment often transmit signals over cables. Although the cables may be only a few meters in length, a transmission-line effect degrades data quality and transmission rate. Large signal swings can also increase electromagnetic interference (EMI) and system noise. To send signals over these cables reliably often requires special line drivers and receivers. Attempts to mitigate issues related to signal transmission reliability include techniques such as reduced voltage swings, and the use of a pair of physical signals, driven to opposite states, together to transmit a single logical signal. Such differential signaling has been used with emitter-coupled logic (ECL) for many years and in low-voltage differential signaling (LVDS) drivers and receivers. In operation, LVDS drivers have a pair of outputs that are driven to opposite states. The two outputs are sent separately down the cable to the LVDS receiver. At the far (receiver) end of the cable, the lines are connected by a terminating resistor. A current loop exists from one transmitter output, down the cable, through the terminating resistor, and back through the cable to the other transmitter line output. A voltage drop occurs across the terminating resistor that is sensed by the receiver. However, the voltage difference across the terminating resistor between the two signals is small, perhaps only a few hundred millivolts. Sensitive receivers are needed to detect such a small voltage difference between the two signal lines. In real-life systems, cables can become disconnected, such as by a system/network technician when networks are modified, or when a cable fails due to continuous flexing operation.
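As a rough numeric illustration of the current-loop signaling just described, the differential voltage the receiver must sense is simply the loop current times the termination resistance. The 3.5 mA and 100 ohm figures below are typical LVDS values assumed for illustration; they are not taken from this document.

```python
# Back-of-envelope check of LVDS signaling: the driver pushes a loop current
# through the terminating resistor, and the receiver senses the resulting
# differential drop. The 3.5 mA / 100 ohm values are assumed typical figures.

def termination_voltage_mv(loop_current_ma, termination_ohms):
    """Differential voltage (mV) developed across the terminating resistor."""
    return loop_current_ma * termination_ohms  # mA * ohm = mV

# termination_voltage_mv(3.5, 100) -> 350.0, i.e. "a few hundred millivolts"
```

This is why the text notes that sensitive receivers are required: the useful signal is only a few hundred millivolts, far below full-rail logic swings.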
The transmitter can also fail or be in a high-impedance output state. At these times, neither output line is driven, and the voltage across the terminating resistor drops to near zero. Noise can be coupled into the cable from various sources, and this noise can be picked up by the receiver's differential inputs and amplified. The output of the receiver can oscillate as the noise is amplified, and false triggering of receiver logic can occur. In short, spurious noise can cause reading/signal processing errors. The present invention relates to differential receivers, and more particularly to a fail-safe circuit for low-voltage differential signaling (LVDS) receivers having single differential input disconnect detection with a latchable control signal interrupt capability. Turning now to the drawings in greater detail, it will be seen that in FIG. 1 there is illustrated one example of a device that comprises a fail-safe differential receiver having single differential input disconnect detection with a latchable control signal interrupt capability configured for use in a fetal heart rate (FHR) monitor application. In an exemplary embodiment, an FHR transducer 336 can be secured with a belt 364 to the lower abdomen of a pregnant patient 502. Additionally, a Toco transducer 304 to measure contractions can be secured by a belt 364 to the patient and interconnected by way of a cable 334 with the FHR monitor 302 to display a Toco count 332 and associated waveform 328. The FHR transducer 336 is interconnected by way of a cable 330 with the FHR monitor 302. The FHR monitor 302 provides a waveform that is received at the FHR transducer 336 by way of the fail-safe differential receiver 100. The waveform is used to operate a plurality of ultrasound PZT (Lead Zirconate Titanate) discs within the FHR transducer 336, broadcast the waveform by way of ultrasound, and receive a return ultrasound Doppler echo signal 316 related to the heartbeat rate of the unborn baby.
The return ultrasound Doppler echo signal is processed, communicated back to the FHR monitor 302, and displayed 326/324 on the FHR monitor 302 as the fetal heart rate (FHR) waveform. The FHR transducer 336 can be a Philips Fetal Ultrasound Transducer models M1356A, 15245A, Avalon M2736A, M2736AA, Ref #867246, Avalon CTS M2726A, Avalon CL Ref #866076, GE Corometrics Nautilus ultrasound models 5700AAX, 5700BAX, 5700LAX, 5700HAX, REF #2108346-001, or other suitable FHR transducers. A shortcoming in each of these FHR transducers 336 is that they lack a fail-safe differential receiver having single differential input disconnect detection with a latchable control signal interrupt capability as taught in the present invention. This means that the prior FHR transducers 336 that use a prior LVDS cannot detect the first occurrence of a cable signal fault and latch the signal off to prevent erroneous FHR readings 326 generated by spurious noise from being processed and displayed on the FHR monitor 302. In short, prior LVDS cannot, upon the first detection occurrence of an intermittent or permanent cable 330 fault, trigger a latch that blocks the signal, disabling the FHR transducer 336 until the source of the fault is corrected and the LVDS is reset. In this regard, the Philips Avalon ultrasound transducer models M2736A, M2736AA, and Ref #867246 employ an LVDS line receiver (Maxim/Analog Devices MAX9111/MAX9130) to extract the 1 MHz reference clock from a differential signal received from the fetal monitor (FM20/30/40/50) over a 7-conductor shielded cable 330 with 2.2V common mode direct current (DC) voltage bias.
These LVDS receiver IC chips have a fail-safe detection circuit that blocks the output when three distinct fault conditions occur: (1) both the inputs are open, (2) both the inputs are shorted together, and (3) both the inputs are un-driven while a termination resistor is intact at the receiver input/the LVDS transmitter output (at fetal monitor 302) is in a high impedance state (electrically floating). A shortcoming of the prior LVDS line receivers is that if only one of the inputs is open (unconnected), the fail-safe detection circuit does not block the LVDS output, and hence random spurious noise from the LVDS receiver's floating input is allowed to pass to the processing circuit 408, which, in turn, results in spurious and inaccurate fetal heart rate determinations and display 326/324 on the FHR monitor 302. Exacerbating this shortcoming is that before a cable 330 fails completely (open-circuited permanently), one or more of the cable 330 signal conductors can intermittently break continuity while the cable 330 flexes, and spurious/wrong fetal heart rate readings 326/324 can lead to mistaken clinical interpretation by medical professionals. Referring to FIG. 2, there is illustrated one example of a logic state table for a fail-safe differential receiver 100 having single differential input disconnect detection with latchable control signal interrupt capability. Reference 'A' illustrates the seven-conductor (plus the shield) cable 330 illustrated in at least FIG. 8; if any of the five conductors on pins 1, 4, 5, 6, or 7 breaks (electrically open), then the transducer completely stops working, but if one of the remaining two conductor connector pins 2 or 3 (LVDS+/−) breaks (electrically open), then the transducer becomes noisy and gives spurious FHR readings even though there is no ultrasound beam transmission from the transducer and hence no return ultrasound Doppler echo signal to be processed.
These two conductors (pins 2 and 3) carry a 1 MHz reference clock signal from the fetal monitor to the transducer head that is processed through the LVDS receiver chip on the backend PCB (signal processor 408). In an exemplary embodiment, pin 1 is −Vs, pin 2 is LVDS− connected to IN− 129, pin 3 is LVDS+ connected to IN+ 127, pin 4 is controller area network (CAN) Bus−, pin 5 is CAN Bus+, pin 6 is Transducer Recognition, pin 7 is +Vs, and the cable shield is connected to earth ground on the fetal monitor connection end of the cable 330. Reference 'B' illustrates the logic state table 402 of prior LVDS receivers that are absent the ability to latch off the signal upon first detection of a cable fault. Additionally, prior LVDS receivers are limited to detecting only (1) both the inputs are open, (2) both the inputs are shorted together, and (3) both the inputs are un-driven while a termination resistor is intact at the receiver input/the LVDS transmitter (at fetal monitor 302) output is in a high impedance state (electrically floating). The present invention overcomes shortcomings of prior LVDS receivers by providing fail-safe differential receiver 100 circuits as illustrated in at least FIGS. 3-6. In an exemplary embodiment and in contrast to prior transducers, in the present invention, the improved fail-safe LVDS differential receiver 100 can be embodied in one or more semiconductor chips 406. In this regard, the fail-safe circuit can be configured and/or otherwise encoded to operate in accordance with the state table in reference 'C'. In operation, the fail-safe differential receiver 100 outputs a fault condition state that is translated by the FHR monitor as an equipment malfunction error. Such fault condition state errors can be displayed to alert the operator.
In the alternative, the improved fail-safe LVDS differential receiver 100 circuits can output the receiver output 143 responsive to detection of a normal operating state condition (cable conductors are physically and electrically intact between the fetal monitor and both the LVDS receiver IN+ 127 input and IN− 129 input). The fail-safe differential receiver 100 output can be latchable such that when an error condition is first detected the fail-safe differential receiver 100 output is latched to the fault condition state, blocking the receiver output 143 until the system is reset. This prevents temporary error conditions from allowing the operator to believe the FHR monitor 302, cable 330, and transducer 336 are operating correctly. In an exemplary embodiment, the fail-safe differential receiver 100 functions in accordance with state table 404 including the fault condition state table 402. In operation, one or more semiconductors 406 can be used, as may be required and/or desired in a particular embodiment. Additionally, programmable LVDS and/or latch-capable combinational logic devices 179 can be encoded 185 by flashing or otherwise downloading programmable logic code software 181 to effectuate the desired operation. Such latch-capable combinational logic device 179 can be a programmable logic device (PLD), a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), or other types or kinds of latch-capable combinational logic devices as may be required and/or desired in a particular embodiment. With reference to FIG. 2, the present invention fail-safe differential receiver 100 is shown in references 'A' and 'C'. In operation, the fail-safe differential receiver 100 circuit not only detects intermittent or broken wire conditions (from cable 330) but also detects both inputs LVDS+/− (twisted pair conductors) open or short-circuited. LVDS+/− (twisted pair conductors) correspond to the IN+ 127 input and IN− 129 input.
The output of the fail-safe differential receiver 100, Vout 180, is then received by signal processor 408, which comprises the microcontroller (MCU) or CPLD by way of, as needed, additional comparing, combinational logic gates, latching (such as by flip-flop), tri-state buffering, other types of buffering, or other signal processing circuitry, which is illustrated and referred to as signal processor 408. Signal processor 408 can generate DC voltage output 135, which can be used by other circuits. In an exemplary embodiment, such semiconductor 406 can be fabricated in a form-factor and pin-compatible manner so that the fail-safe differential receiver 100 can be a direct semiconductor part replacement in prior transducers that have the prior LVDS shortcomings mentioned above. Alternatively, a separate fail-safe differential receiver 100 can be incorporated into the existing frontend printed circuit board (PCB) or backend PCB within the transducer to detect cable fault conditions, as may be required and/or desired in a particular embodiment. In an exemplary embodiment, and with reference to reference 'C', state table 404 illustrates a normal operation state table 418 and a fault condition state table 420; an electronic control system 400 can comprise a fail-safe differential receiver 100 having an IN+ input (cable 330 pin 3), an IN− input (cable 330 pin 2), and a Vout output 180.
The fail-safe differential receiver 100 provides a stable 1 MHz reference clock signal to signal processor 408 of transducer 336 by generating a control signal 133 during the normal operating state until a fault condition state is detected as follows:
when the IN+ 127 input is open (not connected), the control signal 133 is latched to the fault condition state (such as logic high (H));
when the IN− 129 input is open (not connected), the control signal 133 is latched to the fault condition state (such as logic high (H));
when the IN+ 127 input and the IN− 129 input are connected by a first resistance 130 that is configured as an undriven parallel termination, the control signal 133 is latched to the fault condition state (such as logic high (H));
when the IN+ 127 input is shorted to Vcc or ground (GND), the control signal 133 is latched to the fault condition state (such as logic high (H));
when the IN− 129 input is shorted to Vcc or GND, the control signal 133 is latched to the fault condition state (such as logic high (H));
when the IN+ 127 input and the IN− 129 input are shorted together, the control signal 133 is latched to the fault condition state (such as logic high (H)); and
when only one of the IN+ 127 or the IN− 129 is intermittently open and reconnected, the control signal 133 is latched to the fault condition state (such as logic high (H)).
Once latched, the control signal 133 logic state remains the same until the fail-safe differential receiver 100 is reset. In this regard, resetting the latch and error condition can be done by way of cycling power on the fetal monitor, replacing the cable 330 (which also cycles power on the transducer), or other suitable resetting methods, as may be required and/or desired in a particular embodiment. For disclosure purposes, the state of the normal operating state and the fault condition state is not particularly limited. In operation, the normal operating logic state can be either high (H) or low (L), and the fault condition state can be the opposite of the normal operating state.
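The latching behavior of the fault-condition table above can be modeled as a small state machine: any listed fault condition drives the control signal to the fault state, which then sticks until an explicit reset. This is an illustrative software model only; in the actual receiver these conditions are detected by analog comparator circuitry (FIGS. 3-6), and the boolean flag names below are assumptions, not terms from the patent.

```python
# Software model of the fault-condition state table: any detected fault
# latches the control signal to the fault state (H) until reset.
# Flag names are illustrative stand-ins for the analog detection conditions.

FAULT_CONDITIONS = (
    "in_plus_open", "in_minus_open", "undriven_parallel_termination",
    "in_plus_shorted_rail", "in_minus_shorted_rail",
    "inputs_shorted_together", "intermittent_open",
)

class FailSafeLatch:
    """Latches the control signal to the fault state on the first fault seen."""

    def __init__(self):
        self.latched_fault = False

    def update(self, **flags):
        """Feed one detection cycle; returns True while the fault state is latched."""
        if any(flags.get(c, False) for c in FAULT_CONDITIONS):
            self.latched_fault = True      # sticky: stays set once any fault occurs
        return self.latched_fault

    def reset(self):
        """Clear the latch, e.g. after a power cycle or cable replacement."""
        self.latched_fault = False
```

The sticky `latched_fault` flag captures the key difference from prior receivers described above: even a single intermittent open on one input leaves the output blocked until a deliberate reset, rather than letting spurious readings resume when the conductor reconnects.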
Additionally, the normal operating state is latched on the control signal133, requiring a reset to clear latching of the control signal133when the normal operating state persists on control signal133for more than a predetermined error condition time period. In this regard, normal operation418sees the difference between the IN+127input and the IN−129input transitioning416between greater than or equal to +100 mV and less than or equal to −100 mV, causing control signal133to transition416between the normal operating state and the fault condition state. If the control signal133remains at the normal operating state for an extended period of time (exceeding the predetermined error condition time period), something is wrong, as illustrated in the state table420, and the control signal133is latched to the normal operating state, which stops FHR detection until the transducer336is reset. A reset can be done by unplugging the transducer336from the fetal monitor302(removing power temporarily), changing cables, or other suitable reset methods. The predetermined error condition time period can be set in the range of milliseconds to seconds, as may be required and/or desired in a particular embodiment. An advantage, in the present invention, is that by latching the control signal133when an error condition in the state table420is detected, displaying an incorrect FHR is prevented. As one example of such an error condition, when one of IN+127or IN−129is open (not connected), an erroneous waveform can be created that is interpreted by the fetal monitor302as an FHR in the range of 60 to 240 beats per minute even when the transducer336is not connected to a patient502. 
The present invention solves this error condition and others by latching the control signal133to the normal operating state when the control signal has been at the normal operating state for a time period that exceeds the predetermined error condition time period, preventing incorrect FHR readings from being displayed on the fetal monitor302and requiring a technician to remove from service broken cables330and/or transducer336. Referring toFIG.3, there is illustrated one example of a circuit diagram for a fail-safe differential receiver100having a latchable control signal interrupt. In an exemplary embodiment, a transmitter drives a current between IN+ input127and IN− input129, which generates a voltage across the load or terminating resistor130. This voltage is detected by differential amplifier111, which receives signals IN+ input127and IN−129input on its non-inverting and inverting inputs respectively. In normal operation, control signal133input to OR gate104is logic low (normal operating state), so the receiver output143from differential amplifier111is passed through the OR gate104to generate output Vout180. Output Vout180is a digital signal such as a Transistor-Transistor Logic (TTL) signal that may be driven full-rail between power (Vcc) and ground (GND). The differential signals IN+127and IN−129are applied to the inverting inputs of comparators108and112and the non-inverting inputs of comparators114and118respectively. The non-inverting inputs of comparators108and112are driven by a reference voltage Vref1131, which is configured to be very close to the power-supply voltage Vcc. Resistors132and134form a voltage divider that generates Vref1131. The resistance of pull-up resistor132is much less than the resistance of pull-down resistor134, so Vref1131is in the range of 97% of Vcc, or 0.97×Vcc in this example. As an example, for a 3.0-volt Vcc, Vref1131is 2.91 volts. 
Of course, resistors132and134can be adjusted to obtain other values of Vref1131near Vcc as may be required and/or desired in a particular embodiment. In general, the best results are obtained when Vref1131is as close as possible to Vcc. In an exemplary embodiment, the first voltage reference131comprises a first resistor132connected in series with a second resistor134, creating the first voltage reference131at the junction of the first resistor132and the second resistor134. The first resistor132connects at one end to Vcc and the second resistor134connects at one end to GND. Values of the first resistor132and the second resistor134are selected such that the first voltage reference131is in the range of 97% of Vcc. The inverting inputs of comparators114and118are driven by a reference voltage Vref2132, which is configured to be very close to ground (GND). Resistors136and138form a voltage divider that generates Vref2132. The resistance of pull-down resistor138is much less than the resistance of pull-up resistor136, so Vref2132is in the range of 3% of Vcc, or 0.03×Vcc in this example. For a 3.0-volt Vcc, Vref2132is 0.09 volts. Of course, resistors136and138can be adjusted to obtain other values of Vref2132near GND as may be required and/or desired in a particular embodiment. In general, the best results are obtained when Vref2132is close to GND. In an exemplary embodiment, the second voltage reference132comprises a third resistor136connected in series with a fourth resistor138, creating the second voltage reference132at the junction of the third resistor136and the fourth resistor138. The third resistor136connects at one end to Vcc and the fourth resistor138connects at one end to GND. Values of the third resistor136and the fourth resistor138are selected such that the second voltage reference132is in the range of 3% of Vcc. 
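As a numerical check of the dividers above, both references follow the standard formula Vref = Vcc·Rbottom/(Rtop+Rbottom). The resistor values in this sketch are illustrative assumptions only; the disclosure gives the target ratios, not specific resistances:

```python
def divider(vcc, r_top, r_bottom):
    """DC voltage at the junction of a resistor divider from Vcc to GND."""
    return vcc * r_bottom / (r_top + r_bottom)

# Assumed example values chosen so Vref1 is near 0.97 * Vcc and
# Vref2 is near 0.03 * Vcc, matching the ratios given in the text.
vref1 = divider(3.0, 1_000, 33_000)   # small pull-up 132, large pull-down 134
vref2 = divider(3.0, 33_000, 1_000)   # large pull-up 136, small pull-down 138
```

For a 3.0-volt Vcc these assumed values give approximately 2.91 V and 0.09 V, agreeing with the examples in the text.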
During normal operation, the bias resistors410(pull-up resistors126and128) have large resistance values in the 500K ohm range and thus produce small electrical currents. The electrical current through the terminating resistor130is much greater than the pull-up resistor electrical currents, so their effect is negligible. During normal operation, the IN+127and IN−129inputs are each below Vref1131, since Vref1131is close to Vcc, such as previously disclosed in the range of 0.97×Vcc, and IN+127and IN−129typically switch near Vcc/2, and the comparator108and112outputs are a logic high (H) since Vref1131is above the IN+127input and the IN−129input. Both of the inputs of NAND gate110being a logic high (H) drives its output logic low (L). The output of the NAND gate110is one of the inputs of OR gate120. The other input of OR gate120is also logic low (L) due to the fact that Vref2132is close to GND potential, as previously disclosed in the range of 0.03×Vcc. Vref2132is applied to the inverting inputs of comparators114and118, and their non-inverting inputs are higher than Vref2132, so that both comparators114and118produce a logic high (H) output, making both inputs of NAND gate116logic high (H), which gets inverted to a logic low (L) as a second input to OR gate120. Hence the output of OR gate120is logic low (L) during normal operation and acts as the SET (S) input to the SR latch122. Just after power on, a long RC time constant formed by resistor140and capacitor142is coupled to hex buffer124, the output of which interconnects with the RESET (R) input of latch122, raising the RESET (R) line logic high (H) momentarily while the SET (S) input is logic low (L) (by the time the supply voltage stabilizes). The 'Q' output of latch122is driven logic low (L) and latches to the logic low (L) level, as the RESET (R) input permanently transitions to a logic low level as the capacitor142charges to Vcc. 
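The comparator-and-gate chain just described can be checked with a small truth model (a hypothetical sketch with idealized comparators; the function name and default thresholds are assumptions based on the 3.0-volt example above):

```python
def set_input(vin_p, vin_n, vref1=2.91, vref2=0.09):
    """Models the SET (S) input to latch 122: logic low (False) during
    normal operation, logic high (True) when an input leaves the window."""
    # Comparators 108/112 are high while each input is below Vref1 (near Vcc);
    # NAND gate 110 therefore goes high when either input rises above Vref1.
    nand110 = not ((vin_p < vref1) and (vin_n < vref1))
    # Comparators 114/118 are high while each input is above Vref2 (near GND);
    # NAND gate 116 therefore goes high when either input falls below Vref2.
    nand116 = not ((vin_p > vref2) and (vin_n > vref2))
    return nand110 or nand116  # OR gate 120 output
```

With both inputs switching near Vcc/2 the function returns False (normal operation); an input pulled to Vcc (open, with an intact pull-up) or shorted to GND returns True, which would set the latch.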
This low ‘Q’ output forms the control signal133which is an input to the OR gate104and acts as a control to allow or block the signal at its other input from the differential amplifier111to pass to the output as Vout180. An inverter162receives the output Vout180from the OR gate104and is inverted forming Enable/Disable control signal182output for the peripheral devices/chips. During operation, the differential amplifier111receiver output143is allowed to pass through to the output Vout180when the control signal133is a logic low (L) indicating a normal operating state. In the alternative, when the control signal133is a logic high (H) indicating a fault condition state, the control signal133is passed holding Vout logic (H) and blocking or otherwise terminating the operation of the FHR transducer336until the fault condition is corrected and the fail-safe differential receiver's100latch122is reset. In an exemplary embodiment of an IN+127fault condition detection and with reference toFIGS.3and8, in operation, consider an open cable330conductor connecting LVDS+ transmitter output Do+, by way of the cable330, to the IN+127input of the differential amplifier111. The IN+ input127tries to pull-up to Vcc through resistor128making the inverting input voltage higher than Vref1131(2.91V) on the non-inverting input of comparator108that results in comparators108output switching to a logic low (L). This means one input of the NAND gate110switches to a logic low (L), forcing its output to switch to logic high (H), and as a result, the OR gate120output switches to a logic high (H), which triggers the latch122switching the ‘Q’ output also referred to as the control signal133to a logic high (H) and as an input to the OR gate104, blocks the differential amplifier111receiver output143from reaching Vout180effectively stopping the operation of the FHR transducer336. 
During this faulty condition, the NAND gate116output remains logic low (L) as the non-inverting inputs of both comparators114and118remain at a higher voltage level than Vref2132(0.09V). If the cable connection to IN+127is restored under this situation, comparator108output is restored to logic high (H) as its inverting input is below Vref1131(2.91V) on the non-inverting input. As such, NAND gate110output switches to a logic low (L) and in turn OR gate120output switches to a logic low (L) on the SET (S) input of latch122. Since the RESET (R) input is a logic low (L), the output Q, which is the control signal133, latches to the previous logic high (H), continuing to block the differential amplifier111receiver output143until power Vcc is switched 'OFF' and 'ON' again. In an exemplary embodiment of an IN−129fault condition detection and with reference toFIGS.3and8, in operation, a similar logical operational explanation would apply if the other input, the IN−129input, is open (unconnected) and the IN+127input is intact (connected), and OR gate104would block the receiver output143. As referenced inFIG.3, some prior LVDS receivers included a frontend410and first-stage receiver406, but these sections406/410are not sufficient for fault detection of a single IN+127or IN−129disconnection. In this embodiment, and an advantage in the present invention, is the circuitry beyond sections406/410that effectuates the single differential input (IN+127or IN−129) disconnect detection with latchable control signal133interrupt capability. A shortcoming of prior LVDS receivers that only utilize sections406/410is that when either of the inputs (IN+127or IN−129) is open (unconnected), the remaining connected input has a common mode DC voltage (as an example 2.2V for Philips ultrasound transducer models M2736A, M2736AA, REF #867246) being driven from the LVDS transmitter at the FHR monitor302side of the cable330connection. 
This common mode DC voltage appears through the termination resistor130, which is in the range of 100 ohm to 120 ohm, to the open input end (the disconnected IN+127or IN−129end) and hence does not get pulled-up to Vcc. As such, comparators108and112cannot be switched to output a logic low (L) when one of the LVDS inputs (either IN+127or IN−129) is open. The present invention overcomes this limitation. With reference toFIGS.3and8, an exemplary embodiment of both IN+127input and IN−129input disconnect fault condition detection follows. If cable330is open to both IN+127input and IN−129input at the same time, there is no drive from the LVDS transmitter on the FHR monitor302side of cable330, no current flows through the termination resistor130or pull-up to Vcc resistors126and128which are connected to IN−129and IN+127respectively. In this fault condition, the inverting inputs of both comparators108and112switch to a logic high (H) producing a logic low (L) at their outputs which are also the NAND gate110inputs. The NAND gate110output switches to a logic high (H) which is received at the input of OR gate120. The logic high (H) on the OR gate120input causes the OR gate120output to switch to a logic high (H). The OR gate120output logic high (H) is received at the latch122SET (S) input causing the ‘Q’ output also referred to as the control signal133to switch and remain latched to a logic high (H) state. The control signal133logic high (H) state is received at the input to OR gate104causing the OR gate104output to switch to a logic high (H) effectively blocking the difference signal143and inactivating the FHR transducer336until the fault condition is corrected and the fail-safe receiver100is reset. 
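The prior-art limitation described above can be illustrated numerically: with one input open, the open node still sees the driver's common-mode voltage through the low-value termination resistor, which overwhelms the weak pull-up. In this hypothetical sketch the 2.2 V common-mode value is the example given in the text; the 120-ohm termination and 500K pull-up are representative values from the surrounding description:

```python
def open_node_voltage(v_cm, vcc, r_term=120.0, r_pullup=500e3):
    """Thevenin voltage at a single open LVDS input: the driven node at the
    common-mode voltage through the termination resistor, in parallel with
    the weak pull-up resistor to Vcc."""
    return (v_cm / r_term + vcc / r_pullup) / (1.0 / r_term + 1.0 / r_pullup)

v_open = open_node_voltage(2.2, 3.0)  # stays near 2.2 V, well below a
                                      # Vref1 of about 0.97 * Vcc (2.91 V)
```

Because the open node sits near 2.2 V rather than being pulled to Vcc, comparators keyed to a reference near Vcc cannot distinguish this single-input-open fault, which is why the additional circuitry of the present invention is needed.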
In the present invention, an advantage is that even if the open IN+127input and IN−129input connections are restored and the voltages of the inverting inputs of comparators108and112return to normal voltage levels such that the SET (S) input of latch122is switched to a logic low (L), the latch122'Q' output remains latched to the earlier logic high (H) state, blocking the receiver output143and continuing to disable the FHR transducer336until the power is reset, causing the latch122to reset. In this regard, on the first occurrence of a cable fault, the FHR transducer336is disabled, preventing false FHR readings326/324from being communicated to the FHR monitor302and displayed, as frequently happens with prior LVDS receivers that are absent the advantages in the present invention of control signal133latching capabilities. With reference toFIGS.3and8, an exemplary embodiment of both IN+127input and IN−129input shorted fault condition detection follows. If the cable330conductor shorts the IN+127input and the IN−129input together, no current flows through resistor130or the pull-up resistors126and128that pull-up the IN+127input and the IN−129input to Vcc respectively. In this fault condition, the inverting inputs of both comparators108and112switch to a logic high (H), producing a logic low (L) at their outputs, which are inputs to NAND gate110, causing the NAND gate110output to switch to a logic high (H). The output of the NAND gate110, which is a logic high (H), passes to the input of the OR gate120, whose other input is a logic low (L), resulting in the output of the OR gate120switching to a logic high (H). The OR gate120output logic high (H) is received at the latch122SET (S) input, causing the 'Q' output, also referred to as the control signal133, to switch and remain latched to a logic high (H) state. 
The control signal133logic high (H) state is received at the input to OR gate104, causing the OR gate104output to switch to a logic high (H), effectively blocking the difference signal143and inactivating the FHR transducer336until the fault condition is corrected and the fail-safe receiver100is reset. In the present invention, an advantage is that even if the shorted IN+127input and IN−129input connections are restored and the voltages of the inverting inputs of comparators108and112return to normal voltage levels such that the SET (S) input of latch122is switched to a logic low (L), the latch122'Q' output remains latched to the earlier logic high (H) state, blocking the receiver output143and continuing to disable the FHR transducer336until the power is reset, causing the latch122to reset. In this regard, on the first occurrence of a cable fault, the FHR transducer336is disabled, preventing false FHR readings326/324from being communicated to the FHR monitor302and displayed, as frequently happens with prior LVDS receivers that are absent the advantages in the present invention of control signal133latching capabilities. With reference toFIGS.3and8, an exemplary embodiment of when one of the IN+127or IN−129is shorted to Vcc or a greater voltage fault condition detection follows. If one of the cable330conductors, either IN+127or IN−129, shorts to a conductor that supplies a DC power voltage equal to or greater than Vcc, the inverting input voltage of either comparator108or112will be higher than Vref1131on their respective non-inverting input. Correspondingly, the output of comparator108or112will switch to a logic low (L), causing NAND gate110output to switch to a logic high (H), which in turn causes the output of OR gate120to switch to a logic high (H). The OR gate120output logic high (H) is received at the latch122SET (S) input, causing the 'Q' output, also referred to as the control signal133, to switch and remain latched to a logic high (H) state. 
The control signal133logic high (H) state is received at the input to OR gate104, causing the OR gate104output to switch to a logic high (H), effectively blocking the difference signal143and inactivating the FHR transducer336until the fault condition is corrected and the fail-safe receiver100is reset. In the present invention, an advantage is that even if the shorted-to-Vcc IN+127or IN−129connection is restored and the voltages of the inverting inputs of comparators108and112return to normal voltage levels such that the SET (S) input of latch122is switched to a logic low (L), the latch122'Q' output remains latched to the earlier logic high (H) state, blocking the receiver output143and continuing to disable the FHR transducer336until the power is reset, causing the latch122to reset. In this regard, on the first occurrence of a cable fault, the FHR transducer336is disabled, preventing false FHR readings326/324from being communicated to the FHR monitor302and displayed, as frequently happens with prior LVDS receivers that are absent the advantages in the present invention of control signal133latching capabilities. With reference toFIGS.3and8, an exemplary embodiment of when one of the IN+127or IN−129is shorted to the shield of the cable330fault condition detection follows. If one of the cable330conductors, either IN+127or IN−129, shorts to the cable330shield, the non-inverting input voltage of either comparator114or118will be lower than Vref2132, which is applied to their respective inverting inputs. As a result, the output of either comparator114or118will switch to a logic low (L), causing the output of NAND gate116to switch to a logic high (H), which in turn causes the output of OR gate120to switch to a logic high (H). The OR gate120output logic high (H) is received at the latch122SET (S) input, causing the 'Q' output, also referred to as the control signal133, to switch and remain latched to a logic high (H) state. 
The control signal133logic high (H) state is received at the input to OR gate104causing the OR gate104output to switch to a logic high (H) effectively blocking the difference signal143and inactivating the FHR transducer336until the fault condition is corrected and the fail-safe receiver100is reset. In the present invention, an advantage is that even if the shorted to cable330shield IN+127or IN−129connection is restored and the voltages of inverting inputs of comparators108and112return to normal voltage levels such that the SET (S) input of latch122is switched to a logic low (L) the latch122‘Q’ output remains latched to the earlier state of logic high (H) state blocking the receiver output143and continuing to disable the FHR transducer336until the power is reset causing the latch122to reset. In this regard, on the first occurrence of a cable fault, the FHR transducer336is disabled preventing false FHR readings326/324from being communicated to the FHR monitor302and displayed as frequently happens with prior LVDS receivers that are absent the advantages in the present invention of control signal133latching capabilities. In an exemplary embodiment and with reference toFIG.3, a jumper connection121can be utilized to include or exclude the latch122capability. Such a jumper connection121can be useful during cable or transducer330testing and at other times when avoiding a power cycle reset to reset a latched latch122is desired. In an exemplary embodiment, the fail-safe differential receiver100comprises a differential amplifier111. The differential amplifier111comprises a receiver output143and receives an IN+ input127, and an IN− input129. A first combining gate104receives the receiver output143, a control signal133, and generates a Vout180output. The fail-safe differential receiver100further comprises a first reference voltage131and a first comparator108. The first comparator108receives the IN+ input127and the first reference voltage131and generates a first compare signal. 
A second comparator112receives IN− input129and the first reference voltage131, and generates a second compare signal. A second combining gate110receives the first compare signal and the second compare signal and generates a third compare signal. The fail-safe differential receiver100further comprises a second reference voltage132and a third comparator114. The third comparator114receives the IN+ input127and the second reference voltage132and generates a fourth compare signal. A fourth comparator118receives the IN− input129and the second reference voltage132and generates a fifth compare signal. A third combining gate116receives the fourth compare signal and the fifth compare signal and generates a sixth compare signal. A fourth combining gate120receives the third compare signal and the sixth compare signal and generates a seventh compare signal. And a latch122receives the seventh compare signal and generates the control signal133. The control signal133transitions between a normal operating state and a fault condition state. In operation, the receiver output143is applied to a Vout180output as long as the control signal133is in the normal operating state, and on the first occurrence of the fault condition state the latch122latches blocking the receiver output143from being applied to Vout180output until the latch is reset. In an exemplary embodiment, a fail-safe differential receiver100comprises a first voltage reference131, a second voltage reference132, and a differential amplifier111. The differential amplifier111comprises a receiver output143and receives an IN+ input127, and an IN− input129. The fail-safe differential receiver100comprises more than one comparator108/112/114/118. Each of the comparators108/112/114/118comprises a compared output and receives at least two of the following: the IN+ input127, the IN− input129, the first voltage reference131, or the second voltage reference132. 
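The set-dominant latching and output gating enumerated above can be modeled behaviorally (a hypothetical Python sketch; the real latch 122 is an SR latch with a power-on RC reset, and the names here are assumptions):

```python
class Latch122:
    """Behavioral model of latch 122: Q latches high on SET and stays high
    after SET returns low, until RESET is asserted."""
    def __init__(self):
        self.q = False  # control signal 133 at the normal operating state

    def step(self, set_in, reset_in=False):
        if reset_in:
            self.q = False
        elif set_in:
            self.q = True
        return self.q

def vout(receiver_output, control_signal):
    # OR gate 104: passes receiver output 143 while control signal 133 is
    # low; holds Vout 180 high once a fault has latched control signal 133.
    return receiver_output or control_signal
```

A short trace shows the first-occurrence behavior: after a single SET pulse the output stays blocked even though SET returns low, until an explicit reset.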
A latch-capable combinational logic device179receives each of the compared outputs and the receiver output143. The latch-capable combinational logic device179comprises a latch122and a control signal133that is switched to a normal operating state until at least one of the following is detected:
when the IN+ input and the IN− input are connected by a first resistance130, which is configured as an undriven parallel termination, the control signal133is latched to the fault condition state;
when the IN+ input is shorted to Vcc or GND, the control signal133is latched to the fault condition state;
when the IN− input is shorted to Vcc or GND, the control signal133is latched to the fault condition state; and
when the IN+ input and the IN− input are shorted together, the control signal133is latched to the fault condition state.
In operation, the receiver output143is applied to a Vout180output as long as the control signal133is in the normal operating state, and on the first occurrence of the fault condition state, the latch122latches, blocking the receiver output143until the latch is reset. In an exemplary embodiment, the latch-capable combinational logic device179comprises one or more of an OR gate, one or more of a NOR gate, one or more of an AND gate, one or more of a NAND gate, or one or more of an inverter. Referring toFIG.4, there is illustrated one example of a system diagram for a fail-safe differential receiver100having single differential input disconnect detection with a latchable control signal interrupt. In an exemplary embodiment, during normal operation, the bias resistors410(pull-up resistors126and128) have large resistance values in the 500K ohm range or other suitable range and thus produce small electrical currents. The electrical current through the terminating resistor130is much greater than the pull-up resistor electrical currents, so their effect is negligible. 
In operation, the differential amplifier111receives the IN+127input and the IN−129input and generates a receiver output143. The op-amp window detector/comparator119comprises Vref1131and Vref2132. An op-amp window detector/comparator119can be configured with comparators108/112/114/118as illustrated inFIG.3, be configured with voltage follower op-amp buffer113, rectifier/peak detector115, and voltage follower op-amp buffer117as illustrated inFIGS.5and6, or be configured in other suitable ways as may be required and/or desired in a particular embodiment. Inputs to a latch-capable combinational logic device179can include the receiver output143and at least one window detector/comparator119output. The latch-capable combinational logic device179can be one or more programmable logic devices, field programmable gate arrays, or other suitable programmable logic devices as may be required and/or desired in a particular embodiment. In an exemplary embodiment, a programmable logic software code181can be created. In this regard, programming languages such as VHDL, CUPL, and other suitable programming languages can be used to write the logic software. The programmable logic software code181can then be encoded185or otherwise programmed into the latch-capable combinational logic device179, causing the latch-capable combinational logic device179to operate in accordance with the state table404, including the fault conditions420, and in accordance with other aspects of the present invention. Additionally, the control signal133can be inverted by inverter162to create an enable/disable peripheral device/semiconductor182signal that can be used to control other semiconductors and/or peripheral devices as may be required and/or desired in a particular embodiment. 
The output of the LVDS, Vout180, is then received by the microcontroller (MCU) or CPLD by way of, as needed, additional comparing, combinational logic gates, latching (such as by flip-flop), tri-state buffering, other types of buffering, or other signal processing circuitry, which is illustrated and referred to as signal processor408. Signal processor408can generate DC voltage output135which can be used by other circuits. Referring toFIG.5, there is illustrated one example of a system diagram, andFIG.6a circuit diagram, for a fail-safe differential receiver having single differential input disconnect detection with a latchable control signal interrupt capability. A shortcoming of prior LVDS receivers is fault detection for a single input open (disconnected), either the IN+ input or the IN− input, while the other input is being driven from the LVDS transmitter on the fetal monitor302end of cable330. In this regard, testing of prior LVDS detection circuits found that it was not possible to detect the fault condition when only one input, the IN+127or IN−129input, is open (disconnected) and the termination resistor at the LVDS input is intact. Stated differently, as long as one end of the termination resistor is driven by the LVDS transmitter, the common mode DC voltage from the LVDS driver appears on the unconnected IN+ input or IN− input end of the termination resistor, thereby preventing both ends from being pulled, by pull-up resistors126/128, to Vcc, making single differential input open circuit fault detection not possible. As such, prior LVDS line receivers and circuits do not support fault detection for single input open (disconnected) conditions. 
No matter the methods of fault detection at the input of prior LVDS receivers, whether using external resistor biasing or comparators or differential receivers, or a combination of external biasing with an active parallel circuit, the single input (IN+ input or IN− input) open (disconnected) fault detection is not possible with prior LVDS devices, circuits, and methods. As a result, industry standards for prior LVDS receivers support only three fault detection conditions: (1) both the IN+127input and IN−129input open, (2) both the IN+127input and IN−129input shorted together, and (3) both the IN+127and IN−129inputs undriven with a termination resistor at the input. A shortcoming of prior LVDS receivers, and an advantage in the present invention, is to be able to detect the fault condition of a single differential input (IN+127or IN−129) disconnect in combination with a latchable control signal133interrupt to disable the FHR transducer336on a first occurrence of a fault condition. Absent this feature, and as a shortcoming of prior LVDS receivers, intermittent behavior and spurious noise can cause faulty FHR readings326/324, such as incorrect or even excessively high FHR readings326/324. In this regard, FHR transducers that rely on prior LVDS receivers, such as Philips Avalon fetal heart rate ultrasound transducer models M2736A, M2736AA, REF #867246, can give spurious/intermittent wrong fetal heart rate (FHR) readings when a cable goes through intermittent break-make of conductors, yielding a temporary or permanent open circuit condition of a single IN+ input or IN− input at the prior LVDS line receiver on the back-end circuit board (Main CPU PCB) signal processor408inside the transducer head. The present invention overcomes this shortcoming of prior LVDS receivers with the circuit diagrams for a fail-safe differential receiver100having single differential input disconnect detection with a latchable control signal interrupt illustrated in at leastFIGS.5and6. 
In an exemplary embodiment, all of the fail-safe fault detection conditions are satisfied in the state table404and specifically the fault condition state table420, which is better illustrated inFIG.2. With reference to the fault condition state table420, one of the most critical fault conditions to detect is the single input IN+127or IN−129open (disconnected) at the differential amplifier111while the termination resistor130is in the circuit. In the present invention, an advantage is the use of a second OR gate104where input 'A' interconnects with the differential amplifier111receiver output143, and input 'B' interconnects with the latch122'Q' output that is also referred to as the control signal133. During operation, the OR gate104output Vout180follows the difference signal143on input 'A' when the control signal133on input 'B' is a logic low (L) (normal operating state), and the OR gate104output180follows the control signal133at a logic high (H) (fault condition state) when a fault condition is detected, causing the latch122'Q' output/control signal133to latch and stay logic high (H), effectively blocking operation of the FHR transducer336until the latch122is reset, such as by a power 'OFF'/'ON' cycle or other suitable methods. A logic voltage translator can be used to receive OR gate104output Vout180to create a transducer control signal that is used to interface with the electronics within the FHR transducer336. Additionally, the control signal133can be inverted by inverter162to create an enable/disable peripheral device/semiconductor182signal that can be used to control other semiconductors and/or peripheral devices, as may be required and/or desired in a particular embodiment. With reference toFIGS.5and6, a frontend410conditions the IN+ input127and the IN− input129to differential amplifier111. The front end410comprises pull-up resistors126/128and termination resistor130. The receiver output143from the differential amplifier111is coupled to a voltage follower113. 
The voltage follower113comprises an op-amp buffer146, resistor170, and diode174. The voltage follower113is coupled147to a rectifier/peak detector115(comprising diode176and capacitor178), and the rectifier/peak detector115is coupled151to a voltage follower op-amp buffer117. The rectifier/peak detector115and the voltage follower op-amp buffer117comprise an op-amp buffer148, a diode176, a capacitor178, and a feedback resistor172between the output of the second op-amp buffer148and the inverting input of the first op-amp buffer146; the feedback resistor172provides better circuit stability. The voltage follower op-amp buffer117is coupled153to an op-amp window detector/comparator119. The op-amp window detector/comparator119comprises resistor159, Vref1131formed with resistors132/134, Vref2132formed with resistors136/138, and op-amp comparators150/152. The op-amp window detector/comparator119is coupled155to the logic gates/latch179. The logic gates/latch179comprises logic gates121which are coupled to the latch122. A latch reset125is coupled to the latch122. The 'Q' output of the latch122is also referred to as the control signal133. The logic gates/latch179also comprises OR gate104. The control signal133and receiver output143are inputs to OR gate104. During operation, the OR gate104output180follows the difference signal143on input 'A' when the control signal133on input 'B' is a logic low (L) (normal operating state), and the OR gate104output180follows the control signal133at a logic high (H) (fault condition state) when a fault condition is detected, causing the latch122'Q' output/control signal133to latch and stay logic high (H), effectively blocking operation of the FHR transducer336until the latch122is reset, such as by a power 'OFF'/'ON' cycle or other suitable methods after the corrective action is taken to address the fault, such as cable replacement. 
Logic voltage translators can be used to invert OR gate104output180to create a transducer control signal that is used to interface with the electronics within the FHR transducer336. Additionally, the control signal133can be inverted by inverter162to create an enable/disable peripheral device/semiconductor signal182that can be used to control other semiconductors and/or peripheral devices as may be required and/or desired in a particular embodiment. In an exemplary embodiment, in the window comparator119design, the resistors132/134/136/138can be selected such that the reference voltages Vref1131and Vref2132are, respectively, in the range of Vop plus the expected millivolt variation during normal operation plus a margin in the range of 20 to 60 millivolts, and Vop minus the expected millivolt variation during normal operation minus a margin in the range of 20 to 60 millivolts, or other suitable values as illustrated in at leastFIG.7. Here Vop is the voltage value during normal operation, namely Vcc/2 minus the forward voltage drop across rectifier diode176. The slight variation of Vop in millivolts during normal operation is based on the duty cycle change of the normal signal at differential receiver output143. In an exemplary embodiment, with regard to the rectifier/peak detector115, the value of the capacitor178should be chosen so that the time constant Rd2×C (Rd2, the forward resistance of diode176, × the capacitance of capacitor178) is higher than the period of the expected input waveform. This value should be 5T or slightly greater (where T is the time period of the output square waveform (the conditioned difference signal143) from the differential amplifier111) in order to act as a peak sample-and-hold element of the circuit. During normal operation, the receiver output143is a square waveform with amplitude close to the DC power supply Vcc.
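The capacitor-sizing rule above (time constant Rd2×C of at least 5T) reduces to one line of arithmetic. A small illustrative sketch follows; the 1 MHz clock and the 100 Ω diode forward resistance are assumed example values, not values stated by the circuit description:

```python
def min_hold_capacitance_farads(rd2_ohms: float, clock_hz: float, factor: float = 5.0) -> float:
    """Smallest capacitor 178 value satisfying Rd2 * C >= factor * T,
    where T = 1 / clock_hz is the period of the conditioned difference
    signal 143 and factor defaults to the 5T rule from the text."""
    period_s = 1.0 / clock_hz
    return factor * period_s / rd2_ohms

# Assumed example: 1 MHz reference clock, Rd2 = 100 ohms -> C >= 50 nF.
c_min = min_hold_capacitance_farads(100.0, 1e6)
assert abs(c_min - 50e-9) < 1e-15
```

Any capacitor at or slightly above this value lets the stage act as the peak sample-and-hold element described above.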
The receiver output143is coupled to the voltage follower op-amp buffer113which is set for unity gain and, by way of the voltage follower op-amp buffer113, coupled to the rectifier/peak detector115. The rectifier/peak detector115outputs a DC voltage level lower than Vcc/2 (input square wave amplitude voltage/2) depending on the voltage drop across the rectifying diode176as well as the duty cycle of the input square wave (the conditioned difference signal143). The output of the rectifier/peak detector115is coupled to the input of the window comparator119by way of the voltage follower op-amp buffer117without any change because the voltage follower op-amp buffer117is also a unity gain voltage follower. The DC voltage level of the square wave (the conditioned difference signal143) will have a very minute deviation (a few millivolts to tens of millivolts) centered around the DC value obtained when the square wave input (the conditioned difference signal143) to the rectifier/peak detector stage115is in the range of a 50% duty cycle, as long as the differential amplifier111inputs IN+127and IN−129do not have any faults and there is no failure internal to the differential amplifier111semiconductor(s). When the differential amplifier111is used to distribute a reference clock signal (as in the case of Philips ultrasound transducer models M2736A, M2736AA, Ref #867246), the output DC voltage level of the rectifier/peak detector stage115is expected to be rock steady, indicating normal function of the differential amplifier111. The output of the rectifier/peak detector stage115is Vop149and has a DC voltage level when the input is a square wave (the conditioned difference signal143) of 50% duty cycle with an amplitude of Vcc. The Vop149will be lower than Vcc/2 based on the voltage drop across the rectifying diode176. The window comparator119has two distinct settings of the reference voltage levels.
In this regard, during normal operation, Vref1131is slightly higher than Vop149and Vref2132is slightly lower than the Vop149voltage value. The difference or deviation of these reference voltages Vref1131and Vref2132from the Vop149voltage value should be in the range of 20 to 60 mV more than the expected variation in Vop149due to the expected variation in the duty cycle of the square waveform (the conditioned difference signal143) at the input of the rectifier/peak detector115during normal operation of the differential amplifier111. The Vref1131voltage value (calculated as the Vop149value plus the expected millivolt variation in Vop149during normal operation plus a 20-60 millivolt margin) should fall at least 300 millivolts below the common mode voltage Vcm drive on the IN+127and IN−129inputs of the differential amplifier111. For example, and not as a limitation, consider Vcc=3.3V and a 1 MHz, 50% duty cycle reference clock with 3.3V amplitude (the square waveform which is the conditioned difference signal143) at the output of differential amplifier111. In normal operation, there is a 2.2V common mode voltage (Vcm) drive on the IN+127and IN−129inputs of differential amplifier111. In this example, it is expected there will be hardly a +/−10 mV variation in the Vop output149of the rectifier/peak detector115as the reference clock square wave (the conditioned difference signal143) is configured to be steady during normal operation, and considering the diode176drop to be 0.3V, the Vop output149will be in the range of 1.35V (square waveform amplitude 3.3V/2−0.3V diode drop). So, Vref1131should be 1.42V (Vop 1.35V+10 mV of expected variation+60 mV maximum margin). This Vref1131value of 1.42V is well below the limiting value of 1.9V (common mode input drive voltage of 2.2V−300 mV=1.9V). Vref2132in this example should be 1.28V (Vop 1.35V−10 mV expected variation−60 mV maximum margin).
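The worked example above reduces to a few lines of arithmetic. The sketch below (Python, illustrative only; the function and variable names are assumptions) reproduces the Vop149, Vref1131, and Vref2132values from the text:

```python
def design_references(vcc: float, diode_drop: float, expected_variation: float, margin: float):
    """Vop = Vcc/2 - diode drop (50% duty cycle square wave input);
    Vref1 = Vop + expected variation + margin;
    Vref2 = Vop - expected variation - margin."""
    vop = vcc / 2.0 - diode_drop
    return vop, vop + expected_variation + margin, vop - expected_variation - margin

# Values from the text: Vcc = 3.3 V, 0.3 V diode drop, +/-10 mV variation, 60 mV margin.
vop, vref1, vref2 = design_references(3.3, 0.3, 0.010, 0.060)
assert abs(vop - 1.35) < 1e-9    # Vop = 1.35 V
assert abs(vref1 - 1.42) < 1e-9  # Vref1 = 1.42 V
assert abs(vref2 - 1.28) < 1e-9  # Vref2 = 1.28 V

# Headroom check: Vref1 must fall at least 300 mV below the 2.2 V Vcm drive.
assert vref1 <= 2.2 - 0.300
```

The final assertion encodes the 300 mV headroom requirement, which is what guarantees the comparator can distinguish a normal Vop from the Vcm level that appears on an open input.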
In this example, if the common mode voltage drive to the LVDS IN+127and IN−129inputs had been 1.65V (Vcc/2), then the Vref1131voltage of 1.42V would be higher than the lower limit of the differential amplifier111IN+127and IN−129input line swing (common mode voltage 1.65V−300 mV=1.35V, which is also the same as the Vop149value during normal operation). It is important that Vcm be sufficiently higher than Vcc/2 so that there is at least a 300 mV difference between Vref1131and Vcm in order for window comparator119to switch output states when one of the IN+127or IN−129inputs of differential amplifier111is open, resulting in the Vcm value at that moment being presented at the receiver output143of the differential amplifier111. If the application requires Vcm to be Vcc/2, then the voltage follower op-amp buffer113(with a very high slew rate) should be chosen so that the 3.3V square pulse amplitude of the receiver output143at the input to the voltage follower op-amp buffer113is reduced at its output, which is coupled to the input of the rectifier/peak detector115, to a lower value half-way between Vcc/2 and Vcc, in the range of about 2.5V. In this example, the Vop149will be reduced to 0.95V (square pulse amplitude input to the rectifier/2−the diode drop of 0.3V), and DC voltage inputs below 2.5V will be passed on to the window comparator119without any change. This voltage follower op-amp buffer113, followed by the rectifier/peak detector115and the voltage follower op-amp buffer117, acts as a precision rectifier/peak detector, converting the square wave (the conditioned difference signal143) to a DC voltage level Vop149equal to the input waveform amplitude/2 minus the diode176voltage drop that is fed to a pair of op-amp comparators150/152. Comparator150has its voltage reference Vref1131value set between Vop149and the common mode drive DC voltage Vcm on the IN+127and IN−129inputs of differential amplifier111. Comparator152has reference voltage Vref2132set to a value between Vop149and GND (0V).
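The two-comparator decision described above can be sketched as a simple classifier. In this illustrative Python model the threshold values come from the worked example, the function name is an assumption, and the 0/1 outputs stand in for GND/Vcc: comparator150trips when Vop rises above Vref1, and comparator152trips when Vop falls below Vref2:

```python
def window_comparator_119(vop: float, vref1: float = 1.42, vref2: float = 1.28):
    """Returns (comparator 150 output, comparator 152 output).
    Both outputs are low while Vref2 < Vop < Vref1 (normal operation);
    one goes high when Vop crosses a reference due to a fault."""
    comp150 = 1 if vop > vref1 else 0  # e.g. open input: Vop rises toward Vcm
    comp152 = 1 if vop < vref2 else 0  # e.g. IN+ shorted to GND: Vop falls
    return comp150, comp152

assert window_comparator_119(1.35) == (0, 0)  # normal: Vop between references
assert window_comparator_119(2.2) == (1, 0)   # open IN+ or IN-: Vop near Vcm
assert window_comparator_119(0.05) == (0, 1)  # input shorted to GND/shield
```

Either comparator going high is enough to signal a fault, which is why the two outputs feed a single OR gate downstream.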
Whenever there is a failure/fault condition420at the input IN+127or IN−129of the differential amplifier111, the square wave (the difference signal143) at its output disappears, and either a more positive voltage (from Vcm to Vcc) or a very low voltage close to ground potential (0V) appears depending on the fault condition. For instance, if one of the IN+127or IN−129inputs to the differential amplifier111is open (disconnected), its receiver output143is a DC level close to Vcm. Due to an open input IN+127or IN−129, there may be noise riding on that DC voltage level, but the rectifier/peak detector115stage chops off the negative half, and the window comparator119stage receives a voltage input higher than the Vcm value (due to the positive peaks of the overriding noise), making comparator150with Vref1131switch its output to Vcc. If the positive input IN+127of differential amplifier111is shorted to GND/shield, the differential amplifier111receiver output143switches close to GND voltage, and in turn, the output of comparator152with Vref2132switches to Vcc. Both the comparator150/152outputs are at low (L) (GND potential) when the Vop149voltage is between Vref1131and Vref2132, and one of them switches to Vcc when Vop crosses either of the voltage reference131/132values due to a fault condition420at the differential amplifier111IN+127or IN−129inputs, or a failure internal to the differential amplifier111. An advantage of the present invention that is not present in prior LVDS receivers is that the fail-safe differential receiver100can detect failures that are internal to the differential amplifier. The outputs of the window comparator119are coupled to the logic gates/latch179. The logic gates/latch179comprise logic gates121including OR gates104/154and latch122. In operation, the comparator150/152outputs are coupled to the inputs of OR gate154, and the output of OR gate154is coupled to the SET (S) input of latch122.
The output of OR gate154is a logic low (L) state during normal operation of the differential amplifier111and switches to a logic high (H) state when one of the fault conditions in table420occurs at the differential amplifier111IN+127and/or IN−129inputs. This logic high (H) input to SET (S) causes the 'Q' output of the latch122, also called the control signal133, to latch to a logic high (H) state. As an input to the second OR gate104, when the control signal133switches to a logic high (H), the output of the OR gate104follows and also switches into a logic high (H) state, indicating a fault condition420has been detected and effectively blocking the receiver output143from passing, stopping the operation of the FHR transducer as a safety measure and to prevent incorrect FHR readings until the fault condition is removed and the FHR transducer is reset, such as by cycling power. During initial power-on or cycling of power 'OFF' and 'ON', a reset circuit which comprises resistor140, capacitor142, and inverter124holds the RESET (R) pin of the latch122high until Vcc becomes stable and then drifts permanently logic low (L) in accordance with the RC140/142time constant value selected. The SET (S) pin of the latch122is also normally at a logic low state after power is switched 'ON' as the differential amplifier111output steadies with a square waveform (the difference signal143), resulting in the Vop149voltage input to window comparator119settling within the limits of Vref1131and Vref2132, thereby setting the comparators150/152outputs logic low (L). The comparators150/152outputs are inputs to the OR gate154, and logic low (L) inputs cause the OR gate154output to switch to a logic low (L).
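The latch-and-block behavior described above (OR gate154feeding the SET input of latch122, with the 'Q' output gating OR gate104) can be sketched as a small state machine. This Python model is illustrative only; the class and method names are assumptions, and the reset method stands in for a power 'OFF'/'ON' cycle:

```python
class FailSafeLatchModel:
    """Sketch of OR gate 154 + latch 122 + OR gate 104: the control
    signal 133 latches high on the first fault occurrence and blocks
    the receiver output 143 until an explicit reset."""

    def __init__(self):
        self.control_signal = 0  # latch 122 'Q' output / control signal 133

    def step(self, comp150: int, comp152: int, receiver_output: int) -> int:
        if comp150 | comp152:        # OR gate 154 drives the SET (S) input
            self.control_signal = 1  # latches on the first fault occurrence
        # OR gate 104: Vout 180 follows the receiver only while control is low.
        return receiver_output | self.control_signal

    def reset(self):
        self.control_signal = 0      # models a power 'OFF'/'ON' cycle

m = FailSafeLatchModel()
assert m.step(0, 0, 1) == 1   # normal operation: square wave passes
assert m.step(1, 0, 1) == 1   # intermittent fault trips the latch
assert m.step(0, 0, 0) == 1   # output held high: receiver blocked
assert m.step(0, 0, 1) == 1   # stays latched even after the fault clears
m.reset()
assert m.step(0, 0, 0) == 0   # normal operation resumes after reset
```

The key property the model captures is that a single transient fault sample is enough to hold the block permanently, matching the latch-on-first-occurrence behavior of the circuit.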
The moment a fault condition420occurs internally to differential amplifier111or at its IN+127and/or IN−129inputs, such that either of the IN+127or IN−129inputs is open, both the inputs are open, one of the inputs is open and the other input shorted to shield/ground or Vcc, both the inputs are shorted together, both the inputs are shorted together and shorted to ground or Vcc, the inputs are un-driven (the LVDS driver is in a high impedance state on the FHR monitor side of the cable330), or the terminating resistor130is open (disconnected), a fault condition420is detected. Once one of the fault conditions420is detected, the differential amplifier111output level switches to a DC voltage level from Vcm to Vcc or close to GND (0V). The DC voltage level is passed through the voltage follower op-amp buffer113, rectifier/peak detector115, and the voltage follower op-amp buffer117as Vop149to the window comparator119. The window comparator119outputs a logic high (H) state to one of the inputs of the OR gate154which in turn sets the latch122'Q' output, also called the control signal133, logic high (H). The control signal133is coupled to input 'B' of OR gate104and the output of the OR gate104switches to a logic high (H) state, thereby blocking the receiver output143from passing through the OR gate104to the FHR transducer336signal processor408. When any fault/failure condition420occurs, and either one or both the inputs of the differential amplifier111are open or shorted together or shorted to either Vcc or shield/earth for a moment (temporary/sporadic/intermittent), such as when the cable330conductor breaks (disconnects) and resumes continuity again (such as during cable flexing), the receiver output143square waveform at the differential amplifier111output changes to a DC voltage level either more than Vcm or close to GND (0V). The DC voltage translates to the Vop149voltage level crossing either Vref1131or Vref2132. This causes the output of either of the comparators150/152to switch to a logic high (H).
The logic high (H) is coupled to the input of the OR gate154causing the OR gate154output to switch to a logic high (H), which is coupled to the SET (S) input of the latch122causing the 'Q' output, also referred to as the control signal133, to latch high (H). In this regard, the fail-safe detection is latched high on the first occurrence of the fault condition420, thereby blocking the receiver output143from passing through the OR gate104to further FHR transducer336processing circuits. This is an important feature in the present invention and an advantage over prior LVDS receivers, as such intermittent failures can degrade the signal transmission resulting in false FHR readings. Such false FHR readings can lead to fatal errors in the final throughput of the FHR monitoring and display system302/324/326and other critical applications beyond FHR monitoring applications such as medical, analytical, process control, material testing, production line feedback control systems, digital devices such as computers, tablets, or other industrial applications. This approach, in the present invention, allows corrective action to be taken (such as replacing cable330or transducer336) at the first occurrence of the intermittent fault condition420. In addition, this feature allows suspect cables330to be quickly checked by flexing along their length and observing whether doing so triggers the fail-safe fault condition420, blocking the difference signal143and/or any random noise from being processed and producing adverse and inaccurate results such as a spurious fetal heart rate with Philips Avalon ultrasound transducer models M2736A, M2736AA and Ref #867246, and other FHR transducers336. The outputs of latch122, either Q or its complement, can be used directly or inverted by inverter162to create an enable/disable control signal182.
The enable/disable control signal182can be used by peripheral devices and semiconductors operationally related to the circuit of at leastFIG.6for an added level of fail-safe protection to avoid unwanted results from certain circuit blocks in the signal processing chain. In a plurality of exemplary embodiments, depending upon the application/system requirements, the latch122with the reset logic circuit124/140/142could be eliminated from the logic gates121and the output of OR gate154fed directly to the input of OR gate104. In operation, the latching feature would be disabled while the fail-safe fault condition420detection logic continues to operate. This configuration can be useful for testing, manufacturing, servicing, and in other cases where cycling power to reset the operation of the FHR monitor302and/or transducer336, or other pieces of equipment, is desired. In an exemplary embodiment, the fail-safe differential receiver100comprises a first voltage reference131, a second voltage reference132, and a differential amplifier111. The differential amplifier111comprises a receiver output143and receives an IN+ input127and an IN− input129. A first voltage follower operational amplifier113comprises a first op-amp output147and receives the receiver output143. A rectifier peak detector115comprises a peak output151and receives the first op-amp output147. A second voltage follower operational amplifier117comprises a second op-amp output153and receives the peak output151. A window comparator119comprises a window compared output155and receives the second op-amp output153, a first reference voltage131, and a second reference voltage132. A latch-capable combinational logic device179receives the window compared output and the receiver output143.
The latch-capable combinational logic device179comprises one or more logic gates121, a latch122, a latch reset125, and a control signal133that is switched to a normal operating state until at least one of the following is detected:
when the IN+127input is open (not connected), the control signal133is latched to the fault condition state;
when the IN−129input is open (not connected), the control signal133is latched to the fault condition state;
when the IN+127input and IN−129input are connected by a first resistance130, which is configured as an undriven parallel termination, the control signal133is latched to the fault condition state;
when the IN+127input is shorted to Vcc or GND, the control signal133is latched to the fault condition state;
when the IN−129input is shorted to Vcc or GND, the control signal133is latched to the fault condition state;
when the IN+127input and the IN−129input are shorted together, the control signal133is latched to the fault condition state; and
when only one of the IN+127input or the IN−129input is intermittently open and reconnected, the control signal133is latched to the fault condition state.
In operation, the receiver output143is applied to a Vout180output as long as the control signal133is in the normal operating state, and on the first occurrence of the fault condition state, the latch122latches, blocking the receiver output143until the latch is reset. Referring toFIG.7, there is illustrated one example of a voltage levels diagram for a fail-safe differential receiver having single differential input disconnect detection with a latchable control signal interrupt. In an exemplary embodiment, Vcc202can be in the range of 3.3V. The Vcm204common mode voltage on the LVDS inputs IN+127and IN−129is in the range of 2.2V. Vref1131/208is in the range of 1.42V. The difference206between Vcm and Vref1131is in the range of 300 mV. A Vop216(pulse amplitude/2−diode forward voltage drop), with variation212/218, is in the range214of 1.35V.
Vref2132/222is in the range of 1.28V. The expected variation224of Vop, based on the LVDS input waveform duty cycle214, is 1.35V+/−10 millivolts (1.34V to 1.36V). There is a 20-60 mV margin226of the reference voltages Vref1/Vref2for the window comparator to swing the output from 0V to Vcc. Referring toFIG.8, there is illustrated one example of a communication cable schematic with LVDS line testing capabilities. One concern in general, and more specifically with the Philips Avalon ultrasound transducer models M2736A, M2736AA, Ref #867246, is that while in operation intermittent signals can produce erroneous FHR readings. In this regard, if the cable develops an intermittent (and subsequently continuous/permanent) open circuit (connection discontinuity) on either of the conductors carrying the 1 MHz reference clock signal IN+127or IN−129from fetal monitor FM20/30/40/50to differential amplifier111, the FHR transducer generates errant spurious/random fetal heart rate (FHR) readings at the FHR monitor302anywhere from 60 to 240 beats per minute. While one of the IN+127or IN−129conductors is disconnected, there is no ultrasound beam transmitted out of the transducer face, which means there is no ultrasound echo Doppler signal received from either the fetus or any tissue/internal organ of the abdomen of the pregnant patient502. These spurious/intermittent false FHR readings are purely the result of internal circuit noise being processed in the absence of the 1 MHz reference clock signal from the LVDS receiver output, and hence not any limitation of ultrasound Doppler technology or application/end-user/operator error. In other words, the transducer336continues giving spurious FHR readings when one of the IN+127or IN−129wires is disconnected, no matter what the fetus's condition is. During this time spurious FHR readings are being displayed on the fetal monitor302.
When a Philips Avalon transducer unit of model M2736A/M2736AA/Ref #867246 or other models is suspected of showing spurious FHR, or if a clinical incidence is reported, the testing protocol to confirm the problem and the root cause is as follows:
Take the transducer unit off the fetal monitor and check the fetal monitor independently following the testing protocol as per the relevant Philips/manufacturer's service manual.
Try another ultrasound transducer unit (M2736A/M2736AA/Ref #867246) that has the fail-safe LVDS circuit of the present invention incorporated and tested in accordance with Philips specifications as per Food and Drug Administration (FDA) approval documentation.
Ensure that the fetal monitor302does not have any problems resulting in spurious/wrong fetal heart readings, as well as any other failures.
Ensure the 1 MHz reference clock generation circuitry, LVDS driver, or other hardware of the fetal monitor is not the root cause of spurious FHR readings.
The cable330from the transducer under test in the above steps should be connected to another transducer336that has the fail-safe LVDS circuit of the present invention incorporated and tested. Connect the transducer336with the cable330in question to the fetal monitor302and perform a cable330flex test 6-8 times to see if the fail-safe circuit triggers and latches, indicating a fault condition. If the cable330is found to be an issue, check the continuity of each conductor with resistance values and also perform other electrical tests such as insulation, adhering to the original equipment manufacturer (OEM) specifications. Even if the cable330is found not to have intermittent or permanent conductor open/short problems, test it for all other electrical performance parameters adhering to OEM specifications to ascertain good functionality.
Testing the cable330many times by way of flexing from the strain relief to the fetal monitor302end connector is a best-practice way to detect cable330causes of spurious FHR readings due to intermittent or permanent open circuits of one of its conductors IN+127or IN−129that carry the 1 MHz reference clock signal from the fetal monitor302to the ultrasound transducer336circuitry. In an exemplary embodiment, one way to test the fail-safe detection circuit of the present invention before strapping the transducer to the patient502for monitoring is to use a connector adapter shown inFIG.8as CON3. This adapter CON3has a male connector that is the same as the transducer cable on one end, and the other end has a female socket connector that is the same as the one on the fetal monitor302FM20/30/40/50. There are two switches SW2/SW4, and two pushbuttons SW1/SW3. SW1and SW2are in series and interconnected with IN−129, and SW3and SW4are in series and interconnected with IN+127. These switches are configured as normally closed contacts. For the test, the adapter is plugged into the fetal monitor302, the transducer is connected to the adapter, and either pushbutton SW1or SW3is momentarily pressed to simulate either an IN+ input or IN− input disconnect fault. Upon detection of the fault condition, the transducer336should be disabled and there should be no FHR reading on the FHR monitor302. To reset the transducer336, disconnect the transducer336from the adapter CON3, wait for 15-20 seconds, and then reconnect the transducer336to the adapter CON3and repeat the test, this time pressing the other pushbutton. Ensure the simulated device error appears on the fetal monitor302. Remove the transducer336from the adapter CON3, take the adapter CON3off the FHR monitor302, and connect the transducer directly to the fetal monitor for use on the patient502.
This test procedure ensures that the system of the fetal monitor302with the transducer336is in good working order and capable of detecting the intermittent or permanent failure of the cable330conductors. In operation, the fail-safe differential receiver100of the present invention eliminates spurious FHR readings that occur due to processing random noise when there is no ultrasound beam transmission resulting from a disconnected IN+127or IN−129conductor. If the system fails, it would likely be a total failure disabling the transducer output226to avoid any clinical FHR misinterpretation. This fail-safe circuit function test adapter CON3is also useful to demonstrate to the clinical end user that transducer models M2736A, M2736AA, Ref #867246 without the fail-safe LVDS circuit incorporated are prone to spurious FHR detection the moment there is an intermittent or permanent open circuit in one of the conductors connecting to either IN+127or IN−129. The process to do so is to first connect the adapter CON3to the fetal monitor, then connect the transducer to the adapter, set the FHR sound volume on the fetal monitor to OFF or level '1' (to keep the noise level down) as well as the alarm volume to minimum, set either switch SW2or SW4to OPEN by sliding it away from the fetal monitor (towards the transducer end), start the thermal recorder on the fetal monitor, and leave the setup undisturbed for an hour or two, making sure that the transducer face is up towards the room ceiling and there is no stationary or moving object within a 4 foot distance just above it.
Spurious FHR readings will be seen on the fetal monitor display as well as recorded on thermal paper; the alarm will beep occasionally based on set values as if the fetal heart rate is out of range (even though there is no fetus); occasionally the monitor may display 'FHR signal loss', but there will not be the 'FHR Equip Malf' error that appears with '?' in place of the actual FHR value display when both the cable conductors (IN+ input, IN− input) are open. During this test, a hydrophone can be placed on the transducer face to confirm that there is no ultrasound beam transmission out of the transducer head when one of the cable conductors (IN+ input, IN− input) is open intermittently or continuously, so that the clinical end user can understand the possibility of various clinical misinterpretation situations due to spurious FHR readings, including the possibility of missing the detection of a dead fetus the moment the fetal heart stops functioning or shows signs of stress by way of an abnormal heart rate. When a clinical incidence is reported that involves a fatality or serious conditions, the first thing that needs to be done is to check the cable from the ultrasound transducer unit that was used prior to the incidence. Such a transducer cable in question can be very quickly tested for intermittent or continuous conductor wire (IN+ input, IN− input) break/physical disconnection/open circuit using another transducer head assembly that has the LVDS receiver incorporated with the fail-safe detection circuit100of this invention, and, once confirmed, sent with the original transducer head to a specialized accredited laboratory/agency for further testing. In this regard, an advantage of the present invention is that it allows quick checking and problem confirmation after incidence reporting, or even when the clinical end user is in doubt about the reliability of the clinical throughput of the ultrasound transducer being used.
As a best practice, when evaluating the performance of a transducer336, a disciplined approach that studies the mechanical, operational, and electrical performance of the transducer336should be undertaken. In this regard, below is such a disciplined approach when a transducer336is to be evaluated and/or serviced:
Note if any of the PZT discs or other components, including the cable, are non-Philips parts that would require additional testing to find out whether they comply with OEM specifications (electrical as well as physical/structural) and pertinent medical device compliance regulations. Reworked, repaired, rebuilt, refurbished, and remanufactured ultrasound transducer units may also have some modifications done on electrical circuit boards (front-end and back-end main CPU PCB) that have to be documented properly, as these modifications may compromise overall circuit performance, including wrong/spurious FHR readings.
Make a report stating all the causes/sources of the intermittent/spurious FHR readings and send the transducer assembled with the original cable for further second accredited lab testing for cable and ultrasound beam profile as per Philips specifications, to confirm the identified PZT disc/discs/components (including cable) being the source of the spurious FHR response as outlined in the report.
A list of all the probable sources of spurious FHR is provided below:
1. PZT discs—total number 7—loose, weak bonding to the plastic surface with air pockets and trapped loose adhesive particles intermittently produce very strong Doppler echo signals when they vibrate, which get translated as spurious FHR anywhere from 60 bpm to 240 bpm.
2. Flat metal electrodes soldered to PZT discs to connect to the transmitter/receiver on the front-end PCB—total number 14—electrically and mechanically floating, which contributes to spurious FHR.
3. Front-end PCB bonding to the plastic surface is weak, resulting in sporadic/intermittent mechanical instability/movement that in turn makes the top-mounted back-end PCB move intermittently directly in the path of the ultrasound beam, resulting in a Doppler echo signal that is processed as spurious FHR (60 to 240 bpm).
4. Back-end PCB (main CPU PCB) mounted on the front-end PCB—mechanically unstable due to loose connectors to the front-end PCB, or the back-end PCB alone is loose due to no clamping force on it and moves up-down relative to the front-end PCB intermittently, causing spurious FHR.
5. Bottom plastic case has 3 threaded metal inserts—one or all three threaded metal inserts get loosened, allowing relative movement of the top-bottom plastic cases and internal parts (models M2736A and M2736AA only; model Ref #867246 does not have threaded metal inserts), which in turn causes spurious FHR.
6. Loose plastic spacer (battery housing used as a spacer) clamp (applicable to model Ref #867246 only) that allows back-end PCB movement intermittently, resulting in spurious FHR.
7. Moisture intrusion due to a loose top-bottom case, which causes multiple types of spurious and sometimes permanent failures after it condenses on SMD components' metallic leads, which have spacing distances of 10-100 micrometers.
8. Loose/rattling decoupling capacitors and other SMD (surface mount device) components. Anything inside the transducer head that moves freely in the path of the ultrasound beam results in a Doppler signal and ultimately gets processed as spurious FHR between 60 to 240 bpm.
9. Excessive flux between electrical connections and leads of SMD components, the microcontroller, and all the active/passive components on the front-end as well as back-end PCB can lead to many types of intermittent failures, spurious FHR being one of them.
10. Front-end PCB connector crimp contacts corrode, resulting in an intermittent open circuit, especially at the input to the LVDS line receiver on the back-end PCB. Open circuit inputs of the LVDS receiver pick up random noise that gets processed as spurious FHR between 60 to 240 bpm.
11. Cable connector assembly—an intermittent or permanent open circuit of conductors/input to the LVDS—random noise is processed as spurious FHR from 60 to 240 bpm without transmission of an ultrasound beam or in the absence of any ultrasound echo signal.
12. Cable connector assembly—direct permanent or intermittent shorting or resistive contact shorting of conductors, especially one of the LVDS input conductors to other conductors carrying the CAN+, CAN−, and Transducer Recognition signals.
For repaired, reworked, refurbished, and remanufactured transducer units, identify non-OEM/non-compliant parts and materials such as PZT discs, cable, adhesive bonding layer material, plastic top-bottom cases, rubber seal, etc. After-market replacement/non-OEM low-cost PZT discs' specification variation tolerance is extremely high; +/−20% is very common and at times in the range of +/−80% or more. The center resonance frequency tolerance required for an accurate FHR reading is +/−0.1% (as per Philips device specifications in manuals), and hence a wrong FHR reading is very common with reworked/refurbished/remanufactured transducer units with non-Philips PZT discs and other parts. Most third-party replacement cable-connector assembly parts do not use balanced twisted pair conductors or comply with the original specifications for the characteristic impedance for the reference signal transmission, and hence pick up random noise that results in spurious FHR readings even though those conductors are physically intact, as opposed to an original Philips cable that becomes a source of the spurious FHR problem only when its conductors have an intermittent or permanent electrical continuity (open circuit) problem. Repaired, reworked, refurbished, and remanufactured transducer units should be checked to identify if there are any electrical circuit modifications on the front-end and back-end PCBs.
The aforementioned testing procedure to pinpoint all sources of spurious FHR readings is extremely important for all Philips, Corometrics, Spacelabs, or other manufacturers' fetal ultrasound transducers (irrespective of model number), as no established procedure is available in any of the relevant user/service manuals. In the worst case it would take about 48 hours, but in most cases the average time taken is between 12 and 24 hours to confirm whether an ultrasound transducer unit has an intermittent/spurious FHR reading problem and to identify the root cause or causes, which is critical to know when a clinical fatality incident is reported. Consider, for example, an expectant mother being monitored for twins: one of the ultrasound transducers has an intermittent/spurious FHR detection issue that goes unnoticed by the clinical end user because the audible FHR is taken from the other, perfectly working transducer, and the 'bad' transducer (with the intermittent/spurious FHR issue) completely fails to sound an alarm that the fetus (one of the twins) is under stress. When the 'bad' transducer is tested after the clinical incident is reported by certified, trained, or authorized technical personnel from the manufacturer, adhering to the current procedure per the equipment manuals, it may behave perfectly well, because removing it from the pregnant patient flexes the cable and restores the LVDS input connections to normal. Such a transducer with intermittent/spurious FHR problems would always be detected when tested following the procedure described in the previous section. This testing procedure also serves as a set of qualitative and quantitative tests after any kind of service that involves disassembly of the transducer head, replacement of components including PZT discs, cable, or plastic cases, or re-bonding/adhesive gluing of the PCB/PZT discs to the plastic substrate (bottom case).
Referring to FIG. 9, there is illustrated one example of a method of using a fail-safe differential receiver 100 having single differential input disconnect detection with a latchable control signal interrupt capability. The method begins in step 1002. In step 1002, a device that comprises a fail-safe differential receiver 100 is connected to a signal-transmitting device, such as an FHR monitor 302 or another signal-transmitting device, by way of a communication line 330. The communication line comprises a plurality of electrical connections that include an IN+ 127, an IN− 129, Vcc, and ground (GND). The fail-safe differential receiver 100 is configured in one of the following ways. Either the fail-safe differential receiver comprises a first voltage reference 131; a second voltage reference 132; a differential amplifier 111 that comprises a receiver output 143 and receives an IN+ input 127 and an IN− input 129; more than one comparator 108/112/114/118, each of the comparators 108/112/114/118 comprising a compared output and receiving at least two of the following: the IN+ input 127, the IN− input 129, the first voltage reference 131, or the second voltage reference 132; and a latch-capable combinational logic device 179 that receives each of the compared outputs and the receiver output 143, the latch-capable combinational logic device comprising a latch 122 and a control signal 133. Or, the fail-safe differential receiver 100 comprises the first voltage reference 131; the second voltage reference 132; the differential amplifier 111, which comprises the receiver output 143 and receives the IN+ input 127 and the IN− input 129; a first voltage-follower operational amplifier 113, which comprises a first op-amp output and receives the receiver output 143; a rectifier peak detector 115, which comprises a peak output and receives the first op-amp output; a second voltage-follower operational amplifier 117, which comprises a second op-amp output and receives the peak output; and a window comparator 119, which comprises a window compared output and receives the
second op-amp output. The fail-safe differential receiver 100 further comprises a first reference voltage 131, a second reference voltage 132, and the latch-capable combinational logic device 179, which receives the window compared output and the receiver output 143. The latch-capable combinational logic device 179 comprises a latch 122 and the control signal 133. In step 1004, the method then continues by transitioning the control signal 133 based on inputs to the latch-capable combinational logic device 179; the control signal 133 is switched to a normal operating state until at least one of the following is detected, as illustrated in block 1006:

when the IN+ 127 input is open (not connected), the control signal is latched to the fault condition state;
when the IN− 129 input is open (not connected), the control signal is latched to the fault condition state;
when the IN+ 127 input and the IN− 129 input are connected by a first resistance that is configured as an undriven parallel termination, the control signal is latched to the fault condition state;
when the IN+ 127 input is shorted to Vcc or GND, the control signal is latched to the fault condition state;
when the IN− 129 input is shorted to Vcc or GND, the control signal is latched to the fault condition state;
when the IN+ 127 input and the IN− 129 input are shorted together, the control signal is latched to the fault condition state; and
when only one of the IN+ 127 input or the IN− 129 input is intermittently open and reconnected, the control signal is latched to the fault condition state.

In operation, the receiver output 143 is applied to a Vout 180 output as long as the control signal 133 is in the normal operating state, and on the first occurrence of the fault condition state 420, the latch 122 latches, blocking the receiver output 143 until the latch 122 is reset. The method returns to step 1004. Referring to FIG. 10, there are illustrated exemplary embodiments that can be used interchangeably with the methods of the present invention.
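The latched fault behavior enumerated above can be summarized in a short behavioral model. This is a hedged sketch for illustration only; the class name, boolean fault flags, and state strings are invented here and are not part of the patented circuit:

```python
# Behavioral sketch (not the patented circuit): the control signal stays
# in the normal operating state until any listed fault condition occurs,
# then latches in the fault state until an explicit reset.

NORMAL, FAULT = "normal", "fault"

class FailSafeLatch:
    """Minimal model of the latched control signal behavior."""

    def __init__(self):
        self.state = NORMAL

    def evaluate(self, in_p_open, in_n_open, shorted_together,
                 shorted_to_rail, undriven_termination):
        # Any single fault occurrence latches the control signal.
        if any((in_p_open, in_n_open, shorted_together,
                shorted_to_rail, undriven_termination)):
            self.state = FAULT
        return self.state

    def reset(self):
        # Models a power cycle, button press, or external reset signal.
        self.state = NORMAL

latch = FailSafeLatch()
assert latch.evaluate(False, False, False, False, False) == NORMAL
# An intermittent open on IN+ latches the fault state...
latch.evaluate(True, False, False, False, False)
# ...and it stays latched even after the input reconnects.
state_after_reconnect = latch.evaluate(False, False, False, False, False)
```

The key property modeled is the last line: unlike a purely combinational fail-safe, the latch keeps blocking the output after an intermittent fault clears, until reset.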
In step 1102, the first voltage reference 131 is configured. The first voltage reference 131 comprises a first resistor 132 connected in series with a second resistor 134, creating the first voltage reference 131 at the junction of the first resistor 132 and the second resistor 134. The first resistor 132 connects at one end to Vcc and the second resistor 134 connects at one end to GND. Values of the first resistor 132 and the second resistor 134 are selected such that the first voltage reference 131 is in the range of 97% of Vcc in the configuration illustrated in FIG. 3. For the configuration illustrated in FIG. 6, the first voltage reference Vref1 131 is in the range of Vop 149 plus the expected millivolt variation in normal operation, plus a 20 to 60 millivolt margin for the comparator to swing its output from 0 V to Vcc when a fault condition occurs. In step 1104, the second voltage reference 132 comprises a third resistor 136 connected in series with a fourth resistor 138, creating the second voltage reference 132 at the junction of the third resistor 136 and the fourth resistor 138. The third resistor 136 connects at one end to Vcc and the fourth resistor 138 connects at one end to GND; values of the third resistor 136 and the fourth resistor 138 are selected such that the second voltage reference 132 is in the range of 0.03% of Vcc for the configuration illustrated in FIG. 3. For the configuration illustrated in FIG. 6, the second reference voltage Vref2 132 is in the range of Vop 149 minus the expected millivolt variation in normal operation, minus a 20 to 60 millivolt margin for the comparator to swing its output from 0 V to Vcc when a fault condition occurs. In step 1106, the device can be retrofitted by removing the current LVDS semiconductor and inserting the fail-safe differential receiver 100. In this regard, the device can be an FHR transducer 336 or another suitable device that comprises a prior LVDS.
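The two reference networks of steps 1102 and 1104 are plain resistive dividers, so their junction voltages follow directly from the resistor ratios. The resistor values below are illustrative examples chosen to hit the stated targets, not values taken from the disclosure:

```python
# Illustrative only: computes the divider junction voltage for the two
# reference networks described above. Resistor values are examples.

def divider_vref(vcc, r_top, r_bottom):
    """Voltage at the junction of a series divider from Vcc to GND."""
    return vcc * r_bottom / (r_top + r_bottom)

VCC = 3.3  # example supply voltage

# Vref1 near 97% of Vcc: small top resistor, large bottom resistor.
vref1 = divider_vref(VCC, r_top=1_000, r_bottom=32_333)

# Vref2 near GND (about 0.03% of Vcc): large top, small bottom resistor.
vref2 = divider_vref(VCC, r_top=999_000, r_bottom=300)
```

The same function covers both FIG. 3 targets; for the FIG. 6 configuration the targets would instead be set relative to Vop plus or minus the stated millivolt margins.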
In this regard, the prior LVDS can be removed with a soldering iron or other suitable method, and the fail-safe differential receiver 100 of the present invention can be inserted in place of the prior LVDS. The fail-safe differential receiver 100 of the present invention can be a semiconductor-for-semiconductor swap, or the fail-safe differential receiver 100 can be a small circuit card connected in place of the prior LVDS semiconductor. The small circuit card can be a combination of one or more semiconductors, resistors, diodes, capacitors, or other components as may be required and/or desired in a particular embodiment. In an exemplary embodiment, a device such as a transducer 336 can be retrofitted by removing the current LVDS semiconductor and inserting the fail-safe differential receiver 100 of the present invention. In step 1108, programmable logic software code can be created. In this regard, programming languages such as VHDL, CUPL, and other suitable programming languages can be used to write the logic software. The method then moves to step 1110. In step 1110, the programmable logic software code can be encoded or otherwise downloaded into the latch-capable combinational logic device 179, causing the latch-capable combinational logic device 179 to operate in accordance with the step of transitioning the signal in step 1004. In step 1112, the fail-safe differential receiver 100 can be reset by removing power and then reapplying power to the fail-safe differential receiver 100. In step 1114, the fail-safe differential receiver 100 can be reset by a user pressing a button or by another system sending a reset logic signal to the fail-safe differential receiver 100. In step 1116, the fail-safe differential receiver 100 is tested by inserting an adapter CON3 in series with the communication line 330 and pressing at least one of a button SW1/SW3 or switch SW2/SW4 to disconnect the IN+ input 127 or the IN− input 129. The adapter CON3 comprises the button SW1/SW3 or switch SW2/SW4.
The method then moves to step 1118. In step 1118, the fault condition state resulting from the testing step is verified. In a plurality of exemplary embodiments, the reference voltages Vref1 131 and Vref2 132 can be chosen to be close to Vop 149 (the value during normal operation at the differential amplifier 111) to speed up the fail-safe detection response. Additionally, many different resistance values 126/128/130/132/134/136/138/140/170/172 can be used, and the load or terminating resistor 130 is normally selected to match the impedance of the IN+ 127 and IN− 129 transmission lines, usually in the range of 50 to 120 ohms. The offset can be in the range of less than 50 mV, with a preferred value of 20 mV. In an exemplary embodiment, various inversions in the logic can be introduced, and NAND gates rather than NOR gates can be substituted using DeMorgan's theorem. The inverting and non-inverting inputs to the comparators 108/112/114/118/146/148/150/152 and the differential amplifier 111 can be swapped to invert their outputs as may be required and/or desired in a particular embodiment. In addition, active-low (L) logic signals rather than active-high (H) logic signals can be substituted as may be required and/or desired in a particular embodiment. Furthermore, several logic gates can be combined into a larger gate, such as a 3- or 4-input AND or NAND gate, as may be required and/or desired in a particular embodiment. The overall output can alternatively be disabled by turning 'OFF' the differential amplifier 111 with the fail-safe signal rather than blocking the receiver output 143 by way of the control signal 133, as may be required and/or desired in a particular embodiment. In an exemplary embodiment, the reference voltage Vref2 132 near GND and the inputs to comparators 114/118 in FIG. 3 and comparator 152 in FIG. 6 can be swapped as may be required and/or desired in a particular embodiment.
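The NAND-for-NOR substitution mentioned above rests on DeMorgan's theorem, which can be checked exhaustively over all input combinations. This is a generic logic identity check, not circuitry from the disclosure:

```python
# Exhaustively verifies the DeMorgan identities that justify swapping
# gate types (e.g. NAND in place of NOR with inverted inputs) in the
# combinational logic.

from itertools import product

def nor(a, b):
    return not (a or b)

def nand(a, b):
    return not (a and b)

for a, b in product((False, True), repeat=2):
    # NOR(a, b) == AND(NOT a, NOT b)
    assert nor(a, b) == ((not a) and (not b))
    # NAND(a, b) == OR(NOT a, NOT b)
    assert nand(a, b) == ((not a) or (not b))

demorgan_holds = True
```

Because each identity holds for all four input pairs, any NOR in the logic can be re-expressed with a NAND and input/output inversions, which is what permits the gate substitutions described in the text.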
In an exemplary embodiment, a flip-flop rather than an SR latch 122 may be substituted as may be required and/or desired in a particular embodiment. In addition, a toggle (T) or other kind of latch or flip-flop can be substituted for the SR-type latch 122, with appropriate logic changes for the latch 122 or flip-flop inputs. The RESET (R) input to latch 122 could be controlled by other systems or circuits as may be required and/or desired in a particular embodiment. Additional latches, buffers, and gates can be added as may be required and/or desired in a particular embodiment. Referring to FIG. 11, there is illustrated one example of a cable 330 assembly. An advantage of the present invention is that, unlike prior cables that connect transducers 100 to fetal monitors 302 and notoriously fail where crimped and over-molded, the present invention utilizes a fetal monitor end connector 349A/349B that has a hollow rigid body 341 made of plastic and a connector top 337 that can be manually fastened together by way of the screw hole 347 and standoff 363 formed within the hollow rigid body 341. Additionally, a strain relief 345 made of 60A shore hardness elastomer can be fitted into an integrally formed grooved end 361 to secure cable 351 from pulling out of the hollow rigid body 341. For disclosure purposes, fetal monitor end connectors 349A/349B are two different styles that interface to different models of fetal monitor 302. Other shapes, types, and kinds of fetal monitor end connectors 349 can be utilized as may be required and/or desired in a particular embodiment. Additionally, the use of a secondary strain relief 343 inside the hollow rigid body 341 further protects cable 351 against pulling forces and relieves stress on the individual conductors 339 that are crimped/soldered onto contact pins 365 located at the wire connection end 363. In an exemplary embodiment, such secondary strain relief 343 can be at least two tie wraps or other suitable strain relief.
In an exemplary embodiment, electronic-grade silicone can be filled inside to make the connector compliant with IP68 specifications for water ingress. The increased length of the hollow rigid body 341, compared to the length of prior rubber over-molded approaches, makes it easier for an operator to grab the hard plastic connector when connecting to and disconnecting from the fetal monitor 302. In an exemplary embodiment, the transducer connector 353 terminates the individual conductors 339 on the opposite end of cable 351. The transducer connector 353 connects to the electronics inside the transducer 336, and strain relief 355 and rubber boot 359 secure the cable 351 inside the case of transducer 336. In an exemplary embodiment, the transducer 336 can comprise cable 351 having a first cable end 351A and a second cable end 351B. A fetal monitor connector 349A/349B comprises a hollow rigid body 341 having a wire connection end 363 and an integrally formed grooved end 361. A strain relief 345 is placed over the first cable end 351A and secured within the integrally formed grooved end 361, holding the first cable end 351A from slipping out of the hollow rigid body 341. In an exemplary embodiment, a secondary strain relief 343 can be fastened around the first cable end 351A within the hollow rigid body 341, proximate to the strain relief 345. In an exemplary embodiment, such secondary strain relief 343 can be at least two tie wraps fastened in parallel around the first cable end 351A within the hollow rigid body 341, proximate to the strain relief 345. The secondary strain relief 343 further prevents the first cable end 351A from being pulled out of the hollow rigid body 341. More than one conductor 339 from the first cable end 351A terminates with electrical connections at the wire connection end 363. The fetal monitor connector 349A/349B plugs into a fetal monitor 302, and the second cable end 351B terminates with a transducer 336 connector 353.
In an exemplary embodiment, the hollow rigid body 341 can be filled with silicone, and a connector top 337 can be fastened by way of the screw hole 347 and standoff 363 to seal the fetal monitor connector 349A/349B, including the hollow rigid body 341. In an alternative approach, the connector top 337 and standoff 363 can be eliminated, and the fetal monitor connector 349A/349B, including the hollow rigid body 341, can be filled and/or otherwise sealed with silicone. In an exemplary embodiment, the cable between the Philips Avalon ultrasound model M2736A, M2736AA, or ref #867246 transducer head and the fetal monitor is known to incur mechanical stress-related failures during patient monitoring and over a period of time (based on usage). This can result in intermittent disconnection (open circuit) as well as intermittent noise pick-up due to characteristic impedance variation between a twisted pair of conductor wires. An advantage of the present invention is that the fetal monitor end connector that is originally over-molded at the Philips factory during manufacture is replaced with a hollow rigid body 341 with contact pins 365, as illustrated in FIG. 11, that can be manually assembled by hand. In this regard, the conductors 339 can be soldered to the contact pins 365 in place of crimping, and two tie wraps 343 can be used inside the hollow rigid body 341 to relieve the mechanical stress on the conductors 339. This approach eliminates the problems associated with over-molding at the connector end and maintains the characteristic impedance between the contact pins 365. With prior cables that use over-molding at the connector ends, during patient monitoring the flexing of the cable between the transducer head and the fetal monitor connector end can result in physical separation of the twisted-pair conductors internally at various points along the length of the cable, which in turn causes spurious FHR readings through intermittent noise pick-up due to electrical unbalance.
To overcome this unbalancing problem, in the present invention a twisted pair of conductors (IN+ 127, IN− 129) for the LVDS input connections can be tightly bunched together using fine cotton/nylon thread along their length and then wrapped in Teflon tape, which will not allow physical separation during flexing. The insulation for these conductors is also changed from the original polypropylene design to Teflon or another tough elastomer to improve mechanical toughness and withstand at least 10 million flexing cycles, improving the durability of the cable. In an exemplary embodiment, a cable 330, by way of more than one conductor 339, interconnects the IN+ input 127 and the IN− input 129 to a device such as an FHR monitor 302. At least one end of the cable 330 terminates with more than one contact pin 365, a hollow rigid body 341, and a connector top 337. During manufacture, each conductor 339 is manually soldered to the contact pins 365 and secured within the hollow rigid body 341, and the connector top 337 is fastened to the hollow rigid body 341. Such fastening can be done by way of the screw hole 347 and standoff 363 using a screw or other suitable fastener. The contact pins 365 are secured within the hollow rigid body 341 in a manner that allows the contact pins 365 to extend outside the hollow rigid body 341 and interconnect with the device, such as FHR monitor 302. The capabilities of the present invention can be implemented in software, firmware, hardware, or some combination thereof. As one example, one or more aspects of the present invention can be included in an article of manufacture (e.g., one or more computer program products) having, for instance, computer-usable media. The media has embodied therein, for instance, computer-readable program code means for providing and facilitating the capabilities of the present invention. The article of manufacture can be included as a part of a computer system or sold separately.
Additionally, at least one program storage device readable by a machine, tangibly embodying at least one program of instructions executable by the machine to perform the capabilities of the present invention, can be provided. The flow diagrams depicted herein are just examples. There may be many variations to these diagrams or the steps (or operations) described therein without departing from the spirit of the invention. For instance, the steps may be performed in a differing order, or steps may be added, deleted, or modified. All of these variations are considered a part of the claimed invention. While the preferred embodiment of the invention has been described, it will be understood that those skilled in the art, both now and in the future, may make various improvements and enhancements which fall within the scope of the claims which follow. These claims should be construed to maintain the proper protection for the invention first described.
DETAILED DESCRIPTION The detailed description set forth below in connection with the appended drawings is intended as a description of various configurations and is not intended to represent the only configurations in which the concepts described herein may be practiced. The detailed description includes specific details for the purpose of providing a thorough understanding of various concepts. However, it will be apparent to those skilled in the art that these concepts may be practiced without these specific details. In some instances, well-known structures and components are shown in block diagram form in order to avoid obscuring such concepts. With reference now to the Figures, several exemplary aspects of the present disclosure are described. The word “exemplary” is used herein to mean “serving as an example, instance, or illustration.” Any aspect described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other aspects. The terms “computing device” and “mobile device” are used interchangeably herein to refer to any one or all of servers, personal computers, smartphones, cellular telephones, tablet computers, laptop computers, netbooks, ultrabooks, palm-top computers, personal data assistants (PDAs), wireless electronic mail receivers, multimedia Internet-enabled cellular telephones, Global Positioning System (GPS) receivers, wireless gaming controllers, and similar personal electronic devices which include a programmable processor. While the various aspects are particularly useful in mobile devices (e.g., smartphones, laptop computers, etc.), which have limited resources (e.g., processing power, battery, size, etc.), the aspects are generally useful in any computing device that may benefit from improved processor performance and reduced energy consumption.
The term “multicore processor” is used herein to refer to a single integrated circuit (IC) chip or chip package that contains two or more independent processing units or cores (e.g., CPU cores, etc.) configured to read and execute program instructions. The term “multiprocessor” is used herein to refer to a system or device that includes two or more processing units configured to read and execute program instructions. The term “system on chip” (SoC) is used herein to refer to a single integrated circuit (IC) chip that contains multiple resources and/or processors integrated on a single substrate. A single SoC may contain circuitry for digital, analog, mixed-signal, and radio-frequency functions. A single SoC may also include any number of general purpose and/or specialized processors (digital signal processors (DSPs), modem processors, video processors, etc.), memory blocks (e.g., read only memory (ROM), random access memory (RAM), flash, etc.), and resources (e.g., timers, voltage regulators, oscillators, etc.), any or all of which may be included in one or more cores. Memory technologies described herein may be suitable for storing instructions, programs, control signals, and/or data for use in or by a computer or other digital electronic device. Any references to terminology and/or technical details related to an individual type of memory, interface, standard, or memory technology are for illustrative purposes only, and not intended to limit the scope of the claims to a particular memory system or technology unless specifically recited in the claim language. 
Mobile computing device architectures have grown in complexity, and now commonly include multiple processor cores, SoCs, co-processors, functional modules including dedicated processors (e.g., communication modem chips, GPS receivers, etc.), complex memory systems, intricate electrical interconnections (e.g., buses and/or fabrics), and numerous other resources that execute complex and power-intensive software applications (e.g., video streaming applications, etc.). Process technology employed to manufacture semiconductor devices, including IC devices, is continually improving. Process technology includes the manufacturing methods used to make IC devices and defines transistor size, operating voltages, and switching speeds. Features that are constituent elements of circuits in an IC device may be referred to as technology nodes and/or process nodes. The terms technology node, process node, and process technology may be used to characterize a specific semiconductor manufacturing process and corresponding design rules. Faster and more power-efficient technology nodes are being continuously developed through the use of smaller feature sizes to produce smaller transistors that enable the manufacture of higher-density ICs. Certain aspects of this disclosure relate to circuits used in a high-speed serializer-deserializer (SerDes). Circuits are described that can be deployed in the analog front-end (AFE) of a receiver. In one example, some aspects of the disclosure relate to equalizer circuits, which may be constructed with a trans-admittance stage (TAS) and trans-impedance amplifier (TIA). In another example, some aspects of the disclosure relate to equalizer circuits, which may be constructed with a trans-conductance stage (TCS) and a TIA. Some aspects relate to a variable-gain amplifier (VGA) that can be embedded within an equalizer TIA.
Reductions in power consumption, improved consistency of frequency response across multiple gain settings, and higher data rates can be accomplished using a continuous time linear equalizer (CTLE) configured in accordance with certain aspects of this disclosure. For example, certain aspects of this disclosure relate to a CTLE with a TCS-TIA structure that provides high bandwidth and high linearity using both poly-resistor and P-channel metal-oxide-semiconductor (PMOS) resistors in a VGA feedback resistor array embedded in a TIA. FIG. 1 illustrates example components and interconnections in a system-on-chip (SoC) 100 that may be suitable for implementing certain aspects of the present disclosure. The SoC 100 may include a number of heterogeneous processors, such as a central processing unit (CPU) 102, a modem processor 104, a graphics processor 106, and an application processor 108. Each processor 102, 104, 106, 108 may include one or more cores, and each processor/core may perform operations independent of the other processors/cores. The processors 102, 104, 106, 108 may be organized in close proximity to one another (e.g., on a single substrate, die, integrated chip, etc.) so that the processors may operate at a much higher frequency/clock rate than would be possible if the signals were to travel off-chip. The proximity of the cores may also allow for the sharing of on-chip memory and resources (e.g., voltage rails), as well as for more coordinated cooperation between cores. The SoC 100 may include system components and resources 110 for managing sensor data, analog-to-digital conversions, and/or wireless data transmissions, and for performing other specialized operations (e.g., decoding high-definition video, video processing, etc.).
System components and resources 110 may also include components such as voltage regulators, oscillators, phase-locked loops (PLLs), peripheral bridges, data controllers, system controllers, access ports, timers, and/or other similar components used to support the processors and software clients running on the computing device. The system components and resources 110 may also include circuitry for interfacing with peripheral devices, such as cameras, electronic displays, wireless communication devices, external memory chips, etc. The SoC 100 may further include a Universal Serial Bus (USB) or other serial bus controller 112, one or more memory controllers 114, and a centralized resource manager (CRM) 116. The SoC 100 may also include an input/output module (not illustrated) for communicating with resources external to the SoC, each of which may be shared by two or more of the internal SoC components. The processors 102, 104, 106, 108 may be interconnected to the USB controller 112, the memory controller 114, system components and resources 110, CRM 116, and/or other system components via an interconnection/bus module 122, which may include an array of reconfigurable logic gates and/or implement a bus architecture. Communications may also be provided by advanced interconnects, such as high-performance networks on chip (NoCs). The interconnection/bus module 122 may include or provide a bus-mastering system configured to grant SoC components (e.g., processors, peripherals, etc.) exclusive control of the bus (e.g., to transfer data in burst mode, block transfer mode, etc.) for a set duration, number of operations, number of bytes, etc. In some cases, the interconnection/bus module 122 may implement an arbitration scheme to prevent multiple master components from attempting to drive the bus simultaneously. The memory controller 114 may be a specialized hardware module configured to manage the flow of data to and from a memory 124 via a memory interface/bus 126.
The memory controller 114 may comprise one or more processors configured to perform read and write operations with the memory 124. Examples of processors include microprocessors, microcontrollers, digital signal processors (DSPs), field programmable gate arrays (FPGAs), programmable logic devices (PLDs), state machines, gated logic, discrete hardware circuits, and other suitable hardware configured to perform the various functionality described throughout this disclosure. In certain aspects, the memory 124 may be part of the SoC 100. FIG. 2 illustrates an example of a data communication system 200 that may be adapted in accordance with certain aspects of the present disclosure. The data communication system 200 includes a transmitter 202, a data communication channel 210, and a receiver 222. The transmitter 202 may be provided in a first device that is configured to transmit a data signal to a second device. The data communication channel 210 provides a transmission medium through which the data signal propagates from the first device to the second device. The receiver 222 may be provided in the second device and may be configured to receive and process the data signal. In one example, the transmitter 202 includes a serializer 204 configured to convert parallel data into serial data. The transmitter 202 further includes a transmit driver 206 configured to generate a data signal based on the serial data for transmission to the receiver 222 through the data communication channel 210. The data communication channel 210 may be implemented using any type of transmission medium by which a data signal can propagate from the transmitter 202 to the receiver 222. Examples of the data communication channel 210 include one or more metallization traces (which may include one or more vias) on a printed circuit board (PCB), stripline, microstrip, coaxial cable, twisted pair, etc.
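The parallel-to-serial conversion performed by a serializer, and its reversal by a deserializer at the receiving end, can be sketched as a simple bit-packing round trip. The 8-bit word width and MSB-first bit ordering here are illustrative assumptions, not taken from the disclosure:

```python
# Minimal sketch of the serializer/deserializer idea: parallel words
# are flattened to a serial bit stream and recovered on the far end.

WORD_BITS = 8  # assumed parallel word width (illustrative)

def serialize(words):
    """Flatten parallel words into a serial bit list, MSB first."""
    bits = []
    for word in words:
        bits.extend((word >> shift) & 1
                    for shift in range(WORD_BITS - 1, -1, -1))
    return bits

def deserialize(bits):
    """Regroup a serial bit stream into parallel words."""
    words = []
    for i in range(0, len(bits), WORD_BITS):
        word = 0
        for bit in bits[i:i + WORD_BITS]:
            word = (word << 1) | bit
        words.append(word)
    return words

tx_words = [0x3A, 0xC5, 0xFF, 0x00]
recovered = deserialize(serialize(tx_words))
```

A real SerDes additionally embeds clocking information in the line code so the receiver's CDR can recover the bit boundaries; this sketch only shows the data reshaping.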
In the illustrated example, the receiver 222 includes a variable gain amplifier (VGA) with a CTLE (referred to in combination as the VGA/CTLE 224), a clock data recovery circuit (the CDR 226), and a deserializer 228. CTLE may refer to techniques for boosting the higher-frequency components of the signal at the receiver in order to bring all frequency components of the signal to a similar amplitude ratio before channel attenuation, improving jitter and eye-diagram performance. As disclosed herein, the VGA/CTLE 224 is configured to perform equalization and amplification of the received data signal. The CDR 226 is configured to recover a clock associated with the data signal and use the clock to recover the serial data from the data signal. The deserializer 228 is configured to convert the serial data back into parallel data. The data communication channel 210 typically has a frequency response H1(f) that is similar to a low-pass filter. For instance, the frequency response H1(f) has relatively low losses from direct current (DC) up to a particular cutoff frequency fc1; the losses then increase monotonically above the cutoff frequency fc1. The frequency response H1(f) of the data communication channel 210 limits the data rate at which data may be sent through the channel. For example, the cutoff frequency fc1 should be at least equal to the Nyquist rate of the data signal. If the Nyquist rate of the data signal is above the cutoff frequency fc1, the data signal exhibits distortion at the receiver 222, which may be characterized as the eye in a signal eye diagram closing or getting smaller, making it difficult for the CDR 226 to recover the clock and the data. The VGA/CTLE 224 may perform equalization and amplification to increase the high-frequency components of the data signal in order to increase the data rate at which the data signal may be sent through the data communication channel and reliably recovered at the receiver 222.
For example, the VGA/CTLE224may be configured to provide a frequency response H2(f) that is substantially flat from DC up to a frequency fzcorresponding to a Zero. Then, above the zero frequency fz, the frequency response H2(f) of the VGA/CTLE224increases up to a frequency fpcorresponding to a pole. Above the pole frequency fp, the frequency response H2(f) of the VGA/CTLE224decreases monotonically. In some examples, the VGA/CTLE224may have more than one pole and one zero. The VGA/CTLE224may be configured to have a frequency response H2(f) where the pole frequency fpsubstantially coincides with the cutoff frequency fc1of the frequency response H1(f) of the data communication channel210. As the data communication channel210is cascaded with the VGA/CTLE224, the frequency responses H1(f) and H2(f) of the data communication channel210and the VGA/CTLE224combine at the output of the VGA/CTLE224to form a composite frequency response H3(f). Thus, the high frequency boost at the pole frequency fpof the VGA/CTLE frequency response H2(f) compensates for the loss roll off at the cutoff frequency fc1of the channel frequency response H1(f) to generate the composite frequency response H3(f) having a cutoff frequency fc3much higher than the cutoff frequency fc1of the channel frequency response H1(f). Thus, through the use of the VGA/CTLE224, much higher data rates between the transmitter202and receiver222may be realized. FIG.3illustrates certain aspects of a data communication interface300that may be implemented in an SoC or in another IC device. The receiver302in the data communication interface300includes differential signal processing circuits, including an equalizer304and a variable gain amplifier306. The differential signal processing circuits can be configured to generate a differential output signal316by applying a frequency-dependent gain to a differential input signal312, which is received from a differential communication channel310in the illustrated example. 
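The compensation just described, in which the high-frequency boost of the VGA/CTLE offsets the channel's roll-off so that the composite response H3(f) has a cutoff fc3 well above fc1, can be sketched numerically. The model below is a deliberate simplification (a one-pole channel and a one-zero/one-pole CTLE, with the zero placed at the channel cutoff so the composite cutoff moves out to the CTLE pole); every function name and frequency value is an illustrative assumption, not a figure from the disclosure:

```python
import numpy as np

def channel_response(f, fc1):
    """One-pole low-pass model of the channel frequency response H1(f)."""
    return 1.0 / (1.0 + 1j * f / fc1)

def ctle_response(f, fz, fp):
    """One-zero/one-pole CTLE model H2(f): flat below fz, boosting up to fp."""
    return (1.0 + 1j * f / fz) / (1.0 + 1j * f / fp)

fc1 = 2e9           # assumed channel cutoff (2 GHz)
fz, fp = 2e9, 10e9  # zero cancels the channel pole; the pole sets the new cutoff
f = np.logspace(8, 11, 400)  # 100 MHz .. 100 GHz sweep

h1 = channel_response(f, fc1)
h3 = h1 * ctle_response(f, fz, fp)  # composite response H3(f) = H1(f) * H2(f)

def cutoff(freqs, h):
    """First frequency where |H| drops 3 dB below its low-frequency value."""
    mag_db = 20.0 * np.log10(np.abs(h))
    return freqs[np.argmax(mag_db < mag_db[0] - 3.0)]

print(f"channel cutoff   ~{cutoff(f, h1):.2e} Hz")  # near fc1 = 2 GHz
print(f"composite cutoff ~{cutoff(f, h3):.2e} Hz")  # near fp = 10 GHz
```

In this toy model the composite −3 dB point moves from the channel's 2 GHz out to the CTLE pole at 10 GHz, mirroring how the pole/zero placement discussed above extends the data rate at which the signal can be reliably recovered.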
The differential output signal316may be provided to sampler circuits308configured to extract data and other information transmitted over the communication channel310. In one example, the differential input signal312is applied to gate inputs or other control inputs of a pair of input transistors in the equalizer304and the output of the equalizer304is provided to the VGA306. The gain of the VGA306is configurable through a gain control input314. In one example, the gain control input314may include a 4-bit binary value that selects a gain setting from among 16 possible settings. In the receiver302, the VGA306cooperates with the equalizer304to equalize and amplify a small differential input signal312to a level that can be processed by a next stage. Ideally, the frequency response320of the equalizer304and the frequency response322of the VGA306produce an ideal combined frequency response324for each gain setting of the VGA306. In the ideal situation the responses are substantially parallel for multiple gain settings in the combined frequency response324. Parallel responses are indicative of consistent frequency response regardless of gain setting. A consistent equalization frequency response is typically desired regardless of the gain configured for the VGA306. For example, the same equalization frequency response is typically desired for low amplitude signals and high amplitude signals, including when different gain settings are configured for the two signals. In conventional systems, maintaining parallel responses for the different VGA gain settings can be very challenging. In many conventional systems, changes in VGA gain can affect equalizer pole/zero locations at high data rates. An observed combined frequency response326illustrates a loss of consistency between the different VGA gain settings that is indicated by a loss of parallelism at higher frequencies. 
In some instances, changes in the VGA306can affect the location of a parasitic-related Zero in the frequency response326, in a manner referred to herein as “Zero pull-in”328. FIG.4illustrates the equalization and amplification stages in a conventional receiver. Two-stage equalization is performed using a low-frequency stage (LF Equalizer400) and a high-frequency stage (HF Equalizer420). A VGA440includes a source-degenerated resistive structure, which is a commonly used VGA structure. The VGA440includes a pair of input transistors442a,442b(the gm pair) and corresponding tail circuits446a,446b. In some instances, the input transistors442a,442bmay be formed as n-type metal-oxide-semiconductor field effect transistors (NMOS FETs). The VGA440further includes a source degeneration resistor450(generally a resistive device) between the sources of the input transistors442a,442b. The value of the source degeneration resistor450may be controlled by a gain controller circuit452. In the illustrated example, the gain controller circuit452provides a 16-bit word that is used to select the resistance value of the source degeneration resistor450and thereby exercises control over the gain of the VGA440. In various examples, the 16-bit word may be encoded using binary or unary encoding. Unary encoding, which may be referred to as thermometer encoding, represents data in the quantity of bits set to ‘1’ that precede a terminating ‘0’, or the quantity of bits set to ‘0’ that precede a terminating ‘1’. Parasitic capacitance is represented by parasitic capacitors448a,448b(Cp) coupled to the sources of the input transistors442a,442b, respectively. The differential pair transconductance Gm may be stated as: Gm=gm/(1+gm×Rs), where gm represents the transconductance gain of the input transistors442a,442band Rs represents the resistance of the source degeneration resistor450. Absent the effect of the parasitic capacitors448a,448b, VGA gain can be linearly tuned by changing Rs. 
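Two details from the preceding paragraph lend themselves to a short sketch: the unary (thermometer) encoding described for the gain-control word, and the source-degenerated transconductance Gm=gm/(1+gm×Rs), which shows why tuning Rs tunes the gain. The function names and numeric values below are illustrative assumptions, not part of the disclosure:

```python
def thermometer_encode(setting: int, width: int = 16) -> str:
    """Unary/thermometer code: a run of ones followed by a run of zeros."""
    assert 0 <= setting <= width
    return "1" * setting + "0" * (width - setting)

def thermometer_decode(code: str) -> int:
    """The encoded value is simply the number of bits set to '1'."""
    return code.count("1")

def degenerated_gm(gm: float, rs: float) -> float:
    """Source-degenerated differential-pair transconductance,
    Gm = gm / (1 + gm*Rs), ignoring the parasitic capacitance Cp."""
    return gm / (1.0 + gm * rs)

print(thermometer_encode(3, width=8))   # 11100000
print(thermometer_decode("11100000"))   # 3
# Increasing Rs reduces Gm (and hence gain) monotonically:
print(degenerated_gm(0.02, 100) > degenerated_gm(0.02, 400))  # True
```

A practical property of thermometer encoding, relevant to the monotonicity discussion later in the disclosure, is that stepping the setting by one flips exactly one bit, so the decoded value can never jump non-monotonically due to a single-bit glitch.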
However, the capacitive contribution (Cs) at the sources of the input transistors442a,442bcreates a Zero (Rs*Cs). When Rs is small, this Zero is not an issue since it is close to the output Pole and is suppressed. When Rs increases, VGA high-frequency gain is boosted at the Zero frequency due to Zero pull-in. The LF Equalizer400, HF Equalizer420and VGA440illustrated inFIG.4have a trans-impedance amplifier (TIA)402and a TAS404. This TAS-TIA structure is widely used in equalizers and variable gain amplifiers where high data rates and/or long channels are used. For example, the TAS-TIA structure may be used in a PCIe interface that operates at 16 Gb/s, with a loss of −30 dB at 8 GHz, or in a USB interface that operates at 20 Gb/s with a loss of −23 dB at 10 GHz. Multiple equalizer stages and VGA stages may be needed to counteract such large channel attenuations. The increased number of serial stages tends to induce greater direct current (DC) offset, bandwidth roll-off, and power consumption. The presence of the RC Zero corresponding to the parasitic capacitors448a,448band source degeneration resistor450can cause Zero pull-in, resulting in VGA gain curves that are not parallel.FIG.5includes a simulated characteristic500for the equalization and amplification stages illustrated inFIG.4. The characteristic500assumes fixed equalizer gain while the VGA gain is tuned from 0 to 16. The gain curves for the 16 gain settings are not parallel with respect to one another, and Zero pull-in is exhibited.FIG.6includes a first graph600that shows gain differential between 10 GHz and 100 MHz at each gain setting for the equalization and amplification stages illustrated inFIG.4. For example, at gain setting 12.0, the difference between gain obtained at 10 GHz and gain obtained at 100 MHz is approximately 20 dB. Gain differences over the 16 gain settings vary by approximately 6 dB. 
A second graph620inFIG.6shows gain differential between 8 GHz and 80 MHz at each gain setting for the equalization and amplification stages illustrated inFIG.4. For example, at gain setting 4.0, the difference between gain obtained at 8 GHz and gain obtained at 80 MHz is approximately 19 dB. Gain differences over the 16 gain settings vary by approximately 5 dB. Certain aspects of this disclosure provide a receiver circuit that can reduce the number of equalization and gain stages, limit the effect of parasitic capacitance on the output characteristic, and distribute gain across multiple equalizer stages. FIG.7illustrates an equalizer arrangement that includes a low-frequency stage (LF Equalizer700) and a high-frequency stage (HF Equalizer720), each of which includes VGA circuits702,704,722,724in accordance with certain aspects of this disclosure. The configuration of the LF Equalizer700and the HF Equalizer720is illustrated in the schematic drawing740. Explicit VGA stages are removed from the configuration and replaced with VGA circuits702,704,722,724that are embedded within the TIAs of the LF Equalizer700and the HF Equalizer720. In the illustrated example, variable gain is obtained by controlling the resistance values of the feedback resistors706aand706bin the LF Equalizer700and the feedback resistors726aand726bin the HF Equalizer720. The resistance values of the feedback resistors706aand706bin the LF Equalizer700are controlled using the even bits708of a gain control word744. The feedback resistors706aand706bmay be configured to have identical values, which may vary with gain setting defined by the gain control word744. The resistance values of the feedback resistors726aand726bin the HF Equalizer720are controlled using the odd bits728of the gain control word744. The feedback resistors726aand726bmay be configured to have identical values for each gain setting defined by the gain control word744. 
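The splitting of the gain control word744into even bits for the LF Equalizer700and odd bits for the HF Equalizer720can be sketched as follows. The bit-ordering convention (bit 0 as the least significant bit) and the function name are assumptions made for illustration:

```python
def interdigitate(gain_word: int, width: int = 16):
    """Split a gain-control word into even-indexed bits (for the LF stage)
    and odd-indexed bits (for the HF stage); bit 0 is the LSB."""
    even = odd = 0
    for i in range(width):
        bit = (gain_word >> i) & 1
        if i % 2 == 0:
            even |= bit << (i // 2)  # even bits pack into the LF code
        else:
            odd |= bit << (i // 2)   # odd bits pack into the HF code
    return even, odd

lf_code, hf_code = interdigitate(0b1010101010101010)
print(lf_code, hf_code)  # 0 255: even bits are all 0, odd bits are all 1
```

Interdigitating the bits this way means that stepping the overall gain word alternates small adjustments between the two stages, which is consistent with the smoothly varying, parallel gain curves described below.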
The provision of variable gain in the TIAs of the LF Equalizer700and the HF Equalizer720can implement the VGA function of the receiver with minimal or no additional power consumption. In one aspect of the disclosure, the feedback resistors706a,706b,726aand726bcan be implemented using circuits750that provide parallel resistor switching by small-size complementary transmission gates, such as transmission gates752a,752b, and752c. Each of the feedback resistors706a,706b,726aand726bhas a tunable resistance value (RFB). VGA functionality is realized in the LF Equalizer700by tuning the corresponding feedback resistors706a,706b. VGA functionality is realized in the HF Equalizer720by tuning the corresponding feedback resistors726a,726b. In the illustrated example, a 16-bit gain control word744is provided to define a total gain to be applied in the receiver. In accordance with certain aspects of this disclosure, the bits of the gain control word744are inter-digitally distributed among equalizer stages. For example, even bits of the gain control word744may be used to control gain in the LF Equalizer700, and odd bits of the gain control word744may be used to control gain in the HF Equalizer720. In some instances, the 16-bit gain control word744is provided by a gain controller742and may be dynamically configured through feedback or calibration. The distribution of gain between the LF Equalizer700and the HF Equalizer720can produce gain curves that vary smoothly and maintain parallelism. In one aspect of the disclosure, Zero pull-in can be minimized or eliminated when the VGA functionality is embedded in the TAS-TIA structures of the LF Equalizer700and the HF Equalizer720. Frequency equalization is obtained through variable TAS-TIA equalizer gain that is provided through the source degeneration circuits710and730of the LF Equalizer700and the HF Equalizer720, respectively. 
The source degeneration circuits710and730each include a source degeneration resistor RS(generally a resistive device) coupled in parallel with a source degeneration capacitor CS(generally a capacitive device) between the sources of the input transistors712a,712bor732a,732b. The equalizer gain (GEQ) may be calculated as GEQ=Gm×Zout, where Gm represents the source-degenerated transconductance and Zoutrepresents output impedance. Output impedance is proportional to TIA feedback resistance, RFB. Tuning RFBmay linearly change the equalizer/VGA combination gain curves. Output parasitic capacitance is included in the output pole, and does not introduce any obvious Zeros. Post-layout simulation has shown gain curves parallel to each other over a large frequency span. FIG.8includes a simulated characteristic800for the equalization stages illustrated inFIG.7. The characteristic800is based on a VGA gain that is tuned from 0 to 16. The gain curves for the 16 gain settings are substantially parallel with respect to one another. Zero pull-in is not exhibited. FIG.9includes a first graph900that shows gain differential between 10 GHz and 100 MHz at each gain setting for the combined equalization stages illustrated inFIG.7. For example, at gain setting 14.0, the difference between gain obtained at 10 GHz and gain obtained at 100 MHz is approximately 19.2 dB. Gain differences over the 16 gain settings vary by approximately 1 dB. A second graph920inFIG.9shows gain differential between 8 GHz and 80 MHz at each gain setting for the combined equalization stages illustrated inFIG.7. For example, at gain setting 10.0, the difference between gain obtained at 8 GHz and gain obtained at 80 MHz is approximately 15.8 dB. Gain differences over the 16 gain settings vary by approximately 1 dB. Certain aspects of this disclosure may be applicable to a CTLE that is formed by cascading a transconductance stage (TCS or Gm) and a TIA. 
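The relation GEQ=Gm×Zout with Zout proportional to RFB explains the parallel gain curves noted above: scaling RFB multiplies the gain at every frequency by the same factor, which shifts the whole curve by a fixed number of dB. A minimal numeric check, in which Zout is modeled as simply equal to RFB and all values are illustrative assumptions:

```python
import math

def equalizer_gain_db(gm_s: float, rfb_ohms: float) -> float:
    """GEQ = Gm * Zout expressed in dB, with Zout modeled as RFB
    (the proportionality constant is folded in). Illustrative model only."""
    return 20.0 * math.log10(gm_s * rfb_ohms)

# Doubling RFB adds 20*log10(2) ~ 6.02 dB independent of the Gm value,
# so gain curves for different RFB settings remain parallel in dB.
step_db = equalizer_gain_db(0.02, 1000.0) - equalizer_gain_db(0.02, 500.0)
print(round(step_db, 2))  # 6.02
```

Because the dB offset between two RFB settings does not depend on Gm (and Gm carries the frequency dependence in this model), the curves cannot cross or fan out, which is the parallelism seen in the simulated characteristic800.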
CTLEs that include the Gm-TIA structure are widely used in ultra-high speed serial link design to increase the bandwidth. In some examples, high-speed amplifiers are implemented using a current-mode-logic (CML) structure that operates as a TCS or Gm amplifier.FIG.10illustrates an example of a VGA1000with embedded equalization that may be configured or adapted for use in accordance with certain aspects of the present disclosure. The VGA1000may also be characterized as a CTLE with embedded gain control. The VGA1000includes load resistors1002, a pair of input transistors1004(the gm pair) and corresponding tail circuits1006. A first load resistor RD1, a first input transistor M1and a first tail current source IT1may be coupled in series between an upper voltage rail Vdd and a lower voltage rail Vss. A second load resistor RD2, a second input transistor M2and a second tail current source IT2are coupled in series between the upper voltage rail Vdd and the lower voltage rail Vss. The load resistors1002may be implemented as resistive devices. The pair of input transistors1004may be formed as N-channel metal-oxide-semiconductor field effect transistors (NMOS FETs). The VGA1000further includes load capacitors Cp1012a,1012bcoupled between the drains of the input transistors M1and M2and the lower voltage rail Vss, respectively. The load capacitors Cp1012a,1012bmay represent parasitic capacitance and/or capacitive devices. The VGA1000further includes a source degeneration circuit1008that provides frequency equalization. The source degeneration circuit1008includes a source degeneration resistor RS(generally a resistive device) coupled in parallel with a source degeneration capacitor CS(generally a capacitive device) between the sources of the input transistors M1and M2. The general transfer function of the VGA1000may be stated as: H(s)=(gm/Cp)×(s+1/(RS×CS))/[(s+(1+gm×RS/2)/(RS×CS))×(s+1/(RD×Cp))]. 
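From this transfer function, the zero falls at 1/(2π×RS×CS), the equalizing pole at (1+gm×RS/2)/(2π×RS×CS), and the output pole at 1/(2π×RD×Cp), so the available high-frequency boost is set by the factor 1+gm×RS/2. A sketch with purely illustrative component values (none of these numbers come from the disclosure):

```python
import math

def ctle_break_frequencies(gm, rs, cs, rd, cp):
    """Zero/pole locations (Hz) of H(s) = (gm/Cp)(s + 1/(RS*CS)) /
    [(s + (1 + gm*RS/2)/(RS*CS)) (s + 1/(RD*Cp))]."""
    f_zero = 1.0 / (2.0 * math.pi * rs * cs)
    f_pole1 = (1.0 + gm * rs / 2.0) / (2.0 * math.pi * rs * cs)
    f_pole2 = 1.0 / (2.0 * math.pi * rd * cp)
    return f_zero, f_pole1, f_pole2

# illustrative values: gm = 20 mS, RS = 500 ohm, CS = 200 fF, RD = 300 ohm, Cp = 50 fF
fz, fp1, fp2 = ctle_break_frequencies(gm=0.02, rs=500.0, cs=200e-15, rd=300.0, cp=50e-15)
boost_db = 20.0 * math.log10(fp1 / fz)  # peaking ratio fixed by 1 + gm*RS/2 = 6
print(round(boost_db, 1))  # 15.6
```

Note that fp1/fz equals 1+gm×RS/2 exactly, so the peaking (here about 15.6 dB) is controlled by the source degeneration, while RD×Cp sets the output pole that eventually rolls the response off.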
In the illustrated example, the VGA1000includes a gain controller1010configured to generate a gain control signal (GCS) for controlling the amount of bias current Ibiasthat the tail current sources IT1and IT2sink, respectively. The gain of the VGA1000may be directly related (in the same direction) to the bias current Ibias. Thus, the gain controller1010can be configured to control the gain of the VGA1000by controlling the bias current Ibiasof the tail current sources IT1and IT2via the gain control signal (GCS). Conventional CTLEs that include the Gm-TIA structure may suffer from non-linearity and may be unable to support higher bandwidths needed by increasingly complex devices that drive a demand for increased data rates. The VGA in certain conventional CTLEs is implemented using a feedback circuit in which a TIA feedback resistor is implemented using a feedback resistor array controlled by a binary or thermometer coded word. A feedback resistor array controlled by a binary coded word can help reduce parasitic resistance in comparison to a feedback resistor array controlled by a thermometer coded word. A feedback resistor array controlled by a binary coded word may be more susceptible to glitches and monotonicity issues, including when a CTLE adaptation algorithm is applied to configure the feedback resistor array. A feedback resistor array controlled by a thermometer coded word can improve monotonicity and linearity but may have a limited bandwidth. Bandwidth may be significantly reduced with respect to a CTLE that uses a feedback resistor array controlled by a binary coded word when the feedback resistor array is affected by large parasitic resistance and capacitance and is located on a critical data path. Some conventional CTLEs are provided with a cascaded additional inductor, in series with the feedback resistor array, to boost the bandwidth. The addition of an inductor can significantly increase the area consumed in an SoC or another IC device. 
Certain aspects of this disclosure relate to a CTLE with a Gm-TIA structure that provides high bandwidth and high linearity. In one aspect, a VGA embedded in a TIA includes resistors fabricated using a thin film of polysilicon and resistors that utilize the channel resistance of PMOS transistors in the TIA feedback resistor array. The resistors fabricated using a thin film of polysilicon may be referred to as poly-resistors herein and the resistors that utilize the channel resistance of PMOS transistors may be referred to as PMOS resistors herein. The poly-resistors in the TIA feedback resistor array may be controlled by binary code, while the PMOS resistor is controlled by a PMOS gate voltage generated from a thermometer coded replica circuit. The thermometer coded PMOS resistor helps to prevent non-monotonicity during gain adaptation, as well as undesired glitches due to switching on and off. The binary coded poly-resistor can reduce the parasitic load on critical data paths. The binary coded poly-resistor can promote or enable an increase in TIA linearity when the total resistance of the resultant feedback resistor array is dominated by poly-resistors. FIG.11illustrates a CTLE that is configured in accordance with certain aspects of this disclosure. The CTLE includes a Gmstage1100and a TIA1120. The Gmstage1100includes a load1102, a pair of input transistors1104(the gm pair) and corresponding tail circuits1106. The Gmstage1100further includes a source degeneration circuit1108that provides frequency equalization. The source degeneration circuit1108includes a source degeneration resistor RS(generally a resistive device) coupled in parallel with a source degeneration capacitor CS(generally a capacitive device) between the sources of the input transistors M1and M2. The source degeneration circuit1108controls equalization strength. The TIA1120includes embedded VGA circuits1122,1124. 
In the illustrated example, variable gain is obtained by controlling the resistance provided by a combination of feedback resistor arrays1126a,1126band corresponding PMOS resistors1128a,1128b. Each feedback resistor array1126a,1126bincludes two or more poly-resistors and the resistance values of the feedback resistor arrays1126a,1126bmay be controlled by paralleling one or more of the poly-resistors based on the value of a binary-encoded poly-resistor gain-control word1130. The use of binary encoding to provide the poly-resistor gain-control word1130can reduce the effect of parasitic load. The feedback resistor arrays1126a,1126bmay be configured to have identical resistance values for each gain setting defined by the poly-resistor gain-control word1130. The resistance values of the PMOS resistors1128aand1128bmay be controlled using a gate control signal1132(Vb_gain) with a voltage that is generated based on the value of a thermometer-encoded gain-control word. The gate control signal1132is generated by a calibration circuit1140that may replicate certain aspects of the TIA circuits that include PMOS resistors. A thermometer code1144configures the resistance value of the poly-resistor1142(Rrep) and, through the action of a feedback circuit, the PMOS transistor1146(Mp_rep) mimics the resistance of the poly-resistor1142. The gate voltage of Mp_rep is replicated in the gate control signal1132provided to the TIA1120. The feedback circuit may include a differential voltage comparator1148that compares the magnitudes of voltages1150and1152produced under the effects of the PMOS transistor1146and poly-resistor1142, respectively. The output of the differential voltage comparator1148tends to pull the voltage1150produced under the effect of the PMOS transistor1146toward the voltage1152produced under the effect of the poly-resistor1142. The feedback resistance in the TIA1120controls the overall gain of the CTLE. 
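The replica calibration described above can be modeled very roughly in software: a comparator-style loop nudges the PMOS gate voltage until the modeled channel resistance settles around the replica poly-resistor value, and the TIA feedback resistance is then the parallel combination of the poly-resistor array and the PMOS resistor. The triode-region resistance model and all numeric values below are assumptions for illustration, not the disclosed circuit:

```python
def pmos_on_resistance(vg, vdd=1.0, vth=0.3, k=0.01):
    """Crude triode-region model of PMOS on-resistance: R = 1/(k*Vov),
    with the source tied to Vdd. Illustrative only."""
    vov = (vdd - vg) - vth  # gate overdrive; raising vg raises the resistance
    return float("inf") if vov <= 0 else 1.0 / (k * vov)

def calibrate_vb_gain(r_replica, steps=2000, dv=0.001):
    """Comparator-style feedback loop: nudge the gate voltage until the
    PMOS resistance oscillates tightly around the replica value."""
    vg = 0.0
    for _ in range(steps):
        vg += -dv if pmos_on_resistance(vg) > r_replica else dv
    return vg

def parallel(r1, r2):
    """Parallel combination of two resistances."""
    return r1 * r2 / (r1 + r2)

vb_gain = calibrate_vb_gain(r_replica=2000.0)  # the replicated gate voltage
r_pmos = pmos_on_resistance(vb_gain)           # settles near 2 kOhm
r_fb = parallel(500.0, r_pmos)                 # 500-ohm poly array dominates
```

With the poly-resistor array at 500 ohms and the PMOS "ON" resistance near 2 kOhm, the combined feedback resistance (~400 ohms) is dominated by the poly side, which is the linearity condition discussed in the next paragraph.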
According to one aspect, the feedback resistance in the TIA1120is the result of paralleling one or more poly-resistors in the corresponding feedback resistor array1126a,1126band the corresponding PMOS resistor1128aor1128b. According to one aspect, high linearity of the CTLE may be maintained when the resistance provided by a combination of feedback resistor arrays1126a,1126band corresponding PMOS resistors1128a,1128bis dominated by the poly-resistors in the feedback resistor arrays1126a,1126b. In one example, a larger value may be configured for the “ON” resistance of the PMOS resistor1128aor1128bthan for the feedback resistor arrays1126a,1126b. FIG.12is a flow diagram illustrating an example of a method1200for equalizing a signal received from a communication channel in accordance with certain aspects of the present disclosure. The receiver may include the equalizers illustrated inFIGS.7and11. At block1202, the signal is provided to a first stage of a first equalizer circuit, the first stage of the first equalizer circuit having a source degeneration circuit configured to apply a first equalizing gain to the signal. At block1204, an output of the first stage of the first equalizer circuit is coupled to a TIA in the first equalizer circuit. At block1206, resistance values of feedback resistors in the TIA are configured to select a gain to be applied to the output of the first stage of the first equalizer circuit, each feedback resistor being coupled between an input of the TIA and an output of the TIA. In certain examples, an output of the first equalizer circuit may be coupled to a first stage of a second equalizer circuit. The first stage of the second equalizer circuit may have a source degeneration circuit configured to apply a second equalizing gain to the output of the first equalizer circuit. An output of the first stage of the second equalizer circuit may be coupled to a TIA in the second equalizer circuit. 
Resistance values of feedback resistors in the TIA in the second equalizer circuit may be configured to select a gain to be applied to the output of the first stage of the second equalizer circuit. Each feedback resistor may be coupled between an input and an output of the TIA in the second equalizer circuit. A gain configured for the receiving circuit may be implemented or provided as a combination of a first gain provided by the first equalizer circuit and a second gain provided by the second equalizer circuit. In one example, the resistance values of the feedback resistors in the TIA in the first equalizer circuit may be configured using even bits in a binary control signal and the resistance values of the feedback resistors in the TIA in the second equalizer circuit may be configured using odd bits in the binary control signal. In one example, the resistance values of the feedback resistors in each TIA may be selected based on values of a number of bits in a multi-digit word. The source degeneration circuit in the first equalizer circuit may be configured to equalize a first band of frequencies and the source degeneration circuit in the second equalizer circuit may be configured to equalize a second band of frequencies different from the first band of frequencies. In certain examples, a TIA comprises at least one feedback poly-resistor coupled in parallel with a PMOS resistor. A feedback circuit in a calibration TIA may be used to match a resistance of the at least one feedback poly-resistor to a channel resistance of a PMOS transistor. A gate control signal applied to the PMOS transistor may be coupled to a gate of the PMOS resistor. The gate control signal may produce a comparable or identical channel resistance in the PMOS resistor. The operational steps described in any of the exemplary aspects herein are described to provide a subset of examples of possible implementations. 
The operations described may be performed in numerous different sequences other than the illustrated sequences. Furthermore, operations described in a single operational step may actually be performed in a number of different steps. Additionally, one or more operational steps discussed in the exemplary aspects may be combined. It is to be understood that the operational steps illustrated in the flow diagrams may be subject to numerous different modifications as will be readily apparent to one of skill in the art. Those of skill in the art will also understand that information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof. The various operations of methods described above may be performed by any suitable means capable of performing the corresponding functions. The means may include various hardware and/or software component(s) and/or module(s), including, but not limited to a circuit, an application-specific integrated circuit (ASIC), or processor. Generally, where there are operations illustrated in figures, those operations may have corresponding counterpart means-plus-function components with similar numbering. In certain aspects, means for equalizing a signal received from a communication channel may include the LF Equalizer700and HF Equalizer720illustrated inFIG.7. Furthermore, means for applying a gain to the signal received from the communication channel may include the VGA circuits702,704,722,724that are embedded within the TIAs of the LF Equalizer700and the HF Equalizer720. 
The means for selecting a gain to be applied to the signal received from the communication channel may include the feedback resistors706a,706b,726a,726bin the TIAs of the LF Equalizer700and the HF Equalizer720, when the feedback resistors706a,706b,726a,726bhave configurable resistance values. In one example, a receiving circuit provided in accordance with certain aspects of this disclosure has a first equalizer circuit. The first equalizer circuit may include a first stage and a TIA. The first stage may have a source degeneration circuit that includes a source degeneration resistor coupled in parallel with a source degeneration capacitor. The TIA may include feedback resistors. Each feedback resistor may be coupled between an input and an output of the TIA. The receiving circuit may further have a second equalizer circuit coupled in series with the first equalizer circuit. The second equalizer circuit may have a first stage and a TIA. The first stage may have a source degeneration circuit that includes a source degeneration resistor coupled in parallel with a source degeneration capacitor. The TIA may include feedback resistors. Each feedback resistor may be coupled between an input and an output of the TIA in the second equalizer circuit. In some examples, a gain configured for the receiving circuit is provided as a combination of a first gain provided by the first equalizer circuit and a second gain provided by the second equalizer circuit. In some instances, the gain configured for the receiving circuit may be expressed in a binary control input to the receiving circuit. The first gain may be configured based on even bits in the binary control signal and the second gain may be configured based on odd bits in the binary control signal. In some instances, the gain configured for the receiving circuit is expressed in a multi-digit word, and the feedback resistors in each of the TIAs are selected based on values of a number of bits in the multi-digit word. 
In certain examples, the source degeneration circuit in the first equalizer circuit may be configured to equalize lower-frequency attenuation than the source degeneration circuit in the second equalizer circuit. In various examples, the TIA includes at least one feedback poly-resistor coupled in parallel with a PMOS resistor. The receiving circuit may include a calibration TIA. The calibration TIA may have a first input coupled to a first output through one or more configurable poly-resistors, and a second input coupled to a second output through a PMOS transistor. In some implementations the calibration TIA has a feedback circuit configured to control a voltage applied to a gate of the PMOS transistor such that channel resistance of the PMOS transistor matches a resistance provided by the one or more configurable poly-resistors. The resistance provided by the one or more configurable poly-resistors may be configured based on content of a multi-digit control word provided to the feedback circuit. In some implementations the calibration TIA has a feedback circuit that includes a voltage comparator having one input coupled to a source of the PMOS transistor and a second input coupled to a node that has a voltage level controlled by the one or more configurable poly-resistors. An output of the voltage comparator may be coupled to a gate of the PMOS transistor. The voltage applied to a gate of the PMOS transistor may be provided to a gate of the PMOS resistors in the TIAs to select the channel resistance of the PMOS resistors. Some implementation examples are described in the following numbered clauses:1. A receiving circuit, comprising: a first equalizer circuit including: a first stage having a source degeneration circuit that comprises a source degeneration resistor coupled in parallel with a source degeneration capacitor; and a trans-impedance amplifier (TIA) comprising feedback resistors, each feedback resistor being coupled between an input and an output of the TIA.2. 
The receiving circuit as described in clause 1, further comprising: a second equalizer circuit coupled in series with the first equalizer circuit including: a first stage having a source degeneration circuit that comprises a source degeneration resistor coupled in parallel with a source degeneration capacitor; and a TIA comprising feedback resistors, each feedback resistor being coupled between an input and an output of the TIA in the second equalizer circuit.3. The receiving circuit as described in clause 2, wherein a gain configured for the receiving circuit is provided as a combination of a first gain provided by the first equalizer circuit and a second gain provided by the second equalizer circuit.4. The receiving circuit as described in clause 3, wherein the gain configured for the receiving circuit is expressed in a binary control input to the receiving circuit, wherein the first gain is configured based on even bits in the binary control signal and wherein the second gain is configured based on odd bits in the binary control signal.5. The receiving circuit as described in clause 3 or clause 4, wherein the gain configured for the receiving circuit is expressed in a multi-digit word, and wherein the feedback resistors in each of the TIAs are selected based on values of a number of bits in the multi-digit word.6. The receiving circuit as described in any of clauses 2-5, wherein the source degeneration circuit in the first equalizer circuit is configured to equalize lower-frequency attenuation than the source degeneration circuit in the second equalizer circuit.7. The receiving circuit as described in any of clauses 1-6, wherein the TIA comprises at least one feedback poly-resistor coupled in parallel with a P-channel metal-oxide-semiconductor (PMOS) resistor.8. 
The receiving circuit as described in clause 7, further comprising: a calibration TIA including: a first input coupled to a first output through one or more configurable poly-resistors; and a second input coupled to a second output through a PMOS transistor.
9. The receiving circuit as described in clause 8, wherein the calibration TIA further comprises: a feedback circuit configured to control a voltage applied to a gate of the PMOS transistor such that channel resistance of the PMOS transistor matches a resistance provided by the one or more configurable poly-resistors, wherein the resistance provided by the one or more configurable poly-resistors is configured based on content of a multi-digit control word provided to the feedback circuit.
10. The receiving circuit as described in clause 8 or clause 9, wherein the calibration TIA further comprises: a feedback circuit that includes a voltage comparator having one input coupled to a source of the PMOS transistor and a second input coupled to a node that has a voltage level controlled by the one or more configurable poly-resistors, wherein an output of the voltage comparator is coupled to a gate of the PMOS transistor.
11. The receiving circuit as described in any of clauses 8-10, wherein the voltage applied to a gate of the PMOS transistor is provided to a gate of the PMOS resistors in each TIA.
12.
An apparatus, comprising: means for equalizing a signal received from a communication channel, including a first equalizer circuit that includes a first stage having a source degeneration circuit configured to apply a first equalizing gain to the signal; means for applying a gain to the signal received from the communication channel, including a trans-impedance amplifier (TIA) in the first equalizer circuit; and means for selecting a gain to be applied to the signal received from the communication channel, including feedback resistors in the TIA that have configurable resistance values, each feedback resistor being coupled between an input of the TIA and an output of the TIA.
13. The apparatus as described in clause 12, wherein the means for equalizing the signal received from the communication channel further includes a second equalizer circuit having a first stage that includes a source degeneration circuit configured to apply a second equalizing gain to the signal received from the communication channel, and wherein the means for applying the gain to the signal received from the communication channel comprises a TIA in the second equalizer circuit.
14. The apparatus as described in clause 13, wherein the gain to be applied to the signal received from the communication channel includes a combination of a first gain provided by the first equalizer circuit and a second gain provided by the second equalizer circuit.
15. The apparatus as described in clause 14, wherein the means for selecting a gain to be applied to the signal received from the communication channel is configured to: configure the resistance values of the feedback resistors in the TIA in the first equalizer circuit using even bits in a binary control signal; and configure resistance values of feedback resistors in the TIA in the second equalizer circuit using odd bits in the binary control signal.
16.
The apparatus as described in any of clauses 12-15, wherein the TIA comprises at least one feedback poly-resistor coupled in parallel with a P-channel metal-oxide-semiconductor (PMOS) resistor.
17. The apparatus as described in clause 16, further comprising: a feedback circuit in a calibration TIA operable to match a resistance of the at least one feedback poly-resistor to a channel resistance of a PMOS transistor.
18. The apparatus as described in clause 16 or clause 17, wherein a gate control signal applied to the PMOS transistor is coupled to a gate of the PMOS resistor.
19. A method for equalizing a signal received from a communication channel, comprising: providing the signal to a first stage of a first equalizer circuit, the first stage of the first equalizer circuit having a source degeneration circuit configured to apply a first equalizing gain to the signal; coupling an output of the first stage of the first equalizer circuit to a trans-impedance amplifier (TIA) in the first equalizer circuit; and configuring resistance values of feedback resistors in the TIA to select a gain to be applied to the output of the first stage of the first equalizer circuit, each feedback resistor being coupled between an input of the TIA and an output of the TIA.
20.
The method as described in clause 19, further comprising: coupling an output of the first equalizer circuit to a first stage of a second equalizer circuit, the first stage of the second equalizer circuit having a source degeneration circuit configured to apply a second equalizing gain to the output of the first equalizer circuit; coupling an output of the first stage of the second equalizer circuit to a TIA in the second equalizer circuit; and configuring resistance values of feedback resistors in the TIA in the second equalizer circuit to select a gain to be applied to the output of the first stage of the second equalizer circuit, each feedback resistor being coupled between an input and an output of the TIA in the second equalizer circuit.
21. The method as described in clause 20, wherein a desired gain is obtained as a combination of a first gain provided by the first equalizer circuit and a second gain provided by the second equalizer circuit.
22. The method as described in clause 20 or clause 21, further comprising: configuring the resistance values of the feedback resistors in the TIA in the first equalizer circuit using even bits in a binary control signal; and configuring the resistance values of the feedback resistors in the TIA in the second equalizer circuit using odd bits in the binary control signal.
23. The method as described in any of clauses 20-22, further comprising: selecting the resistance values of the feedback resistors in each TIA based on values of a number of bits in a multi-digit word.
24. The method as described in any of clauses 20-23, further comprising: configuring the source degeneration circuit in the first equalizer circuit to equalize a first band of frequencies; and configuring the source degeneration circuit in the second equalizer circuit to equalize a second band of frequencies different from the first band of frequencies.
25.
The method as described in any of clauses 19-24, wherein the TIA comprises at least one feedback poly-resistor coupled in parallel with a P-channel metal-oxide-semiconductor (PMOS) resistor.
26. The method as described in clause 25, further comprising: using a feedback circuit in a calibration TIA to match a resistance of the at least one feedback poly-resistor to a channel resistance of a PMOS transistor.
27. The method as described in clause 26, further comprising: coupling a gate control signal applied to the PMOS transistor to a gate of the PMOS resistor.

As used herein, a phrase referring to “at least one of” a list of items refers to any combination of those items, including single members. As an example, “at least one of: a, b, or c” is intended to cover a, b, c, a-b, a-c, b-c, and a-b-c, as well as any combination with multiples of the same element (e.g., a-a, a-a-a, a-a-b, a-a-c, a-b-b, a-c-c, b-b, b-b-b, b-b-c, c-c, and c-c-c or any other ordering of a, b, and c). The present disclosure is provided to enable any person skilled in the art to make or use aspects of the disclosure. Various modifications to the disclosure will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the spirit or scope of the disclosure. Thus, the disclosure is not intended to be limited to the examples and designs described herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
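The even/odd gain-control split recited in clauses 4, 15, and 22 above can be illustrated with a short sketch. The following Python fragment is an illustrative software model only (not circuitry from the disclosure): it separates a binary control word into the two smaller per-stage words that would configure the two equalizer circuits.

```python
def split_gain_word(control: int, width: int = 8):
    """Split a binary gain-control word into two per-stage words:
    even-indexed bits configure the first equalizer stage and
    odd-indexed bits configure the second (bit 0 is the LSB)."""
    first = second = 0
    for i in range(width):
        bit = (control >> i) & 1
        if i % 2 == 0:
            first |= bit << (i // 2)   # even bits -> first-stage word
        else:
            second |= bit << (i // 2)  # odd bits -> second-stage word
    return first, second

# Example: control word 0b1101 -> first stage 0b11, second stage 0b10.
stage1, stage2 = split_gain_word(0b1101, width=4)
```

Each per-stage word would then select among the parallel feedback resistors of the corresponding TIA, per clause 5.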
11863357 | DETAILED DESCRIPTION Among network-connected devices, common communications capabilities and parameters must be discovered before the devices can communicate with each other. Auto-negotiation (AN) is a process whereby end points of a link share information on various capabilities relevant to their communication. For an example of AN, see Clause 73 of Institute of Electrical and Electronics Engineers (IEEE) 802.3-2018. Link partner devices exchange abilities and modes of operation via the exchange of base pages and, if requested, the link partner devices exchange next pages. According to Clause 73 of IEEE 802.3-2018, each device sends a list of its data-rate capabilities to its link partner. Auto-negotiation can determine the highest common capability, and the highest common capabilities are used for communication between the link partner devices. After both devices receive their link partner's capability list, the devices can transition to the highest common data rate and feature capabilities. Link training is a process used by a device connected through a copper cable, backplane, or other wired or wireless signal transmission media by which the transmitter and receiver on a high-speed serial link communicate with each other in order to tune their equalization settings. For example, serializers/deserializers (SerDes) can use link training. Link training enables tuning of the finite impulse response (FIR) filter for each channel in an application-specific integrated circuit (ASIC) or other device to achieve the desired bit error rate (BER), eye size, signal-to-noise ratio (SNR), or link errors (e.g., uncorrectable and correctable forward error correction (FEC) errors, pseudorandom bit sequence (PRBS) errors, physical coding sublayer (PCS) errors). In some examples, the receiver examines the eye after applying equalization to the signal and determines if eye height and/or eye width is acceptable.
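The highest-common-capability resolution described above can be sketched in a few lines. This is a minimal illustration; the data rates shown are hypothetical ability lists, not values taken from the text.

```python
def highest_common_rate(local_rates, partner_rates):
    """Resolve auto-negotiation to the highest data rate advertised by
    both link partners; return None if the ability lists do not overlap."""
    common = set(local_rates) & set(partner_rates)
    return max(common) if common else None

# Example: both partners advertise 10 and 100 Gb/s, so 100 Gb/s is chosen.
resolved = highest_common_rate({10, 25, 100}, {10, 40, 100})
```

In IEEE 802.3 AN the resolution also covers feature capabilities (FEC mode, pause, and so forth), but the max-of-intersection idea is the same.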
The receiver can make a decision to terminate link training because the eye is acceptable, or keep training to optimize the eye further. If the receiver requests that its link partner transmitter change the precursor, main cursor or post-cursor equalization setting, the eye examination process begins again. As link partners both include a transmitter and receiver, a link partner can simultaneously train the other partner's transmitter. After the link is trained, the two devices begin sending normal data traffic using the optimized transmitter settings. The Ethernet (IEEE 802.3) standards for 10 Gb/s and above over backplane and copper cables include a PMD (Physical Media Dependent) control function that enables adjusting the transmitter equalization settings as part of the link training. The PMD control function uses a handshake-based protocol for requesting coefficient changes. The protocol is described by state diagrams (e.g., FIGS. 72-4, 72-5 and 72-6 in IEEE Std 802.3-2012 and variations thereof). Those state diagrams are referenced in approved and draft standards for multiple PMDs (e.g., 10GBASE-KR, 40GBASE-KR4, 40GBASE-CR4, and 100GBASE-CR10). IEEE 802.3-2018 clause 73 and subclauses 73.7.4 and 73.7.5 and table 73-7 set forth that a device under test will defer for the proper amount of time before attempting to verify the status of the link determined by the Auto-Negotiation process. IEEE 802.3-2018 clauses 72 and 73 specify suitable link training times. A link fails if the link_fail_inhibit_timer has expired before the link is active (e.g., signal is being properly decoded). However, in some cases, the amount of time set by link_fail_inhibit_timer may be insufficient to achieve a desired Bit Error Rate (BER) on the link, resulting in more errors during normal operation (e.g., more Cyclic Redundancy Check (CRC) errors).
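The examine/request/re-examine handshake described above can be modeled as a simple loop. In this sketch, `eye_ok` and `request_change` are placeholder callbacks standing in for hardware-specific eye measurement and PMD coefficient-change requests; they are hypothetical, not APIs from the source.

```python
def train_link(eye_ok, request_change, max_iters: int = 100) -> bool:
    """Iteratively tune the link partner's transmitter: examine the
    equalized eye, and if it is not yet acceptable, request a
    precursor/main/post-cursor change and examine again."""
    for _ in range(max_iters):
        if eye_ok():
            return True       # terminate training: eye is acceptable
        request_change()      # ask partner Tx to adjust equalization
    return False              # gave up; caller may restart training
```

The `max_iters` bound plays the role the link-fail inhibit timer plays in hardware: it caps how long training may continue before the attempt is abandoned.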
Reference to any standard herein refers to any version including prior versions, current and future versions as well as proprietary derivatives thereof. Various embodiments attempt to improve the quality of links that use auto-negotiation, at least in cases where the IEEE defined link training time may not allow for optimal training. IEEE 802.3 proposes negotiation using a next page exchange phase. For an example of next page messages, see Annex 28C of IEEE 802.3-2018. In some examples, next pages can be used to exchange identifier tags, Energy Efficient Ethernet (EEE) parameters, operating parameters and vendor specific information. According to various embodiments, one or both link partners can use a next-page exchange during auto-negotiation to advertise capability to extend link training time and an amount of time a link partner can extend link training time. In some examples, link partners can advertise one or more amounts of time that link training time can be extended. Link partners can negotiate how much to extend the “link_fail_inhibit_timer” to set an amount of time for link training. In some examples, if both sides advertise ability to do this, then the highest common denominator of the two extension times is used and added to the link-fail inhibit time to determine a total amount of link-fail inhibit time permitted. However, if a link partner indicates extending link-fail inhibit time is not supported, then the default inhibit time is used (e.g., IEEE 802.3 default link_fail_inhibit_timer). Various embodiments can be used for link training or link re-training among chip-to-chip over traces of backplane connections (e.g., 10GBASE-KRx or derivatives thereof (where x is an integer)) or network interface-to-network interface connections through copper cable (e.g., 40GBASE-CR4 or derivatives thereof). Various embodiments can be applied to link or lane speeds at or above 10 Gbps, or any link or lane speed.
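The extension negotiation just described can be sketched as follows. The default value below is a placeholder, not the normative IEEE 802.3 timer value, and "highest common denominator" is modeled as the minimum of the two advertised extensions, consistent with the "minimum of the times advertised" wording later in the text.

```python
DEFAULT_INHIBIT_MS = 510  # placeholder for the IEEE 802.3 default link_fail_inhibit_timer

def resolve_inhibit_time(local_ext_ms, partner_ext_ms,
                         default_ms=DEFAULT_INHIBIT_MS):
    """Resolve the total link-fail inhibit time.  Each side advertises
    the extension it supports (None = extension not supported).  Only
    when both sides advertise an extension is the smaller of the two
    values added to the default timer; otherwise the default is used."""
    if local_ext_ms is None or partner_ext_ms is None:
        return default_ms                      # fall back to default timer
    return default_ms + min(local_ext_ms, partner_ext_ms)
```

For example, partners advertising 2000 ms and 3000 ms extensions would train for the default time plus 2000 ms.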
Various embodiments provide a manner for partners or ports to agree to extend the IEEE defined link-training time, allowing for more optimal equalizer tuning. Receipt of a proprietary “next page” can represent, among other reasons, a request by a link partner to extend the subsequent link-training phase by a specified time. The maximum amount of time to allow for extending link-training can be specified in a field of the next page. For example, the value in the field represents the additional time, over the IEEE defined maximum time, to allow. Many SerDes have trouble achieving good Rx equalization (e.g., eye quality, signal integrity) in the time allotted under IEEE 802.3. Extending the link-fail inhibit time can allow for more time for link-training to improve the link. By extending the link training time, the Bit Error Rate (BER) on the link can be minimized, resulting in fewer errors during normal operation (e.g., fewer CRC errors). Link training is applicable to other wired communications or networking systems such as but not limited to FibreChannel, InfiniBand, or Serial Attached Small Computer System Interface (SAS). Extending link training time can be useful for 4-level pulse amplitude modulation (PAM) links (e.g., PAM4 links), as well as PAM5, PAM6, and n-level PAM links generally (where n is an integer), non-return-to-zero (NRZ) line code, and so forth. Various embodiments provide a protocol that may be invoked at any time after the establishment of a point-to-point link. Various embodiments provide at least for allowing tuning to continue after a link is established. Various embodiments can be used to extend the tuning of a link beyond the time allowed by IEEE 802.3 for link-training following auto-negotiation. In some cases, the protocol can be used after the link is established to modify the transmit (Tx) equalization on both sides of the link.
Some embodiments provide a capability to request a link-partner to make changes to the link-partner's transmit settings to optimize the local receiver. Link tuning may be desirable for a variety of reasons such as changed conditions (e.g., power, voltage, temperature changes), periodic change, and other reasons. Various embodiments can provide an extended link-training after a link is up, whether brought up using AN or not. The extended link-training can allow tuning of the link-partner transmit equalization settings as well as local receiver equalization, resulting in potentially improved tuning. The link-training takes place from the receiver's perspective, so it is the transmitter settings that are being requested to change. In some examples, data can be transmitted while training is occurring and a training data pattern can be used during re-training. During link re-training, a transition density signal or different separate training data can be used. Various embodiments can use link layer discovery protocol (LLDP) type-length-values (TLVs) to request incremental changes to a link partner's transmit equalization settings (e.g., pre, main, post). However, LLDP protocol and TLV format are not required and any type of message can be used. For example, a packet header can be used to convey transmitter or receiver equalizer settings. For example, various embodiments could use a user datagram protocol (UDP) packet exchange to enable and use extended link-training. Extended link training, in combination with a quality metric for the local receiver equalization, can be used to improve link quality (e.g., lower Bit Error Rate (BER)) and reduce link errors (e.g., forward error correction (FEC) or PCS errors) identified at the receiver, compared with receive-side-only optimization. Extended link training or re-training can be used to extend the tuning of a link beyond the time allowed by IEEE 802.3 (or other standards or specifications) for link-training.
According to some embodiments, an LLDP compatible message can be used by an initiator device to communicate or request a partner device to determine or check if received signal characteristics have drifted or the signal quality is acceptable and to trigger the initiator device or the partner device to perform re-training. For example, according to some embodiments, a temperature change at a base station or edge or fog computing device or any type of device (e.g., heat of day or cold of night or day) can trigger retraining of equalizer settings. For example, link training time extension and re-training can be applied by a base station that supports communications using wired or wireless protocols (e.g., 3GPP Long Term Evolution (LTE) (4G) or 3GPP 5G), on-premises data centers, off-premises data centers, edge network elements (computing elements provided physically closer to a base station or network access point than a data center), fog network elements (computing elements provided physically closer to a base station or network access point than a data center but further from an edge network), and/or hybrid data centers (e.g., a data center that uses virtualization, cloud and software-defined networking to deliver application workloads across physical data centers and distributed multi-cloud environments). Network or computing elements can be used in local area network (LAN), metropolitan area network (MAN), network with devices connected using optical fiber links, campus area network (CAN), or wide area network (WAN). Various embodiments can be used with 50G SerDes speeds and above, although lower speeds can be supported. FIG. 1 is a block diagram illustrating Ethernet port circuitry in a network interface controller 50. The Ethernet port logic includes a Media Access Control (MAC) module 52, a reconciliation sublayer module 54, and a PHY module 56.
The PHY module 56 can include a physical medium attachment (PMA) sublayer module 62, a Physical Medium Dependent (PMD) sublayer 64, a forward error correction (FEC) module 60 and a physical coding sublayer (PCS) module 58. MAC module 52 is configured to transfer data to and from the PHY module 56. The Reconciliation Sublayer (RS) module 54 can provide a mapping operation that reconciles the signals at a Media Independent Interface (MII) to the Media Access Control (MAC)-Physical Signaling Sublayer (PLS) service definitions. MAC module 52 can be configured to implement aspects of the MAC layer operations and the RS module 54 can be configured to implement reconciliation sublayer operations. The Physical Medium Dependent (PMD) sublayer 64 can be responsible for interfacing to the transmission medium, Medium Dependent Interface (MDI) 80. The Physical Medium Attachment (PMA) sublayer 62 can perform transmission, reception, signal detection, clock recovery and skew alignment. PMD 64 and PMA 62 can be configured to transmit and receive serial data over the MDI 80. In some examples, PMD 64, PMA 62a and/or 62b can include or use a SerDes. In some examples, extended link training and re-training can be provided to adjust filter parameters of a transmit and/or receive equalizer used by a SerDes. For example, a software SerDes driver executed by a processor in a host or a network interface can be used to change a transmit equalizer parameter. In some examples, any combination of hardware, software and/or firmware can be used to manage and perform link training and/or link re-training. In some examples (e.g., for 100GBASE-CR1 or 100GBASE-KR1), FEC module 60 may decode data passed from the PMD 64 and PMA 62 to the PCS module 58, or encode data passed from the PCS module 58 to the PMD 64 and PMA 62a, 62b. In some examples (e.g., for 200G and 400G modes), PCS module 58 includes FEC module 60. Forward error correction code may improve the reliability of data transmission at higher line speeds.
In the transmit direction, MAC module 52 can receive data to be transmitted in a media access control (MAC) frame over MDI 80, and generates the MAC frame that includes inter-packet gap (IPG), preamble, start of frame delimiter (SFD), padding, and Cyclic Redundancy Check (CRC) bits in addition to the received data before passing the MAC frame to the PHY module 56. The PHY module 56 can encode the MAC frame for reliable serial transmission over the MDI 80. In the receive direction, MAC module 52 can receive MAC frames over a data bus from PHY module 56. MAC module 52 can accept MAC frames from PHY 56; perform Ethernet frame detection and validation, cyclic redundancy check (CRC) validation, updating of statistics counters, stripping of the CRC, preamble detection and removal, and start of frame delimiter (SFD) detection and removal; and forward the rest of the MAC frame, which includes headers for other protocols, to a next layer (for example, an Internet protocol (IP) layer) for processing. FIG. 2A illustrates a simplified example of a transmitter-receiver pair between a network interface controller 100 and a device 120. MDI 130 provides a link between network interface controller 100 and device 120 by transferring data in parallel over one or more lanes. Device 120 can be any device such as another network interface, network interface card (NIC), a switch, router, a server, a host computing platform, and so forth. Network interface controller 100 can include a host receiver 206 and a host transmitter 208 for at least one lane of an electrical link between the network interface controller 100 and device 120. Device 120 can include a module receiver 212 and module transmitter 210 for an electrical link between network interface controller 100 and device 120. For example, link training controller 202 of NIC 100 can be used to advertise and negotiate extensions to link training time with the link training controller of module 120, or vice versa.
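The MAC frame fields described above (addresses, EtherType, padding, CRC) can be sketched in software. This is a simplified illustration, not the hardware path: preamble, SFD, and inter-packet gap are handled by the PHY and omitted, padding is naively zero-filled to the 46-byte payload minimum, and the frame check sequence is appended least-significant byte first as is commonly done in software models.

```python
import struct
import zlib

def build_mac_frame(dst: bytes, src: bytes, ethertype: int,
                    payload: bytes) -> bytes:
    """Assemble a simplified Ethernet MAC frame: destination and source
    addresses, EtherType, zero-padded payload, and a trailing CRC-32
    frame check sequence (zlib's CRC-32 uses the IEEE 802.3 polynomial)."""
    if len(dst) != 6 or len(src) != 6:
        raise ValueError("MAC addresses are 6 bytes")
    body = dst + src + struct.pack("!H", ethertype) + payload.ljust(46, b"\x00")
    fcs = struct.pack("<I", zlib.crc32(body) & 0xFFFFFFFF)
    return body + fcs
```

With a 5-byte payload this yields the 64-byte minimum frame (60 bytes plus 4 bytes of FCS).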
Link training controller 202 can also initiate or manage link re-training operations as described herein. The link training controller can be implemented as a driver, microcontroller, or other software in a host or network interface. Transmitter (Tx) 208/210 or receiver (Rx) 206/212 can use a SerDes to serialize or deserialize a signal. When a SerDes is turned on and a signal is received, Rx tuning can be used to clean up the signal quality. When there is a time limit to perform Rx tuning, a signal is to be passed to a PCS layer within the time limit and the link comes up if the link is acceptable. If the link does not pass, training can be restarted. In some examples, Tx 208-Rx 212 and/or Tx 210-Rx 206 can utilize independent Rx tuning. In some embodiments, an amount of time to perform equalizer tuning is the same for Tx 208-Rx 212 and Tx 210-Rx 206. When auto-negotiation is used to establish a link between two Ethernet ports, an IEEE defined procedure is followed. First, a “base page” exchange can be performed to determine common capabilities and select an operating mode (e.g., link speed (e.g., 1000BASE-KX, 10GBASE-KX4 . . . 100GBASE-CR4 and so forth), FEC mode, pause capability, and so forth). Next, an arbitrary length next page exchange phase can occur. Next page exchange can be used, for example, to advertise IEEE capabilities as well as non-IEEE capabilities such as the Ethernet Consortium modes. At the end of next page exchange, the selected operating mode can be configured and a link-training phase can begin. During this link training phase, changing the peer transmit (e.g., Tx 208 or Tx 210) equalization settings, monitoring the effect on link quality at the receiver (e.g., Rx 206 or Rx 212), and adjusting equalization settings to optimize the link can occur. In some examples, a link training time can be extended by specification of an earlier starting time and use of a default link training time.
For example, devices can negotiate (e.g., using AN, Next Page Exchange, or a proprietary exchange scheme) a starting time for link training by negotiating an offset from a default start of link training time, where the offset indicates an amount of time before the default start of link training time at which to start link training. Devices can indicate a capability to start link training before a default start of link training time, indicate a greatest amount of time before a default start of link training time at which to start link training, and select the lesser of the amounts of time before a default start of link training time at which to start link training. Communications between devices can occur using any protocol. For example, Ethernet frames can be sent by NIC 100 to device 120. For example, Ethernet frames can be sent by device 120 to NIC 100. An Ethernet frame can include one or more of: a preamble, start of frame delimiter (SFD), destination MAC address, source MAC address, EtherType field, length field, frame check sequence (e.g., cyclic redundancy check (CRC)), and payload. From the perspective of either port there are four approaches to link-training, although more or fewer approaches can be used.
(1) The local port neither sends the proprietary next page nor receives one. This results in no extension of the link-training phase.
(2) The local port sends the proprietary next page but does not receive one from the link partner (it will receive a NULL page in response). This indicates the link-partner either is unable or unwilling to extend the link-training time. This results in no extension of the link-training phase.
(3) The local port is unable or unwilling to extend the link-training time so does not send the proprietary next page but receives one from the link partner (it will respond with a NULL page). This results in no extension of the link-training phase.
(4) The local port both sends and receives the proprietary next page, indicating both ports are able and willing to extend the link training time. The amount of time to extend the link-training phase is defined to be the minimum of the times advertised by the two ports. In case (4), the subsequent link-training phase will be allowed to last longer (if necessary) than the IEEE defined maximum time. If the time is still exceeded then auto-negotiation will be restarted. According to various embodiments, a link training controller advertises extended link training ability during the Next Page Exchange phase of Auto-Negotiation. There are two Next Pages required to advertise this ability. First, an Organizationally Unique Identifier (OUI) tagged formatted first Next Page is sent with the Vendor OUI of <tbd> using message code #1. Next, an OUI tagged unformatted second Next Page is sent with the requested extension of time in milliseconds. An example format for a first next page is shown in FIG. 3A. In FIG. 3A, the following are example contents of fields of a next page.
D0_3: 0001b (indicating “Organizationally Unique Identifier Tagged Message”)
D4_10: 0000000b
D11: T
D12: 0b
D13: 1b
D14: ACK
D15: 1b (NP)
D16_26: Vendor OUI <tbd> bits [23:13]
D27_31: 00000b
D32_42: Vendor OUI <tbd> bits [12:2]
D43_47: 00000b
FIG. 3B shows an example second next page format. Example content of fields can be as follows.
D0_3: 0001b (indicating “Extended Link Training Time” value follows)
D4_8: 00000b
D9_10: Vendor OUI <tbd> bits [1:0]
D11: T
D12_13: 00b
D14: ACK
D15: NP
D16_47: time, in milliseconds, to extend for link training (unsigned 32-bit value). In some examples, this value is the absolute time for link training, which can be less than, equal to, or more than the IEEE 802.3 default link training time.
In some examples, if one side of the link does not support “Vendor Link Training Extension Ability”, then it will respond to the OUI tagged formatted Next Page with a NULL page.
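The second next page layout sketched above can be modeled as bit-packing into a 48-bit word (D0 = least significant bit). The field positions follow the example layout in the text; this is an illustrative encoding for the example fields, not a normative one, and the vendor OUI bits are left as parameters since the OUI itself is <tbd> in the text.

```python
def pack_second_next_page(time_ms: int, oui_low2: int = 0,
                          toggle: int = 0, ack: int = 0, np: int = 0) -> int:
    """Pack the OUI-tagged unformatted second Next Page into a 48-bit
    integer per the example field layout: D0_3 message code, D9_10 low
    OUI bits, D11 toggle, D14 ACK, D15 NP, D16_47 time in ms."""
    if not 0 <= time_ms < 2**32:
        raise ValueError("time_ms must fit in an unsigned 32-bit field")
    page = 0b0001                    # D0_3: message code
    page |= (oui_low2 & 0x3) << 9    # D9_10: Vendor OUI bits [1:0]
    page |= (toggle & 1) << 11       # D11: T
    page |= (ack & 1) << 14          # D14: ACK
    page |= (np & 1) << 15           # D15: NP
    page |= time_ms << 16            # D16_47: link training time in ms
    return page

def unpack_time_ms(page: int) -> int:
    """Recover the 32-bit time field from D16_47."""
    return (page >> 16) & 0xFFFFFFFF
```

A receiver of the page would extract D16_47 and feed it into the time resolution described next.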
This can have the effect of advertising a value of “0” for the “Time_in_ms” field. In some examples, if one side of the link does not support option “Extended Link Training Time,” then it will respond to the OUI tagged unformatted Next Page with a NULL page. This can have the effect of advertising a value of “0” for the “Time_in_ms” field. The time allowed for link-training is resolved to be the highest common denominator of the values advertised by the two sides of the link. If the resolved time is less than the IEEE (or Ethernet Consortium) defined link-training time, then the IEEE (or Ethernet Consortium) defined link-training time is used instead. The link-training time specified in the next page can either be: (1) a value to be added to the default IEEE link training time (e.g., a relative value) or (2) an absolute link training time to use. In (1) there is no possibility the negotiated link training time could be less than the default IEEE link training time; in (2) link training could be less than the default IEEE link training time. In some examples, the default link training time can be set to a higher or lower value to extend or decrease link training time. For example, AN, Next Page Exchange, or a proprietary exchange scheme can be used to set or change the default link training time. In some examples, where a default link training time is set or changed, link training time extension or decrease may or may not be used. Referring next to FIG. 2B, in some embodiments, after link training is complete, a microcontroller 244 (e.g., any of 244-0 to 244-N) associated with any lane can initiate re-training. For example, a device driver, platform device or software can trigger negotiation and implementation of link re-training. Link re-training can be executed independently per SerDes lane. Various embodiments provide a protocol to set up re-training, provide requests in connection with re-training, or receive responses in connection with re-training.
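The relative/absolute distinction above can be sketched as follows. The default below is a placeholder, not a normative IEEE value; a NULL page is modeled as advertising 0. Note the text also describes flooring the resolved time at the IEEE defined value, so a caller wanting that behavior can take `max(result, default_ms)`.

```python
IEEE_DEFAULT_TRAINING_MS = 500  # placeholder default; the real value is PMD-specific

def resolve_training_time(local_ms: int, partner_ms: int, relative: bool,
                          default_ms: int = IEEE_DEFAULT_TRAINING_MS) -> int:
    """Resolve the link-training times advertised by the two sides.

    Mode (1), relative: the common value is added to the default, so
    the result can never drop below the default.  Mode (2), absolute:
    the common value is used directly and may be below the default."""
    common = min(local_ms, partner_ms)   # minimum of the advertised values
    return default_ms + common if relative else common
```

For example, 2000 ms and 1000 ms advertised in relative mode resolve to the default plus 1000 ms.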
A protocol stack (e.g., layer 3) can recognize a message as indicating setup, request or response. In some examples, various messages can be used to initiate (set up) link training to indicate supported taps, or to request application of particular equalizer tap parameters (e.g., hold, increment or decrement Tx Tap #x). In some examples, one or more increases or one or more decreases can be applied instead of an increment or decrement. Various embodiments provide a response that indicates when particular equalizer tap parameters are being used (e.g., Tx Tap #x updated). The Tx Taps can be defined using signed integer values (−3 . . . 0 . . . +3). Pre-emphasis Taps can be identified by negative values whereas Post-emphasis Taps can be identified by positive values. A main tap can be identified by the value zero (0). In some cases, not all taps may be re-trainable by a given SerDes or its controller. In some examples, a transmitter can advertise any tap capable of adjustment by link re-training. The supported taps can be communicated to the link partner in an initial set-up message. The set-up message can identify the local Port_id that will be the subject of Tap change Requests by the link-partner. A Port_id can be provided in any response. In some examples, a Port_id value can be any 8 bit unsigned value (or any other value (e.g., 32 bits)) that the local device can use to map to a local port. Since the link came up prior to this protocol being used, it is assumed both sides agree on the number and mapping of SerDes lanes used by the port. Supported taps can be identified by an 8-bit unsigned value containing a map of supported Tx Taps. For example, a mapping from Tap # to bit is as follows:

Tap #    Bit
 −3       0
 −2       1
 −1       2
  0       3
 +1       4
 +2       5
 +3       6

For PAM4 encoded signals, Tx taps can be identified as follows: Pre2 = −2, Pre1 = −1, Main = 0, Post1 = 1, Post2 = 2. However, other types of formatting can be used. Requests to change unsupported taps can be considered a protocol error and can be ignored.
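The tap-to-bit mapping above (bit index = tap number + 3) can be exercised with a small encode/decode pair. This is an illustrative model of the 8-bit supported-taps map, not code from the disclosure.

```python
def encode_supported_taps(taps) -> int:
    """Encode a list of adjustable Tx taps (-3 .. +3) into the 8-bit
    supported-taps bitmap used by the set-up message: bit = tap + 3."""
    bitmap = 0
    for tap in taps:
        if not -3 <= tap <= 3:
            raise ValueError(f"tap {tap} out of range")
        bitmap |= 1 << (tap + 3)
    return bitmap

def decode_supported_taps(bitmap: int):
    """Recover the sorted tap list from a supported-taps bitmap."""
    return [bit - 3 for bit in range(7) if (bitmap >> bit) & 1]
```

For instance, a transmitter that can adjust only the pre1, main, and post1 taps would advertise bits 2, 3, and 4.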
In some examples, a message format for SETUP can include: {SETUP, Port_id, <supported_Taps>}. To identify a tap that is the subject of a Request or Response, the following identifying data can be included in the full Request or Response message: {Port_id, Logical_lane_id, Tap_id, REQ/RSP}, where:

Port_id is the local Port_id received in the set-up message
Logical_lane_id is a value from 0 to 15 indicating the logical lane on the port
Tap_id is the subject Tap # (e.g., −3 to +3)
SETUP/REQ/RSP/TRAINED (0-3) identifies this message as either a Set-up (0), Request (1), Response (2), or Trained (3) message. Trained messages indicate completion of the protocol.

In some examples, Requests can be one of: HOLD, INC, DEC. In some examples, a message format for Requests can be: {REQ, Port_id, Logical_lane_id, Tap_id, <INC/DEC/HOLD>}. In some examples, Responses can include one of: NOT_UPDATED, UPDATED, MIN, MAX. In some examples, a message format for Responses can include: {RSP, Port_id, Logical_lane_id, Tap_id, <NOT_UPDATED/UPDATED/MIN/MAX>}. Accordingly, a full message can contain the following information:

{SETUP, Port_id, <supported_Taps>, DESTRUCTIVE_MODE_ABILITY, DESTRUCTIVE_MODE_REQ, <random-initiator-bit>}
{REQ, Port_id, Logical_lane_id, Tap_id, <INC/DEC/HOLD>}
{RSP, Port_id, Logical_lane_id, Tap_id, <NOT_UPDATED/UPDATED/MIN/MAX>}

It may take time for the transmitter to accomplish the setting change (Request) and for the receiver to know the configuration change has been applied or not applied (Response). Because messages may be lost or corrupted, the exchange protocol can recover from errors. Some SerDes may not be able to fully optimize incrementally and may use a re-adaptation mode that causes the link to go down, in one or both directions, when a tap change is being evaluated (Rx Equalization).
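The message structures above can be modeled in code; this is an illustrative sketch only (the record types and the constructor name are hypothetical, and only the field-range checks stated in the text are enforced):

```python
from collections import namedtuple

# Message kinds per the text: Set-up (0), Request (1), Response (2), Trained (3).
SETUP, REQ, RSP, TRAINED = range(4)

SetupMsg = namedtuple("SetupMsg", "kind port_id supported_taps")
RequestMsg = namedtuple("RequestMsg", "kind port_id lane_id tap_id action")
ResponseMsg = namedtuple("ResponseMsg", "kind port_id lane_id tap_id status")

def make_request(port_id, lane_id, tap_id, action):
    """Build a Request message, enforcing the field ranges given above."""
    if not 0 <= lane_id <= 15:
        raise ValueError("Logical_lane_id must be 0..15")
    if not -3 <= tap_id <= 3:
        raise ValueError("Tap_id must be -3..+3")
    if action not in ("INC", "DEC", "HOLD"):
        raise ValueError("Request must be one of INC, DEC, HOLD")
    return RequestMsg(REQ, port_id, lane_id, tap_id, action)
```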
Support of this mode of operation can involve use of the protocol operating in one direction at a time. For example, one side can make Tap Requests and adjust its receiver equalization and, next, the other end of the link can start the process to adjust receiver equalization. This is necessary to distinguish the cause of link loss, which could be due to the receiver adaptation (caused by the remote side) or to a requested tap change (caused by the local side). A "destructive mode" (or "restart inhibit mode") may cause a link to fail (to go "down") during the training but requests the receiver to not allow the link to go down until after an amount of time has passed, so that default Tx tap settings are not reverted to until after the amount of time has passed (max-link-loss-time), to avoid protocol restart. Tap settings applied during a re-training that succeeds before the time expires are used, and the training protocol need not be restarted. However, after the amount of time has passed, protocol training restart occurs to determine Tx tap settings and/or default Tx tap settings can be used. Operation in "destructive mode" is communicated in the set-up message by three parameters:

DESTRUCTIVE_MODE_ABILITY
DESTRUCTIVE_MODE_REQ
<max-link-loss-time>

DESTRUCTIVE_MODE_ABILITY indicates this node supports destructive mode operation as an option. DESTRUCTIVE_MODE_REQ indicates this node requires (not just "requests") destructive mode operation. <max-link-loss-time> is a 32-bit unsigned value representing the maximum time, in milliseconds, that the link may be down and not cause a protocol restart. This time may be different in each direction and should be set to the maximum amount of time the local receiver adaptation should be allowed to take. A protocol error can occur by setting DESTRUCTIVE_MODE_REQ=1 but DESTRUCTIVE_MODE_ABILITY=0.
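The parameter constraints above can be captured in a small validation routine; this sketch is illustrative only (the function name is hypothetical):

```python
def validate_destructive_mode(ability, req, max_link_loss_time_ms):
    """Check destructive-mode parameters carried in a set-up message.

    Per the text, DESTRUCTIVE_MODE_REQ=1 together with
    DESTRUCTIVE_MODE_ABILITY=0 is a protocol error, and
    <max-link-loss-time> is a 32-bit unsigned millisecond count.
    """
    if req and not ability:
        raise ValueError("protocol error: DESTRUCTIVE_MODE_REQ=1 "
                         "but DESTRUCTIVE_MODE_ABILITY=0")
    if not 0 <= max_link_loss_time_ms < 2 ** 32:
        raise ValueError("max-link-loss-time must be a 32-bit unsigned value")
    return True
```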
If one side requires destructive mode (DESTRUCTIVE_MODE_REQ=1) but the other side does not support destructive mode (DESTRUCTIVE_MODE_ABILITY=0), the protocol can terminate. If one or both sides require destructive mode operation, then an extra message exchange is used to determine which side goes first:

{INITIATOR_BID, <local-bid>, <remote-bid>}

<local-bid> is a non-zero 32-bit unsigned value that determines which side starts first. It can be generated by some random process to guarantee convergence. The side with the largest "bid" goes first. In the event of a tie, each side submits a new random "bid". <remote-bid> is the 32-bit value last received from the other side as its "bid". The initial value of this field is "0", before any bid has been received from the other side. The INITIATOR_BID message can be transmitted periodically to recover from lost messages. Bidding continues until a bid from the other side is received with a <remote-bid> field matching the current <local-bid> value, and with different values for the local and remote bids. At that point, both sides determine which should go first and the "winning" side can initiate Tap Requests. In some examples, LLDP protocol data units (PDUs) are used for capability advertisement, request and response. In some examples, there are three types of TLVs: StartUpTlv can include a local port identifier that will be returned by the link partner in any subsequent TLV to identify a specific port. StartUpTlv can also include indications of which taps are supported on the local device. Supported taps can be pre2, pre1, main, post1, post2, or fewer, more or other taps. In some examples, InfiniBand link settings can be adjusted. RequestTlv can include the localPortId received in the StartUpTlv for this port. RequestTlv can also include a logical laneID (0-7) within the localPortId, a tapId indicating which Tx tap is the subject of the request, and the request itself, HOLD=0 or UPDATE=+/−1.
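The bidding rule above can be sketched as follows; this is an illustrative sketch only (function names are hypothetical), showing the largest-bid-wins decision and the re-bid on ties:

```python
import random

def new_bid():
    """Draw a non-zero 32-bit unsigned bid, as the text specifies."""
    return random.randint(1, 2 ** 32 - 1)

def resolve_initiator(local_bid, remote_bid):
    """Decide which side initiates Tap Requests.

    Returns True if the local side wins (the largest bid goes first),
    False if the remote side wins, and None on a tie, in which case
    each side draws a new random bid and the exchange repeats.
    """
    if local_bid == remote_bid:
        return None
    return local_bid > remote_bid
```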
StatusTlv can include the localPortId received in the StartUpTlv for this port. StatusTlv can also include a logical laneID (0-7) within the localPortId, a tapId indicating which Tx tap is the subject of this status report, and the status itself, NOT_UPDATED=0, UPDATED=1, MIN=2, MAX=3. MIN or MAX can be returned to indicate no further changes in the same direction are possible and should be treated the same as UPDATED. In some examples, LLDP PDUs can have up to two link training TLVs per logical lane: one requestTlv and one statusTlv (responseTlv). A requestTlv can initiate a change in a Tx tap setting on the link partner. PDUs may be generated in response to receipt of a PDU from the link partner or after a timeout period. If any tap is in the process of being changed and no statusTlv has been received for it within some amount of time, a PDU can be generated to duplicate the requested change to handle lost or corrupted PDUs. In some examples, changes to tap settings can be made in the smallest possible increments/decrements to minimize the potential for causing a loss of link as a result of a tap change. FIG. 2B depicts an example system for communicatively coupling a network device to another network device. For example, host 250 and device 232 can include a network device such as one or more of: a network interface, switch, router, server, host computing platform, interconnect, fabric, rack, or any computing or communications device. For example, device 232 can be connected to an interface with multiple electrical links (e.g., backplane or copper cable). The system provides for multiple lanes of transmit-receive pairs that can be used to transmit or receive electrical signals between host 250 and device 232. A lane can transmit and/or receive a signal. A transmitter of a lane can use an equalizer implemented in an analog circuit to generate an electrical signal for transmission.
The equalizer can have one or more current sources that are used to create a signal, whereby weights of the current sources can be adjusted to change signal characteristics. Equalizer settings can be modified to change weights of the current sources. For example, a digital-to-analog converter (DAC) can be used to create a signal in the digital domain and output the result in an analog format. Various embodiments use any of microcontrollers 244 to negotiate time to complete link training and whether to extend training time. In addition, microcontrollers 244 can initiate and manage re-training of transmitter and/or receiver equalizer settings. Transceiver 238 can be used for electrical signal transmission and receipt between device 232 and host network interface device 250. Transceiver 238 can provide multiple transmit and receive lanes for electrical signal communication between device 232 and host device 250. For example, lanes 240-0 to 240-N can provide transmit and receive circuitry for coupling with receive and transmit circuitry of lanes 254-0 to 254-N of host device 250. Lanes 240-0 to 240-N can provide serializer/deserializer (SerDes) formatting of signals. In some examples, transceiver 238 can be part of a PMD or PHY. Device 232 can be communicatively coupled to host device 250 by an interconnect 242. Interconnect 242 can be electrical signal conductors that couple pins or holes of lanes 240-0 to 240-N of a pluggable device 232 to holes or pins of lanes 254-0 to 254-N of host 250. Host network interface device 250 can transmit or receive signals in electrical format to or from device 232. Host device 250 can include transceiver 252 for communication with device 232. Transceiver 252 can include lanes 254-0 to 254-N, where any of lanes 254-0 to 254-N includes receive and transmit circuitry. In some examples, transceiver 252 can be part of a PMD or PHY. Any of microcontrollers 256-0 to 256-N can be used to manage operation of its lane in accordance with embodiments described herein.
In some embodiments, a single microcontroller can manage equalizer settings of one or multiple lanes. One or more parameters can cause a receiver or transmitter device in any of lanes 254-0 to 254-N to adjust its equalizer setting for a specific tap, including whether to increase or decrease the coefficient value of an equalizer tap. In some embodiments, the settings of a tap can be adjusted independent of adjustment of settings of another tap. In some examples, host 250 can request to change an equalizer setting of any tap of a transmitter equalizer circuit of device 232. Likewise, device 232 can request to change an equalizer setting of any tap of a transmitter equalizer circuit of host 250. Accordingly, device 232 and host 250 can adjust transmitter equalizer settings used by a partner device. Moreover, any of device 232 and host 250 can adjust receiver equalizer settings to compensate for channel distortions. For example, to initiate an equalizer setting change, any of microcontrollers 244-0 to 244-N can determine a signal quality of a received signal and determine which transmitter side tap of host device 250 to change and whether to increment or decrement the setting of the tap. For example, an eye opening of a received signal can be measured. An eye can represent 1-to-0 and 0-to-1 transitions of a signal and indicate whether the transitions occur within isolated time regions. A microcontroller can estimate inter-symbol interference (ISI) and select settings based on an ISI reaching a minimum value. A microcontroller can search through available transmitter tap settings and select settings that lead to a most open eye. Transmitter equalizer settings can be changed periodically, starting at or after link startup. Similar operations can occur for microcontrollers 256-0 to 256-N to adjust transmit equalizer settings of device 232.
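The search over tap settings described above can be sketched as a simple sweep; this is a hedged illustration only, where `measure_eye` is a hypothetical callback standing in for hardware that applies a candidate setting at the link partner and returns an eye-opening metric for the received signal:

```python
def select_best_setting(candidate_settings, measure_eye):
    """Sweep candidate transmitter tap settings and keep the one whose
    measured eye opening is widest (equivalently, whose estimated ISI
    is lowest). `measure_eye` is an assumed callback, not a real API."""
    best_setting, best_metric = None, float("-inf")
    for setting in candidate_settings:
        metric = measure_eye(setting)
        if metric > best_metric:
            best_setting, best_metric = setting, metric
    return best_setting
```

In practice a microcontroller would interleave such measurements with the request/response protocol rather than sweep all settings at once.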
Any of device 232 or host 250 can perform packet processing such as one or more of: media access control, any protocol layer processing, security, routing, destination lookup, and so forth. FIG. 4A depicts a process sequence of a link training. At 402, an IEEE 802.3 Clause 73 AN base page exchange can commence. In this example, 200 Gbps link speed and RS-FEC capabilities are advertised. At 404, Next Page exchange can occur. In some examples, both link partners advertise a vendor OUI identified as Intel, whereas one link partner advertises a training extension time of (up to) 5 seconds and the opposite link partner advertises a training extension time of (up to) 10 seconds. However, a vendor OUI of the device can be advertised. In some examples, the highest common denominator of the extension times is selected, namely 5 seconds. At 406, the link training scheme uses a default link training time plus 5 seconds. An example link training format is in clause 72 of IEEE 802.3-2018. FIG. 4B depicts an example sequence that can be used for a link re-training operation. At 450, auto-negotiation can be applied to set one or more parameters of operation between partner devices. For example, IEEE 802.3, Clause 73 AN can be used to set at least link speed, FEC mode, pause capability and/or other parameters. At 452, link training can be performed to set transmit and receive SerDes equalizer parameters. For example, IEEE 802.3 Clause 72 link training can be performed to set transmit and/or receiver equalizer settings. In some cases, link training duration can be extended in accordance with examples described herein. Thereafter, data or other content can be transmitted across one or more lanes or a link. At 454, capability to re-train a link can be negotiated. For example, a transmitter can advertise any tap capable of adjustment by link re-training to the link partner in an initial set-up message. The set-up message identifies the local Port_id that will be the subject of tap change requests by the link-partner.
At 456, a request to modify transmitter component settings can be received at the first device. The request can include a port identifier that will be the subject of tap change requests by the link partner, a lane identifier, the subject tap(s), or an increment/decrement/hold tap setting. At 458, a response is provided to the second device to indicate that the specified tap settings have been applied. The second device can measure signal characteristics such as BER, eye size, and other errors. In this example, the second device determines that the settings are acceptable. At 460, the second device indicates that the settings are to be held. The tap settings can be applied and stored for current and future use. Note that both the first and second devices can perform independent transmitter adjustments. In some examples, receiver equalizer settings can be set by a link partner in a similar manner as that used to specify transmitter tap settings. FIG. 5A depicts an example of an equalizer. The transmitter equalizer 500 includes a pre-cursor tap c(−1), a cursor tap c(0), and a post-cursor tap c(1). In an illustrative embodiment, the filter tap settings identify one of four possible tap values for the pre-cursor tap c(−1) using two bits and one of six possible tap values for the post-cursor tap c(1). The cursor tap c(0) coefficient may be calculated based on the values of the other two tap coefficients, c(−1) and c(1), such that the equation c(0)−c(1)−c(−1)=1 is satisfied. The filter tap settings may be modified by incrementing or decrementing the filter tap settings. FIG. 5B depicts a functional model of a structure of a four-tap feed-forward equalizer (FFE) 550 in a transmitter. An FFE 550 is implemented in each communication lane interface of a chip-to-chip or chip-to-module interface. The FFE 550 includes a pre-cursor tap c(−2), a pre-cursor tap c(−1), a cursor tap c(0), and a post-cursor tap c(1). The filter tap settings may be modified by incrementing or decrementing the filter tap settings.
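The constraint c(0)−c(1)−c(−1)=1 lets the cursor coefficient be derived from the other two taps, which can be expressed directly (an illustrative helper; the function name is hypothetical):

```python
def main_cursor(c_pre, c_post):
    """Derive the cursor tap c(0) from the pre-cursor c(-1) and the
    post-cursor c(1) coefficients so that c(0) - c(1) - c(-1) = 1 is
    satisfied. Pre- and post-cursor coefficients are typically zero
    or negative, so c(0) is at most 1."""
    return 1 + c_pre + c_post
```

For example, with c(−1) = −0.1 and c(1) = −0.2, the cursor coefficient is c(0) = 0.7.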
A coefficient of any tap may be modified independent of a coefficient of another tap or taps. FIG. 6A depicts an example process to potentially extend a time allocated for link training. The process can be performed by a transceiver in a first device with a wired or wireless connection to a transceiver in a second device. For example, the first device can be a network interface, host device, electrical or optical module, or any device. For example, the second device can be a network interface, host device, electrical or optical module, or any device. The connection can be a copper cable, optical cable, backplane, any type of Ethernet cable, or any wired or wireless signal propagation media. A transceiver can include a transmitter and a receiver. Signals propagated through the connection can be compatible with Ethernet, FibreChannel, InfiniBand, or Serial Attached Small Computer System Interface (SAS). At 602, an exchange occurs between the transceiver of the first device and the transceiver of the second device to determine common capabilities with a link partner. For example, IEEE 802.3 AN can be performed to determine at least link speed, FEC capabilities, and other capabilities. At 602, an operating mode for the connection can be selected. The operating mode can be determined as the least of the capabilities advertised by both transceivers. The operating mode comprises one or more of speed, forward error correction (FEC), pause capability, and/or other capabilities. At 604, the transceivers can engage in an exchange phase to advertise other capabilities. For example, capabilities can be advertised using the Next Page Exchange phase of Auto-Negotiation. Other capabilities can include whether extension of link training time is supported or not supported, the amount of extension, and so forth.
At 606, both transceivers can determine an amount to extend link training time based on the capability of the transceiver and the advertised capability of the link partner transceiver to extend link training time, if any. For example, if one transceiver supports extending link training but the other does not, then the default link training time is used. If one transceiver supports a higher extended link training time than another, the highest common denominator extended link training time is used by both transceivers. At 608, the transceivers engage in a link-training phase for the link training time plus the amount to extend link training time. The link-training phase can include requesting changes in the peer transmit equalization settings, monitoring the effect on link quality at the receiver, and adjusting equalizer settings to optimize one or more of errors, eye size, and so forth. FIG. 6B depicts an example process to perform link re-training. At 650, a capability to perform link re-training is communicated to a link partner. For example, a capability to perform link re-training can be in the format of a setup message described herein. For example, 650 can include 652, where a capability to perform link re-training can include an indication to set up a re-training, a port identifier, and identification of taps that can be configured during link re-training. At 654, a request to reconfigure components that are available for configuration during link re-training can be received. For example, a request can include a port identifier that will be the subject of tap change requests by the link partner, a lane identifier, the subject tap(s), or an increment/decrement/hold tap setting. At 656, a determination can be made whether a response to a configuration request was received. For example, a response can include an indication of whether the re-configuration was performed or not. Other examples of response messages are provided herein.
For example, the response can include one or more of: a port identifier, lane identifier, subject tap, and updated/not-updated/min reached/max reached. If the response was received, the process continues to 658. If the response was not received, then 656 can repeat. At 658, a determination can be made as to whether the configuration is acceptable. For example, if a bit error rate (BER), eye size, signal-to-noise ratio (SNR), or any other desired characteristics are achieved, the configuration can be determined to be acceptable. After a configuration is determined to be acceptable, data transmission can proceed or resume. Otherwise, the receiver can indicate other configurations at 654 and the process continues. FIG. 7A depicts an example of states that can be used by a state machine for modifying a tap value so that both ends of the link agree on the state of an update. For example, a SerDes driver or microcontroller associated with a SerDes can implement a state machine. A Logical_lane_id can support independent state-machines for updates to each supported Tap. A request for a tap setting change can be tracked by a state-machine. In some examples, status values in the response message(s) can update the state. The state-machine can include three states, in some examples:

ST_IDLE, a state from which a change can be initiated. Entered at start-up or after exiting the ST_UPDATED state upon completion of a tap change.
ST_UPDATE, entered upon issuing a requestTlv for the tap. In this state the requestTlv is changed to indicate UPDATE (+/−1). Exited upon receiving a statusTlv indicating UPDATED (or LIMIT).
ST_UPDATED, entered upon receiving a statusTlv with the status UPDATED (or LIMIT). In this state the requestTlv is changed to indicate HOLD. Exited upon receiving a statusTlv indicating NOT_UPDATED.

A Request message may be issued periodically to handle the case where a message is lost.
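The three requester-side states above can be sketched as a small state machine; this is an illustrative sketch only (class and method names are hypothetical), with MIN and MAX limit statuses treated the same as UPDATED, as the text describes:

```python
ST_IDLE, ST_UPDATE, ST_UPDATED = "ST_IDLE", "ST_UPDATE", "ST_UPDATED"

class TapRequestTracker:
    """Requester-side state machine for one tap on one logical lane."""

    def __init__(self):
        self.state = ST_IDLE  # entered at start-up

    def issue_update(self):
        # Issue a requestTlv indicating UPDATE (+/-1) for the tap.
        if self.state == ST_IDLE:
            self.state = ST_UPDATE

    def on_status(self, status):
        # Drive transitions from received statusTlv values.
        if self.state == ST_UPDATE and status in ("UPDATED", "MIN", "MAX"):
            self.state = ST_UPDATED  # requestTlv now changed to HOLD
        elif self.state == ST_UPDATED and status == "NOT_UPDATED":
            self.state = ST_IDLE     # tap change complete
```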
The initial state can be HOLD, where there is "no request to change." If a request is made to change a Tx Tap setting and the status of the previous Request is NOT UPDATED, then the request can be issued and the state of the Tap Request changed to Update INCREMENT/DECREMENT, depending on the Request. This state can be maintained until the status of the Tap (from a Response message) becomes one of UPDATED, MIN, or MAX. UPDATED can indicate the Tap change request was effected or performed. MIN can indicate a decrement request was made and the Tap is now at its minimum value, or the Tap was already at its minimum value. Likewise, MAX can indicate an increment request was made and the Tap is now at its maximum value, or the Tap was already at its maximum value. FIG. 7B depicts an example state machine that can be used to track states of update requests. For example, a SerDes driver or microcontroller associated with a SerDes can enact a state machine. In some examples, each Logical_lane_id can support independent state-machines for updates to each supported Tap. An initial state of a Tap can be NOT UPDATED, whereby no change is in-progress. When a Request (INCREMENT (INC)/DECREMENT (DEC)) is received, the state can move to UPDATE TX TAP while the change is being effected in hardware. After the change is complete, the state can change to one of three states depending on the result of the change. If the Tap was successfully changed, the state changes to UPDATED. If the Tap was incremented and the Tap is now at its maximum value, then the state can change to MAX. Likewise, if the Tap was decremented and the Tap is now at its minimum value, the state changes to MIN. This state can be maintained until a HOLD Request is received, and after the HOLD Request is received, the state can revert to NOT UPDATED.
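The responder-side behavior above can be sketched as follows; this is an illustrative sketch only (the class name is hypothetical, and the −3..+3 range is assumed from the signed tap values used earlier in the text; real hardware limits may differ):

```python
class TapState:
    """Responder-side model of one Tx tap and its reported status."""

    def __init__(self, value=0, lo=-3, hi=3):
        self.value, self.lo, self.hi = value, lo, hi
        self.status = "NOT_UPDATED"  # initial state: no change in progress

    def on_request(self, request):
        """Apply an INC/DEC/HOLD request and return the resulting status.

        MAX is reported when an increment leaves the tap at (or finds it
        already at) its maximum; MIN likewise for decrements; HOLD reverts
        the status to NOT_UPDATED, per the state machine described above.
        """
        if request == "HOLD":
            self.status = "NOT_UPDATED"
        elif request == "INC":
            if self.value < self.hi:
                self.value += 1
            self.status = "MAX" if self.value == self.hi else "UPDATED"
        elif request == "DEC":
            if self.value > self.lo:
                self.value -= 1
            self.status = "MIN" if self.value == self.lo else "UPDATED"
        return self.status
```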
Since it is possible that changes requested using this protocol could cause the link itself to go down, each side of a link can revert its Tx Taps to their original settings in the event the link goes down. Changing transmitter side parameters can cause a link to drop, and in such case, initial settings can be used before adjustment. FIG. 8 depicts a network interface that can use embodiments or be used by embodiments. Various resources in the network interface can perform link establishment, link training or link re-training in accordance with embodiments described herein. In some examples, network interface 800 includes a network interface, network interface controller or a network interface card. In some examples, network interface 800 can be part of a switch or a system-on-chip (SoC) with devices such as a processor or memory. Network interface 800 can include transceiver 802, processors 804, transmit queue 806, receive queue 808, memory 810, bus interface 812, and DMA engine 852. Transceiver 802 can be capable of receiving and transmitting packets in conformance with the applicable protocols such as Ethernet as described in IEEE 802.3, although other protocols may be used. Transceiver 802 can receive and transmit packets from and to a network via a network medium (not depicted). Transceiver 802 can include PHY circuitry 814 and media access control (MAC) circuitry 816. PHY circuitry 814 can include encoding and decoding circuitry (not shown) to encode and decode data packets according to applicable physical layer specifications or standards. MAC circuitry 816 can be configured to perform MAC address filtering on received packets, process MAC headers of received packets by verifying data integrity, remove preambles and padding, and provide packet content for processing by higher layers. MAC circuitry 816 can be configured to assemble data to be transmitted into packets that include destination and source addresses along with network control information and error detection hash values.
Processors 804 can be any combination of a processor, core, graphics processing unit (GPU), field programmable gate array (FPGA), application specific integrated circuit (ASIC), or other programmable hardware device that allows programming of network interface 800. For example, processors 804 can provide for identification of a resource to use to perform a workload and generation of a bitstream for execution on the selected resource. For example, a "smart network interface" can provide packet processing capabilities in the network interface using processors 804. Packet allocator 824 can provide distribution of received packets for processing by multiple CPUs or cores using timeslot allocation described herein or RSS. When packet allocator 824 uses RSS, packet allocator 824 can calculate a hash or make another determination based on contents of a received packet to determine which CPU or core is to process a packet. Interrupt coalesce 822 can perform interrupt moderation whereby network interface interrupt coalesce 822 waits for multiple packets to arrive, or for a time-out to expire, before generating an interrupt to the host system to process received packet(s). Receive Segment Coalescing (RSC) can be performed by network interface 800 whereby portions of incoming packets are combined into segments of a packet. Network interface 800 provides this coalesced packet to an application. Direct memory access (DMA) engine 852 can copy a packet header, packet payload, and/or descriptor directly from host memory to the network interface or vice versa, instead of copying the packet to an intermediate buffer at the host and then using another copy operation from the intermediate buffer to the destination buffer. Memory 810 can be any type of volatile or non-volatile memory device and can store any queue or instructions used to program network interface 800. Transmit queue 806 can include data or references to data for transmission by the network interface.
Receive queue 808 can include data or references to data that was received by the network interface from a network. Descriptor queues 820 can include descriptors that reference data or packets in transmit queue 806 or receive queue 808. Bus interface 812 can provide an interface with a host device (not depicted). For example, bus interface 812 can be compatible with PCI, PCI Express, PCI-x, Serial ATA, and/or a USB compatible interface (although other interconnection standards may be used). In some examples, the network interface and other embodiments described herein can be used in connection with a base station (e.g., 3G, 4G, 5G and so forth), macro base station (e.g., 5G networks), picostation (e.g., an IEEE 802.11 compatible access point), nanostation (e.g., for Point-to-MultiPoint (PtMP) applications), on-premises data centers, off-premises data centers, edge network elements, fog network elements, and/or hybrid data centers (e.g., data centers that use virtualization, cloud and software-defined networking to deliver application workloads across physical data centers and distributed multi-cloud environments). FIG. 9 depicts an example switch. Various embodiments can be used in or with the switch to perform link establishment, link training or link re-training in accordance with embodiments described herein. Switch 904 can route packets or frames of any format or in accordance with any specification from any port 902-0 to 902-X to any of ports 906-0 to 906-Y (or vice versa). Any of ports 902-0 to 902-X can be connected to a network of one or more interconnected devices. Similarly, any of ports 906-0 to 906-Y can be connected to a network of one or more interconnected devices. Switch 904 can decide which port to transfer packets or frames to using a table that maps packet characteristics with an associated output port. For example, match-action tables can be used whereby a hash of a portion of a packet is used as an index to find an entry.
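The table-based forwarding decision above can be sketched as a minimal exact-match lookup. This is a simplification for illustration only: a real switch would hash a portion of the packet into a match-action table, and the key fields chosen here (destination MAC and VLAN) are hypothetical examples of packet characteristics:

```python
def build_forwarding_table(entries):
    """entries: iterable of ((dst_mac, vlan), out_port) pairs mapping
    packet characteristics to an associated output port."""
    return dict(entries)

def lookup_output_port(table, dst_mac, vlan, default_port=None):
    """Exact-match lookup; unknown flows fall back to a default port
    (e.g., a flood or CPU port in a real switch)."""
    return table.get((dst_mac, vlan), default_port)
```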
In addition, switch 904 can perform packet replication for forwarding of a packet or frame to multiple ports and queuing of packets or frames prior to transfer to an output port. Some embodiments implement hash-lookup in the P4 programming language, which is a programming language designed to allow programming of packet forwarding in data-planes. In contrast to general purpose languages such as C or Python, P4 is a domain-specific language with a number of constructs optimized around network data forwarding. FIG. 10 depicts a system. The system can use embodiments described herein to perform link establishment, link training or link re-training in accordance with embodiments described herein. System 1000 includes processor 1010, which provides processing, operation management, and execution of instructions for system 1000. Processor 1010 can include any type of microprocessor, central processing unit (CPU), graphics processing unit (GPU), processing core, or other processing hardware to provide processing for system 1000, or a combination of processors. Processor 1010 controls the overall operation of system 1000, and can be or include one or more programmable general-purpose or special-purpose microprocessors, digital signal processors (DSPs), programmable controllers, application specific integrated circuits (ASICs), programmable logic devices (PLDs), or the like, or a combination of such devices. In one example, system 1000 includes interface 1012 coupled to processor 1010, which can represent a higher speed interface or a high throughput interface for system components that need higher bandwidth connections, such as memory subsystem 1020 or graphics interface components 1040, or accelerators 1042. Interface 1012 represents an interface circuit, which can be a standalone component or integrated onto a processor die. Where present, graphics interface 1040 interfaces to graphics components for providing a visual display to a user of system 1000.
In one example, graphics interface 1040 can drive a high definition (HD) display that provides an output to a user. High definition can refer to a display having a pixel density of approximately 100 PPI (pixels per inch) or greater and can include formats such as full HD (e.g., 1080p), retina displays, 4K (ultra-high definition or UHD), or others. In one example, the display can include a touchscreen display. In one example, graphics interface 1040 generates a display based on data stored in memory 1030 or based on operations executed by processor 1010 or both. Accelerators 1042 can be a fixed function offload engine that can be accessed or used by a processor 1010. Accelerators 1042 can be coupled to processor 1010 using a memory interface (e.g., DDR4 and DDR5) or using any networking or connection standard described herein. For example, an accelerator among accelerators 1042 can provide sequential and speculative decoding operations in a manner described herein, compression (DC) capability, cryptography services such as public key encryption (PKE), cipher, hash/authentication capabilities, decryption, or other capabilities or services. In some embodiments, in addition or alternatively, an accelerator among accelerators 1042 provides field select controller capabilities as described herein. In some cases, accelerators 1042 can be integrated into a CPU socket (e.g., a connector to a motherboard or circuit board that includes a CPU and provides an electrical interface with the CPU).
For example, accelerators1042can include a single or multi-core processor, graphics processing unit, logical execution unit, single or multi-level cache, functional units usable to independently execute programs or threads, application specific integrated circuits (ASICs), neural network processors (NNPs), programmable control logic, and programmable processing elements such as field programmable gate arrays (FPGAs). Accelerators1042can provide multiple neural networks, CPUs, processor cores, general purpose graphics processing units, or graphics processing units that can be made available for use by artificial intelligence (AI) or machine learning (ML) models. For example, the AI model can use or include any or a combination of: a reinforcement learning scheme, Q-learning scheme, deep-Q learning, or Asynchronous Advantage Actor-Critic (A3C), combinatorial neural network, recurrent combinatorial neural network, or other AI or ML model. Multiple neural networks, processor cores, or graphics processing units can be made available for use by AI or ML models. Memory subsystem1020represents the main memory of system1000and provides storage for code to be executed by processor1010, or data values to be used in executing a routine. Memory subsystem1020can include one or more memory devices1030such as read-only memory (ROM), flash memory, one or more varieties of random access memory (RAM) such as DRAM, or other memory devices, or a combination of such devices. Memory1030stores and hosts, among other things, operating system (OS)1032to provide a software platform for execution of instructions in system1000. Additionally, applications1034can execute on the software platform of OS1032from memory1030. Applications1034represent programs that have their own operational logic to perform execution of one or more functions. Processes1036represent agents or routines that provide auxiliary functions to OS1032or one or more applications1034or a combination. 
OS1032, applications1034, and processes1036provide software logic to provide functions for system1000. In one example, memory subsystem1020includes memory controller1022, which is a memory controller to generate and issue commands to memory1030. It will be understood that memory controller1022could be a physical part of processor1010or a physical part of interface1012. For example, memory controller1022can be an integrated memory controller, integrated onto a circuit with processor1010. In some examples, processor1010can execute a device driver (not depicted) for network interface1050. OS1032can determine capabilities of network interface1050from the device driver. For example, OS1032can receive an indication of capabilities of network interface1050to perform one or more of the following capabilities or capabilities described herein: link training time extension, commencing link training earlier than scheduled, changing or setting a default link training time, link re-training, or component parameter modification. OS1032can request the device driver to enable or disable network interface1050to perform any of the capabilities described herein. In some examples, OS1032, itself, can enable or disable network interface1050to perform any of the capabilities described herein. OS1032can provide requests (e.g., from an application1034) to network interface1050to utilize one or more capabilities of network interface1050. For example, any of applications1034can request use or non-use of any capabilities described herein by network interface1050. In some examples, a datacenter administrator can configure network interface1050to perform any of the capabilities described herein. While not specifically illustrated, it will be understood that system1000can include one or more buses or bus systems between devices, such as a memory bus, a graphics bus, interface buses, or others. 
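As a loose illustration of the capability flow described above, in which the OS learns a network interface's link-training capabilities from the device driver and then enables or disables them, the following Python sketch may help. Every name in it (Capability, NetworkInterfaceDriver, enable) is a hypothetical stand-in for illustration only, not an actual driver interface.

```python
# Hypothetical sketch of the OS/driver capability flow: the OS queries the
# driver for supported link-training capabilities, then enables a subset.
from enum import Flag, auto

class Capability(Flag):
    # Capability names mirror the list in the text; the enum itself is assumed.
    LINK_TRAINING_TIME_EXTENSION = auto()
    EARLY_LINK_TRAINING = auto()
    DEFAULT_TRAINING_TIME_CHANGE = auto()
    LINK_RETRAINING = auto()
    COMPONENT_PARAM_MODIFICATION = auto()

class NetworkInterfaceDriver:
    """Stands in for a device driver that reports and gates NIC capabilities."""
    def __init__(self, supported: Capability):
        self.supported = supported
        self.enabled = Capability(0)

    def enable(self, cap: Capability) -> bool:
        # Only capabilities the hardware advertises can be enabled.
        if cap & self.supported == cap:
            self.enabled |= cap
            return True
        return False

# The OS discovers capabilities and enables re-training on request.
driver = NetworkInterfaceDriver(
    Capability.LINK_RETRAINING | Capability.COMPONENT_PARAM_MODIFICATION)
assert driver.enable(Capability.LINK_RETRAINING)
assert not driver.enable(Capability.EARLY_LINK_TRAINING)  # unsupported
```

An application-level request to use a capability would, per the text, pass through OS1032to the driver in a similar manner.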
Buses or other signal lines can communicatively or electrically couple components together, or both communicatively and electrically couple the components. Buses can include physical communication lines, point-to-point connections, bridges, adapters, controllers, or other circuitry or a combination. Buses can include, for example, one or more of a system bus, a Peripheral Component Interconnect (PCI) bus, a Hyper Transport or industry standard architecture (ISA) bus, a small computer system interface (SCSI) bus, a universal serial bus (USB), or an Institute of Electrical and Electronics Engineers (IEEE) standard 1394 bus (Firewire). In one example, system1000includes interface1014, which can be coupled to interface1012. In one example, interface1014represents an interface circuit, which can include standalone components and integrated circuitry. In one example, multiple user interface components or peripheral components, or both, couple to interface1014. Network interface1050provides system1000the ability to communicate with remote devices (e.g., servers or other computing devices) over one or more networks. Network interface1050can include an Ethernet adapter, wireless interconnection components, cellular network interconnection components, USB (universal serial bus), or other wired or wireless standards-based or proprietary interfaces. Network interface1050can transmit data to a device that is in the same data center or rack or a remote device, which can include sending data stored in memory. Network interface1050can receive data from a remote device, which can include storing received data into memory. Various embodiments can be used in connection with network interface1050, processor1010, and memory subsystem1020. In one example, system1000includes one or more input/output (I/O) interface(s)1060. 
I/O interface1060can include one or more interface components through which a user interacts with system1000(e.g., audio, alphanumeric, tactile/touch, or other interfacing). Peripheral interface1070can include any hardware interface not specifically mentioned above. Peripherals refer generally to devices that connect dependently to system1000. A dependent connection is one where system1000provides the software platform or hardware platform or both on which operation executes, and with which a user interacts. In one example, system1000includes storage subsystem1080to store data in a nonvolatile manner. In one example, in certain system implementations, at least certain components of storage1080can overlap with components of memory subsystem1020. Storage subsystem1080includes storage device(s)1084, which can be or include any conventional medium for storing large amounts of data in a nonvolatile manner, such as one or more magnetic, solid state, or optical based disks, or a combination. Storage1084holds code or instructions and data1086in a persistent state (e.g., the value is retained despite interruption of power to system1000). Storage1084can be generically considered to be a “memory,” although memory1030is typically the executing or operating memory to provide instructions to processor1010. Whereas storage1084is nonvolatile, memory1030can include volatile memory (e.g., the value or state of the data is indeterminate if power is interrupted to system1000). In one example, storage subsystem1080includes controller1082to interface with storage1084. In one example controller1082is a physical part of interface1014or processor1010or can include circuits or logic in both processor1010and interface1014. A volatile memory is memory whose state (and therefore the data stored in it) is indeterminate if power is interrupted to the device. Dynamic volatile memory can involve refreshing the data stored in the device to maintain state. 
One example of dynamic volatile memory includes DRAM (Dynamic Random Access Memory), or some variant such as Synchronous DRAM (SDRAM). An example of a volatile memory includes a cache. A memory subsystem as described herein may be compatible with a number of memory technologies, such as DDR3 (Double Data Rate version 3, original release by JEDEC (Joint Electronic Device Engineering Council) on Jun. 27, 2007), DDR4 (DDR version 4, initial specification published in September 2012 by JEDEC), DDR4E (DDR version 4), LPDDR3 (Low Power DDR version 3, JESD209-3B, August 2013 by JEDEC), LPDDR4 (LPDDR version 4, JESD209-4, originally published by JEDEC in August 2014), WIO2 (Wide Input/Output version 2, JESD229-2, originally published by JEDEC in August 2014), HBM (High Bandwidth Memory, JESD325, originally published by JEDEC in October 2013), LPDDR5 (currently in discussion by JEDEC), HBM2 (HBM version 2, currently in discussion by JEDEC), or others or combinations of memory technologies, and technologies based on derivatives or extensions of such specifications. A non-volatile memory (NVM) device is a memory whose state is determinate even if power is interrupted to the device. In one embodiment, the NVM device can comprise a block addressable memory device, such as NAND technologies, or more specifically, multi-threshold level NAND flash memory (for example, Single-Level Cell (“SLC”), Multi-Level Cell (“MLC”), Quad-Level Cell (“QLC”), Tri-Level Cell (“TLC”), or some other NAND). 
A NVM device can also comprise a byte-addressable write-in-place three dimensional cross point memory device, or other byte addressable write-in-place NVM device (also referred to as persistent memory), such as single or multi-level Phase Change Memory (PCM) or phase change memory with a switch (PCMS), NVM devices that use chalcogenide phase change material (for example, chalcogenide glass), resistive memory including metal oxide base, oxygen vacancy base and Conductive Bridge Random Access Memory (CB-RAM), nanowire memory, ferroelectric random access memory (FeRAM, FRAM), magneto resistive random access memory (MRAM) that incorporates memristor technology, spin transfer torque (STT)-MRAM, a spintronic magnetic junction memory based device, a magnetic tunneling junction (MTJ) based device, a DW (Domain Wall) and SOT (Spin Orbit Transfer) based device, a thyristor based memory device, or a combination of any of the above, or other memory. A power source (not depicted) provides power to the components of system1000. More specifically, the power source typically interfaces to one or multiple power supplies in system1000to provide power to the components of system1000. In one example, the power supply includes an AC to DC (alternating current to direct current) adapter to plug into a wall outlet. Such AC power can come from a renewable energy (e.g., solar power) source. In one example, power source includes a DC power source, such as an external AC to DC converter. In one example, power source or power supply includes wireless charging hardware to charge via proximity to a charging field. In one example, power source can include an internal battery, alternating current supply, motion-based power supply, solar power supply, or fuel cell source. In an example, system1000can be implemented using interconnected compute sleds of processors, memories, storages, network interfaces, and other components. 
High speed interconnects between components can be used such as: Ethernet (IEEE 802.3), remote direct memory access (RDMA), InfiniBand, Internet Wide Area RDMA Protocol (iWARP), quick UDP Internet Connections (QUIC), RDMA over Converged Ethernet (RoCE), Peripheral Component Interconnect express (PCIe), Intel QuickPath Interconnect (QPI), Intel Ultra Path Interconnect (UPI), Intel On-Chip System Fabric (IOSF), Omnipath, Compute Express Link (CXL), HyperTransport, high-speed fabric, NVLink, Advanced Microcontroller Bus Architecture (AMBA) interconnect, OpenCAPI, Gen-Z, Cache Coherent Interconnect for Accelerators (CCIX), 3GPP Long Term Evolution (LTE) (4G), 3GPP 5G, and variations thereof. Data can be copied or stored to virtualized storage nodes using a protocol such as NVMe over Fabrics (NVMe-oF) or NVMe. Embodiments herein may be implemented in various types of computing and networking equipment, such as switches, routers, racks, and blade servers such as those employed in a data center and/or server farm environment. The servers used in data centers and server farms comprise arrayed server configurations such as rack-based servers or blade servers. These servers are interconnected in communication via various network provisions, such as partitioning sets of servers into Local Area Networks (LANs) with appropriate switching and routing facilities between the LANs to form a private Intranet. For example, cloud hosting facilities may typically employ large data centers with a multitude of servers. A blade comprises a separate computing platform that is configured to perform server-type functions, that is, a “server on a card.” Accordingly, a blade includes components common to conventional servers, including a main printed circuit board (main board) providing internal wiring (e.g., buses) for coupling appropriate integrated circuits (ICs) and other components mounted to the board. 
FIG.11depicts an environment1100that includes multiple computing racks1102, some including a Top of Rack (ToR) switch1104, a pod manager1106, and a plurality of pooled system drawers. Various embodiments can be used in or with the switch to perform link establishment, link training or link re-training in accordance with embodiments described herein. Generally, the pooled system drawers may include pooled compute drawers and pooled storage drawers. Optionally, the pooled system drawers may also include pooled memory drawers and pooled Input/Output (I/O) drawers. In the illustrated embodiment the pooled system drawers include an Intel® XEON® pooled compute drawer1108, an Intel® ATOM™ pooled compute drawer1110, a pooled storage drawer1112, a pooled memory drawer1114, and a pooled I/O drawer1116. Some of the pooled system drawers are connected to ToR switch1104via a high-speed link1118, such as a 40 Gigabit/second (Gb/s) or 100 Gb/s Ethernet link or a 100+Gb/s Silicon Photonics (SiPh) optical link. In one embodiment, high-speed link1118comprises an 800 Gb/s SiPh optical link. Multiple of the computing racks1102may be interconnected via their ToR switches1104(e.g., to a pod-level switch or data center switch), as illustrated by connections to a network1120. In some embodiments, groups of computing racks1102are managed as separate pods via pod manager(s)1106. In one embodiment, a single pod manager is used to manage racks in the pod. Alternatively, distributed pod managers may be used for pod management operations. Environment1100further includes a management interface1122that is used to manage various aspects of the environment. This includes managing rack configuration, with corresponding parameters stored as rack configuration data1124. Various examples may be implemented using hardware elements, software elements, or a combination of both. 
In some examples, hardware elements may include devices, components, processors, microprocessors, circuits, circuit elements (e.g., transistors, resistors, capacitors, inductors, and so forth), integrated circuits, ASICs, PLDs, DSPs, FPGAs, memory units, logic gates, registers, semiconductor device, chips, microchips, chip sets, and so forth. In some examples, software elements may include software components, programs, applications, computer programs, application programs, system programs, machine programs, operating system software, middleware, firmware, software modules, routines, subroutines, functions, methods, procedures, software interfaces, APIs, instruction sets, computing code, computer code, code segments, computer code segments, words, values, symbols, or any combination thereof. Determining whether an example is implemented using hardware elements and/or software elements may vary in accordance with any number of factors, such as desired computational rate, power levels, heat tolerances, processing cycle budget, input data rates, output data rates, memory resources, data bus speeds and other design or performance constraints, as desired for a given implementation. It is noted that hardware, firmware and/or software elements may be collectively or individually referred to herein as “module,” or “logic.” A processor can be one or more combination of a hardware state machine, digital control logic, central processing unit, or any hardware, firmware and/or software elements. Some examples may be implemented using or as an article of manufacture or at least one computer-readable medium. A computer-readable medium may include a non-transitory storage medium to store logic. 
In some examples, the non-transitory storage medium may include one or more types of computer-readable storage media capable of storing electronic data, including volatile memory or non-volatile memory, removable or non-removable memory, erasable or non-erasable memory, writeable or re-writeable memory, and so forth. In some examples, the logic may include various software elements, such as software components, programs, applications, computer programs, application programs, system programs, machine programs, operating system software, middleware, firmware, software modules, routines, subroutines, functions, methods, procedures, software interfaces, API, instruction sets, computing code, computer code, code segments, computer code segments, words, values, symbols, or any combination thereof. According to some examples, a computer-readable medium may include a non-transitory storage medium to store or maintain instructions that when executed by a machine, computing device or system, cause the machine, computing device or system to perform methods and/or operations in accordance with the described examples. The instructions may include any suitable type of code, such as source code, compiled code, interpreted code, executable code, static code, dynamic code, and the like. The instructions may be implemented according to a predefined computer language, manner or syntax, for instructing a machine, computing device or system to perform a certain function. The instructions may be implemented using any suitable high-level, low-level, object-oriented, visual, compiled and/or interpreted programming language. One or more aspects of at least one example may be implemented by representative instructions stored on at least one machine-readable medium which represents various logic within the processor, which when read by a machine, computing device or system causes the machine, computing device or system to fabricate logic to perform the techniques described herein. 
Such representations, known as “IP cores” may be stored on a tangible, machine readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processor. The appearances of the phrase “one example” or “an example” are not necessarily all referring to the same example or embodiment. Any aspect described herein can be combined with any other aspect or similar aspect described herein, regardless of whether the aspects are described with respect to the same figure or element. Division, omission or inclusion of block functions depicted in the accompanying figures does not imply that the hardware components, circuits, software and/or elements for implementing these functions would necessarily be divided, omitted, or included in embodiments. Some examples may be described using the expression “coupled” and “connected” along with their derivatives. These terms are not necessarily intended as synonyms for each other. For example, descriptions using the terms “connected” and/or “coupled” may indicate that two or more elements are in direct physical or electrical contact with each other. The term “coupled,” however, may also mean that two or more elements are not in direct contact with each other, but yet still co-operate or interact with each other. The terms “first,” “second,” and the like, herein do not denote any order, quantity, or importance, but rather are used to distinguish one element from another. The terms “a” and “an” herein do not denote a limitation of quantity, but rather denote the presence of at least one of the referenced items. The term “asserted” used herein with reference to a signal denotes a state of the signal in which the signal is active, and which can be achieved by applying any logic level, either logic 0 or logic 1, to the signal. The terms “follow” or “after” can refer to immediately following or following after some other event or events. 
Other sequences of operations may also be performed according to alternative embodiments. Furthermore, additional operations may be added or removed depending on the particular applications. Any combination of changes can be used and one of ordinary skill in the art with the benefit of this disclosure would understand the many variations, modifications, and alternative embodiments thereof. Disjunctive language such as the phrase “at least one of X, Y, or Z,” unless specifically stated otherwise, is otherwise understood within the context as used in general to present that an item, term, etc., may be either X, Y, or Z, or any combination thereof (e.g., X, Y, and/or Z). Thus, such disjunctive language is not generally intended to, and should not, imply that certain embodiments require at least one of X, at least one of Y, or at least one of Z to each be present. Additionally, conjunctive language such as the phrase “at least one of X, Y, and Z,” unless specifically stated otherwise, should also be understood to mean X, Y, Z, or any combination thereof, including “X, Y, and/or Z.” Illustrative examples of the devices, systems, and methods disclosed herein are provided below. An embodiment of the devices, systems, and methods may include any one or more, and any combination of, the examples described below. Flow diagrams as illustrated herein provide examples of sequences of various process actions. The flow diagrams can indicate operations to be executed by a software or firmware routine, as well as physical operations. In one embodiment, a flow diagram can illustrate the state of a finite state machine (FSM), which can be implemented in hardware and/or software. Although shown in a particular sequence or order, unless otherwise specified, the order of the actions can be modified. Thus, the illustrated embodiments should be understood only as an example, and the process can be performed in a different order, and some actions can be performed in parallel. 
Additionally, one or more actions can be omitted in various embodiments; thus, not all actions are required in every embodiment. Other process flows are possible. Various components described herein can be a means for performing the operations or functions described. Each component described herein includes software, hardware, or a combination of these. The components can be implemented as software modules, hardware modules, special-purpose hardware (e.g., application specific hardware, application specific integrated circuits (ASICs), digital signal processors (DSPs), etc.), embedded controllers, hardwired circuitry, and so forth.

Example 1 includes an apparatus comprising: a first device associated with a lane of a communications link, wherein: the first device comprises a transmitter and receiver; the first device is to receive a first communication identifying capability to re-train a link; the first device is to transmit a second communication identifying one or more components and the second communication is to cause modification of one or more parameters of the one or more components; and the first device is to receive a third communication identifying a status of re-training.

Example 2 includes any example, wherein the one or more components comprise an equalizer and the one or more parameters comprise at least one tap setting.

Example 3 includes any example, wherein the one or more parameters comprises a precursor, main cursor or post-cursor equalizer setting.

Example 4 includes any example, wherein the first device is to: advertise one or more capabilities; receive one or more capabilities from the second device; and select an operating mode for a lane based on the advertised and received capabilities.

Example 5 includes any example, wherein the one or more capabilities comprise one or more of link speed, forward error correction (FEC) capabilities, or pause capability. 
Example 6 includes any example, wherein the first communication comprises a setup message that identifies at least one tap capable of adjustment by link re-training and includes a local port identifier of a local port to be a subject of tap change requests by the first device.

Example 7 includes any example, wherein the second communication comprises a request message that comprises one or more of: a port identifier, lane identifier, identifier of one or more taps, change tap setting, or hold tap setting.

Example 8 includes any example, wherein the third communication comprises a response message that comprises one or more of: a port identifier, lane identifier, identifier of one or more taps, indication of updated, indication of not updated, indication of minimum reached, or indication of maximum reached.

Example 9 includes any example, wherein the first device is to commence link re-training in response to a change in power, voltage, or temperature or an elapsed time.

Example 10 includes any example, and includes one or more of: a server, rack, or data center and wherein the first device is provided in one or more of: the server, rack, or data center.

Example 11 includes any example, and includes a method to conduct link re-training, the method comprising: receiving, by a receiver in a first device, signals over a lane from a transmitter in a second device, the signals comprising a first communication identifying capability to re-train a link; transmitting, from the first device, a second communication including one or more components of a second device with capability to be adjusted and a request to modify one or more parameters of the one or more components; and receiving, at the first device, a third communication identifying a status of re-training.

Example 12 includes any example, wherein the one or more components comprise an equalizer and the one or more parameters comprise at least one tap setting. 
Example 13 includes any example, wherein the one or more parameters comprise a precursor, main cursor or post-cursor equalization setting.

Example 14 includes any example, wherein the first communication comprises a setup message that identifies at least one tap capable of adjustment by link re-training and includes a local port identifier of a local port to be a subject of tap change requests by the first device.

Example 15 includes any example, wherein the second communication comprises a request message that comprises one or more of: a port identifier, lane identifier, identifier of one or more taps, change tap setting, or hold tap setting.

Example 16 includes any example, wherein the third communication comprises a response message that comprises one or more of: a port identifier, lane identifier, identifier of one or more taps, indication of updated, indication of not updated, indication of minimum reached, or indication of maximum reached.

Example 17 includes any example, and includes: training one lane at a time whereby the first device makes tap requests and adjusts its receiver, followed by a link partner of the first device starting link re-training for its receiver.

Example 18 includes any example, and includes: commencing link re-training in response to a change in power, voltage, or temperature or an elapsed time. 
Example 19 includes any example, and includes a computer-readable medium comprising instructions stored thereon, that if executed by one or more processors, cause the one or more processors to: engage in a link re-training by: receiving, by a receiver in a first device, signals over a lane from a transmitter in a second device, the signals comprising a first communication identifying capability to re-train a link; transmitting, from the first device, a second communication including one or more components with capability to be adjusted and a request to modify one or more parameters of the one or more components; and receiving, at the first device, a third communication identifying a status of re-training.

Example 20 includes any example, wherein the one or more components comprise an equalizer and the one or more parameters comprise at least one tap setting.

Example 21 includes any example, wherein the first communication comprises a setup message that identifies at least one tap capable of adjustment by link re-training and includes a local port identifier of a local port to be a subject of tap change requests by the first device.

Example 22 includes any example, wherein the second communication comprises a request message that comprises one or more of: a port identifier, lane identifier, identifier of one or more taps, change tap setting, or hold tap setting.
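The setup/request/response exchange enumerated above (a setup message advertising adjustable taps, a request message asking the link partner to change or hold a tap, and a response message reporting status) can be modeled loosely as follows. The class and field names are illustrative assumptions derived from the message contents listed in the Examples, not a defined wire format.

```python
# Illustrative model of the three-message re-training exchange.
from dataclasses import dataclass
from typing import List

@dataclass
class SetupMessage:              # first communication
    local_port: int
    adjustable_taps: List[str]   # e.g. precursor, main cursor, post-cursor

@dataclass
class TapRequest:                # second communication
    port: int
    lane: int
    tap: str
    action: str                  # "change" or "hold" tap setting

@dataclass
class TapResponse:               # third communication
    port: int
    lane: int
    tap: str
    status: str                  # "updated", "not_updated", "min_reached", "max_reached"

# One round of the exchange: setup, then a tap-change request, then a response.
setup = SetupMessage(local_port=0,
                     adjustable_taps=["precursor", "main", "postcursor"])
req = TapRequest(port=0, lane=1, tap="main", action="change")
resp = TapResponse(port=0, lane=1, tap="main", status="updated")
assert req.tap in setup.adjustable_taps
```

Per Example 17, such an exchange would be repeated one lane at a time, with the link partner then re-training its own receiver.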
11863358 | DETAILED DESCRIPTION

Interconnection has long been a limiting factor in the design of large digital systems. Whether at the level of modules interconnected by a backplane, or of functional subsystems interconnected within a large printed circuit board, the need for reliable, error free, high-speed digital interconnection has constantly pushed available technology to its limits. The systems and methods described herein provide robust, reliable transfer of data between at least one transmitting device and at least one receiving device, at data rates of at least 50 Gigabits per second per interconnection wire. An example channel model having the frequency- and time-domain characteristics illustrated inFIG.1will be used. It will be obvious to one familiar with the art that such a transport channel is incompatible with conventional communication signaling methods; for example, straightforward NRZ signaling at an example 112 Gigabits/second has a Nyquist frequency of 56 GHz, corresponding to an intractable 46 dB attenuation over the proposed physical transport channel. This proposed data rate also strains integrated circuit data processing capabilities within the attached transmitting and receiving devices. It is therefore presumed that high-speed data handling in these devices will be distributed across multiple parallel processing “phases”. As one example, rather than a single data path handling data at 100 Gigabits per second (i.e. with merely 10 picoseconds between bits), the same data stream may be distributed across sixteen processing phases, each one thus having a more reasonable 160 picoseconds of processing time per bit. However, this added processing time comes at the cost of significantly increased complexity from the additional processing elements. 
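The Nyquist-frequency figure above follows directly from the NRZ bit rate: the Nyquist frequency of an NRZ stream is half the bit rate. A trivially small check:

```python
# Nyquist frequency of NRZ signaling is half the bit rate,
# so 112 Gb/s NRZ implies a 56 GHz Nyquist frequency.
nrz_bit_rate_gbps = 112
nyquist_ghz = nrz_bit_rate_gbps / 2
assert nyquist_ghz == 56
```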
This distribution of processing also can lead to increased latency before a given digital bit result becomes available, limiting the ability to utilize that result in predicting a subsequent bit result, which is the basis of the DFE method. The increasing data transfer rates also lead to physical issues as the wavelength of the propagating signals on the interconnection shrinks. As one example, the propagating signal wavelength at 56 Gigahertz on a printed circuit micro stripline is approximately 4 millimeters, thus periodic anomalies with merely fractional wavelength dimensions (even including the weave of the impregnated fabric comprising the circuit board) may represent a significant disturbance to signal integrity, stressing available equalization and compensation methods.

Encoding Information Using Hadamard Transforms

As taught in [Cronie I], the Hadamard Transform, also known as the Walsh-Hadamard transform, is a square matrix of entries +1 and −1 so arranged that both all rows and all columns are mutually orthogonal. Hadamard matrices are known for all sizes 2^N as well as for selected other sizes. In particular, the description herein utilizes the 4×4 Hadamard matrix as the example encoder. The order 4 Hadamard matrix used in our examples is:

H4 = [ +1  +1  +1  +1
       +1  −1  +1  −1
       +1  +1  −1  −1
       +1  −1  −1  +1 ]    (Eqn. 1)

and encoding of the three informational bits A, B, C may be obtained by multiplying those informational bits times the rows 2, 3, and 4 of the Hadamard matrix H4to obtain four output values, subsequently called “symbol values”. By convention, the results are scaled by an appropriate constant factor so as to bound the symbol values to the range +1 to −1. It may be noted that the first row of H4corresponds to common mode signaling, which is not used herein, with the next three vectors being used to encode bits A, B, and C respectively into outputs W, X, Y, Z, these vectors also being called “modes” or “subchannels” of the Hadamard code. 
As the encoded outputs simultaneously carry information derived from the encoding of A, B, and C, the outputs will be a superposition or summation of modes, i.e. a sum of the sub-channel code vectors of the vector signaling code. One familiar with the art will note that all possible values of A, B, C encoded in this manner result in mode-summed values for W, X, Y, Z which are balanced; that is, summing to the constant value zero. If the mode-summed values for W, X, Y, Z are scaled such that their maximum absolute value is 1 (that is, the signals are in the range +1 to −1 for convenience of description), it will be noted that all achievable values are permutations of the vector (+1, −⅓, −⅓, −⅓) or of the vector (−1, ⅓, ⅓, ⅓). These are called the codewords of the vector signaling code H4. As used herein, this H4 code will subsequently be called Ensemble NRZ code or ENRZ and will be used as a representative example of vector signaling code in subsequent examples, without implying limitation. ENRZ [Hormati I] teaches that ENRZ has optimum Inter Symbol Interference (ISI) characteristics, and [Holden I] and [Ulrich I] teach it is capable of efficient detection. As previously described, ENRZ encodes three binary data bits into a four-symbol codeword for transmission, as one example, over four wires of a transport medium. If ENRZ signaling is used over four wires of the proposed channel, the data transfer rate may be achieved with merely a 75 Gigasymbol/second signaling rate, equivalent to 112 Gbps per wire pair, for the two-pair transport channel. Simulation of a first embodiment combining ENRZ signaling at a 75 Gigasymbol/second rate with the reference channel model indicates that a two tap FFE (transmit Feed-Forward Equalization) may be combined with receiver continuous-time linear equalization (CTLE) and a 12 tap Decision Feedback Equalizer (DFE), with performance as illustrated in the graphs of FIG. 2. 
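As an illustrative sketch only (the function name and the {0,1}→{−1,+1} bit mapping are conventions assumed for this example, not taken from the text), the sub-channel encoding described above may be expressed as follows:

```python
import numpy as np

# Rows 2-4 of the order-4 Hadamard matrix of Eqn. 1; row 1 (the common-mode
# vector) is not used for signaling.
SUBCHANNELS = np.array([
    [+1, -1, +1, -1],   # carries bit A
    [+1, +1, -1, -1],   # carries bit B
    [+1, -1, -1, +1],   # carries bit C
])

def enrz_encode(a, b, c):
    """Encode three bits (0/1) into one four-symbol ENRZ codeword (W, X, Y, Z)."""
    bits = np.array([2 * a - 1, 2 * b - 1, 2 * c - 1])  # map {0,1} -> {-1,+1}
    # Superpose the three sub-channel vectors, scaled so symbols stay in [-1,+1].
    return bits @ SUBCHANNELS / 3.0

for trio in [(0, 0, 0), (1, 0, 1), (1, 1, 1)]:
    w = enrz_encode(*trio)
    print(trio, w, "sum:", w.sum())
```

Running this confirms the properties stated above: every codeword sums to zero, and each is a permutation of (+1, −⅓, −⅓, −⅓) or of (−1, ⅓, ⅓, ⅓).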
The receive eye simulation of FIG. 3 shows a 93 mV vertical eye opening and a 14.5 psec edge-to-edge horizontal eye opening. Duobinary Encoding Duobinary encoding is a solution known in the art in which consecutive bits of a serially transmitted data stream are processed to shape and constrain the resulting transmit data spectrum. It is well known that Inter-Symbol Interference (ISI) such as may be produced by transmission medium perturbations will result in the received amplitude of a signal in one unit interval being perturbed by residual energy from previous unit intervals. As one example, inverted pulse reflections from a perturbation of the transmission medium will cause a received signal to be reduced by the residual influence of previously transmitted signals. Thus, a transmitter informed of this effect might combine a presently transmitted signal value with that of a previous transmission, in an attempt to anticipate or pre-compensate for this inter-symbol interference effect. Thus, use of partial response codes such as duobinary is often described as a particular form of pre-equalization filtering intended to produce constructive ISI, rather than as a literal data encoding means. As described in [Beyene], other partial-response codes are known to have comparable ISI management capabilities. For reference purposes, the characteristic equations defining these encodings or filterings are listed in Table I.

TABLE I
Partial Response System    Characteristic Equation
Duobinary                  x_n + x_n−1
Dicode                     x_n − x_n−1
Modified Duobinary         x_n − x_n−2
Class 2                    x_n + 2x_n−1 + x_n−2

Unless otherwise described, as used herein the duobinary processing performed is assumed to be a summation of the present and immediately previous transmit unit interval signal, each scaled by a factor of 0.5. Optionally, this may be combined with a transmit lowpass filter to further control the transmit spectrum. 
In other embodiments, ISI-controlling encoding is combined in any order with Hadamard encoding, where the ISI-controlling encoding is any of duobinary, modified duobinary, dicode, class 2, or a Hamming filter as subsequently described. In such embodiments, the ISI-controlling encoding may also be described as being performed by a partial response encoder, embodying any of the partial response encodings or filterings above. If the characteristics of the communications channel are extremely well understood, it may be possible to configure the ISI-controlling operation of the transmitter such that no explicit complementary operation is required at the receiver, the effective action of the channel characteristics themselves serving to perform the inverse operation. Other embodiments may explicitly detect, as one example, the ternary signals produced by duobinary encoding of binary data, followed by an explicit duobinary to binary decoding operation. Alternatively, commonly used receiver ISI elimination techniques such as DFE will also efficiently address the effects of such transmitter ISI compensation. As each example receiver in this document already incorporates DFE, no further receiver duobinary (or other partial response code) processing will be shown. A second embodiment incorporating ENRZ encoding at a 75 Gigasymbol/second rate, subsequent duobinary processing of each wire signal, a 2 tap FFE, CTLE, and a 12 tap DFE was simulated using the reference channel model, producing the CTLE gain and spectrum results shown in FIG. 4. The receive eye simulation shown in FIG. 5 shows a 75 mV vertical receive eye opening and a 13.7 psec edge-to-edge horizontal eye opening. These results, although representing considerable improvement over straightforward NRZ data transmission, indicate additional work is needed. Channelization If purely baseband communications solutions are insufficient, might a broadband approach be of benefit? 
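The characteristic equations of Table I may be viewed as short FIR transmit filters. The following sketch (names and the filter-based formulation are conventions of this example; only the duobinary 0.5 scaling is taken from the text) illustrates all four encodings:

```python
import numpy as np

# FIR-filter view of the Table I characteristic equations; the 0.5 scaling
# applied to duobinary follows the text, the other entries are left unscaled.
PARTIAL_RESPONSE_TAPS = {
    "duobinary":          [0.5, 0.5],        # 0.5*(x[n] + x[n-1])
    "dicode":             [1.0, -1.0],       # x[n] - x[n-1]
    "modified_duobinary": [1.0, 0.0, -1.0],  # x[n] - x[n-2]
    "class2":             [1.0, 2.0, 1.0],   # x[n] + 2*x[n-1] + x[n-2]
}

def partial_response(x, kind="duobinary"):
    """Apply one of the Table I partial-response encodings to a symbol stream."""
    return np.convolve(x, PARTIAL_RESPONSE_TAPS[kind])[:len(x)]

x = np.array([+1.0, +1.0, -1.0, +1.0, -1.0, -1.0])
print(partial_response(x, "duobinary"))  # ternary {-1, 0, +1} after start-up
```

Note that the binary ±1 input becomes a ternary stream under duobinary filtering, which is the three-level signal the receiver discussion above refers to.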
Historically, such significant levels of physical transport channel limitation had been seen and addressed before, albeit at far lower data rates, during the efforts to provide high speed digital services over the legacy copper wire infrastructure of the telephony network. For DSL at its desired 3 Megabit data rate, a propagating signal wavelength was several hundred meters, which correlated strongly with the typical spacing of wire stubs, splices, and insulation abrasions seen in the field. Thus, an uncompensated frequency response for a typical copper telephony signal path would exhibit numerous notches and slopes caused by reflective interference among those anomalies, dissipative attenuation from degraded wires and insulation, and intrusive noise from sources such as AM radio transmitters. Ultimately, multichannel frequency domain channelization was used to constrain the effect of those legacy transport issues. One commonly deployed Asymmetric Digital Subscriber Line (ADSL) solution, for example, partitioned the approximate 1 MHz of available transport medium bandwidth into 4.3125 kHz channels. Each channel was then independently tested for attenuation and signal-to-noise ratio, with different data throughput rates assigned to each channel depending on those test results. Thus, a channel frequency coinciding with a frequency response notch or significant external noise source would not be used, while other channels not presenting those issues could be used at full capacity. Unfortunately, the generation and detection of such a high channel count protocol relies on the availability of inexpensive digital signal processing solutions, and such technology has scaled in performance over time by perhaps a factor of ten, versus the approximate factor of 100,000 data rate increase in the present application. 
Thus, although the present channel attenuation issues suggest a broadband approach may be useful, the conventional high-channel-count methods known to the art are incompatible with the anticipated data rate. A new approach specifically designed for high-speed processing will be required. Broadband Duobinary ENRZ A third embodiment combines ENRZ, duobinary, and an approach using two frequency-domain channels to address the issues of the previous proposals. The first frequency channel is at baseband, i.e. comparable to the single channel of the previous embodiment. The second frequency channel is composed of the same ENRZ + duobinary signaling modulating a sinusoidal carrier, chosen to minimize the frequency overlap between spectral components of the baseband and of the carrier channel. In the following example, a carrier frequency of 37.5 GHz will be used, with no limitation implied. Comparable results have been obtained in simulations using a 30 GHz carrier frequency, and lower frequencies may be used with improved channel attenuation characteristics but somewhat higher inter-channel interference, as will be shown in a subsequent example. Both frequency channels run at a signaling rate of 37.5 Gsymbols/second, with three data bits being transported over the four wires of the baseband channel, and a second three data bits being transported over the same four wires using the carrier channel, to produce an aggregate throughput equal to the previous embodiments. With the same data throughput distributed across two channels, the required signaling rate per channel is halved, thus potentially allowing a much wider horizontal eye opening. FIG. 6 illustrates the spectra of the baseband and carrier channels and the corresponding pulse shapes of the two channel signals, as produced by a simulation of this embodiment operating over the reference channel model. 
In this embodiment, data for each of the two channels is separately ENRZ encoded, and then each of the four signaling streams carrying the ENRZ codewords is duobinary encoded by summing the present and immediately previous unit interval's value, each scaled by a factor of 0.5. (Alternatively, the summation of the values may subsequently be scaled by the same factor, or the scaling may be subsumed into later amplification and/or filtering functions.) Each of the two resulting duobinary encoded streams, herein also referred to as sets of baseband-encoded symbols, is pre-emphasized using a two tap FFE, then passed through a Butterworth lowpass filter of order 2 with a cutoff frequency of 9.37 Gigahertz for spectral shaping and inter-channel interference (ICI) reduction. The filtered stream for the carrier channel modulates a sinusoidal carrier at 37.5 GHz, the result of which is linearly combined with the filtered stream for the baseband channel for transmission over the transport channel. As the subchannels of a Hadamard code such as ENRZ are linear, that is, they transparently communicate non-binary as well as binary signals, the order in which duobinary and ENRZ encoding are performed may be reversed. In at least one such alternative embodiment, each of the three data bits is separately duobinary encoded before being presented to the ENRZ encoder, rather than the ENRZ code outputs being duobinary encoded, for each of the baseband and carrier channels. Transmitter FIG. 9 is a block diagram of one embodiment of a Broadband Duobinary ENRZ transmitter. Data at an aggregate rate of 224 Gigabits/second enters MUX 910, which separates it into two independent data streams 915 and 918, each of 112 Gigabits/second, that serve as data inputs to the baseband and carrier channels. The baseband channel data is ENRZ encoded 920, with each three bits of input data producing one codeword of four symbol values. 
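The per-wire signal construction described above can be sketched as follows. This is a simplified illustration: the simulation sample rate, the rectangular sample-and-hold pulse shaping, and the omission of the FFE and order-2 Butterworth shaping filters are all simplifying assumptions of this sketch, not of the embodiment.

```python
import numpy as np

SYMBOL_RATE = 37.5e9          # 37.5 Gsymbols/s per frequency channel (from text)
FC = 37.5e9                   # carrier frequency (from text)
FS = 300e9                    # assumed simulation sample rate (8 samples per UI)
SPS = int(FS / SYMBOL_RATE)   # samples per symbol

def duobinary(s):
    # 0.5*(present + previous) unit-interval values, as described in the text
    return np.convolve(s, [0.5, 0.5])[:len(s)]

def wire_signal(bb_syms, cc_syms):
    """One wire output: duobinary baseband stream plus the duobinary carrier
    stream modulating a 37.5 GHz sinusoid (FFE and Butterworth shaping omitted)."""
    bb = np.repeat(duobinary(bb_syms), SPS)      # baseband channel
    cc = np.repeat(duobinary(cc_syms), SPS)      # carrier channel
    t = np.arange(len(cc)) / FS
    return bb + cc * np.cos(2 * np.pi * FC * t)  # linear combination on the wire

rng = np.random.default_rng(0)
out = wire_signal(rng.choice([-1.0, 1.0], 16), rng.choice([-1.0, 1.0], 16))
print(out.shape)   # 16 symbols at 8 samples each
```

Because each duobinary stream is bounded to ±1 and the carrier term is likewise bounded, the combined wire signal stays within ±2 in this normalization.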
Each baseband symbol value will subsequently be processed independently and ultimately transported (along with its comparable carrier channel processed symbol value) on its own wire. Processing for each baseband symbol value may include duobinary encoding by partial-response signaling encoder 940 and low-pass filtering and amplification by amplifier 960 as needed to meet system signal level criteria, to produce a processed baseband output. In some embodiments, the partial response signaling encoder may be implemented with two sets of analog voltage generators, where each set is alternately driven with a codeword input and provides a set of voltages representing the codeword symbols, but the generators maintain their outputs for a duration of two signaling intervals. The sets of voltages are summed at a signal summing circuit. While each set of voltages changes at ½ the symbol rate, because they are staggered in time, the outputs of the summing circuit change at the symbol rate, and represent the sum of the current symbol and the prior symbol. In some embodiments, an encoder such as ENRZ encoder 920 may comprise two encoders also operating at ½ rate, each encoder configured to drive a corresponding set of analog voltage generators. Processing for the carrier channel is comparable to that of the baseband channel to the point of carrier modulation, with carrier channel data 918 being ENRZ encoded 930, with each three bits of input data producing one codeword of four symbol values. Each carrier symbol value will subsequently be processed independently, and then mixed with its comparable processed baseband symbol value for wire transmission. Processing for each carrier symbol value consists of duobinary encoding 950, low-pass filtering and amplification 970 as needed to meet system signal level criteria, and modulation 980 of the 37.5 GHz carrier to produce a processed and modulated carrier output. 
Each of the four processed baseband outputs is summed 990 with its comparable processed and modulated carrier output, producing wire outputs identified in FIG. 9 as Wire A, Wire B, Wire C, and Wire D. FIG. 10 shows an alternative transmitter embodiment, in which duobinary encoding 1020 and 1030 is performed prior to ENRZ encoding 1040 and 1050. Other than the order of these operations, this alternative transmitter is identical to that of the embodiment of FIG. 9. Receiver One embodiment of a comparable Broadband Duobinary ENRZ receiver is shown in the block diagram of FIG. 11. Each wire signal from the transport medium Wire A, Wire B, Wire C, and Wire D is amplified and frequency equalized by a continuous-time linear equalizer (CTLE) 1110, and then the four amplified and equalized received signals are input to three linear ENRZ mixers 1120. In some embodiments, CTLEs 1110 may include analog delay circuits, and the receiver may include a skew control circuit 1112 configured to provide a skew control signal to each of the CTLEs 1110. In some embodiments, the analog delay circuits may be all-pass filters (including a switched capacitor bank, for example) configured to adjust an analog delay of each individual wire A-D. In some embodiments, the skew control circuit 1112 may be configured to operate on the outputs of samplers 1180 that operate on the passband MIC outputs in order to determine a skew control signal for adjusting analog delay values of each wire; however, this should not be considered limiting. In one embodiment, each sub-channel MIC may be evaluated by adjusting decision thresholds, and responsively measuring an effective eye opening, and then individual wire skews may be adjusted in order to increase the effective eye opening. In some embodiments, the sub-channel MIC with the narrowest effective eye opening is adjusted first. Further, alternative analog delay circuits known to those of skill in the art may be implemented. 
As taught by [Holden I], such ENRZ receive mixing is commonly utilized at baseband by so-called multi-input comparators (MIC) to detect ENRZ codewords. Here, the ENRZ mixing in such MICs produces three linear signal “subchannels” comprising a linear superposition of baseband and broadband, or carrier-modulated, results for each of the two ENRZ encoded streams. The mixing operations are defined as:

R0 = (A + C) − (B + D)      (Eqn. 2)
R1 = (C + D) − (A + B)      (Eqn. 3)
R2 = (A + D) − (B + C)      (Eqn. 4)

where R0, R1, R2 are the three resulting linear signal channels output from ENRZ mixers 1120, and A, B, C, D are the four received wire signals output from the CTLE 1110. Equivalent mixing results may be obtained using other algebraic permutations of these equations as may be produced by a different ordering of wire labels; as one example R1=(A+B)−(C+D) is equivalent to Eqn. 3 if the wires are labeled in reverse order. MICs embodying such mixing results may also be identified by the signs of wire terms in their defining equation, e.g. ++−− for this example. A four pole Butterworth lowpass filter 1130 with a cutoff frequency of 18.75 GHz is used to extract the baseband component from each of the linear signal subchannels. As is common practice in the art, the signal amplitude of each of the linear signal subchannels is measured or captured at a particular moment or interval of time by samplers 1140 at a 37.5 Gigasample/second rate to produce the three decoded Baseband Data out bits, at an aggregate 112 Gigabit/second data rate. Concurrently, each decoded bit is presented to a DFE computation 1150, producing a DFE correction signal used to adjust that bit's sampler threshold. Decision Feedback Equalization is well known in the art, thus will not be further described here, other than noting that each DFE computation 1150 is independent, and will provide both correction of transport-channel-induced ISI and of intentionally generated transmitter ISI compensation. 
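The MIC equations Eqn. 2 to Eqn. 4 can be checked numerically against an ENRZ encoder. In the self-contained sketch below the encoder convention (wire ordering and sub-channel row assignment) is an assumption of this example; as the text notes, different wire labelings simply change fixed per-subchannel polarities, which here come out as A, −B, C.

```python
import numpy as np

# Assumed encoder convention: rows 2-4 of H4 carry bits A, B, C (as {-1,+1}).
ROWS = np.array([[+1, -1, +1, -1],
                 [+1, +1, -1, -1],
                 [+1, -1, -1, +1]])

def enrz_encode(bits):                 # bits given as a {-1,+1} triple
    return bits @ ROWS / 3.0           # wire values A, B, C, D

def mic_mix(wires):
    A, B, C, D = wires
    return np.array([(A + C) - (B + D),    # Eqn. 2
                     (C + D) - (A + B),    # Eqn. 3
                     (A + D) - (B + C)])   # Eqn. 4

bits = np.array([+1, -1, +1])
r = mic_mix(enrz_encode(bits))
# Each mixer output has magnitude 4/3; with this wire labeling the three
# polarities recover A, -B, and C respectively.
print(r, np.sign(r))
```

This confirms that the three mixers fully separate the three sub-channels: each output depends on exactly one transmitted bit.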
It should be noted that the described DFE correction operating on subchannels of the vector signaling code is distinct from the common art, in which DFE correction is performed on, e.g., received wire signals. As the history maintained by the DFE must accurately represent the values of each unit interval in the history, a conventional DFE would have to maintain ternary, quaternary, or higher-order history values to represent a vector signaling code having 3, 4, or more possible symbol values. In contrast, binary data communicated over a vector signaling code subchannel requires maintenance of merely a binary history using the described DFE correction. Simultaneously, a second-order Butterworth highpass filter 1150 with a cutoff of 37.5 GHz extracts the carrier channel information from the three linear signal subchannels. Balanced mixers 1160 provided with a 37.5 GHz carrier signal convert these modulated signals back to baseband where, as with the baseband channel signals, a four pole Butterworth lowpass filter 1170 with a cutoff frequency of 18.75 GHz is used, followed by sampling 1180 at a 37.5 Gigasample/second rate on each of the subchannels, to produce the three decoded Carrier Data out bits, at an aggregate 112 Gigabit/second data rate. As with the baseband data, each decoded carrier data out bit is presented to a DFE computation 1190, producing a DFE correction signal used to adjust that bit's sampler threshold. Each DFE computation 1190 is independent, and will provide both correction of transport-channel-induced ISI and of intentionally generated transmitter ISI compensation. Because of the significant frequency-dependent loss characteristics of the transport channel, the gain of the receive baseband channel is set to 14 dB, while the gain of the carrier channel is set to 26 dB. Similarly, the transmitter gain for the carrier channel is set to 3 times that of the baseband channel to provide pre-emphasis. 
Simulated pulse responses and cross-channel ICI for this embodiment are shown in FIG. 7, assuming two taps of transmit FFE and fifteen taps of receive DFE. Receive eyes for the baseband and carrier (passband) channels are shown in FIG. 8. Eye openings are 54 mV vertical and 24.1 psec horizontal for the baseband, and 56 mV vertical and 38.7 psec horizontal for the passband, a considerable improvement over the previous embodiments. Skew Considerations As with any vector signaling code solution, skew must be constrained across the transport paths carrying symbols of the same codeword, as the codeword must be presented as a coherent whole to the receiver's detector to be properly recognized. Roughly speaking, propagation latencies across the various transport paths must be matched to less than one half the expected eye width to permit detection, and better than that value to avoid eye width degradation. Known approaches include the introduction of variable delay lines and/or FIFO buffers for path compensation, separate CDR and sample timing for individual wires, and transmit-side pre-skew compensation. However, these techniques must be applied cautiously, as they may also lead to increased inter-symbol interference, transmit simultaneous switching noise, and higher perceived receive common mode signals. Because the baseband and carrier-band channels carry separate ENRZ encoded data and are separately receive sampled, their data streams may be considered to be independent and thus do not require absolute temporal alignment. This is an advantage, as differences between the filtering characteristics of the two channels will introduce different time delays, which inherently introduces a timing difference between the set of data bits received at baseband, and the set of data bits received at carrier band. 
As will be apparent to one familiar with the art, these sets of bits may be passed through retiming latches, FIFO buffers, or other known means to align them with a common timing reference. ALTERNATIVE EMBODIMENTS A number of variations to the preceding embodiments have been considered, all within the scope of the described invention. Transmit signal generation of the ENRZ symbol values, their ISI-controlling encodings, or both may be produced using Digital-to-Analog converters having an appropriate number of bits. Similarly, mixing of broadband and carrier signals within the transmitter may be done digitally. Transmitter and receiver embodiments may incorporate additional gain and/or frequency-dependent filtering stages to meet the described vertical eye openings, or to compensate for channel characteristics differing from those of the reference channel model. Particular amplitudes, gains, attenuation characteristics, etc. are provided for descriptive purposes, without implying limitation. At least one embodiment performs additional prefiltering of signals within the transmitter to zero out the first few pre-cursors of the channel, thus avoiding the need for extensive DFE tap unrolling at the receiver. The example broadband receiver embodiment described converts the carrier-based channel to baseband for subsequent detection. This presumes that the local carrier available at the receiver is coherent with the transmitter's carrier signal, and is thus derived using a phase-locked loop or other known method. Other receiver methods are well known in the art and may also be incorporated in alternative and equivalent embodiments. A receiver embodiment may also utilize Analog-to-Digital sampling followed by some or all of the previously described filtering, mixing, and sampling being performed using digital signal processing methods. 
Extension to Higher Data Rates The embodiments described herein may be extended to support data rates of 224 Gigabits per second per wire pair. In a fourth embodiment incorporating such extension, the data is prefiltered at the transmitter to add more controlled ISI. As one example, a Hamming filter of order 7 is used having the coefficients:

H = [0.02, 0.09, 0.23, 0.30, 0.23, 0.09, 0.02]      (Eqn. 5)

This is contrasted with the duobinary encoding of the previous examples, which corresponds to a transmit filter with the coefficients:

H = [0.5, 0.5]      (Eqn. 6)

In this fourth embodiment the data rate in each of the baseband and carrier channels is doubled, to 75 Gigasymbols/second, resulting in an aggregate data throughput equivalent to 112 Gigabits per second per wire, or 448 Gigabits per second for the four-wire interconnection. Simulated eye openings are shown in FIG. 12, where the baseband channel has 93 mV of vertical and 8.3 psec of horizontal eye opening, and the carrier channel has 42 mV of vertical and 16.6 psec of horizontal eye opening, assuming 3 pre-cursor taps of transmit equalization, and 15 taps of receive DFE. Alternatively, an embodiment may utilize additional carrier channels. As one example, a baseband channel plus three carrier channels operating at carrier frequencies chosen to minimize the frequency overlap between spectral components of the various channels may be combined, with each channel carrying a data stream combining ENRZ encoding with an ISI-controlling encoding, and with each channel operating at a rate of 37.5 Gigasymbols/second as previously described. Extension to Other Base Signaling Schemes As previously noted, the embodiments described herein may be used with underlying vector signaling codes other than ENRZ, which has been used for purposes of description in the previous examples without implying a limitation. 
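As a sketch of the transmit prefiltering contrast drawn above (function names are assumptions of this example), both the order-7 Hamming filter of Eqn. 5 and the duobinary filter of Eqn. 6 can be applied as simple convolutions over the symbol stream:

```python
import numpy as np

H7 = np.array([0.02, 0.09, 0.23, 0.30, 0.23, 0.09, 0.02])   # Eqn. 5
DUO = np.array([0.5, 0.5])                                   # Eqn. 6

def prefilter(symbols, taps):
    # Controlled-ISI transmit prefiltering: each output value is a weighted
    # sum of the current and preceding symbols.
    return np.convolve(symbols, taps)[:len(symbols)]

x = np.array([1.0, -1.0, 1.0, 1.0, -1.0, -1.0, 1.0, -1.0])
print(prefilter(x, H7))    # smooth, heavily band-limited stream
print(prefilter(x, DUO))   # ternary duobinary stream (after start-up)
```

The longer Hamming impulse response spreads each symbol over seven unit intervals, trading a more compact transmit spectrum for ISI that the receiver's DFE must then remove.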
Other multi-wire signaling schemes may also be combined with the described ISI management and channelization techniques, as should be understood by one of ordinary skill in the art. For example, a fifth embodiment is identical to that of the previously described fourth embodiment, except that differential signaling is used on each two-wire pair at a signaling rate of 75 Gigabits/second/pair, rather than ENRZ across all four wires. Data on each channel is prefiltered at the transmitter to add more controlled ISI using a Hamming filter of order 7 having the coefficients:

H = [0.02, 0.09, 0.23, 0.30, 0.23, 0.09, 0.02]      (Eqn. 7)

In this fifth embodiment the aggregate throughput is thus 300 Gigabits/second; 75 Gigabits/second per wire pair for two wire pairs, for each of the two channels. Use of a Lower Carrier Frequency As previously mentioned, a lower carrier frequency may be used to bring the carrier-modulated channel into a lower attenuation region of the transport channel model, at the cost of increased inter-channel interference. A sixth embodiment operates with a baseband channel and one carrier channel modulating a carrier frequency of 19.5 GHz. Both baseband and carrier channels utilize ENRZ encoding and duobinary filtering, as previously described, at a signaling rate of 37.5 GBaud, equivalent to a 26.66 psec UI. The resulting signal spectrum experiences a 15 dB channel loss at baseband, and a 30 dB loss at the carrier channel. The simulation results shown in FIG. 13 and summarized in Table 2 are based on 600 mV Tx amplitude, 200 uV RMS channel noise, a 1:7 baseband to carrier channel power ratio, 1 pre- and 1 post-cursor TX FIR, up to 12 dB of Rx CTLE, and 12 taps of Rx DFE. Eye openings sufficient to obtain at least a 1E-6 Bit Error Rate (BER) were observed. 
TABLE 2
Band             MIC     Vertical (mV)    Horizontal (psec)    % UI
Carrier Channel  ++−−    3.97             16.66                62.5
Carrier Channel  +−+−    5.87             20.21                75.8
Carrier Channel  +−−+    5.87             20.21                75.8
Baseband         ++−−    6.64             17.29                64.9
Baseband         +−+−    6.43             17.08                64.1
Baseband         +−−+    6.45             17.08                64.1

For descriptive convenience, the three ENRZ subchannels on each of the Carrier and Baseband frequencies are identified by the logical wire combinations comprising the defining equation of their corresponding multi-input mixer. Thus, as one example, the mixed combination of wires A, B, C, D corresponding to the mixer performing the (A+B)−(C+D) operation is identified as ++−− in Table 2. As may be seen in FIG. 13 and Table 2, the eye opening for the ++−− carrier subchannel is significantly smaller than the other eyes, and is thus the limiting factor on performance. In particular, the reduced horizontal eye opening indicates that subchannel may be significantly impacted by wire skew in the transport channel. Incorporation of Error Correcting Codes A seventh embodiment operates with a baseband channel and one carrier channel modulating a carrier frequency of 18.5 GHz. Both baseband and carrier channels utilize ENRZ encoding and order 11 Hamming filtering, at a signaling rate of 75 GBaud, equivalent to a 13.33 psec UI. The resulting signal spectrum experiences a 14 dB channel loss at baseband, and a 22 dB loss at the carrier channel. The simulation results shown in FIG. 14 and summarized in Table 3 are based on 800 mV Tx amplitude, 200 uV RMS channel noise, 260 femto-seconds of random jitter (Rj), a 1:7 baseband to carrier channel power ratio, 1 pre- and 1 post-cursor TX FIR, up to 12 dB of Rx CTLE, and 25 taps of Rx DFE.

TABLE 3
Band             MIC     Vertical (mV)    Horizontal (psec)    % UI
Carrier Channel  ++−−    1.76             8.65                 64.9
Carrier Channel  +−+−    3.02             10.52                78.9
Carrier Channel  +−−+    2.93             10.31                77.3
Baseband         ++−−    2.86             9.48                 71.1
Baseband         +−+−    2.74             9.38                 70.4
Baseband         +−−+    2.72             9.38                 70.4

As with the previous example, eye openings sufficient to obtain a 1E-6 BER were observed, with the ++−− carrier subchannel again limiting the overall performance, especially in the presence of transport channel wire skew. 
Various approaches were considered to mitigate this subchannel's limiting effect on performance, allowing improved system BER to be achieved. An eighth embodiment is identical to the previously described seventh embodiment, but the marginal ++−− carrier subchannel is not used to transmit data. This results in an overall throughput of 5*75=375 Gbps over the four wire transport medium, equivalent to an effective 187.5 Gbps per wire pair. A ninth embodiment is identical to the previously described seventh embodiment, with an additional reliability protocol imposed on data transmitted over the marginal ++−− carrier subchannel. As one example offered without limitation, a “send three times” reliability protocol may be used on that subchannel to transmit the same data bit in three consecutive UIs, with a majority detector used at the receiver to identify the received data bit. Thus, this embodiment transmits a total of 16 bits (rather than the seventh embodiment's 18) in three UIs. This results in an overall throughput of 6*75*(16/18)=400 Gbps over the four wire transport medium, equivalent to an effective 200 Gbps per wire pair. Addition of this reliability protocol provides an effective BER of 1E-6 if the underlying subchannel provides at least a 5.7E-4 BER, equivalent to an improvement of the vertical eye by 6 dB and almost a doubling of the horizontal eye opening. A tenth embodiment is identical to the previously described seventh embodiment, with a Forward Error Correcting protocol imposed on data transmitted over the marginal ++−− carrier subchannel. As one example offered without limitation, four consecutive data bits may be encoded using a [7,4,3] Hamming code to produce seven Hamming encoded bits to be sequentially transmitted over that subchannel in seven UIs, with the corresponding Hamming decoder used at the receiver to recover the received data bits. 
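The ninth embodiment's repetition protocol, and the BER arithmetic behind the stated 5.7E-4 threshold, can be sketched as follows (function names are assumptions of this example; the effective error rate of a majority-of-3 vote over independent errors of probability p is 3p²(1−p) + p³):

```python
def send_three_times(bit):
    # Reliability protocol: the same bit occupies three consecutive UIs
    # on the marginal subchannel.
    return [bit, bit, bit]

def majority_detect(rx):
    # Receiver majority vote over the three received UIs.
    return 1 if sum(rx) >= 2 else 0

print(majority_detect([1, 0, 1]))   # a single corrupted UI is outvoted -> 1

# Effective error rate of majority-of-3 given raw subchannel BER p:
p = 5.7e-4
eff = 3 * p**2 * (1 - p) + p**3
print(eff)   # just under 1E-6, consistent with the stated threshold
```

Substituting p = 5.7E-4 gives roughly 9.7E-7, confirming that a raw subchannel BER at that level yields an effective BER at the 1E-6 target.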
Thus, this embodiment transmits a total of 39 (rather than the seventh embodiment's 42) data bits in seven consecutive UIs, resulting in an overall throughput of 6*75*(39/42)=417.86 Gbps, equivalent to an effective 208.93 Gbps per wire pair. Addition of this FEC encoding provides an effective BER of 1E-6 if the underlying subchannel provides at least a 3.6E-3 BER, equivalent to an improvement of the vertical eye opening by 7 dB and a 2.5× enlargement of the horizontal eye opening. This distribution of data bits and redundancy-augmented bits across the six subchannels and multiple sequential transmit unit intervals as described relevant to the ninth and tenth embodiments of the invention is illustrated in FIG. 15. FIG. 16 is a block diagram showing error correction being added to an encoded transmission subchannel and the corrected data identified at the receiver. At the transmitter, Data In is distributed 910 among the carrier subchannels and the baseband subchannels, as previously shown relative to FIG. 9 and FIG. 10. The portion of the data bits directed to the ++−− carrier subchannel is passed through an error correction function 1510 which increases its redundancy; relative to the ninth embodiment this redundancy is obtained via repetition, relative to the tenth embodiment this redundancy is obtained via a Hamming Code encoder. The data bits directed to the carrier subchannels 915 and the data bits directed to the baseband subchannels 918 are then processed as previously described in FIG. 9 or FIG. 10. At the receiver, data from the sampler associated with the ++−− mixer carrier channel is directed to error correction function 1520, which identifies the original data bits; a majority detector is used relative to the ninth embodiment, and a Hamming Code decoder is used relative to the tenth embodiment. 
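The tenth embodiment's [7,4,3] Hamming coding can be sketched with one textbook systematic construction; the text does not specify a particular generator matrix, so the G and H below are assumptions of this example:

```python
import numpy as np

# Systematic [7,4,3] Hamming code over GF(2): G = [I4 | P], H = [P^T | I3].
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def hamming_encode(data4):
    return (np.array(data4) @ G) % 2           # 4 data bits -> 7 coded bits

def hamming_decode(rx7):
    rx7 = np.array(rx7)
    s = (H @ rx7) % 2                          # syndrome of the received word
    if s.any():                                # nonzero syndrome matches the
        i = next(j for j in range(7)           # column of H at the flipped bit
                 if np.array_equal(H[:, j], s))
        rx7 = rx7 ^ np.eye(7, dtype=int)[i]    # correct the single-bit error
    return rx7[:4]                             # systematic: data is bits 0..3

c = hamming_encode([1, 0, 1, 1])
c[2] ^= 1                                      # flip one coded bit (one UI)
print(hamming_decode(c))                       # -> [1 0 1 1]
```

Any single corrupted UI among the seven is corrected, which is the mechanism behind the relaxed 3.6E-3 raw-BER requirement on the marginal subchannel.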
The original data bits from 1520 and the sampler outputs from the other subchannels may be combined 1530 to produce an aggregated received data stream identical to that presented to the transmitter. It will be obvious to one skilled in the art that redundancy and/or forward error correction may be applied to more than one subchannel, with a corresponding improvement in each such subchannel's effective eye opening but also a decreased delivered data rate due to the inevitable overhead. Thus, these examples applying such a solution to a single subchannel should not be considered as limiting, but may be preferred within the parameters of the example.
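As a rough sanity check of the ninth embodiment's figures (a back-of-envelope model of ours, assuming independent bit errors): majority-of-three detection fails only when at least two of the three copies are in error.

```python
from math import comb

def majority3_ber(p):
    """Effective BER of a 3x repetition code with majority detection."""
    return comb(3, 2) * p**2 * (1 - p) + p**3

# An underlying subchannel BER of 5.7E-4 yields roughly the cited 1E-6.
print(majority3_ber(5.7e-4))
```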
11863359 | DETAILED DESCRIPTION Technologies to improve throughput in wireless MIMO and single-input-single-output (SISO) systems are described. Wireless systems such as wireless local area network (WLAN), LTE, 5G, and New Radio (NR) MIMO systems leverage spatial multiplexing capabilities of the wireless propagation channel and usually employ larger channel bandwidths. For example, IEEE 802.11n/ac/ax uses 20, 40, 80, and 80+80 MHz channel bandwidths; LTE-Advanced uses a 100 MHz channel bandwidth; LTE-Advanced Pro uses a 640 MHz channel bandwidth; 5G NR FR1 uses a 100 MHz channel bandwidth; and 5G NR FR2 uses an 800 MHz channel bandwidth to provide higher throughputs than other wireless systems. However, these larger bandwidth systems suffer from frequency selective fading characteristics of the wireless channel due to time dispersion effects, resulting in larger variations in signal-to-noise ratio (SNR) at the receiver. These wireless channel impairments are more pronounced at 5G NR mmWave bands (24 GHz to 40 GHz). Frequency selective fading characteristics cause an imbalance in the received signal power levels at the MIMO radio receivers, causing variations in SNRs that lead to a lower modulation and coding scheme (MCS) rate or a physical layer (PHY) rate. A lower MCS or PHY rate results in lower throughput (and higher airtime occupancy), causing network congestion, degrading the throughput, and increasing latency. Conventional rate adaptation techniques, such as transmit beamforming (TxBF) or space-time block coding (STBC), provide link robustness as long as a client device operates in a MIMO mode and require channel state information (CSI) feedback to the transmitter. However, when an RF link gets weaker (e.g., the client device is too far away), the client device would transition out of the MIMO mode to operate in other modes at lower legacy rates.
Under these conditions of weaker RF links, existing rate adaptation techniques toggle the RF link to the lowest PHY rate, just enough to sustain the RF link but with extremely low, practically unusable throughputs. The client devices in these other modes still occupy the larger bandwidth but drastically reduce the overall system throughput because of the dependence on the wireless propagation channel's characteristics. While it is possible to increase the transmit power at the AP or base station, the RF link is predominantly dictated by the transmit power at the client (e.g., for ACKs), which impacts the client device's battery life. Aspects and embodiments of the present disclosure address these and other challenges by providing subcarrier pre-equalization digital signal processing (DSP) techniques. Aspects and embodiments of the present disclosure can provide subcarrier pre-equalization at a transmitter based on feedback from a receiver. The receiver measures and sends feedback to the transmitter with the estimated received signal power levels on a per subcarrier basis (spectral profile differences). The transmitter baseband circuitry adjusts the amplitude of the specified subcarrier(s) corresponding to the instantaneous feedback from the receiver. Aspects and embodiments of the present disclosure can ensure higher throughput in MIMO mode or SISO mode, especially at range conditions. Aspects and embodiments of the present disclosure can maintain the same link margins while enabling lower transmit powers at an AP/base station and/or client devices, improving battery life.
In at least one embodiment, a first device includes a baseband processor with Orthogonal Frequency Division Multiplexing (OFDM) circuitry (also referred to herein as an OFDM block or OFDM system) that uses a digital multi-carrier modulation scheme that defines a set of data subcarriers, a set of pilot subcarriers, and a direct current (DC) subcarrier to communicate data in a wireless channel between the first device and a second device. The baseband processor also includes subcarrier pre-equalization logic that receives, from the second device, feedback data indicative of a frequency selective fading characteristic of the wireless channel and adjusts a first amplitude value of a subset of the set of data subcarriers to a second amplitude value. Adjusting the first amplitude value to the second amplitude value reduces the frequency selective fading characteristic of the wireless channel. In at least one embodiment, the second device includes a receiver with estimation logic that measures a first fast Fourier transform (FFT) response of RF signals across a set of data subcarriers at a first receiver and a second FFT response of the RF signals across the set of data subcarriers at a second receiver. The estimation logic determines, from the first FFT response and the second FFT response, that a subset of the set of data subcarriers has power levels that are lower than a threshold value, the threshold value representing a frequency selective fading characteristic of the wireless channel between the first device and the second device. The threshold value can be expressed in terms of a threshold physical layer (PHY) rate of the channel. The estimation logic generates a gain code coefficient for the subset of the set of data subcarriers and sends the gain code coefficient to the first device.
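The receiver-side flow described above can be sketched as follows (a simplified illustration with assumed names and a scalar gain code; the actual coefficient format is not specified here):

```python
def per_subcarrier_power(fft_bins):
    """Power of each subcarrier from one receiver's FFT response."""
    return [abs(x) ** 2 for x in fft_bins]

def gain_code_coefficients(fft_rx1, fft_rx2, threshold, boost=2.0):
    """Flag subcarriers whose weaker-chain power falls below the threshold."""
    p1 = per_subcarrier_power(fft_rx1)
    p2 = per_subcarrier_power(fft_rx2)
    return [boost if min(a, b) < threshold else 1.0 for a, b in zip(p1, p2)]

# Two receivers, two subcarriers; the second subcarrier is faded.
codes = gain_code_coefficients([3 + 0j, 0.5 + 0j], [2 + 0j, 0.4 + 0j],
                               threshold=1.0)
print(codes)  # only the faded subcarrier gets a boost coefficient
```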
The gain code coefficient causes the first device to adjust the first amplitude value to the second amplitude value to reduce the frequency selective fading characteristic of the wireless channel. Although various embodiments are described below with respect to WLAN technologies, such as the Wi-Fi® technology, the embodiments described herein can be used in other wireless technologies, such as personal area network (PAN) technologies (e.g., Bluetooth® and Zigbee® technologies) and wide area network (WAN) technologies, such as cellular technologies including Long Term Evolution (LTE) frequency bands, fourth-generation (4G) frequency bands, or the like. Similarly, although various embodiments are described below with respect to OFDM, the embodiments described herein can be used in connection with other multi-carrier modulation schemes. FIG. 1 is a block diagram of an electronic device 100 with subcarrier pre-equalization logic 102 of an OFDM system 104 to improve a frequency selective fading characteristic of a wireless channel according to one embodiment. The electronic device 100 includes a baseband processor 106, including the OFDM system 104 with the subcarrier pre-equalization logic 102. The electronic device also includes a modulator 108, a power amplifier 110, and an antenna 112. The OFDM system 104 includes digital signal processing logic (e.g., hardware, software, or any combination thereof) that implements a digital multi-carrier modulation scheme, the OFDM scheme. The OFDM scheme extends the single subcarrier modulation concept by using multiple subcarriers within the same single channel. Rather than transmitting a high-rate stream of data with a single subcarrier, OFDM uses a number of closely spaced orthogonal subcarriers transmitted in parallel. Each subcarrier is modulated with a digital modulation scheme (such as QPSK, 16QAM, etc.) at a low symbol rate.
The combination of many subcarriers enables similar data rates as single-carrier modulation schemes with similar bandwidths. In the OFDM system 104, different information streams are mapped onto separate parallel frequency channels. Each channel is separated from the others by a frequency guard band to reduce interference between adjacent channels. So, in the OFDM system 104, multiple subcarriers carry the information stream, and the data subcarriers are orthogonal to each other. A guard interval is added to each symbol to minimize the channel delay spread and inter-symbol interference. In the digital domain, the OFDM system 104 can map digitally modulated input data, referred to as data symbols, onto orthogonal subcarriers. The data symbols are frequency-domain input data, such as complex numbers representing the modulated subcarriers. The OFDM system 104 converts the data symbols to time-domain output data representing the analog OFDM symbol waveforms. In the illustrated embodiment, the OFDM system 104 outputs the OFDM symbol waveforms as I data 101 and Q data 103 to the modulator 108. In general, the modulator 108 receives the output data from the OFDM system 104 and modulates a carrier signal with the output data to obtain a data-carrying signal 105. The data-carrying signal 105 is output by the modulator 108 to the power amplifier 110, which amplifies the data-carrying signal 105 to broadcast it as an RF signal 107 via the antenna 112. In at least one embodiment, the OFDM system 104 includes the subcarrier pre-equalization logic 102. The subcarrier pre-equalization logic 102 receives feedback data 111 from a second device with which the electronic device 100 is communicating. The feedback data 111 can include received signal power levels at one or more receivers of the second device on a subcarrier basis. The information can include the spectral profile differences between receivers on a subcarrier basis.
Using the feedback data 111, the subcarrier pre-equalization logic 102 adjusts the specific subcarriers' amplitudes corresponding to the second device's instantaneous feedback. In at least one embodiment, the second device provides feedback data 111 to the subcarrier pre-equalization logic 102 whenever there is a change in the received signal spectral profile due to changes in the wireless channel conditions. For example, if the wireless channel conditions meet a specified criterion, such as exceeding a predefined threshold, the second device can send the feedback data 111 to the subcarrier pre-equalization logic 102 to adjust the corresponding subcarriers' amplitude. In at least one embodiment, the electronic device 100 sends first data to the second device using a set of data subcarriers in a wireless channel. The set of data subcarriers operates at a first amplitude value. For example, the I data 101 and the Q data 103 have data with the first amplitude value. The feedback data 111 received by the subcarrier pre-equalization logic 102 can include one or more values, each value corresponding to a received signal power level of one of the multiple data subcarriers. For example, the feedback data 111 includes a first value indicative of a first received signal power level corresponding to a first data subcarrier and a second value indicative of a second received signal power level corresponding to a second data subcarrier. In at least one embodiment, the feedback data 111 includes one or more values indicative of an estimated received signal power level on a per data subcarrier basis. Alternatively, the feedback data 111 includes a first value indicative of a first received signal power level corresponding to a first data subcarrier at a first receiver of the second device and a second value indicative of a second received signal power level corresponding to the first data subcarrier at a second receiver of the second device.
The values of the feedback data 111 can be used to adjust one or more data subcarriers to operate at a second amplitude value that is different than the first amplitude value (e.g., an increased value). The subcarrier pre-equalization logic 102 adjusts corresponding ones of the data subcarriers to operate at the second amplitude value. For example, a first data subcarrier can be adjusted to operate at the second amplitude value, and a second data subcarrier can be maintained to operate at the first amplitude value. After adjusting the amplitude values of one or more of the data subcarriers, the electronic device 100 sends additional data to the second device using both the first amplitude value and the second amplitude value for the corresponding data subcarriers. For example, the electronic device 100 sends the additional data with the second amplitude value for the first data subcarrier and the first amplitude value for the second data subcarrier. The subcarrier pre-equalization logic 102 ensures higher throughput in a MIMO mode of operation, or even when the second device operates in SISO mode, specifically at range conditions. The subcarrier pre-equalization logic 102 enables the use of lower transmit powers at the electronic device 100 (e.g., an AP/base station), the second device (e.g., a client device), or both. The lower transmit powers can improve the battery life of the devices while maintaining the same link margin. In at least one embodiment, the baseband processor 106 performs bit-level processing on input bits to generate quadrature amplitude modulation (QAM) symbols or phase-shift keying (PSK) symbols. The symbols can be discrete time-domain data in the I data 101 and Q data 103. To perform the symbol-level processing, the baseband processor 106 can perform an inverse fast Fourier transform (IFFT) of the symbols. In at least one embodiment, the first amplitude value is increased to the second amplitude value before the IFFT of the symbols.
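At the transmitter, applying the feedback amounts to scaling selected frequency-domain symbols before the IFFT. A minimal sketch (the gain-code list is an assumed format, like the per-subcarrier coefficients a receiver might feed back):

```python
def pre_equalize(freq_symbols, gain_codes):
    """Scale each frequency-domain data symbol before the IFFT stage."""
    return [s * g for s, g in zip(freq_symbols, gain_codes)]

# Boost only the second subcarrier; the first keeps its default amplitude.
symbols = [1 + 1j, 1 - 1j]
adjusted = pre_equalize(symbols, [1.0, 2.0])
print(adjusted)  # [(1+1j), (2-2j)]
```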
In one embodiment, the subcarrier pre-equalization logic 102 can provide control information (e.g., control signals, instructions, commands, or the like) to other blocks of the OFDM system 104 to modify an amplitude parameter of one or more subcarriers of the OFDM system 104. For example, an OFDM parameter structure can specify an amplitude value for each of the data subcarriers. Alternatively, the OFDM system 104 can include one or more registers that identify which subcarriers should be increased, decreased, or otherwise adjusted. The parameter information of the OFDM parameter structure can also include total bandwidth (BW), operating bandwidth (OBW), subcarrier spacing, information rate, modulation, coding rate, total subcarriers, data subcarriers, pilot subcarriers, and direct current (DC) subcarrier. The OFDM parameter structure's parameter information can be modified by the subcarrier pre-equalization logic 102 to modify the operation of the OFDM system 104 to control amplitude values for a subset of the data subcarriers. The electronic device 100 can also include additional components, such as one or more processors (e.g., a host processor or central processing unit (CPU)), one or more graphics processors, input-output (I/O) devices, memory devices, storage devices, or the like. The baseband processor 106 can include additional components, such as a processing device that can execute operations to implement the processing logic of the subcarrier pre-equalization logic 102. Alternatively, the subcarrier pre-equalization logic 102 can be implemented as hardware, such as a hardware state machine that receives one or more inputs, changes one or more states based on the inputs, and outputs one or more control signals based on the current state. In some cases, the functionality of the subcarrier pre-equalization logic 102 can be integrated into or operate in connection with the OFDM system 104.
The baseband processor 106 can include one or more interfaces, such as a serial interface (e.g., an I2C interface) that can be used by the subcarrier pre-equalization logic 102 to generate one or more control signals to control the OFDM system 104, the power amplifier 110, or any combination thereof. The baseband processor 106 can include one or more interfaces with a host processor to communicate status, data, whether a transmitter is active, which transmitter is active, modulation and coding scheme (MCS) information, or the like. In another embodiment, the baseband processor 106 includes an interface to receive the feedback data 111 or other data indicative of received signal strength at one or more receivers of the second device, as described herein. In other embodiments, the electronic device 100 is an access point (AP), which provides access to the Internet, a private network, or other public networks. In another embodiment, the electronic device 100 is a base station (BS), which connects to one or more relay stations (RL), one or more gateways (GWs), one or more customer premises equipment (CPE) devices, or the like. The electronic device 100 may be any content rendering device that includes a modem for connecting the user device to a network. Examples of such electronic devices include electronic book readers, portable digital assistants, mobile phones, laptop computers, portable media players, tablet computers, cameras, video cameras, netbooks, notebooks, desktop computers, gaming consoles, Blu-ray® or DVD players, media centers, drones, audio-input-enabled devices, speech-based personal data assistants, and the like. The electronic device 100 may also be an audio-input-enabled device, such as the Amazon Echo device, developed by Amazon Technologies, Inc. of Seattle, WA. Alternatively, the electronic device 100 may be a set-top box (STB) or other media streaming device.
The electronic device 100 may connect to a network to obtain content from a server computing system (e.g., an item-providing system) or perform other activities. The electronic device 100 may connect to one or more different types of cellular networks. In some embodiments, the electronic device 100 connects to an access point (AP), which provides access to the Internet, a private network, or other public networks. The electronic device 100 includes a circuit board, such as a printed circuit board (PCB), upon which one or more of the components described above are disposed. The components can be integrated into one or more integrated circuits. In some embodiments, the baseband processor 106 and the modulator 108 are separate integrated circuits or chipsets. In one embodiment, the baseband processor 106 and the modulator 108 reside on a common carrier substrate die of an integrated circuit. In other embodiments, the baseband processor 106 and the modulator 108 are disposed on the PCB along with RF front-end circuitry, such as the power amplifier 110, the modulator 108, or the like. The baseband processor 106 is operable to generate RF signals to radiate electromagnetic energy via one or more antennas, such as the antenna 112. In some cases, the baseband processor 106, the modulator 108, the power amplifier 110, or any combination thereof can be implemented in an RF module, such as a chipset implementing the Wi-Fi® technology. In one embodiment, the RF circuitry includes a WLAN radio and a PAN radio. In other embodiments, the RF radios may be specific to the frequency bands of interest. A processing device coupled to the baseband processor 106 may be an application processor that implements other operations of the electronic device 100.
In another embodiment, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or other mixed-signal integrated circuits may be used to implement the operations described herein to control amplitudes of select subcarriers of the OFDM system 104 when connected to another device on a wireless channel. In one embodiment, the baseband processor 106 includes one or more transceivers that can operate at 2.45 GHz and 5 GHz. The baseband processor 106 can implement the Wi-Fi® technology. It should be noted that Wi-Fi® is the industry name for wireless local area network communication technology related to the IEEE 802.11 family of wireless networking standards by the Wi-Fi Alliance. For example, a dual-band WLAN RF transceiver allows an electronic device to exchange data or connect to the Internet using radio waves in two WLAN bands (the 2.4 GHz band and the 5 GHz band) via one or multiple antennas. For example, a dual-band WLAN RF transceiver includes a 5 GHz WLAN channel and a 2.4 GHz WLAN channel. The WLAN radio may include additional transceivers that operate in the 2.45 GHz band, the 5 GHz band, or both. A PAN module includes a transceiver that also operates at 2.4 GHz and may implement the Bluetooth® technology or the Zigbee® technology. The WLAN radio and PAN radio can be individual chipsets, even chipsets provided by different vendors. The WLAN radio and the PAN radio may be implemented in the same chipset or on a common carrier substrate with a processing device, such as in a System on Chip (SoC) architecture. In another embodiment, other wireless RF radios may be used to implement other technologies, such as the LTE technology, or the like. For example, the RF circuitry may include other radios, such as a wide area network (WAN) radio, a PAN radio, a GNSS radio (e.g., a global positioning system (GPS) radio), or the like.
In other embodiments, the antenna architecture may include additional RF radios and/or other communication modules, such as a WLAN radio, a GPS receiver, a near field communication (NFC) radio, an amplitude modulation (AM) radio receiver, a frequency modulation (FM) radio receiver, a PAN radio (e.g., a Bluetooth® radio or Zigbee® radio), a GNSS receiver, or the like. The RF circuitry may also include receivers and/or transmitters, filters, amplifiers, mixers, switches, and/or other electrical components. The RF circuitry may be coupled to a modem that allows the user device to handle both voice and non-voice communications (such as communications for text messages, multi-media messages, media downloads, web browsing, etc.) with a wireless communication system. The modem may provide network connectivity using any type of digital mobile network technology including, for example, LTE, LTE Advanced (4G), CDPD, GPRS, EDGE, UMTS, 1×RTT, EVDO, HSDPA, WLAN (e.g., a Wi-Fi® network), etc. In the depicted embodiment, the modem can use the RF circuitry to radiate electromagnetic energy on the antennas to communicate data to and from the user device in the respective frequency ranges. In other embodiments, the modem may communicate according to different communication types (e.g., WCDMA, GSM, LTE, CDMA, WiMAX, etc.) in different cellular networks. It should be noted that radiation enables the functionality of both transmitting and receiving data using reciprocity. In one embodiment, the OFDM system 104 is implemented as hardware, software, firmware, or any combination thereof in a digital domain, an analog domain, or both. In other embodiments, the OFDM system 104 includes an OFDM block in a digital domain and an analog front-end in the RF domain, as illustrated in FIG. 3. FIG. 2 is a block diagram of an electronic device 200 with estimation logic in a receiver to improve a frequency selective fading characteristic of a wireless channel according to one embodiment.
The electronic device 200 includes a baseband processor 206, including the estimation logic 202 and the OFDM system 204. The electronic device 200 also includes a modulator 208, an amplifier 210, and an antenna 212. In general, the amplifier 210 receives an RF signal 207 via the antenna 212 and outputs an amplified signal 205 to the modulator 208. The modulator 208 receives the amplified signal 205 and modulates the amplified signal with a carrier signal to obtain input data in the form of I data 201 and Q data 203. The I data 201 and Q data 203 are input to the baseband processor 206 for further processing by the OFDM system 204 and the estimation logic 202. The baseband processor 206 includes the estimation logic 202. The estimation logic 202 measures a fast Fourier transform (FFT) response of the I data 201 and Q data 203 across the set of data subcarriers. The estimation logic 202 can measure the FFT response at each receiver of the electronic device 200. For example, the estimation logic 202 measures a first FFT response of the RF signals across the set of data subcarriers at a first receiver and a second FFT response of the RF signals across the set of data subcarriers at a second receiver. The estimation logic 202 determines from the FFT response(s) that a subset of the set of data subcarriers has power levels that are lower than a threshold value. The threshold value represents a frequency selective fading characteristic of the wireless channel between the electronic device 200 and the other device (e.g., the electronic device 100). The estimation logic 202 generates a value or a gain code coefficient for each of the data subcarriers. The value or gain code coefficient causes an amplitude value for the corresponding data subcarrier to be adjusted (e.g., increased from a first amplitude value to a second amplitude value). The estimation logic 202 can generate a value based on the respective data subcarrier having a power level that is lower than the threshold value.
For example, when a first data subcarrier needs to be adjusted and a second data subcarrier is maintained, the estimation logic 202 can generate a first value for the first data subcarrier and a second value for the second data subcarrier. The first value can cause the transmitter to adjust the first data subcarrier to operate at a second amplitude value greater than a first amplitude value, such as a default amplitude value. The second value can cause the transmitter to maintain the second data subcarrier to operate at the first amplitude value. Once generated, the values or gain code coefficients can be sent in the feedback data 111 as described above with respect to FIG. 1. FIG. 3 is a block diagram of a radio 300 having subcarrier pre-equalization DSP logic in an OFDM block 304 in a digital domain and an analog RF front-end 302 in the RF domain according to one embodiment. The concepts used in a simple analog OFDM implementation can be extended to the digital domain by using a combination of fast Fourier transform (FFT) and inverse fast Fourier transform (IFFT) digital signal processing. These transforms map digitally modulated input data (data symbols) onto orthogonal subcarriers. In principle, the IFFT takes frequency-domain input data (complex numbers representing the modulated subcarriers) and converts it to time-domain output data (the analog OFDM symbol waveform). In a digitally implemented OFDM system, referred to as the OFDM block 304 of the baseband processor, the input bits in a data bit stream 306 are input into a baseband modulator 308. The input bits are grouped and mapped to source data symbols, each a complex number representing the modulation constellation point (e.g., the BPSK or QAM symbols that would be present in a single subcarrier system). The baseband modulator 308 provides the output to a serial-to-parallel converter 310 to provide inputs to an N-point IFFT 312.
These complex source symbols are treated by the transmitter as though they are in the frequency domain and are the inputs to the N-point IFFT 312 that transforms the data into the time domain. The N-point IFFT 312 takes in N source symbols at a time, where N represents the number of subcarriers in the system. Each of these N input symbols has a symbol period of T seconds. The output of the N-point IFFT 312 is N orthogonal sinusoids. These orthogonal sinusoids each have a different frequency, and the lowest frequency is direct current (DC). The input symbols are complex values representing the mapped constellation point and therefore specify both the amplitude and phase of the sinusoid for that subcarrier. The output of the N-point IFFT 312 is the summation of all N sinusoids. Thus, the N-point IFFT 312 provides a simple way to modulate data onto N orthogonal subcarriers. The block of N output samples from the N-point IFFT 312 makes up a single OFDM symbol. The output of the N-point IFFT 312 can be received by a parallel-to-serial converter 314 to convert the output into serial form. After some additional processing, such as adding a cyclic prefix 316 to the output of the parallel-to-serial converter 314, the time-domain signal that results from the N-point IFFT 312 is transmitted across a radio channel (RFFE 302). Although not illustrated in FIG. 3, at a receiver, an FFT block is used to process the received signal and bring it into the frequency domain, which is used to recover the original data bits. For example, an 802.11a OFDM carrier signal (burst type) is the sum of one or more OFDM symbols, each composed of 52 orthogonal subcarriers, with baseband data on each subcarrier being independently modulated using quadrature amplitude modulation (available formats: BPSK, QPSK, 16-QAM, or 64-QAM). This composite baseband signal is used to modulate a main RF carrier. To begin the OFDM signal creation process, the input data bitstream is encoded with convolutional coding and interleaving.
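The N-point IFFT step can be illustrated with a naive inverse DFT (O(N²), for exposition only; a real baseband would use a fast transform):

```python
import cmath

def idft(freq_bins):
    """Map N frequency-domain symbols onto N time-domain samples:
    the sum of N orthogonal sinusoids, one per subcarrier."""
    n = len(freq_bins)
    return [
        sum(X * cmath.exp(2j * cmath.pi * k * t / n)
            for k, X in enumerate(freq_bins)) / n
        for t in range(n)
    ]

# A single loaded bin (one subcarrier) yields one complex sinusoid;
# loading only the DC bin yields a constant.
samples = idft([0, 1, 0, 0])
dc_only = idft([1, 0, 0, 0])
```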
Each data stream is divided into groups of “n” bits (1 bit for BPSK, 2 bits for QPSK, 4 bits for 16-QAM, or 6 bits for 64-QAM) and converted into complex numbers (I+jQ) representing the mapped constellation point. Note that the bit rate will be different depending on the modulation format: a 64-QAM constellation (6 bits at a time) can have a bit rate of 54 Mbps, while a QPSK constellation (2 bits at a time) may only be 12 Mbps. Then 52 bins of the N-point IFFT 312 are loaded. The 48 data bins contain the constellation points mapped into frequency offset indexes ranging from −26 to +26, skipping the four pilot bins corresponding to the four pilot subcarriers and the zero bin corresponding to the DC subcarrier. The four pilot subcarriers are inserted into frequency offset index locations −21, −7, +7, and +21. The zero bin is the Null or DC subcarrier and is not used; it contains a 0 value (0+j0). In some embodiments, additional subcarriers can be nulled in addition to the DC subcarrier. To do so, null tones and guard bands are inserted as inputs 318 into the N-point IFFT 312. When the N-point IFFT 312 is completely loaded, the inverse FFT is computed, giving a set of complex time-domain samples representing the combined OFDM subcarrier waveform. For example, the samples can be clocked out at 20 Msps to create a 3.2 μs (64 samples/20 Msps) duration OFDM waveform. To complete the OFDM symbol, a 0.8 μs duration guard interval (GI) is then added to the beginning of the OFDM waveform. This produces a single OFDM symbol with a time duration of 4 μs in length (3.2 μs+0.8 μs). The process is repeated to create additional OFDM symbols for the remaining input data bits. To complete the OFDM frame structure, the single OFDM symbols are concatenated together and then appended to a 16 microsecond (μs) preamble (used for synchronization) and a 4 μs SIGNAL symbol (which provides rate and length information). This completes the OFDM frame, which is ready to be transmitted as an OFDM burst by the analog RF front-end 302 in the RF domain.
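The 802.11a bin bookkeeping and symbol timing above reduce to a little arithmetic (the values come straight from the text; the variable names are ours):

```python
N_FFT = 64
SAMPLE_RATE_MSPS = 20

# 52 occupied bins: 48 data + 4 pilots; the DC bin (index 0) is null.
PILOT_BINS = {-21, -7, 7, 21}
data_bins = [k for k in range(-26, 27) if k != 0 and k not in PILOT_BINS]
print(len(data_bins))  # 48 data subcarriers

symbol_us = N_FFT / SAMPLE_RATE_MSPS  # 64 samples / 20 Msps = 3.2 us
gi_us = 0.8
print(symbol_us + gi_us)  # 4 us per complete OFDM symbol
```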
The OFDM block 304 of the baseband processor outputs the OFDM symbol waveforms as I data and Q data in the illustrated embodiment. The analog RF front-end 302 in the RF domain can include two digital-to-analog converters (DACs) 320, 322, corresponding low pass filters 324, 326, mixers 328, 330, and an adder 332 coupled to a power amplifier 334. The power amplifier 334 is to be coupled to an antenna (not illustrated in FIG. 3). The DAC 320, low pass filter 324, and mixer 328 correspond to the I data, and the DAC 322, low pass filter 326, and mixer 330 correspond to the Q data. The power amplifier 334 applies one or more RF signals to the antenna to communicate the data (i.e., information) to another device, such as an access point. In one embodiment, the baseband processor (not illustrated in FIG. 3) uses a digital multi-carrier modulation scheme that defines a set of data subcarriers, a set of pilot subcarriers, and a DC subcarrier to communicate data in the same single channel. The baseband processor establishes a wireless communication link with a second device, such as an access point, using a 2.4 GHz frequency band or a 5 GHz frequency band. A modulator can be coupled to the baseband processor. The modulator can include the components illustrated and described with respect to the RFFE 302 in the analog domain in FIG. 3. Alternatively, the modulator can include other components to modulate the OFDM symbols. In one embodiment, the baseband modulator 308 receives feedback data 111. The feedback data 111 can include information indicating amplitude values for one or more of the data subcarriers. The baseband modulator 308 uses this information to adjust the data subcarriers' amplitude values before the N-point IFFT 312. For example, the input bits can be grouped and mapped to source data symbols that are complex numbers representing modulation constellation points. As noted above, these complex numbers can specify both the amplitude and phase of the sinusoid for that particular subcarrier.
The baseband modulator 308 can use the feedback data 111 to modify the amplitude of any one or more of the data subcarriers. In one embodiment, the amplitude values can be specified in an OFDM parameter structure that includes parameters that control operations of the OFDM block 304 of the baseband processor. The OFDM parameter structure can specify the set of subcarriers and a subset of data subcarriers to be nulled. The baseband processor can process the feedback data 111 and modify the OFDM parameter structure to adjust the respective data subcarriers' amplitude values. In at least one embodiment, a first device includes a baseband processor with an OFDM system and subcarrier pre-equalization DSP logic. The first device also includes a modulator coupled to the baseband processor and a power amplifier coupled to the modulator. The power amplifier applies a radio frequency (RF) signal to an antenna to communicate the data to the second device. The OFDM system uses a digital multi-carrier modulation scheme that defines a set of data subcarriers, a set of pilot subcarriers, and a direct current (DC) subcarrier to communicate data in a wireless channel between the first device and a second device. The subcarrier pre-equalization DSP logic receives feedback data from the second device, indicating a frequency selective fading characteristic of the wireless channel. The subcarrier pre-equalization DSP logic adjusts a first amplitude value of a subset of the set of data subcarriers to a second amplitude value. Adjusting the first amplitude value to the second amplitude value reduces the frequency selective fading characteristic of the wireless channel. In one embodiment, a single antenna is coupled to the power amplifier. In another embodiment, a second OFDM system of a second transmitter can be used with a second subcarrier pre-equalization DSP logic. A second modulator is coupled to the second OFDM system, and a second power amplifier is coupled to the second modulator.
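The parameter-structure update described above can be sketched as follows. The field names and the representation of feedback as per-subcarrier gain factors are assumptions for illustration, not the claimed data layout.

```python
# Illustrative sketch: an OFDM parameter structure holding per-subcarrier
# amplitude scale factors and nulled subcarriers, updated from feedback
# before the constellation points reach the IFFT.
def make_params(data_idx, nulled=()):
    return {"amplitude": {k: 1.0 for k in data_idx},  # default (first) amplitude
            "nulled": set(nulled)}                    # subcarriers forced to 0

def apply_feedback(params, feedback):
    """feedback: {subcarrier_index: gain_code_coefficient} (format assumed)."""
    for k, gain in feedback.items():
        if k in params["amplitude"]:
            params["amplitude"][k] *= gain            # first -> second amplitude

def pre_equalize(params, bins):
    """Scale frequency-domain constellation points just before the IFFT."""
    out = dict(bins)
    for k in out:
        if k in params["nulled"]:
            out[k] = 0j
        else:
            out[k] *= params["amplitude"].get(k, 1.0)
    return out

params = make_params(range(-26, 27))
apply_feedback(params, {-21: 1.4, -7: 1.4})           # boost two faded subcarriers
eq = pre_equalize(params, {-21: 1 + 1j, -7: 1 - 1j, 3: 1 + 0j})
```

Subcarriers without feedback keep their default amplitude, matching the "zero or more amplitudes" behavior described for the pre-equalization DSP block.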
A first antenna is coupled to the power amplifier, and a second antenna is coupled to the second power amplifier. The baseband processor can operate in a MIMO mode and send data via the first antenna and the second antenna. In at least one embodiment, the OFDM system performs bit-level processing on input bits to generate modulation symbols (e.g., QAM or PSK symbols) and performs symbol-level processing on the modulation symbols to generate the data. The data is discrete time-domain data, and the OFDM system performs the symbol-level processing by performing an IFFT of the modulation symbols. The first amplitude value is adjusted to the second amplitude value before the IFFT of the modulation symbols. In at least one embodiment, the first device also includes a first receiver coupled to the antenna and a second receiver coupled to a second antenna. The first receiver and the second receiver are to receive RF signals from a third device. The first device includes estimation logic coupled to the first receiver and the second receiver. The estimation logic measures a first FFT response of the RF signals across a second set of data subcarriers at the first receiver and a second FFT response of the RF signals across the second set of data subcarriers at the second receiver. The estimation logic determines, from the first FFT response and the second FFT response, that a second subset of the second set of data subcarriers has lower power levels than a threshold value. The threshold value represents a frequency selective fading characteristic of a second wireless channel between the first and third devices. The estimation logic generates a gain code coefficient for the second subset of the second set of data subcarriers and sends the gain code coefficient to the third device. The gain code coefficient causes the third device to adjust a third amplitude value of the second subset of the second set of data subcarriers to a fourth amplitude value.
Adjusting the third amplitude value to the fourth amplitude value reduces the frequency selective fading characteristic of the second wireless channel. In another embodiment, the OFDM system maps input bits into a modulation symbol comprising the set of data subcarriers and converts the modulation symbol into discrete time-domain data using an IFFT. The first amplitude value is adjusted to the second amplitude value for the subset of data subcarriers before the IFFT. The OFDM system converts the discrete time-domain data into analog data. The modulator modulates the analog data onto RF signals, and the power amplifier is to amplify and send the RF signals via the antenna. FIG. 4A illustrates a wireless channel 401 between a first device 400 with multiple antennas and a second device 420 with multiple antennas according to one embodiment. The first device 400 includes two transmitters: a first transmitter 402 and a second transmitter 404. The first transmitter 402 is coupled to a first antenna 406, and the second transmitter 404 is coupled to a second antenna 408. The first transmitter 402 and the second transmitter 404 can operate in a MIMO mode. In the MIMO mode, the first transmitter 402 and the second transmitter 404 send first data over the wireless channel 401, such as illustrated in FIG. 4B. The second device 420 includes two receivers: a first receiver 422 and a second receiver 424. The first receiver 422 is coupled to a first antenna 426, and the second receiver 424 is coupled to a second antenna 428. The first receiver 422 and the second receiver 424 can operate in the MIMO mode. In the MIMO mode, the first receiver 422 receives an RF signal having a first received signal power level 454 over the wireless channel 401, such as illustrated in FIG. 4B. In the MIMO mode, the second receiver 424 receives an RF signal having a second received signal power level 456 over the wireless channel 401, such as illustrated in FIG. 4B.
FIG. 4B is a graph 450 illustrating a transmit signal power level 452 of the two transmitters 402, 404 of the first device 400 and received signal power levels 454, 456 of the two receivers 422, 424 of the second device 420 according to one embodiment. As described herein, the estimation logic can measure the received signal power levels 454, 456 and determine whether the received signal power levels 454, 456 exceed a threshold condition. For example, the threshold condition can be that a difference in the received signal power levels 454, 456 does not exceed a threshold range of approximately 2 to 3 dB. The estimation logic can detect a frequency selective fading characteristic in the wireless channel 401, illustrated and described in more detail with respect to FIGS. 6A-6B. The estimation logic can send feedback data based on the received signal power levels 454, 456 across the data subcarriers. The feedback data can include a first value indicative of the received signal power level 454 (or 456) for a first data subcarrier. Alternatively, the feedback data can include a first value indicative of a difference between the received signal power level 454 and the received signal power level 456 for the first data subcarrier. The feedback data can include a second value indicative of the received signal power level 454 (or 456) for a second data subcarrier. Alternatively, the feedback data can include a second value indicative of a difference between the received signal power level 454 and the received signal power level 456 for the second data subcarrier. FIG. 5 is a block diagram of a transmitter 500 with a subcarrier pre-equalization DSP block 502 according to one embodiment. The subcarrier pre-equalization DSP block 502 is similar to or includes the subcarrier pre-equalization logic 102 of FIG. 1.
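The threshold condition described above can be sketched numerically. This is a hedged illustration: the 3 dB threshold, linear-milliwatt inputs, and the choice to report the dB difference itself as the feedback value are assumptions.

```python
import math

# Compare the two receivers' per-subcarrier power levels and flag
# subcarriers whose difference exceeds an assumed 3 dB threshold.
def power_db(p_mw):
    """Convert a linear power (mW) to dB."""
    return 10 * math.log10(p_mw)

def feedback_values(rx1_mw, rx2_mw, threshold_db=3.0):
    """Return {subcarrier: dB difference} for subcarriers failing the check."""
    out = {}
    for k in rx1_mw:
        diff = abs(power_db(rx1_mw[k]) - power_db(rx2_mw[k]))
        if diff > threshold_db:
            out[k] = diff
    return out

rx1 = {-7: 1.0, 7: 1.0}
rx2 = {-7: 0.9, 7: 0.25}   # subcarrier +7 is about 6 dB down on receiver 2
fb = feedback_values(rx1, rx2)
```

Here only subcarrier +7 is reported, since 10·log10(1.0/0.25) ≈ 6 dB exceeds the threshold while subcarrier −7 differs by well under 1 dB.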
The transmitter 500 includes multiple DSP blocks, including a scrambler 504, an encoder parser 506, a forward error correction (FEC) encoder 508, a stream parser 510, an interleaver 512, a constellation mapper 514, a pilot insertion block 516, the subcarrier pre-equalization DSP block 502, an IFFT block 518, and a guard interval (GI) and windowing block 520. The transmitter 500 includes a digital-to-analog converter (DAC) 522 to convert digital data to analog data. The transmitter 500 includes an IQ modulator 524 and analog radio frequency circuitry 526. The analog radio frequency circuitry 526 is coupled to an antenna 528. During operation, the scrambler 504 receives data, such as an input data block of multiple bits, from an application or memory to be sent by the transmitter 500. The scrambler 504 scrambles the data to randomize the data and passes the scrambled data to the encoder parser 506. The encoder parser 506 parses the data to prepare the data for encoding. The FEC encoder 508 encodes the data and passes the data to the stream parser 510. The stream parser 510 parses the data into streams and passes the data to the interleaver 512. The interleaver 512 interleaves the data before it is mapped to constellation points by the constellation mapper 514. The pilot insertion block 516 inserts the pilot subcarriers. As described herein, the subcarrier pre-equalization DSP block 502 adjusts zero or more amplitudes of the data subcarriers based on feedback data 111 before the IFFT is performed by the IFFT block 518. After the IFFT, the GI addition and windowing block 520 can add the guard interval and shape the signal before the DAC 522 converts the digital signals to analog signals. The IQ modulator 524 modulates the analog signals, and the analog RF circuitry 526 sends the RF signals via the antenna 528. In at least one embodiment, the subcarrier pre-equalization DSP block 502 can adjust amplitudes to improve a frequency selective fading characteristic on one or more subcarriers of the wireless channel, such as illustrated in FIGS. 6A-6B.
FIG. 6A is a graph 600 illustrating frequency responses of propagation sub-channels of a 2×2 MIMO wireless channel, illustrating a frequency selective fading characteristic on one of the sub-channels before equalization according to one embodiment. A wireless channel can be represented as multiple sub-channel frequency responses, including i) a first frequency response 602 representing a path between a first transmitter and a first receiver (Tx1Rx1); ii) a second frequency response 604 representing a path between the first transmitter and a second receiver (Tx1Rx2); iii) a third frequency response 606 representing a path between a second transmitter and the first receiver (Tx2Rx1); and iv) a fourth frequency response 608 representing a path between the second transmitter and the second receiver (Tx2Rx2). As illustrated in the first and second frequency responses 602, 604, the wireless channel has a frequency selective fading characteristic on one of the sub-channels before equalization. That is, due to multipath fading, certain data subcarriers have lower power. Using the embodiments described herein, the data subcarriers with lower power can be identified. The amplitude of those data subcarriers can be increased before the IFFT at the transmitter. Increasing the amplitude of those data subcarriers at the transmitter removes the frequency selective fading characteristic of FIG. 6A, as illustrated in the frequency responses of FIG. 6B. FIG. 6B is a graph 650 illustrating frequency responses of propagation sub-channels of a 2×2 MIMO wireless channel, illustrating a reduction in the frequency selective fading characteristic on one of the sub-channels after equalization according to one embodiment. After equalizing the data subcarriers with lower power, the sub-channel frequency response 604 is corrected to correspond to the frequency response 602, thereby improving or removing the frequency selective fading characteristic of the wireless channel.
FIGS. 7A-7B illustrate a functional flow 700 of operations for subcarrier pre-equalization of a subset of data subcarriers according to one embodiment. The functional flow 700 starts at a first stage 702 with a transmitter IFFT 704 mapping QAM data 706 onto N orthogonal subcarriers 708. The N orthogonal subcarriers 708 can each have a first amplitude value, such as a default amplitude value. At a second stage 710, transmitter RF analog circuitry 712 generates the N subcarrier sinusoidal signals and an RF signal 714 that is a summation of the N subcarrier sinusoidal signals. At a third stage 716, the transmitter RF analog circuitry 712 sends the RF signal 714 over a wireless channel 718. The wireless channel 718 can be a multipath channel in a MIMO mode, such as when the RF signal 714 is transmitted using two or more transmit antennas or using a single transmit antenna and received by two or more receive antennas. Alternatively, the wireless channel 718 can be a single-path channel in a SISO mode. As illustrated in FIG. 7B, at a fourth stage 720, receiver RF analog circuitry 722 receives RF signal(s) 724. At a fifth stage 726, a receiver FFT 728 generates N orthogonal subcarriers 730 from the RF signal(s) 724 and maps the N orthogonal subcarriers 730 into QAM data 732. The QAM data 732 can be used to determine received signal power levels for each of the N orthogonal subcarriers 730. The QAM data 732 can be used to determine a frequency selective fading characteristic in the wireless channel 718, such as illustrated in the frequency response of FIG. 6A. The receiver can send feedback data including the QAM data 732, frequency response data, and/or gain code coefficients to adjust amplitudes of a subset of the N orthogonal subcarriers to compensate for the selective fading characteristic in the wireless channel 718.
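The transmitter-IFFT and receiver-FFT stages above form a transform pair; a minimal round trip can be sketched as follows, assuming an ideal, noiseless channel and a small N for brevity. Direct DFT/IDFT sums stand in for the FFT blocks.

```python
import cmath

N = 8  # small subcarrier count for illustration only

def idft(bins):
    """Transmitter side: map QAM points onto N orthogonal subcarriers."""
    return [sum(bins[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def dft(samples):
    """Receiver side: recover the per-subcarrier QAM points."""
    return [sum(samples[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

qam = [1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j, 1 + 1j, -1 - 1j, 1 - 1j, -1 + 1j]
recovered = dft(idft(qam))     # receiver FFT undoes the transmitter IFFT
```

Over a real multipath channel the recovered points would be attenuated unevenly across subcarriers, which is exactly what the receiver's per-subcarrier power measurement and the resulting feedback data capture.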
Referring back to FIG. 7A, using the feedback data at a sixth stage 734, the transmitter IFFT 704 can map QAM data 736 onto N orthogonal subcarriers 738 and adjust the subset of the N orthogonal subcarriers 738 from the first amplitude value to a second amplitude value, such as a higher value as illustrated in FIG. 7A. In at least one embodiment, the feedback data includes a first value indicative of a first received signal power level corresponding to the first data subcarrier and a second value indicative of a second received signal power level corresponding to the second data subcarrier. The transmitter can adjust the first data subcarrier to operate at a second amplitude value using the first value. The second amplitude value can be greater than the first amplitude value. The transmitter can maintain the second data subcarrier to operate at the first amplitude value using the second value. In at least one embodiment, the subset of the N orthogonal subcarriers 738 is adjusted to the same amplitude value. In another embodiment, each subcarrier of the subset of the N orthogonal subcarriers 738 can be individually adjusted to a unique amplitude value. In at least one embodiment, a difference value between the first amplitude value and the second amplitude value can be between approximately 2 and 3 dBuV. Alternatively, other difference values can be used for the adjusted amplitude values. As described above, the frequency selective fading characteristic can impair one or more of the MIMO sub-channels. When operating with an impaired MIMO sub-channel, an end-to-end RF link operates at a lower PHY rate, resulting in lower throughput. Using the subcarrier pre-equalization technique described herein, the MIMO sub-channel can be improved, and the end-to-end RF link can operate at higher PHY rates, resulting in higher throughput, as illustrated in FIG. 8. FIG. 8 is a graph 800 illustrating an improvement in signal-to-noise ratio (SNR) in a wireless channel according to one embodiment.
Graph 800 illustrates a packet error rate (PER) 802 in transmitted data over a wireless channel between a transmitter and a receiver without the subcarrier pre-equalization technique. Graph 800 illustrates a PER 804 in transmitted data over a wireless channel between a transmitter and a receiver with the subcarrier pre-equalization technique described herein. For PER 802, the wireless channel loss (WCL) is 54.83 dB, and the frequency selective fading characteristic in the wireless channel causes the RF link to operate at a PHY rate of MCS9, resulting in a throughput of 13 Mbps. For PER 804, the WCL can be 53.82 dB, and the subcarrier pre-equalization technique results in an SNR improvement 806 of approximately 15 dB, causing the RF link to operate at a PHY rate of MCS15, resulting in a throughput of 116 Mbps. FIG. 9 is a flow diagram of a method 900 for adjusting amplitudes of a subset of data subcarriers according to one embodiment. The method 900 may be implemented using processing logic comprising hardware, software, firmware, or any combination thereof. In one embodiment, the subcarrier pre-equalization logic 102 of FIG. 1 implements the method 900. Alternatively, the transmitter or first device as described herein implements the method 900. Referring to FIG. 9, the processing logic of a first device begins by sending first data to a second device using a set of data subcarriers that operate at a first amplitude value (block 902). For example, a first data subcarrier and a second data subcarrier of the set of data subcarriers operate at a first amplitude value. At block 904, the processing logic receives, from the second device, second data including feedback regarding the received signal power level on a per-subcarrier basis.
In at least one embodiment, the second data includes a first value indicative of a first received signal power level corresponding to the first data subcarrier and a second value indicative of a second received signal power level corresponding to the second data subcarrier. The processing logic determines a second amplitude value based on the first value (block 906). In at least one embodiment, the processing logic maintains the first amplitude value for the second data subcarrier based on the second value. The processing logic sends third data to the second device using the first data subcarrier operating at the second amplitude value and the second data subcarrier operating at the first amplitude value (block 908), and the method 900 ends. In at least one embodiment, the processing logic generates, using input bits of the first data, modulation symbols (e.g., QAM symbols or PSK symbols). The processing logic generates, using the QAM symbols, the third data. The third data can be discrete time-domain data. To generate the third data, the processing logic can perform an IFFT of the modulation symbols. In at least one embodiment, the first amplitude value is adjusted (e.g., increased) to the second amplitude value before the IFFT of the modulation symbols corresponding to the third data. In at least one embodiment, the processing logic determines the second amplitude value before the IFFT. In at least one embodiment, the processing logic maps input bits into a modulation symbol, including the set of data subcarriers. For example, the processing logic maps input bits into in-phase (I) and quadrature-phase (Q) components of QAM symbols. The QAM symbols can be ordered in a sequence according to a number of the set of subcarriers in the OFDM symbols. In another embodiment, the processing logic generates a modulation symbol using the input bits of the first data.
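The bit-to-I/Q mapping just described can be sketched for 16-QAM. The Gray-coded mapping table is an assumption for illustration; the description above does not fix a particular constellation labeling.

```python
# Groups of 4 input bits become the in-phase (I) and quadrature-phase (Q)
# components of one 16-QAM symbol, i.e., a complex constellation point.
GRAY_2BIT = {(0, 0): -3, (0, 1): -1, (1, 1): 1, (1, 0): 3}  # assumed labeling

def map_16qam(bits):
    """Map input bits, 4 at a time, to complex 16-QAM points (I + jQ)."""
    points = []
    for i in range(0, len(bits), 4):
        i_comp = GRAY_2BIT[(bits[i], bits[i + 1])]      # in-phase from first 2 bits
        q_comp = GRAY_2BIT[(bits[i + 2], bits[i + 3])]  # quadrature from last 2 bits
        points.append(complex(i_comp, q_comp))
    return points

pts = map_16qam([0, 0, 1, 0, 1, 1, 0, 1])  # two 16-QAM symbols
```

The resulting sequence of points would then be assigned to subcarriers in order, as described above, before the IFFT.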
In one embodiment, the processing logic converts the modulation symbol into discrete time-domain data. In at least one embodiment, the processing logic maps input bits into a modulation symbol and converts the modulation symbol into discrete time-domain data using an IFFT. In at least one embodiment, the amplitudes of the subset of data subcarriers are increased or otherwise adjusted before converting the modulation symbol (e.g., before the IFFT). The processing logic converts the discrete time-domain data into analog data. The processing logic modulates the analog data onto RF signals and sends the RF signals via one or more antennas. In at least one embodiment of an OFDM transmitter (e.g., an 802.11n/ac/ax OFDM transmitter), the modulated symbols are mapped to individual subcarriers and sent to an IFFT block. The outputs of the IFFT are time-domain samples. In each OFDM symbol, a certain number of subcarriers are dedicated to pilot signals in order to make the coherent detection robust against frequency offsets and phase noise. Such OFDM symbols are transmitted through the wireless channel and are subjected to small-scale fading due to constructive and destructive interference of multiple signal paths between the transmitter and receiver. This occurs at a spatial scale and is frequency-dependent. Small-scale multipath fading impacts the design of indoor wireless communication systems. Based on time delay spread, small-scale fading can be either flat fading or frequency selective fading; the latter leads to time dispersion, causing inter-symbol interference (ISI) and poor throughput. On the receiver side, an FFT is applied to the OFDM symbols for demodulation. In at least one embodiment, the feedback data's values can be gain code coefficients generated by the second device. In other embodiments, the second device sends the power level information, and the first device determines the gain code coefficients for adjusting the amplitudes of the subset of data subcarriers.
For example, the first value in the second data is a first gain code coefficient, and the second value is a second gain code coefficient. The first gain code coefficient causes the first data subcarrier to be adjusted from the first amplitude value to the second amplitude value. In at least one embodiment, the processing logic sends the first data and the third data using a first transmitter in a SISO mode. In at least one embodiment, the processing logic sends the first data and the third data using multiple transmitters in a MIMO mode. FIG. 10 is a flow diagram of a method 1000 for measuring received signal power levels on a per-subcarrier basis for adjusting amplitudes of a subset of data subcarriers according to one embodiment. The method 1000 may be implemented using processing logic comprising hardware, software, firmware, or any combination thereof. In one embodiment, the estimation logic 202 of FIG. 2 implements the method 1000. Alternatively, the receiver or first device as described herein implements the method 1000. Referring to FIG. 10, the processing logic of a first device begins by receiving first data from a second device using a set of data subcarriers that operate at a first amplitude value (block 1002). For example, a first data subcarrier and a second data subcarrier of the set of data subcarriers operate at a first amplitude value. At block 1004, the processing logic measures a first received signal power level corresponding to the first data subcarrier and a second received signal power level corresponding to the second data subcarrier. The processing logic sends second data to the second device, the second data including feedback regarding the received signal power level on a per-subcarrier basis (block 1006).
In at least one embodiment, the second data includes a first value indicative of a first received signal power level corresponding to the first data subcarrier and a second value indicative of a second received signal power level corresponding to the second data subcarrier. The first value causes the second device to increase the first amplitude value to a second amplitude value for the first data subcarrier. The second value causes the second device to maintain the first amplitude value for the second data subcarrier. The processing logic receives third data from the second device using the first data subcarrier operating at the second amplitude value and the second data subcarrier operating at the first amplitude value (block 1008), and the method 1000 ends. In at least one embodiment, the processing logic receives the first and third data via a first receiver in a SISO mode. In at least one embodiment, the processing logic receives the first and third data via two or more receivers in a MIMO mode. In at least one embodiment, the processing logic receives RF signals at a first receiver and a second receiver. The processing logic measures a first FFT response of the RF signals across the set of data subcarriers at the first receiver and a second FFT response of the RF signals across the set of data subcarriers at the second receiver. The processing logic determines, from the first FFT and second FFT responses, the first received signal power level corresponding to the first data subcarrier. The processing logic determines that the first received signal power level is lower than a threshold value. In at least one embodiment, the threshold value represents a frequency selective fading characteristic of the wireless channel. The processing logic generates a first gain code coefficient for the first value. The first gain code coefficient is greater than a second gain code coefficient for the second value.
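The gain-code generation described above can be sketched as follows. This is a hedged illustration: representing a gain code coefficient as a linear amplitude factor derived from the dB shortfall between the two receivers' FFT responses, and the 2 dB threshold, are assumptions rather than the claimed encoding.

```python
# Compare per-subcarrier FFT power responses (in dB) at two receivers and
# emit a gain code coefficient per subcarrier: a boost where Rx1 is below
# Rx2 by more than the threshold, unity gain everywhere else.
def gain_codes(rx1_db, rx2_db, threshold_db=2.0):
    codes = {}
    for k in rx1_db:
        shortfall = rx2_db[k] - rx1_db[k]          # how far Rx1 trails Rx2, dB
        codes[k] = 10 ** (shortfall / 20) if shortfall > threshold_db else 1.0
    return codes

# Subcarrier -7 is 12 dB down at Rx1 -> its gain code is ~4x in amplitude;
# subcarrier +7 matches across receivers -> unity gain (amplitude maintained).
codes = gain_codes({-7: -72.0, 7: -60.0}, {-7: -60.0, 7: -60.0})
```

Sending such a mapping to the transmitting device would cause it to scale the flagged subcarriers before the IFFT while leaving the rest at the first amplitude value.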
In at least one embodiment, the processing logic sends the first gain code coefficient and the second gain code coefficient to the second device, causing the second device to adjust the amplitude value for the first data subcarrier as described herein. In at least one embodiment, the processing logic measures a first FFT response of the RF signals across the set of data subcarriers at the first receiver and a second FFT response of the RF signals across the set of data subcarriers at the second receiver. For each data subcarrier of the set of data subcarriers, the processing logic determines whether the first FFT response is less than the second FFT response by a threshold amount. The threshold amount represents a frequency selective fading characteristic in the wireless channel. The processing logic generates and sends the second data to the second device. The second data causes the second device to increase the first amplitude value to the second amplitude value for the first transmitter. Alternatively, the processing logic generates and sends the second data to the second device to increase the first amplitude value to the second amplitude value for the second transmitter. Alternatively, the processing logic generates and sends the second data to the second device to increase the first amplitude value to the second amplitude value for both the first transmitter and the second transmitter. FIG. 11 is a flow diagram of a subcarrier pre-equalization method 1100 according to one embodiment. The method 1100 may be implemented using processing logic comprising hardware, software, firmware, or any combination thereof. In one embodiment, the estimation logic 202 of FIG. 2 implements the method 1100. Alternatively, the receiver or first device as described herein implements the method 1100. Referring to FIG. 11, the processing logic of a first device begins by measuring the FFT response of each receiver (e.g., Rx1/Rx2) across the subcarriers (block 1102).
The processing logic determines whether the FFT responses are similar for the receivers (e.g., Rx1=Rx2 or within a threshold range) (block 1104). If the FFT responses are similar for the receivers, the processing logic can use the default amplitude values (block 1106). The processing logic can return to measure the FFT responses periodically and check whether the FFT responses are still similar. In response to a determination at block 1104 that the FFT responses are not similar for the receivers, the processing logic sets an index, i, equal to 1 at a start of a sequence of iterations. The processing logic determines whether the FFT response for a first receiver (Rx1) is less than the FFT response for a second receiver (Rx2). For example, a threshold amount can be defined, and if the Rx1 response is less than the Rx2 response by more than the threshold amount, the processing logic can perform subcarrier pre-equalization at block 1110. If the Rx1 response is not less than the Rx2 response, the processing logic returns to block 1102. At a first iteration at block 1110, the processing logic pre-equalizes subcarriers at a transmitter baseband section of a first transmitter. At a second iteration at block 1110, the processing logic pre-equalizes subcarriers at a transmitter baseband section of a second transmitter. At a third iteration at block 1110, the processing logic pre-equalizes subcarriers at the transmitter baseband sections of the first transmitter and the second transmitter. After each iteration, the processing logic returns to measure the receivers' FFT responses across the subcarriers at block 1102. FIG. 12 is a block diagram of an electronic device 1200 in which embodiments of the subcarrier pre-equalization logic 102 and estimation logic 202 may be implemented. The electronic device 1200 may correspond to the electronic device 100 of FIG. 1 or the electronic device 200 of FIG. 2.
The electronic device 1200 may be any type of computing device, such as an electronic book reader, a PDA, a mobile phone, a laptop computer, a portable media player, a tablet computer, a camera, a video camera, a netbook, a desktop computer, a gaming console, a DVD player, a Blu-ray® player, a computing pad, a media center, an audio-input-enabled device, a speech-based personal data assistant, and the like. The electronic device 1200 may be any portable or stationary user device. For example, the electronic device 1200 may be an intelligent voice control and speaker system. Alternatively, the electronic device 1200 can be any other device used in a WLAN network (e.g., a Wi-Fi® network), a WAN network, or the like. The electronic device 1200 includes one or more processor(s) 1230, such as one or more CPUs, microcontrollers, field-programmable gate arrays, or other types of processing devices. The electronic device 1200 also includes system memory 1206, which may correspond to any combination of volatile and/or non-volatile storage mechanisms. The system memory 1206 stores information that provides an operating system component 1208, various program modules 1210 such as the subcarrier pre-equalization logic 102 and estimation logic 202 described herein, program data 1212, and/or other components. In one embodiment, the system memory 1206 stores instructions of the methods as described herein. The electronic device 1200 performs functions by using the processor(s) 1230 to execute instructions provided by the system memory 1206. The electronic device 1200 also includes a data storage device 1214 that may be composed of one or more types of removable storage and/or one or more types of non-removable storage. The data storage device 1214 includes a computer-readable storage medium 1216 on which is stored one or more sets of instructions embodying any of the methodologies or functions described herein, such as the subcarrier pre-equalization logic 102 and estimation logic 202 described herein.
Instructions for the program modules 1210 may reside, completely or at least partially, within the computer-readable storage medium 1216, the system memory 1206, and/or within the processor(s) 1230 during execution thereof by the electronic device 1200, the system memory 1206 and the processor(s) 1230 also constituting computer-readable media. The electronic device 1200 may also include one or more input devices 1218 (keyboard, mouse device, specialized selection keys, etc.) and one or more output devices 1220 (displays, printers, audio output mechanisms, etc.). The electronic device 1200 further includes a modem 1222 to allow the electronic device 1200 to communicate via a wireless network (e.g., such as provided by the wireless communication system) with other computing devices, such as remote computers, an item-providing system, and so forth. The modem 1222 can be connected to one or more radios 1286. The modem 1222 can include the subcarrier pre-equalization logic 102 and estimation logic 202 described herein. The radios may include a WLAN radio, a WAN radio, a PAN radio, or the like, as described herein. Antennas 1288 are coupled to the radios 1286, which are coupled to the modem 1222. The antennas 1288 may include a first WLAN antenna, a second WLAN antenna, and a PAN antenna as described herein. Additional antennas may be used and may be GPS antennas, NFC antennas, other WAN antennas, WLAN or PAN antennas, or the like. The modem 1222 allows the electronic device 1200 to handle both voice and non-voice communications (such as communications for text messages, multi-media messages, media downloads, web browsing, etc.) with a wireless communication system.
The modem 1222 may provide network connectivity using any type of mobile network technology including, for example, cellular digital packet data (CDPD), general packet radio service (GPRS), EDGE, universal mobile telecommunications system (UMTS), 1 times radio transmission technology (1×RTT), evolution-data optimized (EVDO), high-speed downlink packet access (HSDPA), Wi-Fi®, Long Term Evolution (LTE) and LTE Advanced (sometimes generally referred to as 4G), etc. The modem 1222 may generate signals and send these signals to antennas 1288, via RF radio(s) 1286 as described herein. The electronic device 1200 may additionally include a WLAN radio, a GPS receiver, a PAN transceiver, and/or other RF radios. These RF radios may additionally or alternatively be connected to one or more of antennas 1288. Antennas 1288 may be configured to transmit in different frequency bands and/or using different wireless communication protocols. The antennas 1288 may be directional, omnidirectional, or non-directional antennas. In addition to sending data, antennas 1288 may also receive data, which is sent to appropriate RF radios connected to the antennas. In one embodiment, the electronic device 1200 establishes a first connection using a first wireless communication protocol, and a second connection using a different wireless communication protocol. The first wireless connection and second wireless connection may be active concurrently, for example, if a user device is downloading a media item from a server (e.g., via the first connection) and transferring a file to another user device (e.g., via the second connection) at the same time. Alternatively, the two connections may be active concurrently during a handoff between wireless connections to maintain an active session (e.g., for a telephone conversation). Such a handoff may be performed, for example, between a connection to a WLAN hotspot and a connection to a wireless carrier system.
In one embodiment, the first wireless connection is associated with a first resonant mode of an antenna structure that operates at a first frequency band and the second wireless connection is associated with a second resonant mode of the antenna structure that operates at a second frequency band. In another embodiment, the first wireless connection is associated with a first antenna element and the second wireless connection is associated with a second antenna element. In other embodiments, the first wireless connection may be associated with a media purchase application (e.g., for downloading electronic books), while the second wireless connection may be associated with a wireless ad hoc network application. Other applications that may be associated with one of the wireless connections include, for example, a game, a telephony application, an Internet browsing application, a file transfer application, a global positioning system (GPS) application, and so forth. Though a modem 1222 is shown to control transmission and reception via antenna 1288, the electronic device 1200 may alternatively include multiple modems, each of which is configured to transmit/receive data via a different antenna and/or wireless transmission protocol. The electronic device 1200 delivers and/or receives items, upgrades, and/or other information via the network. For example, the electronic device 1200 may download or receive items from an item-providing system. The item-providing system receives various requests, instructions and other data from the electronic device 1200 via the network. The item-providing system may include one or more machines (e.g., one or more server computer systems, routers, gateways, etc.) with processing and storage capabilities to provide the above functionality. Communication between the item-providing system and the electronic device 1200 may be enabled via any communication infrastructure.
One example of such an infrastructure includes a combination of a wide area network (WAN) and wireless infrastructure, which allows a user to use the electronic device 1200 to purchase items and consume items without being tethered to the item-providing system via hardwired links. The wireless infrastructure may be provided by one or multiple wireless communication systems. One wireless communication system may be a wireless local area network (WLAN) hotspot connected with the network. The WLAN hotspots can be created by products using the Wi-Fi® technology based on IEEE 802.11x standards by the Wi-Fi Alliance. Another wireless communication system may be a wireless carrier system that can be implemented using various data processing equipment, communication towers, etc. Alternatively, or in addition, the wireless carrier system may rely on satellite technology to exchange information with the electronic device 1200. The communication infrastructure may also include a communication-enabling system that serves as an intermediary in passing information between the item-providing system and the wireless communication system. The communication-enabling system may communicate with the wireless communication system (e.g., a wireless carrier) via a dedicated channel, and may communicate with the item-providing system via a non-dedicated communication mechanism, e.g., a public wide area network (WAN) such as the Internet. The electronic devices 1200 are variously configured with different functionality to enable consumption of one or more types of media items. The media items may be any type or format of digital content, including, for example, electronic texts (e.g., eBooks, electronic magazines, digital newspapers, etc.), digital audio (e.g., music, audible books, etc.), digital video (e.g., movies, television, short clips, etc.), images (e.g., art, photographs, etc.), and multi-media content.
The electronic devices 1200 may include any type of content rendering devices such as electronic book readers, portable digital assistants, mobile phones, laptop computers, portable media players, tablet computers, cameras, video cameras, netbooks, notebooks, desktop computers, gaming consoles, DVD players, media centers, and the like. In the above description, numerous details are set forth. It will be apparent, however, to one of ordinary skill in the art having the benefit of this disclosure, that embodiments may be practiced without these specific details. In some instances, well-known structures and devices are shown in block diagram form, rather than in detail, in order to avoid obscuring the description. Some portions of the detailed description are presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of steps leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like. It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities.
Unless specifically stated otherwise as apparent from the above discussion, it is appreciated that throughout the description, discussions utilizing terms such as “inducing,” “parasitically inducing,” “radiating,” “detecting,” “determining,” “generating,” “communicating,” “receiving,” “disabling,” or the like, refer to the actions and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (e.g., electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices. Embodiments also relate to an apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer-readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs and magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions. The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct a more specialized apparatus to perform the required method steps. The required structure for a variety of these systems will appear from the description below. In addition, the present embodiments are not described with reference to any particular programming language.
It will be appreciated that a variety of programming languages may be used to implement the teachings of the present embodiments as described herein. It should also be noted that the terms “when” or the phrase “in response to,” as used herein, should be understood to indicate that there may be intervening time, intervening events, or both before the identified operation is performed. It is to be understood that the above description is intended to be illustrative, and not restrictive. Many other embodiments will be apparent to those of skill in the art upon reading and understanding the above description. The scope of the present embodiments should, therefore, be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.
The same reference numbers or other reference designators are used in the drawings to designate the same or similar (functionally and/or structurally) features. DETAILED DESCRIPTION The drawings are not necessarily to scale. OOK modulation circuitry is utilized by a number of applications, including digital isolator circuitry. Digital isolators (as used herein, “digital isolator” and “digital isolation circuitry” mean circuitry that includes an isolation barrier and/or circuitry to transmit and/or receive signals, such as data and/or instructions, across the isolation barrier) may be used to transmit information over isolation barriers by using a digital input signal to modulate a carrier signal (e.g., modulating a carrier frequency with data) before transmission. Before the carrier signal traverses the isolation barrier, the digital isolator uses an OOK modulation circuit to convert a digital input signal into a modulated carrier signal. The OOK modulation circuit is configured to use OOK modulation techniques to generate the modulated carrier signal based on the digital input signal. A modulated carrier signal may exhibit variations (such as jitter, duty cycle distortion, intersymbol interference, etc.) as a result of OOK modulation circuitry. The variations may be caused by the speed of the data transmission on the digital input signal and variations within circuit components comprising the OOK modulation circuitry. For example, jitter on the modulated carrier signal may be the result of noise produced by the OOK modulation circuitry during the process of using a digital input to modulate a carrier signal. Intersymbol interference (ISI) is a form of distortion in a signal in which one symbol interferes with one or more subsequent symbols.
The modulated carrier signal may exhibit ISI as a result of the components of the OOK modulation circuitry not being designed to operate at a frequency required by the speed of the data transmission of the digital input signal. For example, the carrier signal exhibits ISI as the result of the OOK modulation circuitry not enabling and/or disabling a transistor at a frequency required to accurately represent the data transmission used to modulate the carrier signal. Typically, OOK modulation circuitry includes a digital input signal, an oscillator input signal, and a bias current input. In some applications, the OOK modulation circuitry is configured to generate an OOK carrier signal by using the digital input signal to control the modulation of the carrier signal by controlling a first switch to enable the oscillator input signal to enable a second switch to generate the OOK carrier signal. The OOK modulation circuitry is configured to enable or disable the first switch based on the digital input signal enabling or disabling a gate voltage of the first switch. Conventionally, the OOK modulation circuitry is configured to use a bias current input to generate the gate voltage used to enable the first switch. In such an arrangement, the OOK modulation circuitry may generate a carrier signal that exhibits ISI as a result of the speed of the data transmission of the digital input signal being greater than the speed at which the OOK modulation circuitry may enable or disable the first switch. The OOK modulation circuitry of some examples is configured to use a digital input signal to control a switch which enables an oscillator input signal to generate a modulated carrier signal. The OOK modulation circuitry of some examples includes example circuitry to limit the range of a voltage applied to a gate terminal used to generate a modulated carrier signal. In such an example, the limited range of the gate voltage allows the switch to be enabled and disabled at a faster rate.
The OOK modulation circuitry of some examples is configured to use a plurality of voltage biases to limit the range of the gate voltage to voltage values near the threshold of a transistor. In some described examples, the OOK modulation circuitry includes level shifting circuitry to decrease the rise and fall time of a digital input signal. The OOK modulation circuitry implements both a reduced range of the gate voltage and level shifting circuitry to reduce variations in the OOK modulation of the digital input signal. Alternatively, the OOK modulation circuitry may implement either a reduced range of the gate voltages or the input level shifting circuitry to reduce variations in the modulated carrier signal. FIG. 1 is a block diagram of example digital isolator circuitry 100. In the example of FIG. 1, the digital isolator circuitry 100 includes an example transmission circuit 102, an example isolation barrier circuit 104, and an example receiver circuit 106. The transmission circuit 102 is configured to generate a modulated carrier signal based on a digital input signal, such that the modulated carrier signal may traverse an isolation barrier. The isolation barrier circuit 104 is configured to include an isolation barrier, wherein the modulated carrier signal may be transmitted to the receiver circuit 106. The receiver circuit 106 is configured to generate a digital output signal based on the modulated carrier signal received from the isolation barrier circuit 104. In the example of FIG. 1, the transmission circuit 102 includes an example first digital input terminal 108, a second digital input terminal 110, an example current mode logic (CML) buffer 112, example CML to complementary metal oxide semiconductor (CMOS) converter circuitry 114, example OOK modulation circuitry 116, and an example oscillator 118. The transmission circuit 102 is configured to generate a modulated carrier signal based on a digital input coupled to the digital input terminals 108 and 110.
The digital input terminals 108 and 110 are configured to represent the digital input as a differential signal, such that the difference between the voltages of the digital input terminals 108 and 110 represents the digital input. For example, the digital input is a logic high as a result of the voltage difference between the digital input terminals being approximately 3.3 volts (V). Alternatively, the digital input of the transmission circuit 102 may be a single-ended signal, such that the difference between the voltage of a digital input terminal (e.g., the digital input terminal 108 or 110) and common potential (e.g., ground) represents the digital input. The digital input terminals 108 and 110 are coupled to the CML buffer 112. The CML buffer 112 is configured as a differential buffer. The digital input terminals 108 and 110 are buffered by the CML buffer 112, such that a differential output of the CML buffer 112 is isolated from the digital input terminals 108 and 110. Alternatively, the CML buffer 112 may be a plurality of single-ended buffers configured to individually buffer each of the digital input terminals 108 and 110. The CML buffer 112 is coupled to the CML-to-CMOS converter circuitry 114. The CML-to-CMOS converter circuitry 114 is configured to convert the digital input signals from CML to CMOS logic. CML is typically used for digital logic operations, such that digital circuitry may be configured to generate, alter, and/or process a digital signal. CMOS logic is typically used for signal transmission as a result of greater power efficiency at higher frequencies than a CML signal. Advantageously, the CML-to-CMOS converter circuitry 114 increases the efficiency of the digital isolator circuitry 100 by converting the digital input signal from CML to CMOS for more efficient transmission across the isolation barrier circuit 104. The CML-to-CMOS converter circuitry 114 is coupled to the OOK modulation circuitry 116.
The OOK modulation circuitry 116 is configured to generate a modulated carrier signal based on a digital signal input and an oscillator input. The OOK modulation circuitry 116 may be configured as a power amplifier, such that an output of the OOK modulation circuitry 116 may traverse the isolation barrier circuit 104. The OOK modulation circuitry 116 may generate the modulated carrier signal to be of a frequency of the oscillator 118 as the result of a logic “1,” or a logic high, of the digital input signal. For example, the logic high signal is a signal (e.g., a voltage, a current, etc.) representative of a digital one (e.g., a digital ‘1’ or a logic ‘1’), such as a voltage of 2.2 V, 3.3 V, 5 V, etc. In some examples, a logic low signal is a signal representative of a digital zero (e.g., a digital ‘0’ or a logic ‘0’), such as a ground voltage. The CML buffer 112 and the CML-to-CMOS converter circuitry 114 are configured to buffer and convert the input received at the digital input terminals 108 and 110. Such operations result in a logic “0” and logic “1” being represented by the same value at the output of the CML-to-CMOS converter circuitry 114 as at the digital input terminals 108 and 110. In this example, the CML-to-CMOS converter circuitry 114 outputs a logic high signal which may be used to control a transistor to modulate the carrier signal. The OOK modulation circuitry 116 may generate the modulated carrier signal to be equal to common potential (e.g., ground) as the result of a logic “0” or a logic low of the digital input signal. For example, the modulated carrier signal generated by the OOK modulation circuitry 116 would have a frequency of the oscillator 118 as the result of the digital input terminals 108 and 110 being configured to represent a logic high (e.g., there is a potential difference between the terminals). Advantageously, the OOK modulation circuitry 116 is configured to generate a modulated carrier signal of enough power to traverse the isolation barrier circuit 104.
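The gating behavior described above — the output follows the oscillator during a logic “1” and sits at common potential during a logic “0” — can be sketched as a behavioral model in Python. This is an illustrative sketch only, not the patent's circuit; the function name and the carrier/bit-rate/sample-rate numbers are assumptions chosen for the example.

```python
import math

def ook_modulate(bits, carrier_hz, bit_rate_hz, sample_rate_hz):
    """Gate a sinusoidal carrier with a bit stream: carrier for '1', zero for '0'."""
    samples_per_bit = int(sample_rate_hz / bit_rate_hz)
    out = []
    for i, bit in enumerate(bits):
        for n in range(samples_per_bit):
            t = (i * samples_per_bit + n) / sample_rate_hz
            out.append(math.sin(2 * math.pi * carrier_hz * t) if bit else 0.0)
    return out

# Illustrative numbers only: 8 carrier cycles per bit, 64 samples per bit.
wave = ook_modulate([1, 0, 1], carrier_hz=8.0, bit_rate_hz=1.0, sample_rate_hz=64.0)
```

During the middle “0” bit the output is exactly zero, while during each “1” bit the output swings at full carrier amplitude, mirroring the on/off keying behavior of the OOK modulation circuitry 116.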
The OOK modulation circuitry 116 is coupled to the oscillator 118. The oscillator 118 is configured to output a differential (e.g., two complementary signals) sinusoidal wave (OSCP and OSCM) of a frequency, which may be referred to as a carrier frequency. The oscillator 118 may be configured to generate a signal with a frequency based on a speed of the data transmission and an intended frequency of the modulated carrier signal. For example, the oscillator 118 may be configured to generate a sinusoidal signal with a frequency of approximately 14.5 gigahertz (GHz). This signal can be used by the OOK modulation circuitry 116 as the carrier frequency signal which is modulated by a digital signal (which may have a data rate of approximately 480 megabits per second (Mbps)). Advantageously, the frequency of the modulated carrier signal generated by the OOK modulation circuitry 116 may be modified based on the frequency of the oscillator 118. In the example of FIG. 1, the transmission circuit 102 is coupled to the isolation barrier circuit 104. The isolation barrier circuit 104 includes an example first inductor 120, a second inductor 122, an example first capacitor (CISO) 124, an example first bond wire (Lbond) 126, a second capacitor (CISO) 128, a second bond wire (Lbond) 130, a third capacitor (CISO) 132, a third inductor 134, a fourth capacitor (CISO) 136, and a fourth inductor 138. The isolation barrier circuit 104 is configured to isolate the transmission circuit 102 from the receiver circuit 106. Inductors 120 and 122 may be magnetically coupled (e.g., they may form a transformer). The OOK modulation circuitry 116 of the transmission circuit 102 is coupled to the first inductor 120. The first inductor 120 is magnetically coupled to the second inductor 122. The first inductor 120 is configured to induce a current in the second inductor 122 based on the modulated carrier signal generated by the OOK modulation circuitry 116.
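As a quick sanity check on the example numbers above (a 14.5 GHz carrier modulated by a 480 Mbps data stream, both taken from the text), each bit period contains roughly thirty carrier cycles:

```python
carrier_hz = 14.5e9    # example oscillator frequency from the text
data_rate_bps = 480e6  # example data rate from the text
cycles_per_bit = carrier_hz / data_rate_bps  # roughly 30.2 carrier cycles per bit
```

With tens of carrier cycles per bit, the envelope of the modulated signal can be resolved on the receive side even if a few cycles at each bit edge are distorted.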
The first inductor 120 may be configured based on the second inductor 122, such that the difference between inductors may induce currents of different magnitudes. The first inductor 120 may be configured to induce the current in the second inductor 122 based on the properties (e.g., the number of windings, the direction of the windings, etc.) of the inductors 120 and 122. The second inductor 122 is coupled between the capacitors 124 and 128. The second inductor 122 is configured to induce a current based on the modulated carrier signal of the first inductor 120. The current induced in the second inductor 122 is configured to traverse an isolation barrier (e.g., the isolation barrier formed by capacitors 124, 128, 132 and 136). The second inductor 122 may be configured to induce a current based on the properties (e.g., the number of windings, the direction of the windings, etc.) of the inductors 120 and 122. The first capacitor 124 is coupled between the second inductor 122 and the first bond wire 126. The first capacitor 124 is configured to isolate the second inductor 122 from the first bond wire 126. The first capacitor 124 is configured to remove any direct current that may be induced in the first bond wire 126 or induced within the second inductor 122. Alternatively, the first bond wire 126 may be another type of conductor, such as a metal wiring in a semiconductor device or a metal trace on a printed circuit board (PCB). The second capacitor 128 is coupled between the second inductor 122 and the second bond wire 130. The second capacitor 128 is configured to isolate the second inductor 122 from the second bond wire 130. The second capacitor 128 is configured to remove any direct current that may be induced in the second bond wire 130 or induced within the second inductor 122. Alternatively, the second bond wire 130 may be another type of conductor, such as a metal wiring in a semiconductor device or a metal trace on a PCB. The third capacitor 132 is coupled between the first bond wire 126 and the third inductor 134.
The third capacitor 132 is configured to isolate the third inductor 134 from the first bond wire 126. The third capacitor 132 is configured to remove any direct current that may be induced in the first bond wire 126 or induced within the third inductor 134. The third inductor 134 is coupled between the capacitors 132 and 136. The third inductor 134 is configured to induce a current in the fourth inductor 138 based on the modulated carrier signal induced in the second inductor 122 (e.g., the third inductor 134 is magnetically coupled to the fourth inductor 138). The third inductor 134 may be configured to induce the current in the fourth inductor 138 based on the properties (e.g., the number of windings, the direction of the windings, etc.) of the inductors 134 and 138. The inductors 134 and 138 may be magnetically coupled (e.g., they may form a transformer). The fourth capacitor 136 is coupled between the second bond wire 130 and the third inductor 134. The fourth capacitor 136 is configured to isolate the second bond wire 130 from the third inductor 134. The fourth capacitor 136 is configured to remove any direct current that may be induced in the second bond wire 130 or induced within the third inductor 134. In the example of FIG. 1, the isolation barrier circuit 104 is coupled to the receiver circuit 106. The receiver circuit 106 includes example OOK envelope detector circuitry 140, example single-ended to differential converter circuitry 142, an example low voltage differential signal (LVDS) buffer 144, an example first digital output terminal 146, and a second digital output terminal 148. The receiver circuit 106 is configured to generate a digital output signal based on the modulated carrier signal from the isolation barrier circuit 104. The fourth inductor 138 of the isolation barrier circuit 104 is coupled to the OOK envelope detector circuitry 140.
The OOK envelope detector circuitry 140 is configured to generate a digital output signal based on the modulated carrier signal induced in the fourth inductor 138 of the isolation barrier circuit 104. The OOK envelope detector circuitry 140 may be configured to generate a logic “1” or a logic high (based on the modulated carrier signal transmitted across the isolation barrier 104) for a duration based on detecting the carrier frequency during that duration. The OOK envelope detector circuitry 140 may be configured to generate a logic “0” or a logic low (based on the modulated carrier signal transmitted across the isolation barrier 104) for a duration based on detecting an absence of the carrier frequency during that duration. For example, the OOK envelope detector circuitry 140 would generate a logic high (or a logic “1”) for 10 picoseconds (ps) based on detecting the frequency of the modulated carrier for the same 10 ps duration. The OOK envelope detector circuitry 140 is coupled to the single-ended to differential converter circuitry 142. The single-ended to differential converter circuitry 142 is configured to convert the digital output signal generated by the OOK envelope detector circuitry 140 into a differential digital output signal, such that the difference between the digital output terminals 146 and 148 represents a digital signal. For example, the single-ended to differential converter circuitry 142 may generate a logic high (or a logic “1”) by creating a potential difference between two outputs (OUTP and OUTM) of a magnitude based on a difference between the digital output signal generated by the OOK envelope detector circuitry 140 and common potential (e.g., ground). The single-ended to differential converter circuitry 142 is coupled to the LVDS buffer 144. The LVDS buffer 144 is configured as a differential buffer. The LVDS buffer 144 is configured to isolate a differential output of the single-ended to differential converter circuitry 142 from the digital output terminals 146 and 148.
Alternatively, the LVDS buffer 144 may be a plurality of single-ended buffers configured to individually buffer each of the digital output terminals 146 and 148. Advantageously, the digital output signal generated by the LVDS buffer 144 is a digital representation of the modulated carrier signal. In some examples, the digital isolator circuitry 100 is a single integrated circuit (IC) (such as circuitry implemented on a single semiconductor die or on multiple dies but within a single IC package). For example, the transmission circuit 102 and the receiver circuit 106 may be included on the same semiconductor die. In some examples, the digital isolator circuitry 100 may be implemented by two or more ICs in a single IC package or may be implemented as a multi-chip module (MCM). In some examples, the digital isolator circuitry 100 may be implemented by two or more ICs (such as two or more IC packages). For example, the transmission circuit 102 may be on a first die and the receiver circuit 106 may be on a second die. In some examples, the transmission circuit 102 may be on a first die, the isolation barrier circuit 104 may be on a second die, and the receiver circuit 106 may be on a third die. Alternatively, one or more hardware circuit components (such as the CML buffer 112, the CML-to-CMOS converter circuitry 114, the OOK modulation circuitry 116, etc.) of the transmission circuit 102 may be included in the isolation barrier circuit 104. Alternatively, one or more hardware circuit components (such as the inductors 120 and 122, the capacitors 124 and 128, etc.) of the isolation barrier circuit 104 may be included in the transmission circuit 102. Alternatively, one or more hardware circuit components (such as the inductors 134 and 138, the capacitors 132 and 136, etc.) of the isolation barrier circuit 104 may be included in the receiver circuit 106. In example operation, the digital isolator circuitry 100 is configured to receive a differential digital input signal at the digital input terminals 108 and 110.
Alternatively, the digital isolator circuitry 100 may be configured to receive a single-ended digital input signal at the digital input terminals 108 and/or 110. The CML buffer 112 is configured to buffer the digital input signal, such that circuitry coupled to the digital input terminals 108 and 110 is less likely to alter the operation of the digital isolator circuitry 100. The CML buffer 112 outputs a differential digital input signal to the CML-to-CMOS converter circuitry 114, such that the CML-to-CMOS converter circuitry 114 may convert the differential digital input signal to a CMOS digital input signal. Advantageously, the conversion from a CML signal to a CMOS signal increases the power efficiency of the modulated carrier signal as it traverses the isolation barrier circuit 104. The CMOS digital input signal is coupled to the OOK modulation circuitry 116. The OOK modulation circuitry 116 is configured to implement OOK modulation to generate a modulated carrier signal based on the CMOS digital input signal and the oscillator 118. For example, the OOK modulation circuitry 116 generates a digital logic high on the modulated carrier signal by enabling the oscillator 118 to contribute a signal of a magnitude greater than zero for the duration of the digital logic high. Advantageously, the OOK modulation circuitry 116 generates a modulated carrier signal capable of traversing the isolation barrier circuit 104. The modulated carrier signal is induced by the first inductor 120 in the second inductor 122. The modulated carrier signal is configured to traverse the wire bonds 126 and 130. The modulated carrier signal is induced by the third inductor 134 in the fourth inductor 138, such that the receiver circuit 106 may receive the modulated carrier signal as an input. The receiver circuit 106 is configured to receive the modulated carrier signal from the fourth inductor 138 of the isolation barrier circuit 104.
The OOK envelope detector circuitry 140 is configured to generate a digital output signal based on the modulated carrier signal. For example, the OOK envelope detector circuitry 140 may generate a logic low based on determining that the magnitude of the modulated carrier signal is near common potential (e.g., ground). Advantageously, the OOK envelope detector circuitry 140 is configured to generate the digital output signal based on the modulated carrier signal, such that the digital output signal is similar (ideally identical) to the CMOS digital signal generated by the CML-to-CMOS converter circuitry 114. The single-ended to differential converter circuitry 142 is configured to generate a differential digital output signal based on the digital output signal generated by the OOK envelope detector circuitry 140. The LVDS buffer 144 is configured to buffer the differential digital output signal from the digital output terminals 146 and 148. Advantageously, the LVDS buffer 144 is configured to prevent circuitry coupled to the digital output terminals 146 and 148 from altering the functionality of the digital isolator circuitry 100. In some examples, the transmitter 102 may be implemented as a transceiver (e.g., a transmitter and/or receiver) and the receiver 106 may be implemented as a transceiver so that signals may pass through the isolation barrier 104 in either direction. In such examples, the transmitter 102 may include additional circuitry to receive signals and/or the receiver 106 may include additional circuitry to transmit signals. FIG. 2 is a block diagram of an example implementation of the OOK modulation circuitry 116 of FIG. 1.
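The receive-side behavior — recovering a digital output from the envelope of the modulated carrier — can likewise be sketched as a behavioral model. This is not the detector circuit itself: rectification plus a per-bit average stands in for the envelope detector circuitry 140, and the threshold value, bit pattern, and sampling parameters are assumptions of the example.

```python
import math

def ook_demodulate(wave, samples_per_bit, threshold=0.25):
    """Envelope-detection sketch: rectify, average each bit slot, then threshold."""
    bits = []
    for i in range(0, len(wave), samples_per_bit):
        chunk = wave[i:i + samples_per_bit]
        envelope = sum(abs(s) for s in chunk) / len(chunk)  # crude low-pass filter
        bits.append(1 if envelope > threshold else 0)
    return bits

# Build a toy OOK waveform for an example bit pattern (8 carrier cycles per bit).
spb = 64
tx_bits = [1, 0, 1, 1, 0]
wave = [math.sin(2 * math.pi * 8 * (n / 64.0)) if tx_bits[n // spb] else 0.0
        for n in range(spb * len(tx_bits))]
rx_bits = ook_demodulate(wave, spb)
```

For a full-amplitude carrier the rectified average is about 2/π ≈ 0.64, well above the threshold, while an idle slot averages to zero, so `rx_bits` reproduces `tx_bits`.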
In the example of FIG. 2, the OOK modulation circuitry 116 includes an example first digital input terminal 205, a second digital input terminal 210, example current mirror circuitry 215, example level shifter circuitry 220, example OOK modulator circuitry 225, an example first oscillator input terminal 230, a second oscillator input terminal 235, an example first modulated carrier output terminal 240, and a second modulated carrier output terminal 245. The OOK modulation circuitry 116 is configured to generate an OOK modulated carrier signal based on a digital input signal from the CML-to-CMOS converter circuitry 114 of FIG. 1 and the oscillated signal (e.g., sinusoidal carrier signal) from the oscillator 118. The digital input terminals 205 and 210 are configured to represent a differential digital input signal, such that the digital input terminals 205 and 210 carry complementary signals. For example, the digital input signal may be a logic low based on the first digital input terminal 205 being determined to be approximately (preferably exactly) equal to common potential (e.g., ground) and the second digital input terminal 210 being determined to be a logic high. Alternatively, the OOK modulation circuitry 116 may be modified to receive a single-ended digital input signal by coupling one of the digital input terminals 205 or 210 to the single-ended digital input signal and the other digital input terminal 205 or 210 to an inverted replica of the single-ended digital input signal. The digital input terminals 205 and 210 are configured as the inputs of the current mirror circuitry 215. The current mirror circuitry 215 is configured to generate a current representing the digital input signal. For example, the current mirror circuitry 215 would generate a current representing a logic low during the same duration as the digital input signal representing a logic low. Alternatively, the OOK modulation circuitry 116 may be modified to include a buffer to replace the current mirror circuitry 215. 
Advantageously, the current mirror circuitry 215 isolates circuitry coupled to the digital input terminals 205 and 210, such that the impacts of that circuitry on the OOK modulation circuitry 116 are reduced. The current mirror circuitry 215 is coupled to the level shifter circuitry 220. The level shifter circuitry 220 is configured to generate a shifted differential digital signal with a maximum voltage (representing a logic "1", a logic high, or a differentially positive value) and a minimum voltage (representing a logic "0", a logic low, or a differentially negative value) based on the current representation of the digital input signal generated by the current mirror circuitry 215. For example, the level shifter circuitry 220 may be configured to provide approximately 3 volts as the maximum value, representing a logic high, and approximately 0.7 volts as the minimum value, representing a logic low. The level shifter circuitry 220 may be configured to generate the shifted differential digital signal based on the components of the OOK modulator circuitry 225. Advantageously, the shifted differential digital signal generated by the level shifter circuitry 220 may transition between a logic high and a logic low at a speed greater than a transition of the digital input signal based on the reduced difference between the maximum voltage and the minimum voltage of the shifted differential digital signal. The level shifter circuitry 220 is coupled to the OOK modulator circuitry 225. The OOK modulator circuitry 225 is configured to generate a modulated carrier signal based on the shifted differential digital signal generated by the level shifter circuitry 220 and a sinusoidal signal received at the oscillator input terminals 230 and 235 from the oscillator 118 of FIG. 1. 
For example, the OOK modulator circuitry 225 may output the signal received at the oscillator input terminals 230 or 235 to represent a logic high, or the OOK modulator circuitry 225 may alter the magnitude of the received oscillator signal based on the level-shifted signal (e.g., the level-shifted logic "1" value) received from the level shifter circuitry 220. The OOK modulator circuitry 225 is configured to generate the modulated carrier signal on the modulated carrier output terminals 240 and 245. Advantageously, the shifted differential digital signal enables the OOK modulator circuitry 225 to generate a modulated carrier signal with reduced jitter and ISI. In example operation, a differential digital input signal is coupled to the digital input terminals 205 and 210. The current mirror circuitry 215 is configured to generate a current representing the differential input signal. Advantageously, the current mirror circuitry 215 isolates the differential digital input signal from the OOK modulator circuitry 225. The level shifter circuitry 220 generates a shifted differential digital signal based on the current representing the differential input signal, such that the difference, in voltage, between a logic high and a logic low is reduced. The shifted differential digital signal generated by the level shifter circuitry 220 is configured to transition between a logic high and a logic low at a speed greater than the transition of the differential digital input signal. The OOK modulator circuitry 225 generates the modulated carrier signal on the modulated carrier output terminals 240 and 245 based on the shifted differential digital signal and the signals at the oscillator input terminals 230 and 235. Advantageously, the modulated carrier signal generated by the OOK modulation circuitry 116 exhibits reduced jitter and ISI compared to a configuration in which the OOK modulator circuitry 225 generates the modulated carrier signal directly from the differential input, with the current mirror circuitry 215 and the level shifter circuitry 220 disabled. 
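The speed benefit of the reduced swing can be illustrated numerically. Assuming a fixed slew rate (an assumed value, not from the text), a signal that only has to traverse the 3 V to 0.7 V range of the earlier example settles faster than a rail-to-rail signal:

```python
# Illustrative only: with a fixed slew rate, a smaller logic swing settles sooner.
# The 3 V / 0.7 V levels come from the level shifter example in the text; the
# slew rate and the 3.3 V rail-to-rail swing are assumed for comparison.
def transition_time_ns(v_high, v_low, slew_v_per_ns):
    """Time for a slew-limited edge to traverse the full logic swing."""
    return (v_high - v_low) / slew_v_per_ns

full_swing = transition_time_ns(3.3, 0.0, slew_v_per_ns=1.0)  # assumed rail-to-rail input
shifted = transition_time_ns(3.0, 0.7, slew_v_per_ns=1.0)     # level-shifted swing
assert shifted < full_swing  # the reduced swing crosses its thresholds sooner
```

Faster edges on the modulator's control node are what reduce the jitter and ISI of the modulated carrier signal.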
Alternatively, the OOK modulation circuitry 116 may include the OOK modulator circuitry 225 without the current mirror circuitry 215 and/or the level shifter circuitry 220. FIG. 3 is a schematic diagram of the OOK modulator circuitry 225 of FIG. 2. Alternatively, the OOK modulator circuitry 225 may be used to implement the OOK modulation circuitry 116 of FIG. 1. In the example of FIG. 3, the OOK modulator circuitry 225 includes the first oscillator input terminal 230, the second oscillator input terminal 235, the first modulated carrier output terminal 240, the second modulated carrier output terminal 245, an example first current source (I1) 305, an example voltage supply (Vdd) 310, an example first transistor (MNBIAS1) 315, a second transistor 320, an example first digital input terminal (signal INP) 325, a third transistor 330, a second digital input terminal (signal INM) 335, a second current source (I1/10) 340, a fourth transistor (MNBIAS2) 345, a fifth transistor (MN2) 350, a sixth transistor 355, and a seventh transistor 360. The OOK modulator circuitry 225 is configured to generate a modulated carrier signal on the modulated carrier output terminals 240 and 245 by using the digital input terminals 325 and 335 to control the oscillator input terminals 230 and 235. Alternatively, the digital input terminals 325 and 335 may be coupled to the digital input terminals 205 and 210 of FIG. 2. The first current source 305 is coupled between the voltage supply 310 and a first current terminal 315A of the first transistor 315. A drain terminal and/or a source terminal may be referred to as a current terminal. A gate terminal may be referred to as a control terminal. The first current source 305 is configured to supply a current of a first magnitude (I1) from the voltage supply 310 to the first transistor 315. The first magnitude of the first current source 305 is determined based on a first bias voltage (VBIAS1). 
The first bias voltage is generated based on the inverse of the transconductance (in siemens) of the first transistor 315 times the first magnitude of the first current source 305. For example, VBIAS1 is equal to one volt as the result of the first magnitude of the first current source 305 being equal to 20 milliamps and the transconductance of the first transistor 315 being equal to 20 millisiemens. The first current terminal 315A of the first transistor 315 is coupled to the first current source 305. The control terminal 315B of the first transistor 315 is coupled to a first current terminal 320A of the second transistor 320. A second current terminal 315C of the first transistor 315 is coupled to common potential (e.g., ground). The first transistor 315 is configured to allow current to flow from the first current source 305 to common potential. The first transistor 315 generates the first bias voltage on the first current terminal 315A of the first transistor 315 based on the first magnitude of the first current source 305 times the inverse of the transconductance (in siemens) of the first transistor 315. Alternatively, the first transistor 315 and the first current source 305 may be replaced with a voltage reference or circuitry configured to generate a reference voltage. The first transistor 315 is an N-channel metal-oxide-semiconductor field-effect transistor (MOSFET). Alternatively, the first transistor 315 may be implemented using a diode (e.g., with a reference voltage), an N-channel field-effect transistor (FET), an N-channel insulated-gate bipolar transistor (IGBT), an N-channel junction field-effect transistor (JFET), an NPN bipolar junction transistor (BJT), and/or, with slight modifications, a p-type equivalent device. 
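The bias-voltage arithmetic above amounts to V_BIAS = I × (1/gm). A minimal sketch reproducing the worked examples in the text (the second example, 8 milliamps through the same 20 millisiemens, appears later in the description of the fourth transistor):

```python
def bias_voltage(i_amps, gm_siemens):
    """V_BIAS = I * (1 / gm): the voltage developed on a diode-connected bias transistor."""
    return i_amps / gm_siemens

# VBIAS1 example from the text: 20 mA through 20 mS yields 1.0 V.
v_bias1 = bias_voltage(20e-3, 20e-3)
# VBIAS2 example from the text: 8 mA through 20 mS yields 0.4 V (400 mV).
v_bias2 = bias_voltage(8e-3, 20e-3)
```

The two bias levels bracket the threshold voltage of the modulator's tail transistor: VBIAS1 sits above threshold (enable) and VBIAS2 below it (disable).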
The first current terminal 320A of the second transistor 320 is coupled to the first current source 305 and the first transistor 315, such that the voltage at the first current terminal 320A of the second transistor 320 is approximately (preferably exactly) equal to the first bias voltage generated by the first transistor 315. A control terminal 320B of the second transistor 320 is coupled to the first digital input terminal 325. A second current terminal 320C of the second transistor 320 is coupled to a first current terminal 330A of the third transistor 330. The second transistor 320 is configured to be enabled as a result of a logic high or a logic "1" on the first digital input terminal 325. Additionally, the second transistor 320 is configured to be disabled as a result of a logic low or a logic "0" on the first digital input terminal 325. For example, the second current terminal 320C of the second transistor 320 is set to approximately the first bias voltage as the result of a logic high on the first digital input terminal 325. The second transistor 320 is an N-channel MOSFET. Alternatively, the second transistor 320 may be an N-channel FET, an N-channel IGBT, an N-channel JFET, an NPN BJT, and/or, with slight modifications, a p-type equivalent device. The first current terminal 330A of the third transistor 330 is coupled to the second current terminal 320C of the second transistor 320 and a control terminal 350B of the fifth transistor 350. A control terminal 330B of the third transistor 330 is coupled to the second digital input terminal 335. A second current terminal 330C of the third transistor 330 is coupled to the second current source 340, a first current terminal 345A of the fourth transistor 345, and a control terminal 345B of the fourth transistor 345. The third transistor 330 is configured to be enabled as a result of a logic high or a logic "1" on the second digital input terminal 335. 
Additionally, the third transistor 330 is configured to be disabled as a result of a logic low or a logic "0" on the second digital input terminal 335. For example, the current terminals 330A and 330C of the third transistor 330 are at approximately the same voltage as the result of a logic high on the second digital input terminal 335. The third transistor 330 is an N-channel MOSFET. Alternatively, the third transistor 330 may be an N-channel FET, an N-channel IGBT, an N-channel JFET, an NPN BJT, and/or, with slight modifications, a p-type equivalent device. The second current source 340 is coupled between the voltage supply 310 and the first current terminal 345A of the fourth transistor 345. The second current source 340 is configured to supply a current of a second magnitude (I1/10) from the voltage supply 310 to the fourth transistor 345. The second magnitude of the second current source 340 is determined based on a second bias voltage (VBIAS2). The second bias voltage is generated based on the inverse of the transconductance (in siemens) of the fourth transistor 345 times the second magnitude of the second current source 340. For example, VBIAS2 is equal to 400 millivolts (mV) as the result of the second magnitude of the second current source 340 being equal to 8 milliamps and the transconductance of the fourth transistor 345 being equal to 20 millisiemens. The first current terminal 345A of the fourth transistor 345 is coupled to the second current terminal 330C of the third transistor 330, the second current source 340, and a control terminal 345B of the fourth transistor 345. A second current terminal 345C of the fourth transistor 345 is coupled to common potential (e.g., ground). The fourth transistor 345 is configured to allow current to flow from the second current source 340 to common potential. 
The fourth transistor 345 generates the second bias voltage on the first current terminal 345A of the fourth transistor 345 based on the second magnitude of the second current source 340 times the inverse of the transconductance (in siemens) of the fourth transistor 345. Alternatively, the fourth transistor 345 and the second current source 340 may be replaced with a voltage reference or circuitry configured to generate a reference voltage. The fourth transistor 345 is an N-channel MOSFET. Alternatively, the fourth transistor 345 may be an N-channel FET, an N-channel IGBT, an N-channel JFET, an NPN BJT, and/or, with slight modifications, a p-type equivalent device. The control terminal 350B of the fifth transistor 350 is coupled to the second current terminal 320C of the second transistor 320 and the first current terminal 330A of the third transistor 330. A first current terminal 350A of the fifth transistor 350 is coupled to a second current terminal 355C of the sixth transistor 355 and a second current terminal 360C of the seventh transistor 360. The fifth transistor 350 is configured to be enabled as a result of the second transistor 320 being enabled, such that the first bias voltage, generated by the first transistor 315, is coupled to the control terminal 350B of the fifth transistor 350. The fifth transistor 350 is configured to be disabled as the result of the second transistor 320 being disabled and the third transistor 330 being enabled, such that the second bias voltage, generated by the fourth transistor 345, is coupled to the control terminal 350B of the fifth transistor 350. For example, the fifth transistor 350 is enabled as the result of a logic high on the first digital input terminal 325, a logic low on the second digital input terminal 335, and the first bias voltage being greater than a threshold voltage of the fifth transistor 350. 
The first bias voltage and the second bias voltage are configured to be a control voltage applied to the control terminal 350B, such that the voltage applied to the control terminal 350B is within the range of the bias voltages. The fifth transistor 350 is an N-channel MOSFET. Alternatively, the fifth transistor 350 may be an N-channel FET, an N-channel IGBT, an N-channel JFET, an NPN BJT, and/or, with slight modifications, a p-type equivalent device. The first modulated carrier output terminal 240 is coupled to a first current terminal 355A of the sixth transistor 355. The first oscillator input terminal 230 is coupled to a control terminal 355B of the sixth transistor 355. The second current terminal 355C of the sixth transistor 355 is coupled to the first current terminal 350A of the fifth transistor 350. The sixth transistor 355 is configured to be enabled and/or partially enabled based on the magnitude of the signal at the first oscillator input terminal 230 being greater than or equal to a voltage threshold of the sixth transistor 355. For example, a current passing through the sixth transistor 355 is a half-rectified sinewave of a given frequency as the result of the oscillator 118 of FIG. 1 generating a sinewave of that frequency. Additionally, the sixth transistor 355 is configured to allow current to flow through the transistor based on whether or not the fifth transistor 350 is enabled. The sixth transistor 355 is an N-channel MOSFET. Alternatively, the sixth transistor 355 may be an N-channel FET, an N-channel IGBT, an N-channel JFET, an NPN BJT, and/or, with slight modifications, a p-type equivalent device. Advantageously, the sixth transistor 355 generates the positive magnitudes of the modulated carrier signal on the first modulated carrier output terminal 240. The second modulated carrier output terminal 245 is coupled to a first current terminal 360A of the seventh transistor 360. The second oscillator input terminal 235 is coupled to a control terminal 360B of the seventh transistor 360. 
The second current terminal 360C of the seventh transistor 360 is coupled to the first current terminal 350A of the fifth transistor 350. The seventh transistor 360 is configured to be enabled and/or partially enabled based on the magnitude of the signal at the second oscillator input terminal 235 being greater than or equal to a voltage threshold of the seventh transistor 360. The second oscillator input terminal 235 is configured to be coupled to a complementary signal of the first oscillator input terminal 230, such that the signal at the second oscillator input terminal 235 is 180 degrees out of phase with the signal coupled to the first oscillator input terminal 230. For example, a current passing through the seventh transistor 360 is a half-rectified sinewave of a given frequency as the result of the oscillator 118 generating a sinewave of that frequency. Additionally, the seventh transistor 360 is configured to allow current to flow through the transistor based on whether or not the fifth transistor 350 is enabled. The seventh transistor 360 is an N-channel MOSFET. Alternatively, the seventh transistor 360 may be an N-channel FET, an N-channel IGBT, an N-channel JFET, an NPN BJT, and/or, with slight modifications, a p-type equivalent device. Advantageously, the seventh transistor 360 generates the negative magnitudes of the modulated carrier signal on the second modulated carrier output terminal 245. In example operation, the first bias voltage, generated by the first current source 305 and the first transistor 315, is configured to be of a magnitude greater than or equal to the threshold voltage of the fifth transistor 350. The second bias voltage, generated by the second current source 340 and the fourth transistor 345, is configured to be of a magnitude less than the threshold voltage of the fifth transistor 350. 
The second transistor 320 is configured to be enabled by the first digital input terminal 325, such that the first bias voltage is coupled to the control terminal 350B of the fifth transistor 350 as the result of enabling the second transistor 320. The fifth transistor 350 is enabled as a result of the first bias voltage being coupled to the control terminal 350B of the fifth transistor 350 by the second transistor 320. The third transistor 330 is configured to be enabled by the second digital input terminal 335, such that the second bias voltage is coupled to the control terminal 350B of the fifth transistor 350 as the result of enabling the third transistor 330. The fifth transistor 350 is disabled as a result of the second bias voltage being coupled to the control terminal 350B of the fifth transistor 350 by the third transistor 330. The digital input signal coupled to the digital input terminals 325 and 335 is a differential signal, such that the digital input terminals 325 and 335 are the inverse of each other. For example, the first digital input terminal 325 is determined to be a logic low based on the second digital input terminal 335 being a logic high. Advantageously, the voltage applied to the control terminal 350B of the fifth transistor 350 is configured to be between approximately the first bias voltage and the second bias voltage. Advantageously, the duration to enable the fifth transistor 350 is reduced compared to an implementation that disables the fifth transistor 350 by coupling the control terminal 350B of the fifth transistor 350 to common potential (e.g., ground). The oscillator input terminals 230 and 235 are coupled to an output of the oscillator 118, such that the oscillator input terminals 230 and 235 are coupled to complementary signals of a carrier frequency. The first oscillator input terminal 230 enables the sixth transistor 355, at a frequency approximately (preferably exactly) equal to that of the carrier frequency, for the magnitudes of the output of the oscillator 118 greater than zero. 
The sixth transistor 355 generates a positive portion of the modulated carrier signal on the first modulated carrier output terminal 240 based on the portions of the output of the oscillator 118 that are of a positive magnitude. The second oscillator input terminal 235 enables the seventh transistor 360, at a frequency approximately (preferably exactly) equal to that of the carrier frequency, for the magnitudes of the output of the oscillator 118 less than zero. The seventh transistor 360 generates a negative portion of the modulated carrier signal on the second modulated carrier output terminal 245 based on the portions of the output of the oscillator 118 that are of a negative magnitude. Additionally, the transistors 355 and 360 are configured to generate the modulated carrier signal based on the fifth transistor 350, such that enabling the fifth transistor 350 represents a logic high and disabling the fifth transistor 350 represents a logic low of the digital input signal. Advantageously, the transistors 355 and 360 generate a modulated carrier signal of a carrier frequency equal to the frequency of the oscillator 118. FIG. 4 is a schematic diagram of the current mirror circuitry 215 of FIG. 2 and the level shifter circuitry 220 of FIG. 2. In the example of FIG. 4, the current mirror circuitry 215 is configured to generate a copy of a digital input signal coupled to the digital input terminals 205 and 210. In the example of FIG. 4, the level shifter circuitry 220 is configured to generate a digital output signal on the digital input terminals 325 and 335 based on the copy of the digital input signal generated by the current mirror circuitry 215. Alternatively, the level shifter circuitry 220 may be coupled to the digital input terminals 205 and 210. In the example of FIG. 4, the current mirror circuitry 215 includes an example current source 402, an example first transistor 404, a second transistor 406, a third transistor 408, and a fourth transistor 410. 
The current mirror circuitry 215 is configured to generate a copy of the digital input signal coupled to the digital input terminals 205 and 210, such that the value of the first digital input terminal 205 is generated as the gate-to-drain voltage of the third transistor 408 and the value of the second digital input terminal 210 is generated as a gate-to-drain voltage of the fourth transistor 410. Advantageously, the current mirror circuitry 215 isolates the circuitry coupled to the digital input terminals 205 and 210 from the level shifter circuitry 220. Advantageously, the current flowing through the transistors 404 and 406, which is based on the digital input terminals 205 and 210, may be replicated in additional circuitry by coupling the gate of an additional transistor to the drain of the transistor 408 or 410. The current source 402 is coupled between the voltage supply 310, a second current terminal 404C of the first transistor 404, and a second current terminal 406C of the second transistor 406. The current source 402 is configured to supply a current from the voltage supply 310 to the transistors 404 and 406, such that the transistors 404 and 406 may be enabled. Alternatively, the current source 402 may be replaced with a voltage source or additional circuitry to supply power to the current mirror circuitry 215. A first current terminal 404A of the first transistor 404 is coupled to a first current terminal 408A of the third transistor 408. A control terminal 404B of the first transistor 404 is coupled to the first digital input terminal 205. The second current terminal 404C of the first transistor 404 is coupled to the current source 402 and the second current terminal 406C of the second transistor 406. The first transistor 404 is configured to be enabled based on the first digital input terminal 205. The first transistor 404 is a P-channel MOSFET. Alternatively, the first transistor 404 may be a P-channel FET, a P-channel IGBT, a P-channel JFET, a PNP BJT, and/or, with slight modifications, an n-type equivalent device. 
A first current terminal 406A of the second transistor 406 is coupled to a first current terminal 410A of the fourth transistor 410. A control terminal 406B of the second transistor 406 is coupled to the second digital input terminal 210. The second current terminal 406C of the second transistor 406 is coupled to the current source 402 and the second current terminal 404C of the first transistor 404. The second transistor 406 is configured to be enabled based on the second digital input terminal 210. The second transistor 406 is a P-channel MOSFET. Alternatively, the second transistor 406 may be a P-channel FET, a P-channel IGBT, a P-channel JFET, a PNP BJT, and/or, with slight modifications, an n-type equivalent device. The first current terminal 408A of the third transistor 408 is coupled to the first current terminal 404A of the first transistor 404 and a control terminal 408B of the third transistor 408. A second current terminal 408C of the third transistor 408 is coupled to common potential (e.g., ground). The third transistor 408 is configured to be enabled based on the first transistor 404, such that the third transistor 408 is enabled as the result of the first transistor 404 being enabled by the first digital input terminal 205. The third transistor 408 is an N-channel MOSFET. Alternatively, the third transistor 408 may be an N-channel FET, an N-channel IGBT, an N-channel JFET, an NPN BJT, and/or, with slight modifications, a p-type equivalent device. The first current terminal 410A of the fourth transistor 410 is coupled to the first current terminal 406A of the second transistor 406 and a control terminal 410B of the fourth transistor 410. A second current terminal 410C of the fourth transistor 410 is coupled to common potential (e.g., ground). The fourth transistor 410 is configured to be enabled based on the second transistor 406, such that the fourth transistor 410 is enabled as the result of the second transistor 406 being enabled by the second digital input terminal 210. 
The fourth transistor 410 is an N-channel MOSFET. Alternatively, the fourth transistor 410 may be an N-channel FET, an N-channel IGBT, an N-channel JFET, an NPN BJT, and/or, with slight modifications, a p-type equivalent device. In the example of FIG. 4, the level shifter circuitry 220 includes a fifth transistor 412, an example first capacitor 414, an example first resistor 416, a sixth transistor 418, an example low-dropout (LDO) regulator 420, a seventh transistor 422, a second resistor 424, an eighth transistor 426, and a second capacitor 428. The level shifter circuitry 220 is configured to generate a digital output signal on the digital signal terminals 325 and 335, such that the signals at the digital signal terminals 325 and 335 are between a minimum and maximum voltage. A first current terminal 412A of the fifth transistor 412 is coupled to the first capacitor 414, the first resistor 416, and a first current terminal 418A of the sixth transistor 418. A control terminal 412B of the fifth transistor 412 is coupled to the first current terminal 410A of the fourth transistor 410 and the control terminal 410B of the fourth transistor 410. A second current terminal 412C of the fifth transistor 412 is coupled to a common potential (e.g., ground). The fifth transistor 412 is configured to be enabled based on the second transistor 406, such that the fifth transistor 412 is enabled as the result of the second transistor 406 being enabled by the second digital input terminal 210. The fifth transistor 412 is an N-channel MOSFET. Alternatively, the fifth transistor 412 may be an N-channel FET, an N-channel IGBT, an N-channel JFET, an NPN BJT, and/or, with slight modifications, a p-type equivalent device. The first capacitor 414 is coupled between the first current terminal 408A of the third transistor 408 and the first current terminal 412A of the fifth transistor 412. The first capacitor 414 is configured to isolate the current flowing through the third transistor 408 from the current flowing through the fifth transistor 412. 
The first resistor 416 is coupled between the first current terminal 412A of the fifth transistor 412 and the LDO regulator 420. The first resistor 416 is configured to generate a difference in voltage between the first current terminal 412A of the fifth transistor 412 and the LDO regulator 420. The magnitude of the first resistor 416 may be determined based on the magnitude of current flowing through the fifth transistor 412. The first current terminal 418A of the sixth transistor 418 is coupled to the first digital input terminal 325, the first current terminal 412A of the fifth transistor 412, the first capacitor 414, the first resistor 416, and a control terminal 422B of the seventh transistor 422. A control terminal 418B of the sixth transistor 418 is coupled to the second digital input terminal 335, a first current terminal 422A of the seventh transistor 422, the second resistor 424, a first current terminal 426A of the eighth transistor 426, and the second capacitor 428. The sixth transistor 418 is configured to short the first resistor 416 as the result of disabling the eighth transistor 426. The sixth transistor 418 is a P-channel MOSFET. Alternatively, the sixth transistor 418 may be a P-channel FET, a P-channel IGBT, a P-channel JFET, a PNP BJT, and/or, with slight modifications, an n-type equivalent device. The LDO regulator 420 is coupled to the resistors 416 and 424, the second current terminal 418C of the sixth transistor 418, and a second current terminal 422C of the seventh transistor 422. The LDO regulator 420 is configured to supply a supply voltage, such that the magnitudes of the resistors 416 and 424 may be configured to set the voltages of the digital signal terminals 325 and 335. For example, the inverse of the transconductance of the fifth transistor 412 and the magnitude of the first resistor 416 are configured such that the first digital input terminal 325 is equal to the first bias voltage plus the second bias voltage generated by the current sources 305 and 340 and the transistors 315 and 345. 
Advantageously, the LDO regulator 420 is configured to set a magnitude of the voltage of the digital signal terminals 325 and 335. The first current terminal 422A of the seventh transistor 422 is coupled to the second digital input terminal 335, the control terminal 418B of the sixth transistor 418, the second resistor 424, and the first current terminal 426A of the eighth transistor 426. The control terminal 422B of the seventh transistor 422 is coupled to the first digital input terminal 325, the first current terminal 412A of the fifth transistor 412, the first capacitor 414, the first resistor 416, and the first current terminal 418A of the sixth transistor 418. The second current terminal 422C of the seventh transistor 422 is coupled to the resistors 416 and 424, the second current terminal 418C of the sixth transistor 418, and the LDO regulator 420. The seventh transistor 422 is coupled in parallel with the second resistor 424. The seventh transistor 422 is configured to be enabled as the result of disabling the fifth transistor 412. The seventh transistor 422 is configured to set the second digital input terminal 335 based on the transistors 426 and 412. For example, the second digital input terminal 335 is configured to a logic high as the result of enabling the eighth transistor 426 to disable the sixth transistor 418 and enable the seventh transistor 422. The seventh transistor 422 is a P-channel MOSFET. Alternatively, the seventh transistor 422 may be a P-channel FET, a P-channel IGBT, a P-channel JFET, a PNP BJT, and/or, with slight modifications, an n-type equivalent device. The first current terminal 426A of the eighth transistor 426 is coupled to the second digital input terminal 335, the control terminal 418B of the sixth transistor 418, the first current terminal 422A of the seventh transistor 422, the second resistor 424, and the second capacitor 428. 
A control terminal 426B of the eighth transistor 426 is coupled to the first current terminal 404A of the first transistor 404, the terminals 408A and 408B of the third transistor 408, and the first capacitor 414. A second current terminal 426C of the eighth transistor 426 is coupled to common potential (e.g., ground). The eighth transistor 426 is configured to be enabled based on the first transistor 404, such that the eighth transistor 426 is enabled as the result of the first transistor 404 being enabled by the first digital input terminal 205. Additionally, the eighth transistor 426 is configured to disable the sixth transistor 418 as a result of being enabled. For example, the eighth transistor 426 is enabled as a result of the first digital input terminal 205 enabling the first transistor 404. The eighth transistor 426 is an N-channel MOSFET. Alternatively, the eighth transistor 426 may be an N-channel FET, an N-channel IGBT, an N-channel JFET, an NPN BJT, and/or, with slight modifications, a p-type equivalent device. Advantageously, the eighth transistor 426 may be enabled to set the second digital input terminal 335 at approximately (preferably exactly) the same time as the eighth transistor 426 enables the sixth transistor 418. The second capacitor 428 is coupled between the first current terminal 408A of the third transistor 408 and the first current terminal 426A of the eighth transistor 426. The second capacitor 428 is configured to isolate the current flowing through the third transistor 408 from the current flowing through the eighth transistor 426. In example operation, the current mirror circuitry 215 is configured to receive a differential digital input signal at the digital input terminals 205 and 210, such that the signals coupled to the digital input terminals 205 and 210 are complementary signals. Alternatively, the current mirror circuitry 215 may be modified to be configured for single-ended operation.
The digital input terminals 205 and 210 are configured to control the transistors 404 and 406, such that a logic low or high may enable or disable the transistors 404 and 406. For example, the first digital input terminal 205 enables the first transistor 404 as the result of a logic low. Alternatively, the first transistor 404 may be replaced with an n-channel MOSFET, such that the first transistor 404 is enabled as a result of the first digital input terminal 205 being a logic high. The transistors 408 and 410 are configured to be enabled as a result of enabling the transistors 404 or 406. For example, the first transistor 404 enables the third transistor 408 as a result of the first digital input terminal 205 enabling the first transistor 404. The transistors 408 and 410 are configured to control the transistors 412 and 426, such that the third transistor 408 may enable the fifth transistor 412 and the fourth transistor 410 may enable the eighth transistor 426. For example, the fourth transistor 410 enables the fifth transistor 412 as a result of the second transistor 406 enabling the fourth transistor 410. Advantageously, the current mirror circuitry 215 enables the transistors 412 and 426, such that the current flowing through their current terminals is equal to the current flowing through the current terminals of the transistors 408 and 410. The level shifter circuitry 220 is coupled to the current mirror circuitry 215, such that the current mirror circuitry 215 may enable the transistors 412 and 426 based on the digital input terminals 205 and 210. The level shifter circuitry 220 is configured to shift the voltage level of the digital signal terminals 325 and 335 based on the resistors 416 and 424 and the LDO regulator 420.
The level shifter circuitry 220 is configured to set the first digital input terminal 325 to a logic low of a first reference voltage based on the fifth transistor 412, such that a magnitude of the current flowing through the fifth transistor 412 multiplied by the inverse of the transconductance of the fifth transistor 412 is equal to the voltage level representing a logic low. The fifth transistor 412 is configured as a voltage divider, such that the inverse of the transconductance and a magnitude of the first resistor 416 may determine the voltage representing the logic low. For example, a logic low may be equal to 0.5 volts as a result of the LDO regulator 420 supplying approximately 2 volts, the first resistor 416 having a magnitude of 150 ohms, and the fifth transistor 412 having a transconductance of 20 millisiemens. Advantageously, the minimum voltage of the digital signal terminals 325 and 335 may be shifted based on the value of the LDO regulator 420, the transconductance of the transistors 412 and 426, and the resistors 416 and 424. The level shifter circuitry 220 is configured to enable the transistors 418 and 422 based on the transistors 412 or 426, such that the sixth transistor 418 is enabled as a result of enabling the eighth transistor 426 and the seventh transistor 422 is enabled as a result of enabling the fifth transistor 412. The transistors 418 and 422 are configured to set the digital signal terminals 325 and 335 to a logic high by coupling the LDO regulator 420 to the digital signal terminals 325 and 335 as a result of enabling the transistors 412 or 426. For example, the second digital input terminal 335 is coupled to the LDO regulator 420 as a result of the current mirror circuitry 215 enabling the fifth transistor 412. Advantageously, the maximum voltage of the digital signal terminals 325 and 335 may be modified based on the value of the LDO regulator 420.
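The logic-low example above follows from treating the enabled fifth transistor as a resistance of 1/gm in series with the first resistor, forming a divider from the LDO output. The sketch below reproduces that arithmetic; the function name and argument names are illustrative, not from the patent.

```python
# Sketch of the level shifter's logic-low voltage divider. The enabled
# fifth transistor presents a resistance of 1/gm toward the LDO output,
# forming a divider with the first resistor. Values below are the example
# figures from the text (2 V LDO, 150 ohms, 20 mS transconductance).

def logic_low_voltage(v_ldo, r_first, gm):
    """Logic-low level = V_LDO * (1/gm) / (1/gm + R_first)."""
    r_transistor = 1.0 / gm  # inverse transconductance acts as a resistance
    return v_ldo * r_transistor / (r_transistor + r_first)

v_low = logic_low_voltage(v_ldo=2.0, r_first=150.0, gm=20e-3)
print(round(v_low, 3))  # 0.5 V, matching the example in the text
```

Shrinking the first resistor or raising the transconductance lowers the logic-low level toward ground, which is how the minimum voltage of the digital signal terminals can be shifted.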
Advantageously, the level shifter circuitry 220 is configured to enable one of the transistors 418 and 422 at approximately (preferably exactly) the same time as the level shifter circuitry 220 disables the other of the transistors 418 and 422. Advantageously, the digital signal terminals 325 and 335 are configured to represent a logic high by setting the digital signal terminals 325 and 335 to the voltage of the LDO regulator 420, and a logic low by setting the digital signal terminals 325 and 335 to the voltage determined by the components of the level shifter circuitry 220. FIG. 5 is an example signal diagram including an example digital input signal (DIN) 505, an example gate voltage (VG) line 510, an example carrier signal current (ITX) line 515, and an example modulated carrier signal 520 over time. In the example of FIG. 5, the digital input signal 505 represents the digital input terminals (e.g., the digital input terminals 205 and 210 of FIGS. 2 and 4, and the digital signal terminals 325 and 335 of FIGS. 3 and 4) over a sample time represented by the time axis 525. The digital input signal 505 represents a logic low from time 530 to time 535. The digital input signal 505 represents a logic high from time 535 to time 540. The digital input signal 505 may be biased by the level shifter circuitry 220 of FIGS. 2 and 4, such that the logic high and the logic low may be any voltage above common potential (e.g., ground). The gate voltage line 510 represents the voltage of the control terminal 350B of the fifth transistor 350 of FIG. 3 over the time axis 525. The gate voltage line 510 begins to increase at approximately time 535 as a response to the digital input signal 505 representing a logic high. The gate voltage line 510 approaches its maximum voltage similar to a logarithmic curve, such that the control terminal 350B of the fifth transistor 350 is enabled shortly after time 535. The magnitude of the voltage of the gate voltage line 510 enables the fifth transistor 350 at time 545.
The gate voltage line 510 begins to decrease near time 540, such that the fifth transistor 350 is disabled by time 540. Advantageously, the gate voltage line 510 reaches a voltage magnitude sufficient to enable the transistor at time 545. The carrier signal current line 515 represents the current of the modulated carrier signal generated by the OOK modulator circuitry 225 of FIGS. 2 and 3 at the modulated carrier output terminals 240 and 245 of FIGS. 2 and 3. A magnitude of the current of the modulated carrier signal is greater than zero as the result of the control terminal 350B enabling the fifth transistor 350 at time 545. The carrier signal current line 515 follows the trend of the gate voltage line 510. The modulated carrier signal 520 represents the voltage of the modulated carrier signal generated by the OOK modulator circuitry 225 of FIGS. 2 and 3 over the time axis 525. The modulated carrier signal 520 represents a logic low from time 530 to time 545, corresponding to the digital input signal representing a logic low from time 530 to time 535. The modulated carrier signal 520 represents a logic high (e.g., a sinusoidal signal with a non-zero magnitude and a frequency that may be the same as the frequency of the signal provided by the oscillator) from time 545 to time 550, corresponding to the logic high of the digital input signal 505 from time 535 to time 540. The current of the modulated carrier signal 520 is represented by the magnitude of the carrier signal current line 515. FIG. 6 includes example timing diagrams to illustrate signals through the OOK modulation circuit of FIG. 2 during an example operation. In the example of FIG. 6, the timing diagrams include an example time axis 605, an example voltage axis 610, an example first eye diagram 615, a second eye diagram 620, and a third eye diagram 625. The first eye diagram 615 represents the eye closure of the current generated by the current mirror circuitry 215 of FIGS. 2 and 4.
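The on-off keying behavior shown in FIG. 5 — an oscillator tone that appears at the output only while the data bit is high — can be sketched numerically. This is an illustrative model of OOK in general, not a simulation of the patent's circuit; the sample counts and carrier frequency are assumptions.

```python
import math

# Illustrative OOK model: the modulated carrier is the oscillator tone
# gated by the digital input (logic high -> carrier on, logic low -> off).
def ook_modulate(bits, samples_per_bit=8, carrier_cycles_per_bit=2.0):
    out = []
    for i, bit in enumerate(bits):
        for k in range(samples_per_bit):
            t = (i * samples_per_bit + k) / samples_per_bit  # time in bit periods
            carrier = math.sin(2 * math.pi * carrier_cycles_per_bit * t)
            out.append(carrier if bit else 0.0)
    return out

signal = ook_modulate([0, 1, 0])
# The carrier is suppressed during logic-low bits and sinusoidal during
# logic-high bits, as in the modulated carrier signal 520 of FIG. 5.
assert all(s == 0.0 for s in signal[:8])
```

The real circuit adds the rise and fall behavior of the gate voltage line 510, so the transition between the two states is not instantaneous as it is in this idealized gate.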
The first eye diagram 615 represents the magnitude of the voltage of the signals at the first current terminals 408A and 410A of the transistors 408 and 410 of FIG. 4 over time. The first eye diagram 615 transitions from a maximum voltage at time 630 to a minimum voltage at time 635. The first eye diagram 615 remains at the minimum and maximum voltages from approximately time 635 to time 640. The duration that the first eye diagram 615 remains at maximum is approximately 2.5 nanoseconds (ns). The second eye diagram 620 represents the eye closure of the digital signal terminals 325 and 335 of FIGS. 3 and 4, generated by the level shifter circuitry 220 of FIGS. 2 and 4. The second eye diagram 620 transitions from a maximum voltage at approximately time 630 to a minimum voltage at time 645. The second eye diagram 620 remains at the minimum and maximum voltages from approximately time 645 to time 640. The duration that the second eye diagram 620 remains at the maximum and minimum voltages is approximately 1.15 ns. Advantageously, the level shifter circuitry 220 increases the duration that the second eye diagram remains at the minimum and maximum voltages, such that the ISI of the digital signal terminals 325 and 335 is reduced compared to the ISI of the digital input terminals 205 and 210. The third eye diagram 625 represents the eye closure of the modulated carrier output terminals 240 and 245 of FIGS. 2 and 3. The third eye diagram 625 transitions from a maximum voltage at approximately time 630 to a minimum voltage at time 645. The third eye diagram 625 remains at the minimum and maximum voltages from approximately time 645 to time 640. The duration that the third eye diagram 625 remains at the maximum and minimum voltages is approximately 1.15 ns. Advantageously, the level shifter circuitry 220 increases the duration that the third eye diagram remains at the minimum and maximum voltages, such that the ISI of the digital signal terminals 325 and 335 is reduced compared to the ISI of the digital input terminals 205 and 210.
FIG. 7A is an example timing diagram of an example gate voltage 705 of the OOK modulator circuitry 225 of FIGS. 2 and 3 during example operation. In the example of FIG. 7A, the gate voltage 705 is represented as a voltage on an example voltage axis 710 over time on an example time axis 715. The timing diagram of FIG. 7A includes the gate voltage 705, an example conventional gate voltage plot 720, an example first bias voltage 725, and a second bias voltage 730. The gate voltage 705 represents the voltage of the control terminal 350B of the fifth transistor 350 of FIG. 3 over time. The gate voltage 705 represents the circuitry of the OOK modulator circuitry 225 of FIGS. 2 and 3 enabling the fifth transistor 350 as a result of a logic high of the digital input signal at the digital signal terminals 325 and 335. The first bias voltage 725 represents the first bias voltage generated by the first current source 305 of FIG. 3 and the first transistor 315 of FIG. 3. The first bias voltage 725 may be configured to be any voltage greater than a voltage threshold of the fifth transistor 350, such that the fifth transistor 350 may be enabled as the result of enabling the second transistor 320 of FIG. 3. The second bias voltage 730 represents the second bias voltage generated by the second current source 340 of FIG. 3 and the fourth transistor 345 of FIG. 3. The second bias voltage 730 may be configured to be any voltage less than the voltage threshold of the fifth transistor 350, such that the fifth transistor 350 may be disabled as the result of enabling the third transistor 330 of FIG. 3. The gate voltage 705 begins at the second bias voltage 730 to generate a modulated carrier signal representing a logic low. The gate voltage 705 increases towards the first bias voltage 725 to indicate a logic high. At time 740, the gate voltage 705 is approximately equal to the first bias voltage 725. The gate voltage 705 decreases towards the second bias voltage 730 at approximately time 740.
The conventional gate voltage 720 increases from common potential (e.g., ground) towards the first bias voltage 725. The gate voltage 705 and the conventional gate voltage 720 are approximately equal between the bias voltages 725 and 730. Advantageously, the OOK modulator circuitry 225 of FIGS. 2 and 3 is able to enable the fifth transistor 350 faster than a conventional OOK modulator (such as the conventional OOK modulator of FIG. 9). Advantageously, the OOK modulator circuitry 225 of FIGS. 2 and 3 exhibits reduced rise and fall durations of the gate voltage of the control terminal 350B of the fifth transistor 350 compared to a conventional OOK modulator. FIG. 7B is an example timing diagram of an example carrier signal current 745 of the OOK modulator circuitry 225 of FIGS. 2 and 3 during example operation. In the example of FIG. 7B, the carrier signal current 745 is represented as a current on an example current axis 750 over time on an example time axis 755. The timing diagram of FIG. 7B includes the carrier signal current 745 of the OOK modulator circuitry 225 of FIGS. 2 and 3, an example conventional carrier signal current 760, an example first bias current 765, and a second bias current 770. The carrier signal current 745 represents the current flowing through the current terminals 350A and 350C of the fifth transistor 350 of FIG. 3 over time. The carrier signal current 745 represents the circuitry of the OOK modulator circuitry 225 of FIGS. 2 and 3 enabling the fifth transistor 350 as a result of a logic high of the digital input signal at the digital signal terminals 325 and 335. The first bias current 765 represents a current representation of the first bias voltage generated by the first current source 305 of FIG. 3 and the first transistor 315 of FIG. 3. The first bias current 765 may be configured to be any current that generates a voltage greater than a voltage threshold of the fifth transistor 350, such that the fifth transistor 350 may be enabled as the result of enabling the second transistor 320 of FIG. 3.
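The speed advantage shown in FIG. 7A — starting the gate at the sub-threshold second bias voltage instead of ground shortens the time needed to cross the transistor's threshold — can be illustrated with a first-order RC charging model. The single-pole gate model and the numeric values below are assumptions for illustration, not parameters from the patent.

```python
import math

# First-order RC model of the gate node (an illustrative assumption, not
# the patent's actual circuit): the gate charges exponentially from its
# starting voltage v0 toward the bias voltage v_bias. Time (in units of
# the time constant tau) for the gate to cross the threshold v_th:
#   t = tau * ln((v_bias - v0) / (v_bias - v_th))
def time_to_threshold(v0, v_bias, v_th, tau=1.0):
    return tau * math.log((v_bias - v0) / (v_bias - v_th))

V_BIAS, V_TH = 1.0, 0.6          # hypothetical bias and threshold voltages
t_conventional = time_to_threshold(v0=0.0, v_bias=V_BIAS, v_th=V_TH)  # from ground
t_proposed = time_to_threshold(v0=0.4, v_bias=V_BIAS, v_th=V_TH)      # from 2nd bias

# Starting just below threshold crosses it sooner, as in FIG. 7A.
assert t_proposed < t_conventional
```

With these hypothetical values the threshold crossing happens in roughly 0.41 time constants instead of 0.92, which mirrors the reduced rise duration claimed for the gate voltage 705 relative to the conventional gate voltage 720.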
The second bias current 770 represents a current representation of the second bias voltage generated by the second current source 340 of FIG. 3 and the fourth transistor 345 of FIG. 3. The second bias current 770 may be configured to be any current that generates a voltage less than the voltage threshold of the fifth transistor 350, such that the fifth transistor 350 may be disabled as the result of enabling the third transistor 330 of FIG. 3. The carrier signal current 745 begins at the second bias current 770 to generate a modulated carrier signal representing a logic low. The carrier signal current 745 increases towards the first bias current 765 to indicate a logic high. At time 775, the carrier signal current 745 is approximately equal to the first bias current 765. The carrier signal current 745 decreases towards the second bias current 770 at approximately time 775. The conventional carrier signal current 760 increases from common potential (e.g., ground) towards the first bias current 765. The carrier signal current 745 and the conventional carrier signal current 760 are approximately equal between the bias currents 765 and 770. Advantageously, the OOK modulator circuitry 225 of FIGS. 2 and 3 increases the carrier signal current 745 by less than the conventional carrier signal current 760. FIG. 8 is an example diagram illustrating an example gate voltage versus an example carrier signal current of the OOK modulator circuitry 225 of FIGS. 2 and 3 during example operation. In the example of FIG. 8, the diagram includes an example gate voltage axis 805, an example carrier current axis 810, and an example operation line 815. The gate voltage axis 805 represents the voltage of the control terminal 350B of the fifth transistor 350 of FIG. 3. The carrier current axis 810 represents the current flowing through the current terminals 350A and 350C of the fifth transistor 350. The carrier current axis 810 is approximately equal to the current of the modulated carrier signal generated by the OOK modulator circuitry 225.
The operation line 815 represents how a change in the voltage of the control terminal 350B of the fifth transistor 350 changes the carrier current. An example first range 820 depicts the difference in the voltage of the control terminal 350B of the fifth transistor 350. A second range 825 depicts the difference in the gate voltage of a conventional OOK modulator (e.g., the conventional OOK modulator of FIG. 9). Advantageously, the range of the gate voltage of the OOK modulator circuitry 225 is reduced compared to the conventional OOK modulator of FIG. 9, such that the OOK modulator circuitry 225 exhibits less ISI as the result of the ability to enable the fifth transistor 350 over a smaller voltage increase. FIG. 9 is a schematic diagram of an example conventional OOK modulator 900. The conventional OOK modulator 900 includes a current source 905, a supply voltage (Vdd) 910, a first transistor 915, a second transistor 920, a third transistor 925, a buffer 930, an inverter 935, a fourth transistor 940, a fifth transistor 945, a sixth transistor 950, a seventh transistor 955, a first modulated output terminal 960, and a second modulated output terminal 965. In the example of FIG. 9, the conventional OOK modulator 900 is configured to convert a digital input signal coupled to the buffer 930 to generate a modulated carrier signal on the modulated output terminals 960 and 965. In the example of FIG. 9, the current source 905 is coupled between the supply voltage 910 and the first transistor 915. The first transistor 915 is coupled between the current source 905 and common potential (e.g., ground). The first transistor 915 is configured to generate a bias voltage based on a magnitude of current from the current source 905. The first transistor 915 is coupled to the transistors 920 and 925. The transistors 920 and 925 are coupled in parallel, such that a current may flow through either of the transistors 920 or 925 to contribute to the current generated by the current source 905.
The second transistor 920 is configured to be controlled by an output of the buffer 930, such that the second transistor 920 is enabled as a result of a logic high on an input of the buffer 930. The third transistor 925 is configured to be controlled by an output of the inverter 935, such that the third transistor 925 is enabled by a logic low on an input of the inverter 935. The inverter 935 is coupled to the buffer 930. The inverter 935 is configured to control the transistors 925 and 940. The fourth transistor 940 is coupled between the transistors 920 and 925, and common potential. The fourth transistor 940 is configured to control the fifth transistor 945, such that the fifth transistor 945 is disabled as the result of enabling the fourth transistor 940. The fifth transistor 945 is coupled between the transistors 950 and 955, and common potential. The fifth transistor 945 is configured to enable the transistors 950 and 955 to generate a modulated carrier signal. The fifth transistor 945 may be enabled by enabling the second transistor 920, such that the bias voltage generated by the current source 905 and the first transistor 915 is coupled to the fifth transistor 945. The sixth transistor 950 is coupled between the fifth transistor 945 and the first modulated output terminal 960. The sixth transistor 950 is configured to be controlled by an output of an oscillator, such that the sixth transistor 950 generates a signal similar to that of the output of the oscillator as the result of enabling the fifth transistor 945. The seventh transistor 955 is coupled between the fifth transistor 945 and the second modulated output terminal 965. The seventh transistor 955 is configured to be controlled by an output of an oscillator, such that the seventh transistor 955 generates a signal similar to that of the output of the oscillator as the result of enabling the fifth transistor 945.
In example operation, the conventional OOK modulator 900 generates a modulated carrier signal on the modulated output terminals 960 and 965 based on a digital input signal coupled to the input of the buffer 930. The buffer 930 controls the second transistor 920, such that a logic high on the digital input signal may enable the second transistor 920. The fifth transistor 945 is enabled based on the bias voltage, generated by the first transistor 915 and the current source 905, being coupled by the second transistor 920 to the fifth transistor 945. The inverter 935 may disable the fifth transistor 945 as a result of enabling the fourth transistor 940. The fourth transistor 940 is configured to couple the fifth transistor 945 to common potential, such that the fifth transistor 945 may not be enabled. In example operation, the fifth transistor 945 is enabled based on a logic high on the digital input signal. The fifth transistor 945 is disabled based on a logic low on the digital input signal. The fifth transistor 945 is controlled by the transistors 920, 925, and 940, such that the voltage configured to control the fifth transistor 945 is between common potential and the bias voltage. Advantageously, the OOK modulator circuitry 225 of FIGS. 2 and 3 is configured to couple a voltage between a first bias voltage and a second bias voltage to control the fifth transistor 350. Alternatively, the current mirror circuitry 215 of FIGS. 2 and 4 and the level shifter circuitry 220 of FIGS. 2 and 4 may be coupled to the input of the buffer 930. The term “couple” is used throughout the specification. The term may cover connections, communications, or signal paths that enable a functional relationship consistent with this description.
For example, if device A provides a signal to control device B to perform an action, in a first example device A is coupled to device B, or in a second example device A is coupled to device B through intervening component C if intervening component C does not substantially alter the functional relationship between device A and device B such that device B is controlled by device A via the control signal provided by device A. A device that is “configured to” perform a task or function may be configured (e.g., programmed and/or hardwired) at a time of manufacturing by a manufacturer to perform the function and/or may be configurable (or re-configurable) by a user after manufacturing to perform the function and/or other additional or alternative functions. The configuring may be through firmware and/or software programming of the device, through a construction and/or layout of hardware components and interconnections of the device, or a combination thereof. As used herein, the terms “terminal”, “node”, “interconnection”, “pin” and “lead” are used interchangeably. Unless specifically stated to the contrary, these terms are generally used to mean an interconnection between or a terminus of a device element, a circuit element, an integrated circuit, a device or other electronics or semiconductor component. A circuit or device that is described herein as including certain components may instead be adapted to be coupled to those components to form the described circuitry or device. 
For example, a structure described as including one or more semiconductor elements (such as transistors), one or more passive elements (such as resistors, capacitors, and/or inductors), and/or one or more sources (such as voltage and/or current sources) may instead include only the semiconductor elements within a single physical device (e.g., a semiconductor die and/or integrated circuit (IC) package) and may be adapted to be coupled to at least some of the passive elements and/or the sources to form the described structure either at a time of manufacture or after a time of manufacture, for example, by an end-user and/or a third party. While the use of particular transistors is described herein, other transistors (or equivalent devices) may be used instead with little or no change to the remaining circuitry. For example, a metal-oxide-silicon FET (“MOSFET”) (such as an n-channel MOSFET (nMOSFET) or a p-channel MOSFET (pMOSFET)), a bipolar junction transistor (BJT—e.g., NPN or PNP), an insulated gate bipolar transistor (IGBT), and/or a junction field effect transistor (JFET) may be used in place of or in conjunction with the devices disclosed herein. The transistors may be depletion mode devices, drain-extended devices, enhancement mode devices, natural transistors, or other types of device structures. Furthermore, the devices may be implemented in/over a silicon substrate (Si), a silicon carbide substrate (SiC), a gallium nitride substrate (GaN), or a gallium arsenide substrate (GaAs). Circuits described herein are reconfigurable to include the replaced components to provide functionality at least partially similar to functionality available prior to the component replacement. Components shown as resistors, unless otherwise stated, are generally representative of any one or more elements coupled in series and/or parallel to provide an amount of impedance represented by the shown resistor.
For example, a resistor or capacitor shown and described herein as a single component may instead be multiple resistors or capacitors, respectively, coupled in parallel between the same nodes. Likewise, a resistor or capacitor shown and described herein as a single component may instead be multiple resistors or capacitors, respectively, coupled in series between the same two nodes as the single resistor or capacitor. While some example embodiments suggest that certain elements are included in an integrated circuit while other elements are external to the integrated circuit, in other example embodiments additional or fewer features may be incorporated into the integrated circuit. In addition, some or all of the features illustrated as being external to the integrated circuit may be included in the integrated circuit, and/or some features illustrated as being internal to the integrated circuit may be incorporated outside of the integrated circuit. As used herein, the term “integrated circuit” means one or more circuits that are: (i) incorporated in/over a semiconductor substrate; (ii) incorporated in a single semiconductor package; (iii) incorporated into the same module; and/or (iv) incorporated in/on the same printed circuit board. Uses of the phrase “ground” in the foregoing description include a chassis ground, an Earth ground, a floating ground, a virtual ground, a digital ground, a common ground, and/or any other form of ground connection applicable to, or suitable for, the teachings of this description. As used herein, “common potential” may refer to a potential (such as ground potential) on one or both sides of the isolation barrier. The “common potential” on one side of the isolation barrier may be at a different potential than the “common potential” on the other side of the isolation barrier. Unless otherwise stated, “about,” “approximately,” or “substantially” preceding a value means +/−10 percent of the stated value.
Modifications are possible in the described embodiments, and other embodiments are possible, within the scope of the claims.
11863361

It is to be understood that throughout the appended drawings and corresponding descriptions, like features are identified by like reference characters. Furthermore, it is also to be understood that the drawings and ensuing descriptions are intended for illustrative purposes only and that such disclosures do not provide a limitation on the scope of the claims. DETAILED DESCRIPTION The instant disclosure is directed to address at least some of the deficiencies of the current technology. In particular, the instant disclosure describes systems and methods for frequency-domain (FD) local oscillator frequency offset (LOFO) compensation. Unless otherwise defined or indicated by context, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which the described embodiments appertain. In the context of the present specification, unless provided expressly otherwise, the words “first”, “second”, “third”, etc. have been used as adjectives only for the purpose of allowing for distinction between the nouns that they modify from one another, and not for the purpose of describing any particular relationship between those nouns. Thus, for example, it should be understood that the use of the terms “first processor” and “third processor” is not intended to imply any particular order, type, chronology, hierarchy or ranking (for example) of/between the processors, nor is their use (by itself) intended to imply that any “second processor” must necessarily exist in any given situation. Further, as is discussed herein in other contexts, reference to a “first” element and a “second” element does not preclude the two elements from being the same actual real-world element. Thus, for example, in some instances, a “first” processor and a “second” processor may be the same software and/or hardware, in other cases they may be different software and/or hardware.
It will be understood that when an element is referred to as being “connected” or “coupled” to another element, it can be directly or indirectly connected or coupled to the other element, or intervening elements may be present. In contrast, when an element is referred to as being “directly connected” or “directly coupled” to another element, there are no intervening elements present. Other words used to describe the relationship between elements should be interpreted in a like fashion (e.g., “between” versus “directly between,” “adjacent” versus “directly adjacent,” etc.). In the context of the present specification, when an element is referred to as being “associated with” another element, in certain embodiments, the two elements can be directly or indirectly linked, related, connected, or coupled, the second element can employ the first element, or the like, without limiting the scope of the present disclosure. The terminology used herein is only intended to describe particular representative embodiments and is not intended to be limiting of the present technology. As used herein, the singular forms “a,” “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising”, when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. Implementations of the present technology each have at least one of the above-mentioned objects and/or aspects, but do not necessarily have all of them. It should be understood that some aspects of the present technology that have resulted from attempting to attain the above-mentioned object may not satisfy this object and/or may satisfy other objects not specifically recited herein.
The examples and conditional language recited herein are principally intended to aid the reader in understanding the principles of the present technology and not to limit its scope to such specifically recited examples and conditions. It will be appreciated that those skilled in the art may devise various arrangements which, although not explicitly described or shown herein, nonetheless embody the principles of the present technology and are included within its spirit and scope. Furthermore, as an aid to understanding, the following description may describe relatively simplified implementations of the present technology. As persons skilled in the art would understand, various implementations of the present technology may be of a greater complexity. In some cases, what are believed to be helpful examples of modifications to the present technology may also be set forth. This is done merely as an aid to understanding, and, again, not to define the scope or set forth the bounds of the present technology. These modifications are not an exhaustive list, and a person skilled in the art may make other modifications while nonetheless remaining within the scope of the present technology. Further, where no examples of modifications have been set forth, it should not be interpreted that no modifications are possible and/or that what is described is the sole manner of implementing that element of the present technology. Moreover, all statements herein reciting principles, aspects, and implementations of the present technology, as well as specific examples thereof, are intended to encompass both structural and functional equivalents thereof, whether they are currently known or developed in the future. Thus, for example, it will be appreciated by those skilled in the art that any block diagrams herein represent conceptual views of illustrative circuitry embodying the principles of the present technology. 
Similarly, it will be appreciated that any flowcharts, flow diagrams, state transition diagrams, pseudo-code, and the like represent various processes which may be substantially represented in computer-readable media and so executed by a computer or processor, whether or not such computer or processor is explicitly shown. The functions of the various elements shown in the figures, including any functional block labeled as a “processor” or a “processing unit”, may be provided through the use of dedicated hardware as well as hardware capable of executing software in association with appropriate software. When provided by a processor, the functions may be provided by a single dedicated processor, by a single shared processor, or by a plurality of individual processors, some of which may be shared. In some embodiments of the present technology, the processor may be a general-purpose processor, such as a central processing unit (CPU), or a processor dedicated to a specific purpose, such as a graphics processing unit (GPU). Moreover, explicit use of the term “processor” or “controller” should not be construed to refer exclusively to hardware capable of executing software, and may implicitly include, without limitation, digital signal processor (DSP) hardware, network processor, application specific integrated circuit (ASIC), field programmable gate array (FPGA), read-only memory (ROM) for storing software, random access memory (RAM), and non-volatile storage. Other hardware, conventional and/or custom, may also be included. In the context of the present disclosure, the expression “data” includes data of any nature or kind whatsoever capable of being stored in a database. Thus, data includes, but is not limited to, audiovisual works (images, movies, sound recordings, presentations, etc.), data (location data, numerical data, etc.), text (opinions, comments, questions, messages, etc.), documents, spreadsheets, etc. 
Software modules, or units which are implied to be software, may be represented herein as any combination of flowchart elements or other elements indicating performance of process steps and/or textual description. Such modules may be executed by hardware that is expressly or implicitly shown. With these fundamentals in place, the instant disclosure describes systems and methods for FD LOFO compensation. Typically, an intradyne coherent optical communication system has a laser-based local oscillator (LO) which is free-running at a receiver. Normally, the operating frequency of the LO is not synchronized with the laser-based oscillator at the transmitter. This may induce a non-zero frequency offset in the received signal. To this end, local oscillator frequency offset (LOFO) compensation is typically required at the receiver, and more specifically, in the receiver digital signal processing (DSP). Various compensation techniques have been suggested in the art. FIG. 1 (Prior Art) illustrates a conventional time-domain (TD) LOFO compensation technique 100, which may be realized by applying an inverse phase ramp on the received signal. Specifically, a phase compensation term is calculated based on the estimated LOFO, which requires one exponential calculation or equivalent operation for each sample. Another complex multiplication is then required to apply the phase compensation term to the TD signal samples. One of the concerns with the conventional compensation technique 100 is its integration with certain DSP functionalities which are implemented in the frequency domain (FD). LOFO compensation techniques that can be implemented in the FD have also been proposed. In most current coherent optical communication systems, the initial received TD signals may be converted into the FD at certain stages of the receiver DSP, mainly for performing chromatic dispersion compensation. 
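Purely as an illustrative sketch (not part of the claimed embodiments; the sample rate and offset values below are arbitrary placeholders), the inverse-phase-ramp TD compensation of FIG. 1 can be modelled in a few lines of NumPy:

```python
import numpy as np

fs = 64e9                     # sample rate (illustrative value)
lofo = 150e6                  # estimated LO frequency offset (illustrative)
n = np.arange(4096)

rng = np.random.default_rng(0)
tx = np.exp(1j * rng.uniform(0, 2 * np.pi, n.size))   # unit-power symbols
rx = tx * np.exp(2j * np.pi * lofo * n / fs)          # LOFO-impaired signal

# TD compensation: one exponential (phase term) per sample, followed by a
# complex multiplication -- exactly the per-sample cost noted in the text.
rx_comp = rx * np.exp(-2j * np.pi * lofo * n / fs)
assert np.allclose(rx_comp, tx)
```

The per-sample exponential and multiplication in the last step are what motivate moving the compensation into the FD.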
The FD LOFO compensation may be employed when the received signal is in the FD, in the form of a signal spectrum. One simple form of this compensation is to shift the signal spectrum by samples, or FFT bins. FIG. 2 (Prior Art) illustrates an example of an FD signal spectrum and a shifted FD signal spectrum, in accordance with this conventional FD LOFO compensation technique. Even though this technique is simple in terms of implementation, the compensation resolution may be insufficient, especially for future high-speed transceivers where the signal sampling rate is high while the FFT size is limited. With this said, there is an interest in developing FD LOFO compensation techniques having fine resolution. FIG. 3 illustrates an example of a high-level functional block diagram of an FD LOFO compensation system 300, in accordance with various non-limiting embodiments of the present disclosure. As shown, the FD LOFO compensation system 300 may include an integer FFT bins-based oscillator 302, a chromatic dispersion compensator (CDC) 304, a fractional FFT bins-based oscillator 306 and a controller 308. It is to be noted that the FD LOFO compensation system 300 may include other components. However, for the purpose of simplicity, such components have been omitted from FIG. 3. The integer FFT bins-based oscillator 302 may be configured to receive an FD signal referred to as a received signal spectrum 310. The received signal spectrum 310 may be a digital signal spectrum corresponding to an optical input signal. The optical input signal may be converted to the received signal spectrum 310 by any suitable hardware, for example, an optical-to-electrical convertor, and by any preceding DSP, for example, an FFT that converts the TD signal into the FD, without limiting the scope of the present disclosure. Free-running operation of the LO at the receiver (not illustrated) may result in a non-zero LOFO induced and applied to the received signal spectrum 310. 
The integer FFT bins-based oscillator 302 may be configured to compensate the part of the LOFO that corresponds to an integer number of FFT bins in the received signal spectrum 310. It is to be noted that how the integer FFT bins-based oscillator 302 is implemented should not limit the scope of the present disclosure. FIG. 4 illustrates a representative example 400 of the integer FFT bins-based oscillator 302, in accordance with various non-limiting embodiments of the present disclosure. As shown, the integer FFT bins-based oscillator 302 may include a frequency shifter 402 which applies a bin-shifting operation on the received signal spectrum 310. It is to be noted that the integer FFT bins-based oscillator 302 may include other components; however, such components have been omitted from FIG. 4 for the purpose of simplicity. The frequency shifter 402 may be configured to provide a shift by k bins to the signal spectrum, where k may be an integer value. The frequency shifter 402 may generate a processed signal spectrum 312 from the received signal spectrum 310. Returning to FIG. 3, the integer FFT bins-based oscillator 302 may provide the processed signal spectrum 312 to the CDC 304. The CDC 304 may be configured to compensate chromatic dispersion in the processed signal spectrum 312. The CDC 304 may generate a chromatic dispersion compensated signal spectrum 314. In various non-limiting embodiments, the CDC 304 may be optional in the FD LOFO compensation system 300. Additionally, the CDC 304 may be located at another suitable location, either inside or outside the FD LOFO compensation system 300, without limiting the scope of the present disclosure. It is to be noted that how the CDC 304 compensates the chromatic dispersion should not limit the scope of the present disclosure. The CDC 304 may forward the chromatic dispersion compensated signal spectrum 314 towards the fractional FFT bins-based oscillator 306. 
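For illustration only (a sketch, not the claimed implementation), the bin-shifting operation of a frequency shifter such as element 402 can be expressed in NumPy; a circular shift of the spectrum by k bins is equivalent to multiplying the TD signal by a complex exponential spaced at k bins:

```python
import numpy as np

def integer_bin_shift(spectrum, k):
    """Circularly shift an FD signal by an integer number of FFT bins."""
    return np.roll(spectrum, k)

N = 64
rng = np.random.default_rng(0)
x = rng.standard_normal(N) + 1j * rng.standard_normal(N)
X = np.fft.fft(x)

k = 3
shifted_td = np.fft.ifft(integer_bin_shift(X, k))
# Equivalent TD operation: multiplication by a complex exponential at k bins.
reference = x * np.exp(2j * np.pi * k * np.arange(N) / N)
assert np.allclose(shifted_td, reference)
```

The equivalence also makes the resolution limit visible: this operation can only realize frequency shifts that are integer multiples of fs/NFFT.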
It is to be noted that how the fractional FFT bins-based oscillator 306 is physically implemented should not limit the scope of the present disclosure. FIG. 5 illustrates a representative example 500 of the fractional FFT bins-based oscillator 306, in accordance with various non-limiting embodiments of the present disclosure. As shown, the fractional FFT bins-based oscillator 306 may include a plurality of spectrum shifters 502-1, 502-2, 502-3, 502-4, . . . 502-n, a plurality of multipliers 504-1, 504-2, 504-3, 504-4, . . . 504-n, and an adder 506. It is to be noted that the fractional FFT bins-based oscillator 306 may include other components; however, such components have been omitted from FIG. 5 for the purpose of simplicity. The plurality of spectrum shifters 502-1, 502-2, 502-3, 502-4, . . . 502-n may be configured to shift (circularly or linearly) the chromatic dispersion compensated signal spectrum 314 by integer numbers of bins. The plurality of multipliers 504-1, 504-2, 504-3, 504-4, . . . 504-n may be configured to multiply filter coefficients associated with the fractional FFT bins-based oscillator 306 with the shifted copies of the chromatic dispersion compensated signal spectrum 314. The adder 506 may be configured to add the outputs from the plurality of multipliers 504-1, 504-2, 504-3, 504-4, . . . 504-n to generate a final processed signal spectrum 316. The entire process 500 achieves an effective frequency shift of a fractional number of FFT bins, for the reasons elaborated below. In certain embodiments, the fractional FFT bins-based oscillator 306 may be configured to further compensate the LOFO in the signal compensated by the integer FFT bins-based oscillator 302, with a fine compensation resolution of a fractional number of FFT bins. 
In order to effectively achieve a frequency shift of a fractional number of FFT bins, the fractional FFT bins-based oscillator 306 may perform a convolution of the chromatic dispersion compensated signal spectrum 314 with an impulse response of the fractional FFT bins-based oscillator 306. In the convolution calculation, the required fractional-FFT-bin frequency shift of the convolution output is realized by frequency shifting the baseline impulse response of the fractional FFT bins-based oscillator 306. The baseline impulse response (in the FD) of the fractional FFT bins-based oscillator 306 may have an analytical form g(f), and an arbitrary frequency shift f0 may be achieved by computing g(f−f0). In one example, the impulse response of the fractional FFT bins-based oscillator 306 may be derived from a rectangular window function in the TD. In this case, the FD impulse response may have a sinc shape. The fractional FFT bins-based oscillator 306 may perform the convolution between the chromatic dispersion compensated signal spectrum 314 and a frequency-shifted sinc-shaped impulse response. It is to be noted that the sinc function has an analytical form g(f), so the frequency shift may be realized by offsetting the frequency grids in the analytical computation of the impulse response, and this shift may be any fractional amount of FFT bins. The entire process may be equivalent to applying a phase ramp on the TD signal samples that are selected by the rectangular window. To reduce hardware complexity and for ease of implementation, the FD impulse response may be truncated to a reasonably long FIR filter. In other words, the convolution between the signal spectrum and the impulse response may be implemented in the form of applying an FIR filter on the signal spectrum, as shown in FIG. 5. 
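The equivalence described above (FD convolution with a frequency-shifted kernel versus a TD phase ramp) can be checked numerically. The sketch below is illustrative only: it computes the exact length-N FD kernel as the DFT of the TD phase ramp and verifies that circularly convolving it with the signal spectrum yields the fractionally shifted spectrum; a hardware implementation would truncate this kernel to a few FIR taps, as in FIG. 5:

```python
import numpy as np

N = 64
rng = np.random.default_rng(1)
x = rng.standard_normal(N) + 1j * rng.standard_normal(N)
X = np.fft.fft(x)

f0 = 0.3                                             # fractional shift, in FFT bins
ramp = np.exp(2j * np.pi * f0 * np.arange(N) / N)    # TD phase ramp
G = np.fft.fft(ramp) / N                             # exact FD kernel of the shift

# Circular convolution of the spectrum with the kernel (the FIG. 5 structure,
# before truncating the kernel to a short FIR filter).
Y = np.array([np.sum(G * X[(m - np.arange(N)) % N]) for m in range(N)])

assert np.allclose(Y, np.fft.fft(ramp * x))          # same as the TD phase ramp
```

Because the untruncated kernel of a rectangular TD window decays slowly (sinc-like tails), the following passages develop smoother windows whose FD kernels can be truncated to far fewer taps.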
Generally, to shorten the FD impulse response and consequently simplify the FIR filter, the corresponding TD window may need to have a smooth window edge, or in other words, a smooth transition from high power to low power. It is noted that the rectangular window in the previous example has an abrupt power change at the window edge, so the corresponding sinc-shaped impulse response may be relatively long. To design a “better” TD window shape that has a shorter FD impulse response, the fractional FFT bins-based oscillator 306 may rely on the use of the overlap-and-save (OLS) technique. The use of OLS means that, after the fractional FFT bins-based oscillator 306 and other applicable FD DSP, a certain percentage of the data samples (for example, 50%) may be discarded when the signal is converted back to the TD. For the following illustration, 50% OLS is assumed unless specified otherwise. It is to be noted that this assumption is only for illustration purposes and should not limit the scope of the present disclosure. Considering that a significant portion of the TD samples may finally be discarded, distortion may be added to those samples during the TD windowing by the fractional FFT bins-based oscillator 306 without degrading the system performance. Such distortion may be designed specifically to facilitate a smooth transition between high power and low power at the TD window edge, and at the same time, introduce no or insignificant distortion on the samples that are kept for the DSP afterwards. To achieve a smooth TD window edge and an appropriate distortion control at the same time, the definition of the raised-cosine (RC) pulse may be used as an example in various non-limiting embodiments of the present disclosure. To define the window in the TD, the RC function that has the analytical form below (equation 1) may be used. 
$$H(t)=\begin{cases}1, & |t|\le\dfrac{1-\alpha}{2F_{\mathrm{ref}}}\\[4pt]\dfrac{1}{2}\left[1+\cos\left(\dfrac{\pi F_{\mathrm{ref}}}{\alpha}\left(|t|-\dfrac{1-\alpha}{2F_{\mathrm{ref}}}\right)\right)\right], & \dfrac{1-\alpha}{2F_{\mathrm{ref}}}<|t|\le\dfrac{1+\alpha}{2F_{\mathrm{ref}}}\\[4pt]0, & \text{otherwise}\end{cases}\qquad(1)$$

where F_ref may be a reference frequency related to the TD window width, and α may be a roll-off factor of the RC definition. Note that in this design, equation 1 may be used to define a TD window, while in many conventional applications a similar form of equation 1 is used to define an FD passband. In various non-limiting embodiments, the windowing function defined in equation 1 may be applied to the TD samples, so as to keep the samples in the middle of the window undistorted while introducing controlled distortion on the samples at the window edge. The FD impulse response corresponding to the TD window defined in equation 1 is given by equation 2:

$$h(f)=\begin{cases}\dfrac{\pi}{4F_{\mathrm{ref}}}\operatorname{sinc}\left(\dfrac{1}{2\alpha}\right), & f=\pm\dfrac{F_{\mathrm{ref}}}{2\alpha}\\[6pt]\dfrac{1}{F_{\mathrm{ref}}}\operatorname{sinc}\left(\dfrac{f}{F_{\mathrm{ref}}}\right)\dfrac{\cos\left(\dfrac{\pi\alpha f}{F_{\mathrm{ref}}}\right)}{1-\left(\dfrac{2\alpha f}{F_{\mathrm{ref}}}\right)^{2}}, & \text{otherwise}\end{cases}\qquad(2)$$

Corresponding to multiplying a window defined in equation 1 with the TD signal samples, the FD impulse response defined in equation 2 may be used for the convolution with the signal spectrum. As mentioned earlier, such a convolution may be implemented in the form of FIG. 5 after truncating the impulse response to appropriate FIR taps. Based on the properties of the RC definition, when α is 0, the TD window function degenerates to a rectangular window. Also, when F_ref is fixed and α is increased, more signal samples get distorted by the TD window, and at the same time, the FD impulse response defined in equation 2 has less significant tails. To maximize the portion of undistorted TD signal samples within an FFT block given a defined α, the value of F_ref may be set to f_s/N_FFT, where f_s is the sampling rate of the received signal and N_FFT is the size of the FFT. 
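The two expressions can be checked against each other numerically. The sketch below is illustrative only (F_ref normalised to 1): it implements equations 1 and 2 and confirms that equation 2 matches a numerical Fourier transform of the TD window, including at the special points f = ±F_ref/(2α):

```python
import numpy as np

def rc_window(t, Fref, alpha):
    """Time-domain raised-cosine window, per equation (1)."""
    at = np.abs(np.asarray(t, dtype=float))
    flat = (1 - alpha) / (2 * Fref)
    edge = (1 + alpha) / (2 * Fref)
    out = np.zeros_like(at)
    out[at <= flat] = 1.0
    roll = (at > flat) & (at <= edge)
    out[roll] = 0.5 * (1 + np.cos(np.pi * Fref / alpha * (at[roll] - flat)))
    return out

def rc_impulse_response(f, Fref, alpha):
    """Frequency-domain impulse response, per equation (2)."""
    f = np.asarray(f, dtype=float)
    special = np.isclose(np.abs(f), Fref / (2 * alpha))
    fr = f / Fref
    denom = np.where(special, 1.0, 1 - (2 * alpha * fr) ** 2)  # guard 0/0 point
    out = (1 / Fref) * np.sinc(fr) * np.cos(np.pi * alpha * fr) / denom
    out[special] = (np.pi / (4 * Fref)) * np.sinc(1 / (2 * alpha))
    return out

# Numerical Fourier transform of the (even, compactly supported) window.
Fref, alpha = 1.0, 0.5
t = np.linspace(-0.8, 0.8, 32001)
H = rc_window(t, Fref, alpha)
dt = t[1] - t[0]
f_test = np.array([0.0, 0.4, 1.0, 1.7])     # 1.0 is the special point here
h_num = np.array([np.sum(H * np.cos(2 * np.pi * f * t)) * dt for f in f_test])
assert np.allclose(rc_impulse_response(f_test, Fref, alpha), h_num, atol=1e-6)
```

This also exhibits the degenerate behaviour noted in the text: at f = 0 the response equals 1/F_ref, and as α → 0 the window approaches a rectangle.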
FIGS. 6 and 7 illustrate representative examples 600 and 700 of the TD RC window shapes with α=⅓ and α=½ respectively, in accordance with various non-limiting embodiments of the present disclosure. Specifically, the representative example 600 illustrates a TD RC window shape with α=⅓. In this example, about ⅔ of the TD signal samples will not be distorted (or will only be distorted insignificantly) by the TD window when F_ref is set to f_s/N_FFT. The representative example 700 illustrates a TD RC window shape with α=½. In this example, about ½ of the TD signal samples will not be distorted (or will only be distorted insignificantly) by the TD window when F_ref is set to f_s/N_FFT. In general, when F_ref is set to f_s/N_FFT, α is roughly equal to the ratio of distorted samples within an FFT block, and 1−α is roughly equal to the ratio of undistorted (or insignificantly distorted) samples within an FFT block. It is to be noted that with an increase in α, the FD impulse response may have less significant tails. As a result, the fractional FFT bins-based oscillator 306 may be implemented with fewer FIR taps. FIGS. 8 and 9 illustrate representative examples of the FD impulse responses 800 and 900 in the fractional FFT bins-based oscillator 306 with different values of α to compensate 0.1 bins of LOFO, in accordance with various non-limiting embodiments of the present disclosure. It is to be noted that FIG. 9 is a zoomed-in version of FIG. 8. When α is sufficiently large, the tails of the impulse responses 800 and 900 may be insignificant, resulting in a potential for reducing the number of FIR taps. FIGS. 10 and 11 illustrate representative examples of the FD impulse responses 1000 and 1100 in the fractional FFT bins-based oscillator 306 corresponding to the compensation of various LOFOs when α is 0.5, in accordance with various non-limiting embodiments of the present disclosure. It is to be noted that FIG. 11 is a zoomed-in version of FIG. 10. 
As shown, the taps at indices −4/+4 exhibit close-to-zero magnitude, so the FD impulse response of the fractional FFT bins-based oscillator 306 may be truncated to 7 FIR taps for the compensation of these LOFOs. Based on the values of F_ref and α, the controller 308 (as shown in FIG. 3) may compute the FD impulse responses for the required LOFO compensations using equation 2, construct the FIR taps accordingly, and save these taps in a look-up table (LUT) such that the taps may be read directly by the controller 308 when applying the frequency shift. It is to be noted that the RC FD impulse response for a negative target LOFO compensation is simply flipped compared to the one for a positive target LOFO compensation; hence, the FIR taps for either positive or negative target LOFO compensations may be saved in a computer-readable memory (not illustrated), and the taps can simply be flipped for the other case when they are applied. It is to be noted that the FD impulse response computed based on equation 2 may only include real taps. In practice, the controller 308 may select a different location of the TD undistorted window (similar to the undistorted windows 602 and 702 illustrated in FIGS. 6 and 7). To achieve this, the controller 308 may apply a phase ramp to the FD impulse response calculated by equation 2, meaning the FIR taps may not be purely real numbers for an arbitrary amount of window location shift. On the other hand, for simplicity in practice, the samples that are kept after OLS are more likely to be located at the first or second half of a block, the middle half of a block, or a quarter at both ends of a block (combined to get a total length of half a block), assuming 50% OLS. In order to move the undistorted window to these locations, the amount of the window location shift in the TD is generally ¼ or ½ of an FFT block. 
Correspondingly, the controller 308 may simply need to further toggle the FIR taps calculated by equation 2 between real/imaginary numbers or positive/negative numbers, based on the amount and direction of the window shift. As a result, in these cases the FD impulse response taps may still be purely real or purely imaginary numbers, thereby saving around 50% of the complexity compared to operations on complex numbers when applying the FIR taps to the signal spectrum. Combining the aforementioned implementation details, it is contemplated that the LUT that saves the FIR taps may be quite simple. By way of example, to compensate a LOFO that is equal to or smaller than half an FFT bin and achieve a resolution of 0.1 bins, the LUT may include 5 sets of real-number FIR taps corresponding to the frequency shifts of (0.1, 0.2, 0.3, 0.4, 0.5) FFT bins, where pure real numbers and pure imaginary numbers are not differentiated in terms of storage resources. This may result in a 5×7 real-number LUT when assuming the number of FIR taps is 7. When implementing the integer FFT bins-based oscillator 302 or the fractional FFT bins-based oscillator 306, an additional phase term may be required to assure phase continuity. This additional phase term may represent a general phase offset of each processed block to make the phase continuous at the boundaries of adjacent blocks in the final TD signal. The FIR taps of the fractional FFT bins-based oscillator 306 may also be constructed based on other applicable filter design techniques, such as the Parks-McClellan filter design algorithm, without limiting the scope of the present disclosure. It is noted that the TD window in the disclosure can be treated as the filter passband in those filter design methods. 
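As an illustrative sketch of the LUT construction (not the claimed implementation; F_ref is normalised to one FFT bin and α=0.5 is assumed), the 7-tap FIR sets can be obtained by sampling the equation-(2) impulse response at integer bin offsets from the desired fractional shift, and the flip property for negative shifts follows from the evenness of equation 2:

```python
import numpy as np

def rc_taps(shift_bins, alpha=0.5, n_taps=7):
    """Equation-(2) impulse response sampled at integer bin offsets from the
    desired fractional shift (F_ref normalised to one FFT bin)."""
    m = np.arange(n_taps) - n_taps // 2
    f = m - shift_bins
    special = np.isclose(np.abs(f), 1 / (2 * alpha))
    denom = np.where(special, 1.0, 1 - (2 * alpha * f) ** 2)   # guard 0/0 point
    taps = np.sinc(f) * np.cos(np.pi * alpha * f) / denom
    taps[special] = (np.pi / 4) * np.sinc(1 / (2 * alpha))
    return taps

# A 5x7 real-number LUT for shifts of 0.1 ... 0.5 bins, as in the text.
lut = np.array([rc_taps(s) for s in (0.1, 0.2, 0.3, 0.4, 0.5)])
assert lut.shape == (5, 7)

# Zero shift degenerates to a pass-through (single centre tap) ...
assert np.allclose(rc_taps(0.0), [0, 0, 0, 1, 0, 0, 0])
# ... and a negative shift is just the positive-shift taps, flipped.
assert np.allclose(rc_taps(-0.3), rc_taps(0.3)[::-1])
```

The toggling between real/imaginary or positive/negative taps for ¼- or ½-block window relocations described above would be applied on top of these baseline real-valued taps.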
It is to be noted that even though in various embodiments of the present disclosure the fractional FFT bins-based oscillator 306 has been illustrated as being implemented after the integer FFT bins-based oscillator 302, in various non-limiting embodiments the fractional FFT bins-based oscillator 306 may be implemented prior to the integer FFT bins-based oscillator 302 without limiting the scope of the present disclosure. In such an embodiment, the fractional FFT bins-based oscillator 306 may be configured to compensate the LOFO in a received signal with a fine compensation resolution of a fractional number of FFT bins, and the integer FFT bins-based oscillator 302 may be configured to further compensate the LOFO in the signal compensated by the fractional FFT bins-based oscillator 306 by an integer number of FFT bins. Utilizing the fine-resolution benefit of the fractional FFT bins-based oscillator 306, in certain embodiments of the present disclosure, the FD LOFO compensation system 300 may be used for all-FD digital frequency tracking to cope with the LOFO wandering over time. In one example, the integer FFT bins-based oscillator 302 may initially be set to the optimal operation parameters, but as the effect of LOFO wandering accumulates, these operation parameters may become sub-optimal, and the integer FFT bins-based oscillator 302 may need a re-tuning by one or more FFT bins. Due to the limited resolution of the integer FFT bins-based oscillator 302, such re-tuning means the DSP blocks afterwards, such as the carrier phase recovery (CPR), may experience an abrupt residual LOFO change as large as hundreds of MHz, which may impact the functioning of specific DSP blocks, cause burst errors, and degrade the overall system performance. In one example of the solutions to the aforementioned problem, the controller 308 may initially turn off the fractional FFT bins-based oscillator 306 during the “steady state”, and temporarily turn it on when the integer FFT bins-based oscillator 302 needs a re-tuning. 
By coordinating the integer FFT bins-based oscillator 302 and the fractional FFT bins-based oscillator 306, an overall smooth transition of the residual LOFO (rather than an abrupt change when the integer FFT bins-based oscillator 302 is re-tuned alone) may be achieved with the FD LOFO compensation system 300. Below is a more detailed example of the re-tuning process. An FD LOFO compensation system 300 is assumed to rely merely on the integer FFT bins-based oscillator 302 for LOFO compensation in the “steady states”. The controller 308 may be configured to determine whether a re-tuning is required for the integer FFT bins-based oscillator 302. In the event of determining that a re-tuning is required, the controller 308 may determine a number of bins by which the integer FFT bins-based oscillator 302 is to be adjusted. The controller 308 may re-configure the integer FFT bins-based oscillator 302 to a new setting that features a different integer number of FFT bins of shift. Meanwhile, the controller 308 may turn on the fractional FFT bins-based oscillator 306 and configure it accordingly such that the overall residual LOFO change after the FD LOFO compensation system 300 is sufficiently small. Afterwards, the following DSP may converge to the new residual LOFO with insignificant or acceptable performance degradation. The controller 308 then keeps re-configuring the fractional FFT bins-based oscillator 306 to gradually release the residual LOFO change coming from the re-tuning of the integer FFT bins-based oscillator 302. Such iteration continues, and the entire re-tuning process is completed when the target LOFO compensation of the fractional FFT bins-based oscillator 306 becomes zero. In other words, the controller 308 may cause the fractional FFT bins-based oscillator 306 to be iteratively configured to different frequency shifts until the fractional FFT bins-based oscillator 306 becomes configured to a zero frequency shift. 
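The coordinated hand-off can be modelled with a simplified, purely illustrative schedule (an assumption for illustration: the fractional oscillator is allowed to absorb the full one-bin jump at switch-over and release it in fixed steps that divide the jump evenly; the claimed controller is not limited to this policy):

```python
def retune_schedule(old_bins, new_bins, step=0.1):
    """Return (integer_setting, fractional_setting) pairs for a smooth re-tune.

    The integer oscillator jumps immediately to its new setting; the
    fractional oscillator cancels the jump at switch-over and then releases
    it gradually, `step` bins at a time, until it is back at zero."""
    jump = new_bins - old_bins               # e.g. +1 FFT bin
    frac = -jump                             # cancel the jump at switch-over
    settings = [(new_bins, round(frac, 10))]
    while abs(frac) > 1e-9:
        frac += step if frac < 0 else -step
        settings.append((new_bins, round(frac, 10)))
    return settings

sched = retune_schedule(3, 4, step=0.25)
assert sched[0] == (4, -1.0)                 # jump fully cancelled at first
assert sched[-1] == (4, 0.0)                 # fractional oscillator released

# Net compensation never changes by more than `step` between settings,
# instead of the abrupt one-bin change of re-tuning the integer stage alone.
net = [i + f for i, f in sched]
assert all(abs(b - a) <= 0.25 + 1e-9 for a, b in zip(net, net[1:]))
```

At the end of the schedule the fractional stage is configured to a zero shift and can be turned off, matching the completion condition described above.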
Finally, the controller 308 turns off the fractional FFT bins-based oscillator 306, and the system returns to the steady state. FIG. 12 illustrates a flowchart of a process 1200 implemented over the FD LOFO compensation system for compensating LOFO. As shown, the process 1200 begins at step 1202, where an integer Fast Fourier Transform (FFT) bins-based oscillator compensates the LOFO in a received signal by an integer number of FFT bins. As previously noted, the integer FFT bins-based oscillator 302 may be configured to receive an FD signal referred to as a received signal spectrum 310. The integer FFT bins-based oscillator 302 may be configured to compensate the part of the LOFO that corresponds to an integer number of FFT bins in the received signal spectrum 310. At step 1204, a fractional FFT bins-based oscillator compensates the LOFO in the signal compensated by the integer FFT bins-based oscillator with a fine compensation resolution of a fractional number of FFT bins. As previously noted, the fractional FFT bins-based oscillator 306 may be configured to further compensate the LOFO in the signal compensated by the integer FFT bins-based oscillator 302 with the fine compensation resolution of a fractional number of FFT bins. It is to be understood that the operations and functionality of the FD LOFO compensation system 300, its constituent components, and associated processes may be achieved by any one or more of hardware-based, software-based, and firmware-based elements. Such operational alternatives do not, in any way, limit the scope of the present disclosure. It will also be understood that, although the embodiments presented herein have been described with reference to specific features and structures, various modifications and combinations may be made without departing from such disclosures. 
The specification and drawings are, accordingly, to be regarded simply as an illustration of the discussed implementations or embodiments and their principles as defined by the appended claims, and are contemplated to cover any and all modifications, variations, combinations or equivalents that fall within the scope of the present disclosure. | 32,366 |
11863362 | FIG. 1 shows part of a digital radio receiver 1 for receiving continuous-phase frequency-shift-keying (FSK)-encoded signals. It may be a Bluetooth™ LE receiver. Conventional features such as an antenna, amplifiers, mixers, filters, analogue-to-digital converters, etc. are omitted for simplicity. These components generate a sampled radio signal from a received analogue radio signal, consisting of a sequence of complex-valued digital samples, I & Q, at baseband. The samples represent the received radio signal at a particular carrier frequency (e.g., in a band in the 2.4 GHz spectrum). The signal may be oversampled by a factor R. In the present examples, R=8, although it could take any suitable value. The principal signal path through the components in FIG. 1 is shown with solid arrows, while paths that relate to control of timing and frequency corrections are shown with dashed arrows. The complex baseband samples are first input to a frequency correction block 2, which performs complex rotation on the samples to compensate for any carrier-frequency offset, based on outputs from a frequency estimator 3 and from a double correlator unit 4. The complex baseband samples are also fed to the double correlator unit 4, which performs an initial frequency offset estimation as well as timing recovery and frame synchronization, by cross-correlating the received signal against a stored template, which corresponds to a fixed part of the preamble of any data packet that is intended for this radio receiver 1. The correlation is performed every sample. The timing information from the double correlator unit 4 is output to the frequency correction block 2 and to a matched filter bank 5. The frequency estimate needs to be relatively accurate (e.g., to within around 10 kHz) in order to avoid significant sensitivity degradation. The matched filter bank 5, in addition to the timing information, also receives the frequency-corrected samples from the frequency correction block 2. 
The matched filter bank 5 contains a set of filters, each K bits long. In FIG. 1, K=5, but K may be 3, 4 or any other length. Each filter performs a complex cross-correlation between a respective filter sequence (a bit pattern) and the sampled signal. At each time step, the matched filter bank 5 generates a set of complex correlation coefficients, one for each filter. It computes a real-valued modulus of each coefficient and outputs these correlation strength values to a decision unit 6. The decision unit 6 receives this correlation-strength data and processes it to generate a sequence of decoded bits. This processing is described in more detail below. The decision unit 6 outputs a demodulated bit value at each bit period. In some embodiments, the decoded bits are fed back to the matched filter bank along a feedback path 7. In such embodiments, these bits are used by the MFB 5 to define the filter sequences that the MFB 5 cross-correlates with the received samples. In other embodiments, however, no feedback to the MFB 5 is required; instead, decoded bits are fed back internally within the decision unit 6. The complex correlation coefficients from the matched filter bank 5 are also sent to the frequency estimator 3, which uses them to estimate any frequency drift, which can influence the operation of the frequency correction block 2 on an on-going basis. The decoded bit stream, output by the decision unit 6, may be stored in memory and/or processed further by the radio receiver 1 or another device, as appropriate. FIG. 2 illustrates the behaviour of a matched filter bank 200 and a decision unit 201 that implement a conventional majority-voting decoding approach. This is provided to help highlight the novel features of the embodiments shown in FIG. 3 and FIG. 4. The MFB 200 uses a filter length of K=3. 
A transmitted frequency-shift-keying (FSK)-encoded radio signal, with modulation index h and oversampling ratio R, can be defined as:

$$x_{nR+r}=x_{nR}\exp\left\{j\pi h\beta_{n}\frac{r}{R}\right\}=\sqrt{P_{x}}\exp\left\{j\pi h\left(\beta_{n}\frac{r}{R}+\sum_{l=0}^{n-1}\beta_{l}\right)\right\}\qquad(1)$$

where n indicates the current bit position; the β_l represent the successive bit values; r∈[0, R−1] indexes the current sample offset (in time) from the symbol's timing anchor; and P_x is the power at x_0. If the radio signal is binary-FSK modulated, the value β_k, for each k between 0 and n, represents the sign (i.e., −1 or 1) of the instantaneous phase shift corresponding to the k-th bit in the bit stream, bit_k; it can be formally defined as β_k ≜ 2·bit_k − 1. The sum inside the exponential represents the accumulated phase offset of all the symbols leading up to the current symbol, accumulated over the whole bit stream thus far. The received signal is then given by

$$y_{nR+r}=hx_{nR+r}+\upsilon_{nR+r}$$

where h is a complex number representing the channel gain and phase, and υ_{nR+r} is a noise term. The MFB 200 detects this received signal by non-coherently correlating the sampled radio signal with eight filters that have coefficients corresponding to the modulated signal for all possible 3-bit sequences. Each filter is indexed (labelled) with a unique reference, I. In some cases, the transmitted signal may employ pulse-shaping, such as Gaussian filtering. In such cases the model in equation (1) may not apply; however, it can still be possible for the coefficients of the MFB 200 to be based on equation (1) and successfully demodulate such a signal. In general, the output of the MFB 200, at symbol time n, for a filter using a fixed bit sequence b={b_0, b_1, . . . , b_{K−1}}, where the b_i here are the signs of the actual bits—i.e., representing the bits {0, 1} as the values {−1, 1} respectively—of the particular filter, is given by

$$d_{n}(b)\triangleq\sum_{k=0}^{K-1}\sum_{r=0}^{R-1}y_{(n-(K-1)+k)R+r}\exp\left\{-j\pi h\left(b_{k}\frac{r}{R}+\sum_{l=0}^{k-1}b_{l}\right)\right\}$$

(The inner sum term over b_l is defined to be zero when k=0.) 
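By way of a purely illustrative sketch (modulation index, filter length and oversampling ratio chosen arbitrarily to match the K=3, R=8 example), the metric d_n(b) can be evaluated by brute force over all 2^K candidate sequences; in the noise-free case the transmitted window maximises |d_n(b)|² regardless of the unknown channel phase:

```python
import itertools
import numpy as np

R, K, h_idx = 8, 3, 0.5       # oversampling ratio, filter length, modulation index

def fsk_baseband(bits):
    """CPFSK baseband samples per the signal model (rectangular pulse, P_x = 1)."""
    samples, acc = [], 0.0
    for b in 2 * np.array(bits) - 1:          # bit {0,1} -> sign {-1,+1}
        r = np.arange(R) / R
        samples.extend(np.exp(1j * np.pi * h_idx * (b * r + acc)))
        acc += b                              # accumulated phase term
    return np.array(samples)

def d_metric(y, bits):
    """Noncoherent correlation d_n(b): conjugated template dotted with y."""
    return np.vdot(fsk_baseband(bits), y)     # vdot conjugates its first arg

tx_bits = (1, 0, 1)
y = np.exp(1j * 0.7) * fsk_baseband(tx_bits)  # unknown channel phase, no noise
best = max(itertools.product((0, 1), repeat=K),
           key=lambda b: abs(d_metric(y, b)) ** 2)
assert best == tx_bits
```

Taking the modulus before comparison is what makes the detection noncoherent: the arbitrary channel phase (0.7 rad here) cancels out of the decision.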
The particular bit sequence b that maximizes $|d_n(b)|^2$ is the non-coherent maximum likelihood estimate. The MFB 200 receives sample chips that have an up-sampling rate of eight—i.e., receiving eight samples $y_i$ for each bit. It cross-correlates each received set of eight chips with the stored "zero"-bit filter coefficients $C_i^0$ and with the stored "one"-bit filter coefficients $C_i^1$, to calculate complex correlation values $S^0$ and $S^1$ for each bit interval. The filter coefficients may correspond to the complex conjugate of the baseband representation of an FSK symbol with modulation index h, sampled at rate R, with an initial phase offset of 0 radians, corresponding to either a 0 or 1 bit, respectively. These intermediate correlation results S are buffered for three time intervals, and are input to each of the eight filter modules. Each filter module, k=0, . . . , 7, uses these intermediate results to calculate a respective correlation magnitude value, $X_k$, representing a cross-correlation with a respective 3-bit filter sequence: [0 0 0], [0 0 1], [0 1 0], . . . , [1 1 1]. This is calculated as:

$$X_k=\left|S_0^{b_{0k}}+e^{j\pi h(2b_{0k}-1)}S_1^{b_{1k}}+e^{j\pi h(2b_{0k}+2b_{1k}-2)}S_2^{b_{2k}}\right|$$

The eight correlation magnitude values, $X_k$, are then output to the decision unit 201. At each bit interval, the decision unit 201 identifies the index, I, of the filter sequence having the largest correlation magnitude. It then buffers three of these indices, over three time intervals, and uses majority voting logic 202 to decode the value of the one bit position that appears in all three filter sequences (at three different time offsets). This decoded binary value, F, is output as a hard bit in the decoded bit sequence. FIGS. 3 and 4 show two different embodiments of MFBs and decision units that can be used in the receiver of FIG. 1. In both cases, the value of each bit in the sequence of output decoded bits is determined, in part, based on the values of two earlier decoded bits from the same sequence. This increases the sensitivity of the receivers.
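A minimal sketch of the $X_k$ computation and the majority vote, assuming the per-interval correlations S have already been formed as (zero-bit, one-bit) pairs; `filter_magnitudes` and `majority_bit` are illustrative names, not the patent's:

```python
import cmath
import math

def filter_magnitudes(S, h):
    """Correlation magnitudes X_k for the eight 3-bit filter sequences.

    S -- three (zero-bit, one-bit) correlation pairs, one per buffered
         bit interval.
    h -- modulation index.
    """
    X = []
    for k in range(8):
        b = [(k >> 2) & 1, (k >> 1) & 1, k & 1]  # filter sequence [b0 b1 b2]
        acc = S[0][b[0]]
        acc += cmath.exp(1j * math.pi * h * (2 * b[0] - 1)) * S[1][b[1]]
        acc += cmath.exp(1j * math.pi * h * (2 * b[0] + 2 * b[1] - 2)) * S[2][b[2]]
        X.append(abs(acc))
    return X

def majority_bit(indices, positions):
    """Majority vote on the one bit position common to three buffered
    best-filter indices (position 0 = earliest bit of a 3-bit sequence)."""
    votes = sum((idx >> (2 - pos)) & 1 for idx, pos in zip(indices, positions))
    return 1 if votes >= 2 else 0
```

A given bit appears at position 0 of the newest buffered index, position 1 of the middle one, and position 2 of the oldest, which is why `positions` is passed explicitly.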
In both cases, the decoding assumes that some bits before a current observation window have already been correctly decoded, and feeds these bits back to assist the decoding of the current bit. For example, if using two feedback bits, instead of calculating $d_n(b)$, a radio receiver embodying the invention might calculate $d_n([\hat{b}_{n-K-1}\ \hat{b}_{n-K}\ b])$, where $\hat{b}_{n-K-1}$ and $\hat{b}_{n-K}$ are outputs from a majority vote detector. FIG. 3 shows a first embodiment of an MFB 300 and decision unit 301, for use in the receiver of FIG. 1, in which the value of each bit in the sequence of output decoded bits is determined, in part, based on the values of two earlier decoded bits from the same sequence. This is possible because of the addition of two feedback paths 303a, 303b. This receiver arrangement provides improved performance, compared with that of FIG. 2, without significantly increasing the implementation complexity. The detector shown here has an effective filter length of K=5 (based on an underlying set of eight 3-bit filter sequences), but the idea can readily be extended to other values of K with minor modifications. In this design, when decoding a current bit value, F, the two preceding hard bit outputs, $F_{-1}$ and $F_{-2}$ (which have already been calculated and output by the decision unit 301), are buffered and fed back to the MFB 300, which effectively appends these bits to the beginning of each of the eight 3-bit filter sequences, to generate eight 5-bit filter sequences, against which the sampled signal is then correlated. The two most recent output bits, $F_{-1}$ and $F_{-2}$, are sent along the feedback path 303a to be fed into the MFB 300, where they are saved in a two-bit shift register 304a. The latest output bit is also sent along a feedback path 303b as a control to a selector which selects one of the two latest intermediate correlation results S to write into a two-bit shift register 304b, according to the value of the latest output bit.
These earlier decoded output bits, $F_{-1}$ and $F_{-2}$ (stored in buffer 304a), and their corresponding intermediate correlation results $S_{-1}$ and $S_{-2}$ (stored in buffer 304b), are used by the MFB 300 to calculate a value $X_{FD}$, which is in turn used by each filter module to generate the final outputs, $X_k$, of the eight filter modules, according to the following calculations:

$$X_k=\left|X_{FD}+S_0^{b_{0k}}+e^{j\pi h(2b_{0k}-1)}S_1^{b_{1k}}+e^{j\pi h(2b_{0k}+2b_{1k}-2)}S_2^{b_{2k}}\right|$$

where

$$X_{FD}=e^{-j\pi h(2F_{-1}-1)}S_{-1}+e^{-j\pi h(2F_{-2}+2F_{-1}-2)}S_{-2}.$$

In simulations, this design has been found to achieve about 0.9 dB of additional gain, compared with the arrangement of FIG. 2, with K=5, a modulation index h=0.5, and using two feedback bits. However, any combination of filter length and number of feedback bits is possible. The decision unit 301 in FIG. 3 employs a majority vote detector 302; however, any mechanism to generate a hard bit decision may be used. FIG. 4 shows a second embodiment of an MFB 400 and decision unit 401, for use in the receiver of FIG. 1, in which the value of each bit in the sequence of output decoded bits is again determined, in part, based on the values of two earlier decoded bits from the same sequence. However, in contrast with FIG. 3, there is no feedback of decoded bits to the MFB 400, which applies a conventional bank of eight three-bit filters to the sampled signal. Instead, decoded bits pass along a feedback path 403 within the decision unit 401, in order to increase the effective range of the receiver beyond only three bits. Rather than, at each bit interval, merely identifying the one filter, $I_2$, that has the largest correlation magnitude out of all eight filters and discarding the correlation magnitudes from the other filters, the decision unit 401 in FIG. 4 additionally identifies six further filter indices, $I_{10}$, $I_{11}$, $I_{00}$, $I_{01}$, $I_{02}$ and $I_{03}$, at each interval, representing filters having the largest correlation magnitude out of respective subsets of the eight filters.
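Because $X_{FD}$ does not depend on k, the two extra "bits" of effective filter length cost only one complex sum shared by all eight filter modules. A sketch, assuming the intermediate correlations are (zero-bit, one-bit) pairs per buffered interval; `filter_magnitudes_fb` is an illustrative name:

```python
import cmath
import math

def filter_magnitudes_fb(S, S_prev, F_prev, h):
    """X_k with two decision-feedback bits, per the calculation above.

    S      -- three (zero-bit, one-bit) correlation pairs for the window
    S_prev -- [S_-1, S_-2]: buffered correlations selected by the two
              previous hard decisions
    F_prev -- [F_-1, F_-2]: the two previously decoded bits
    h      -- modulation index
    """
    F1, F2 = F_prev
    # X_FD is common to all eight filter modules
    XFD = (cmath.exp(-1j * math.pi * h * (2 * F1 - 1)) * S_prev[0]
           + cmath.exp(-1j * math.pi * h * (2 * F2 + 2 * F1 - 2)) * S_prev[1])
    X = []
    for k in range(8):
        b = [(k >> 2) & 1, (k >> 1) & 1, k & 1]  # [b0 b1 b2]
        acc = XFD + S[0][b[0]]
        acc += cmath.exp(1j * math.pi * h * (2 * b[0] - 1)) * S[1][b[1]]
        acc += cmath.exp(1j * math.pi * h * (2 * b[0] + 2 * b[1] - 2)) * S[2][b[2]]
        X.append(abs(acc))
    return X
```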
These candidate filters are resolved to two specific additional filters, $I_0$ and $I_1$, for inputting to the majority-voting block 402, once two more bits have been decoded, before the current bit is decoded. The buffered index $I_1$ has two possible values: one corresponding to the case that the hard decoded bit two positions before a particular "current" bit will be a zero—i.e., $F_{-1}=0$—and another corresponding to the case when $F_{-1}=1$. Similarly, the buffered index $I_0$ has four options, covering the four possible combinations of values of the two decoded bit values, $F_{-1}$ and $F_{-2}$, that immediately precede the current bit. These correspond to the cases $[F_{-2}\ F_{-1}]$ = [0 0], [0 1], [1 0] and [1 1]. The filters could, of course, be indexed in any arbitrary way. However, assuming a natural binary-value indexing of the eight filters, k=0 to 7:

$I_{10}$ is the best-matched filter out of the four filters whose sequences have a zero in the first (earliest-received) bit position—i.e., in the set {[0 X Y], for X,Y=0 or 1}.
$I_{11}$ is the best-matched filter out of the four filters whose sequences have a one in the first bit position—i.e., in the set {[1 X Y], for X,Y=0 or 1}.
$I_{00}$ is the better-matched filter out of the two filters whose sequences are in {[0 0 X], for X=0 or 1}.
$I_{01}$ is the better-matched filter out of the two filters whose sequences are in {[0 1 X], for X=0 or 1}.
$I_{02}$ is the better-matched filter out of the two filters whose sequences are in {[1 0 X], for X=0 or 1}.
$I_{03}$ is the better-matched filter out of the two filters whose sequences are in {[1 1 X], for X=0 or 1}.

FIG. 5 shows this same information in tabular form. FIG. 6 shows which of the buffered filter indices, $I_{10}$, $I_{11}$, $I_{00}$, $I_{01}$, $I_{02}$ and $I_{03}$, will then be input to the majority-vote block 402, through the selector switches in decision unit 401, as resolved indices $I_1$ and $I_0$, once the two bit positions immediately preceding the current bit have been decoded.
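The subset maxima listed above can be computed in one pass over the eight magnitudes. A sketch under the natural binary indexing assumed in the text; `candidate_indices` is an illustrative helper name:

```python
def candidate_indices(X):
    """Best filter indices over the subsets listed above.

    X -- eight correlation magnitudes, indexed k = 0..7 as [b0 b1 b2].
    Returns I2 (overall best), the two I1 candidates keyed by F_-1, and
    the four I0 candidates keyed by (F_-2, F_-1).
    """
    def best(ks):
        return max(ks, key=lambda k: X[k])

    I2 = best(range(8))
    # I_10 / I_11: best among the four filters with a fixed earliest bit b0
    I1_options = {f: best([k for k in range(8) if ((k >> 2) & 1) == f])
                  for f in (0, 1)}
    # I_00 .. I_03: best among the two filters with fixed earliest bits (b0, b1)
    I0_options = {(f2, f1): best([k for k in range(8)
                                  if ((k >> 2) & 1) == f2
                                  and ((k >> 1) & 1) == f1])
                  for f2 in (0, 1) for f1 in (0, 1)}
    return I2, I1_options, I0_options
```

Once $F_{-1}$ and $F_{-2}$ are known, the selector simply picks `I1_options[F_-1]` and `I0_options[(F_-2, F_-1)]`.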
The majority-vote block 402 then uses the relevant bit position in each of the filter sequences corresponding to the resolved indices $I_1$ and $I_0$ to determine a current hard bit value, F, based on a majority vote across the three bit values indicated by $I_0$, $I_1$ and $I_2$. It then outputs this bit value, F, and also sends it along the feedback path 403 to a two-bit shift register 404, to serve as a selector for the decoding of the next two bit positions. In this way, the decision unit 401 retains more information from each application of the filter bank. These filter indices are buffered for up to two bit intervals, and used to determine which of the filter sequences to input to the majority-vote block, based on the values of the two feedback bits that are decoded during this buffer interval. In this way, additional correlation magnitude information is not simply discarded, but is used subsequently, once these two further bit decisions have been finalised, to inform the decoding of the current bit. In simulations, this design has been found to achieve about 0.6 dB gain, compared with the arrangement of FIG. 2, with K=3, a modulation index h=0.5, and using two feedback bits. It will be appreciated by those skilled in the art that the invention has been illustrated by describing one or more specific embodiments thereof, but is not limited to these embodiments; many variations and modifications are possible, within the scope of the accompanying claims. In particular, the filter sequences could be longer or shorter than is shown in these examples. If the sequences are an even number of bits long, then the majority vote may resolve a tie by an arbitrary selection or using any other suitable information. In some embodiments, the majority vote could be replaced by some other hard bit decision logic, such as a weighted vote, or it could be replaced with soft bit decision logic, e.g.
based on the magnitude of the correlation value for the best filter where b=1 minus the correlation value for the best filter where b=0. The number of feedback bits could be larger or smaller than shown in these examples.
11863363 | DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS FIG. 1 shows an exemplary wireless telecommunications network 100. The illustrative telecommunications network includes base stations 101, 102 and 103, though in operation, a telecommunications network necessarily includes many more base stations. Each of base stations 101, 102 and 103 (eNB) is operable over a corresponding coverage area 104, 105 and 106. Each base station's coverage area is further divided into cells. In the illustrated network, each base station's coverage area is divided into three cells. Handset or other user equipment (UE) 109 is shown in Cell A 108. Cell A 108 is within coverage area 104 of base station 101. Base station 101 transmits to and receives transmissions from UE 109. As UE 109 moves out of Cell A 108 and into Cell B 107, UE 109 may be handed over to base station 102. Because UE 109 is synchronized with base station 101, UE 109 can employ non-synchronized random access to initiate handover to base station 102. A non-synchronized UE 109 also employs non-synchronous random access to request allocation of up-link 111 time, frequency or code resources. If UE 109 has data ready for transmission, which may be traffic data, a measurement report or a tracking area update, UE 109 can transmit a random access signal on up-link 111. The random access signal notifies base station 101 that UE 109 requires up-link resources to transmit the UE's data. Base station 101 responds by transmitting to UE 109, via down-link 110, a message containing the parameters of the resources allocated for UE 109's up-link transmission along with a possible timing error correction. After receiving the resource allocation and a possible timing advance message transmitted on down-link 110 by base station 101, UE 109 optionally adjusts its transmit timing and transmits the data on up-link 111 employing the allotted resources during the prescribed time interval. Base station 101 configures UE 109 for periodic uplink sounding reference signal (SRS) transmission.
Base station 101 estimates uplink channel state information (CSI) from the SRS transmission. FIG. 2 shows the Evolved Universal Terrestrial Radio Access (E-UTRA) time division duplex (TDD) frame structure. Different subframes are allocated for downlink (DL) or uplink (UL) transmissions. Table 1 shows applicable DL/UL subframe allocations.

TABLE 1

Configuration | Switch-point periodicity | Sub-frame number 0 1 2 3 4 5 6 7 8 9
      0       |          5 ms            | D S U U U D S U U U
      1       |          5 ms            | D S U U D D S U U D
      2       |          5 ms            | D S U D D D S U D D
      3       |         10 ms            | D S U U U D D D D D
      4       |         10 ms            | D S U U D D D D D D
      5       |         10 ms            | D S U D D D D D D D
      6       |         10 ms            | D S U U U D S U U D

Sounding Reference Signal Bandwidth Configurations

In LTE, a UE can be Radio Resource Control (RRC) assigned any of the four possible sounding bandwidths for a given cell-specific SRS bandwidth configuration $C_{SRS}$ and system bandwidth. For each group of system bandwidths, there are eight SRS bandwidth configurations $C_{SRS}$, corresponding to different system bandwidths and/or ratios of PUCCH/PUSCH region sizes. The larger $C_{SRS}$, the smaller the total SRS bandwidth. For each SRS bandwidth configuration, the four possible sounding bandwidths are denoted $m_{SRS,0}$, $m_{SRS,1}$, $m_{SRS,2}$, $m_{SRS,3}$, ordered by decreasing size, and are expressed in physical resource blocks (PRB) of size $N_{sc}^{RB}=12$ sub-carriers. The quantity $m_{SRS,0}$ defines the largest possible SRS bandwidth. The quantity $m_{SRS,0}$, along with the sub-carrier offset $k'_0$, defines the bandwidth region. No combination of smaller bandwidths exceeds this region. The quantities $m_{SRS,0}$, $m_{SRS,1}$ and $m_{SRS,2}$ are defined to allow some kind of dichotomy, providing a way to split the total sounding bandwidth into 2, 3, 4 or 6 scheduling bandwidths (FIG. 3). This allows splitting the total number of UEs in the scheduler pool into equally spaced bandwidths and running as many parallel schedulers concurrently. The quantity $m_{SRS,3}$ is always 4 PRBs and is mainly for power-limited UEs.
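For illustration, the DL/UL allocations of Table 1 can be encoded as a small lookup; the pattern strings are transcribed from the table, and `uplink_subframes` is a hypothetical helper name:

```python
# D = downlink, S = special, U = uplink sub-frame, per Table 1
TDD_CONFIG = {
    0: "DSUUUDSUUU", 1: "DSUUDDSUUD", 2: "DSUDDDSUDD",
    3: "DSUUUDDDDD", 4: "DSUUDDDDDD", 5: "DSUDDDDDDD",
    6: "DSUUUDSUUD",
}

def uplink_subframes(config):
    """Sub-frame numbers allocated to uplink for a given TDD configuration."""
    return [n for n, d in enumerate(TDD_CONFIG[config]) if d == "U"]
```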
FIGS. 3A and 3B together illustrate two plots of RRC frequency domain position index (n_RRC) versus starting subcarrier. FIGS. 3A and 3B illustrate SRS frequency configurations in 20 MHz system bandwidth with $C_{SRS}=1$ (FIG. 3A) and $C_{SRS}=2$ (FIG. 3B). Each plot has curves for 1 SRS band, 2 SRS bands, 3 SRS bands and 4 SRS bands. For scenarios reflecting peak data rate situations, it is safe to assume no power limitation at the UE from the sounding perspective and stick to the combinations of $m_{SRS,0}$, $m_{SRS,1}$ and $m_{SRS,2}$. For a 20 MHz spectrum and PUCCH occupying 8 PRBs, an appropriate combination of $m_{SRS,0}$, $m_{SRS,1}$ and $m_{SRS,2}$ is 80/40/20 PRBs ($C_{SRS}=2$). This allows multiplexing the largest number of SRSs per sub-frame by splitting the total bandwidth into four 20-PRB scheduling bandwidths, each of large-enough size (3.6 MHz) to provide sufficient frequency selective gains. For tougher propagation conditions, such as LTE Case 1, configurations allowing smaller SRS bandwidths for $m_{SRS,0}$, $m_{SRS,1}$ and $m_{SRS,2}$ might be preferred, to provide more flexibility in allocating UEs with different levels of power limitations. For example, $C_{SRS}=7$ specifies 48/16/8/4 PRBs for the respective SRS bandwidths $m_{SRS,0}$, $m_{SRS,1}$, $m_{SRS,2}$ and $m_{SRS,3}$.

SRS Design for LTE

An LTE sub-frame structure is depicted in FIG. 4. Each sub-frame 410 includes two 0.5 ms slots 401 and 402. Each slot 401 and 402 is made of six Discrete Fourier Transform (DFT) Spread Orthogonal Frequency Division Multiplexing (SOFDM) data symbols and one central demodulation reference symbol (DMRS). When the sub-frame 410 is configured for SRS transmission, the last symbol, number 14, is reserved for SRS transmission. Multiple UEs can be multiplexed in the same SRS symbol. The multiplexing scheme is a combination of FDM and Code Division Multiplexing (CDM). FIG. 5 illustrates this transmission technique. The sounding signal is built from a pilot root sequence of length $N_{SRS}$ from EZC root sequence unit 501.
EZC root sequence unit 501 generates an extended Zadoff-Chu (EZC) sequence, constructed by extending the closest prime-length Zadoff-Chu (ZC) sequence to the SRS sequence length $N_{SRS}$ providing the configured SRS bandwidth. Such a sequence has Constant Amplitude Zero Autocorrelation (CAZAC) properties. The zero-autocorrelation property guarantees that discrete periodic autocorrelations are zero for all non-zero lags, allowing orthogonal code multiplexing by duplicating and cyclically shifting the same root sequence. The constant amplitude property allows controlling the Peak-to-Average Power Ratio (PAPR) and generates bounded and time-flat interference to other users. In a given sub-frame, all UEs in the same cell and with the same SRS bandwidth share the same root EZC sequence $X=(X_0, X_1, \ldots, X_{N_{SRS}-1})^T$, defined in the frequency domain. Then, the sequence is modified per Equation (1) in time-domain cyclic shift unit 502, so as to produce a cyclic shift $C_u=N_{SRS}\,m(u)/8$ in the time domain, configured for user u, where 8 is the CDM multiplexing capacity:

$$X_{u,k}=X_k\,e^{j2\pi k\,m(u)/8};\quad m(u)\in\{0,\ldots,7\}\tag{1}$$

The resulting sequence is further mapped to the $N_{SRS}$ sub-carriers allocated to SRS, out of $N_{FFT}$, in inverse Fast Fourier Transform (IFFT) unit 503. Here $N_{FFT}$ is the total number of sub-carriers of the system bandwidth; $N_{FFT}=2048$ for a 20 MHz LTE system bandwidth. The tone mapping also reflects the Single Carrier Interleaved Frequency Division Multiple Access (SC-IFDMA) transmission scheme of the SRS. Within its allocated bandwidth, a UE's SRS sequence is mapped on every other tone, leaving the in-between tones at zero. This produces the two combs per SRS bandwidth illustrated in FIG. 7. This is one aspect of the FDM multiplex, the other aspect being that different UEs can send their SRS on different bandwidths. As a result, the total SRS multiplexing capacity for a given SRS bandwidth is 8 (CDM) times 2 (FDM) = 16. With the IFDM multiplexing scheme, the sequence duration equals half the OFDM symbol duration T.
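A compact sketch of the EZC construction and the cyclic-shift phase ramp of Equation (1). The prime-selection rule used here (largest prime below $N_{SRS}$) is a simplification of the construction described above, and both function names are illustrative:

```python
import cmath
import math

def ezc_sequence(root, n_srs):
    """Extended Zadoff-Chu sequence: a prime-length ZC sequence cyclically
    extended to length n_srs (simplified prime selection)."""
    # largest prime below n_srs (naive trial division, fine for small n_srs)
    n_zc = next(p for p in range(n_srs - 1, 1, -1)
                if all(p % d for d in range(2, int(p ** 0.5) + 1)))
    zc = [cmath.exp(-1j * math.pi * root * k * (k + 1) / n_zc)
          for k in range(n_zc)]
    return [zc[k % n_zc] for k in range(n_srs)]

def apply_cyclic_shift(X, m, capacity=8):
    """Equation (1): X_{u,k} = X_k * exp(j*2*pi*k*m(u)/8) -- a frequency-domain
    phase ramp equivalent to a time-domain cyclic shift."""
    return [x * cmath.exp(2j * math.pi * k * m / capacity)
            for k, x in enumerate(X)]
```

Both the root and the shifted sequences keep unit modulus on every element, which is the constant-amplitude half of the CAZAC property.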
Hence, in LTE where T=66.67 μs, the minimum cyclic shift increment between two CDM'ed users is T/2/8 = 4.17 μs. Parallel-to-serial converter 504 generates the radio frequency (RF) signal coupled to the antenna (not shown).

SRS Receive Structure

FIG. 6 illustrates an SRS receiver. Serial-to-parallel converter 601 converts the received RF into parallel data streams. Each received time sample sequence r is converted to the frequency domain through an $N_{FFT}$-length FFT (FFT 602). EZC root sequence unit 603 generates a root sequence corresponding to the root sequence of EZC root sequence unit 501. Element-wise multiply unit 604 multiplies corresponding elements of the RF input with the root sequence. This de-maps the SRS-relevant sub-carriers to produce a frequency-domain sequence Y carrying all CDM users. Y is then converted back to a time-domain sequence y through $N_{SRS}$-length IDFT 605. This performs cyclic-shift de-multiplexing for each of the 8 CDM'ed users. In particular, this proposed system takes advantage of the SRS OFDM symbol structure and the CAZAC sequence to compute each multiplexed UE's channel impulse response (CIR) through a frequency-domain computed periodic correlation (matched filter). Frequency-domain channel estimates are then obtained by extracting each user's relevant samples from the total CIR samples and converting them back to the frequency domain through $N_{SRS}$-length DFT 606. This method is referred to as time-domain based channel estimation. The SRS receiver of this invention (FIG. 6) follows the same principle as the prior art, with an additional complexity reduction achieved from group-UE cyclic shift de-multiplexing. Rather than correlating y with each UE's sequence, the received frequency-domain sequence Y is element-wise multiplied with the complex conjugate of the expected root sequence X (element-wise multiply unit 604) before the IDFT, as illustrated in FIG. 6. This provides in one shot the concatenated CIRs of all UEs multiplexed on the same root sequence.
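The chain just described — conjugate multiply by the root sequence, IDFT to obtain the concatenated CIRs, windowing out one user, and DFT back — can be sketched with naive O(N²) transforms. The names are illustrative and a real implementation would use an FFT:

```python
import cmath
import math

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def estimate_channel(Y, X_root, window):
    """Time-domain based channel estimation for one user.

    Y      -- de-mapped frequency-domain sequence carrying all CDM users
    X_root -- shared root sequence (frequency domain)
    window -- set of time-sample indices forming the user's cyclic-shift window
    """
    # conjugate multiply + IDFT: concatenated CIRs of all multiplexed users
    y = idft([yk * xk.conjugate() for yk, xk in zip(Y, X_root)])
    # keep only the samples inside this user's cyclic-shift window
    y_u = [yi if i in window else 0 for i, yi in enumerate(y)]
    # DFT back to frequency domain: per-sub-carrier channel estimate
    return dft(y_u)
```

With a flat channel and a unit-modulus root sequence, the windowed CIR collapses to a single tap and the estimate is flat across all sub-carriers, as expected.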
Cyclic-shift de-multiplexing reduces to selecting the relevant samples for each UE. This method can be expressed as:

$$Y=F_{N_{SRS}}^{N_{FFT}}\,r\tag{2}$$

$$y=F_{N_{SRS}}^{-1}\,\mathrm{diag}(X^*Y^T)\tag{3}$$

$$y_u=(0,\ldots,0,\,y_{n_1(u)},y_{n_2(u)},\ldots,y_{n_L(u)},\,0,\ldots,0)^T\tag{4}$$

$$\hat{H}_u=F_{N_{SRS}}\,y_u\tag{5}$$

where: the $N_{SRS}$-by-$N_{FFT}$ matrix $F_{N_{SRS}}^{N_{FFT}}$ corresponds to the $N_{FFT}$-point FFT and $N_{SRS}$ sub-carrier de-mapping; the $N_{SRS}$-by-$N_{SRS}$ matrices $F_{N_{SRS}}$ and $F_{N_{SRS}}^{-1}$ correspond to the $N_{SRS}$-point DFT and IDFT respectively; $n_1(u), \ldots, n_L(u)$ are the samples defining the cyclic shift window of user u; and L is the number of time samples corresponding to the maximum expected delay spread among users, derived from the delay spread τ, the pad δ taken to account for the delay-spread spill-over in the window, the symbol duration T and the number of SRS sub-carriers per comb $N_{SRS}$ as:

$$L=\lceil 2(\tau+\delta)N_{SRS}/T\rceil\tag{6}$$

Table 2 shows the resulting number of cyclic shift window samples L for different channels and SRS bandwidth examples, assuming a spill-over pad δ=0.55 μs (measured empirically).

TABLE 2

SRS bandwidth (PRBs)  |    20      |     8      |     4
Delay spread τ (μs)   |  5    0.9  |  5    0.9  |  5    0.9
                      | (TU)  (PA) | (TU)  (PA) | (TU)  (PA)
L (samples)           | 20    6    |  8    3    |  4    2

FIG. 7 illustrates the case of four cyclic-shift multiplexed UEs per SRS comb with a 5 μs delay spread TU channel. The top part of FIG. 7 shows a plot of power delay profile versus time sample for four user windows. The bottom part of FIG. 7 shows a plot of de-multiplexed power delay profile versus the same time samples. In FIG. 7 the user CIR extraction and cyclic shift de-multiplexing are performed simultaneously, by selecting the appropriate user's cyclic shift window from the concatenated time-domain CIR sequence y of all multiplexed UEs.
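Equation (6) reproduces the Table 2 entries directly, taking the per-comb sub-carrier count as PRBs × 12 / 2; `window_samples` is an illustrative name:

```python
import math

def window_samples(tau_us, n_srs, delta_us=0.55, T_us=66.67):
    """Equation (6): number of cyclic-shift window samples L, from the delay
    spread tau, spill-over pad delta, symbol duration T and the number of
    SRS sub-carriers per comb n_srs."""
    return math.ceil(2 * (tau_us + delta_us) * n_srs / T_us)
```

For 20 PRBs, n_srs = 20·12/2 = 120, giving L = 20 for the TU channel (τ = 5 μs) and L = 6 for the PA channel (τ = 0.9 μs), matching Table 2.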
This method may be compared with the conventional frequency-domain channel estimation approach, where the cyclic shift de-multiplexing is performed directly on the de-mapped frequency-domain sequence Y, across sub-carrier chunks, to produce channel estimate chunks as follows:

$$\hat{H}_u(c)=X_u^H(c)\,F_{N_c}^{N_{FFT}}\,r\tag{7}$$

where: $\hat{H}_u(c)$ is the channel estimate across chunk c, spanning sub-carriers $n_1(c), \ldots, n_C(c)$; C is the chunk size; $X_u(c)=(0,\ldots,0,\,X_{n_1(c)},X_{n_2(c)},\ldots,X_{n_C(c)},\,0,\ldots,0)^T$; and the $N_c$-by-$N_{FFT}$ matrix $F_{N_c}^{N_{FFT}}$ corresponds to the $N_{FFT}$-point FFT and $N_c$ sub-carrier de-mapping. Compared to the frequency-domain channel estimation approach, zeroing out the samples outside the user's energy window in this invention achieves multiple benefits.

Channel Estimates Per Sub-Carrier

The last-stage $N_{SRS}$-length DFT-based frequency interpolation provides channel estimates on each of the $N_{SRS}$ sub-carriers. Per-chunk channel estimates obtained with the frequency-domain approach are averaged arithmetically across the chunk sub-carriers. This disallows harmonic averaging of the user's SINR, as requested by the UL scheduler to estimate the user's throughput with an MMSE receiver.

Channel Estimation MSE Reduction

With the last-stage $N_{SRS}$-length DFT, the energy of the Additive White Gaussian Noise (AWGN) samples in the user's window is spread across the $N_{SRS}$ sub-carriers. Since the user's energy is all contained in its cyclic shift window, this represents a reduction factor $G_{\sigma_H^2}$ on the channel estimation mean square error (MSE) $\sigma_H^2$ of $N_{SRS}/L$, corresponding to the ratio of half the OFDM symbol duration T/2 (due to IFDM with 2 combs per symbol) over the maximum expected delay spread τ among users:

$$G_{\sigma_H^2}=\frac{T}{2\tau}\tag{8}$$

With an LTE symbol duration of 66.67 μs and a TU channel delay spread of 5 μs, an MSE improvement close to 8 dB is achieved for the channel estimation.

Channel Estimation Performance

The following is an evaluation of the performance of the invention in a realistic multi-user SC-FDMA multiplex simulation.
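Expressed in dB, the reduction factor of equation (8) matches the "close to 8 dB" figure quoted above; `mse_reduction_db` is an illustrative name:

```python
import math

def mse_reduction_db(T_us=66.67, tau_us=5.0):
    """Equation (8): channel-estimation MSE reduction factor T/(2*tau), in dB."""
    return 10 * math.log10(T_us / (2 * tau_us))
```

With the LTE symbol duration and TU delay spread defaults, the factor is about 8.2 dB; for the PA channel (τ = 0.9 μs) it is larger still.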
The simulator models a number of UEs multiplexed on a configurable SRS bandwidth within the total bandwidth (25 PRBs) available in a 5 MHz spectrum. The root sequence, cyclic shift and frequency mapping of the UEs are re-selected randomly every sub-frame. The simulator models timing errors of the UEs, chosen randomly within a maximum time uncertainty window. The SNR is measured in the time domain and is representative of the average signal power across the SRS bandwidth, not in the user's comb only. Table 3 below lists all parameters of the simulation.

TABLE 3

Parameter                     | Value or range
System bandwidth              | 5 MHz
Number of antennas            | 2
Number of SRS users           | 2-16
SRS bandwidths                | 4-8-20 PRBs
Scheduled sub-frames per UE   | All
SRS sequences                 | EZC with random selection of ZC index and cyclic shift every sub-frame
Max timing uncertainty window | +/− 1 μs
Channels                      | AWGN, TU6, PA
UE speed                      | 3 km/h

This evaluation uses as its performance criterion the normalized mean square error of the channel estimates Ĥ per sub-carrier per antenna:

$$\sigma_H^2=\frac{E\{|\hat{H}-H|^2\}}{a^2}\tag{9}$$

where $a^2=E\{|H|^2\}$ is the averaged received power from the user.

Channel Estimator Distortions

A first simulation assesses the performance of the proposed estimator in the absence of noise. The time-domain approach of this invention requires that the channel be first down-sampled to the time domain and then interpolated to the frequency domain. The former acts as a sinc band-pass filter on the channel, which has two consequences:

The narrower the SRS bandwidth, the coarser the CIR and therefore the channel estimates; and

Some spill-over effects should be accounted for when designing the user window for cyclic shift de-multiplexing. This spill-over leads to non-perfect orthogonality between cyclic shifts.
The latter unavoidably creates interpolation errors at both ends of the interpolation, such as at the SRS bandwidth edges. FIGS. 8A, 8B, 9A and 9B illustrate this. FIG. 8 illustrates two plots of the real and imaginary components for actual data and estimated data versus sub-carrier. FIG. 8A is the TU channel; FIG. 8B is the PA channel. FIG. 8 illustrates four curves: the X component channel; the X component estimate; the Y component channel; and the Y component estimate. FIGS. 8A and 8B illustrate regions 811, 812, 821 and 822 of larger errors. Due to the larger error regions illustrated in FIGS. 8A and 8B, it is recommended to reduce the scope of the channel estimation to the inner SRS bandwidth only. FIG. 9 illustrates two plots of the mean squared error of the channel estimates Ĥ per sub-carrier per antenna, known as the Channel Quality Index (CQI) value, in dB versus sub-carrier. FIG. 9A is the TU channel; FIG. 9B is the PA channel. FIG. 9 illustrates four curves: two SRS users; four SRS users; 8 SRS users; and 16 SRS users. FIGS. 9A and 9B illustrate regions 911 and 921 of larger errors. As seen in FIGS. 9A and 9B, the MSE due to these distortions remains below −20 dB when shrinking the SRS bandwidth by 10%. The rest of the description of these simulations only considers the channel estimation performance in the reduced, shrunk bandwidth. FIG. 9A illustrates a high error floor when 16 SRS users are multiplexed with the TU channel. This is due to the delay profile truncation. In this configuration the cyclic shift increment is 4.17 μs but the delay spread of the channel is 5 μs. It is not recommended to multiplex 16 UEs with the TU channel on the same SRS symbol at high SNR.
Channel Estimator Performance with AWGN

The normalized mean square error performance $\sigma_H^2$ is plotted in FIGS. 10 and 11 for the TU and PA channels when varying the number of multiplexed SRS users and the SRS bandwidth, respectively. FIGS. 10A and 10B illustrate plots of channel estimation mean squared error in Channel Quality Indicator (CQI) estimates of the shrunk bandwidth, in dB, versus signal-to-noise ratio (SNR) in dB. FIG. 10A is the TU channel; FIG. 10B is the PA channel. FIGS. 10A and 10B illustrate four curves: two SRS users; four SRS users; eight SRS users; and sixteen SRS users. As described above, an error floor occurs with 16 SRS users on the TU channel (FIG. 10A) because of the delay profile truncation. The better channel estimation performance with the PA channel compared to the TU channel is due to the slower channel variations in the frequency domain. This provides better interpolation performance and, because of the smaller delay spread, a larger SNR improvement ratio as described above. FIG. 10B illustrates that the smaller the SRS bandwidth, the narrower the low-pass filter effect discussed above. The TU channel is more sensitive to the SRS bandwidth than the PA channel because it is more frequency selective and therefore suffers more from these losses.

Non-Biased Estimator

A broad use of the SRS allows prediction of the UE's signal to interference plus noise ratio (SINR) information for the UL scheduler, to derive an appropriate scheduling metric and perform link adaptation.
This involves computing the channel gain estimate per sub-carrier per antenna:

$$\hat{G}(a)=|\hat{H}(a)|^2\tag{10}$$

In the absence of distortion other than AWGN, the channel estimates $\hat{H}(a)=\hat{H}_x(a)+j\hat{H}_y(a)$ are complex-valued random variables whose components follow a non-centered Normal distribution:

$$\hat{H}_x(a)=a_x\,\mathcal{N}\!\left(1,\tfrac{\sigma_H^2}{2}\right);\quad \hat{H}_y(a)=a_y\,\mathcal{N}\!\left(1,\tfrac{\sigma_H^2}{2}\right);\quad a_x^2+a_y^2=a^2\tag{11}$$

As a result, the channel gain estimates $\hat{G}(a)=|\hat{H}(a)|^2=|\hat{H}_x(a)|^2+|\hat{H}_y(a)|^2$ follow a non-central Chi-square distribution with 2 degrees of freedom and non-centrality parameter $a^2$. The normalized mean and standard deviation are:

$$\frac{m_{\hat{G}(a)}}{a^2}=1+\sigma_H^2\tag{12}$$

$$\frac{\sigma_{\hat{G}(a)}}{a^2}=\sqrt{\sigma_H^4+2\sigma_H^2}\tag{13}$$

From equation (12) it is clear that this estimator is biased, and that the noise variance component $a^2\sigma_H^2$ should be removed from the gain estimate $\hat{G}(a)$ to produce a non-biased estimation:

$$\hat{G}_0(a)=|\hat{H}(a)|^2-\hat{\sigma}_N^2\tag{14}$$

where $\hat{\sigma}_N^2$ is an estimate of the noise variance $\sigma_N^2=a^2\sigma_H^2$. However, $|\hat{H}(a)|^2$ and $\hat{\sigma}_N^2$ are independent estimates whose cumulative errors may lead to a negative value for $\hat{G}_0(a)$. Therefore some additional adjustment is needed to prevent negative gain estimates. Three possible options are:

$$\hat{G}_{Abs}(a)=\left||\hat{H}(a)|^2-\hat{\sigma}_N^2\right|\tag{15}$$

$$\hat{G}_{Clip}(a)=\max\left\{|\hat{H}(a)|^2-\hat{\sigma}_N^2;\ G_{floor}\right\}\tag{16}$$

$$\hat{G}_{Select}(a)=\begin{cases}|\hat{H}(a)|^2-\hat{\sigma}_N^2 & \text{if } |\hat{H}(a)|^2-\hat{\sigma}_N^2>0\\[2pt] |\hat{H}(a)|^2 & \text{if } |\hat{H}(a)|^2-\hat{\sigma}_N^2\le 0\end{cases}\tag{17}$$

The comparative performance analysis of the above channel gain estimates is discussed below.

Noise Variance Estimation Through Cyclic Shift Reservation

The variance of the SRS noise is specific to the SRS signal. The SRS signal, in addition to the thermal noise, is expected to be interfered with by other SRS signals from neighbor cells.
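The three positivity adjustments of equations (15)-(17) reduce to one-liners; the function names here are illustrative:

```python
def gain_abs(corr_power, noise_var):
    """Equation (15): absolute value of the noise-compensated gain."""
    return abs(corr_power - noise_var)

def gain_clip(corr_power, noise_var, floor):
    """Equation (16): clip the noise-compensated gain at a floor G_floor."""
    return max(corr_power - noise_var, floor)

def gain_select(corr_power, noise_var):
    """Equation (17): keep the raw gain when compensation goes non-positive."""
    g = corr_power - noise_var
    return g if g > 0 else corr_power
```

Here `corr_power` stands for $|\hat{H}(a)|^2$ and `noise_var` for $\hat{\sigma}_N^2$; all three estimators agree whenever the compensated gain is positive.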
This is reflected by the cross-correlation characteristics of EZC sequences. The noise variance can be estimated from the areas where no signal energy is present in the concatenated delay profile sequence y. For some channel types such as TU, and when all the multiplexing space is used, there is no such area available for noise variance estimation. In this invention one cyclic shift per comb is reserved for noise variance estimation. FIG. 12 shows this technique. FIG. 12 shows a plot of power delay profile versus time samples. Time samples 1 to 30 are reserved for user 1. Time samples 31 to 60 are reserved for user 2. Time samples 61 to 90 are reserved for user 3. Time samples 91 to 120 are reserved for cyclic shift noise estimation. As illustrated in FIG. 12, the noise estimation window is designed to maximize the number of noise samples while not including samples carrying adjacent users' energy, such as in the spill-over regions. The noise is estimated as:

$$\hat{\sigma}_N^2=\frac{1}{|I_N|}\sum_{i\in I_N}|y_i|^2\tag{18}$$

where $I_N$ is the noise estimation window and $|I_N|$ is the number of samples in this window.

Noise Variance Estimation Performance

Simulations of the noise variance estimator use the normalized mean error (bias) $m_{\sigma_N^2}$ and normalized standard deviation $\sigma_{\sigma_N^2}$ performance metrics, defined as:

$$m_{\sigma_N^2}=E\{\hat{\sigma}_N^2-\sigma_N^2\}/\sigma_N^2\tag{19}$$

$$\sigma_{\sigma_N^2}=\sqrt{E\left\{\left([\hat{\sigma}_N^2-\sigma_N^2]/\sigma_N^2-m_{\sigma_N^2}\right)^2\right\}}\tag{20}$$

FIG. 13 shows the noise variance estimation performance for both TU and PA channels when varying the number of SRS users at 20 PRB SRS bandwidth. FIG. 13 illustrates plots of noise power estimation performance versus signal-to-noise ratio in dB for both the TU channel and the PA channel. FIG. 13A illustrates the mean error and FIG. 13B illustrates the standard deviation of the error. The mean error performance shows that the noise power estimator is unbiased in the area where it is most important: at low SNR of less than 0 dB.
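Equation (18) is a simple average of sample powers over the reserved window; `noise_variance` is an illustrative name:

```python
def noise_variance(y, window):
    """Equation (18): average |y_i|^2 over the noise-only window I_N."""
    return sum(abs(y[i]) ** 2 for i in window) / len(window)
```

In the FIG. 12 layout, `window` would be the sample indices 91 to 120 reserved for noise estimation, trimmed to exclude any spill-over from user 3.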
At high SNR, the estimator has a non-zero mean due to non-ideal cyclic shift separation between users and the noise window. The standard deviation performance shows the noise power estimator has a constant variance in the region where it matters most: at low SNR, below 0 dB. At high SNR, the estimator variance increases with SNR due to non-ideal cyclic shift separation between users and the noise window. The PA channel provides a more accurate estimation because the noise estimation window can be made larger thanks to the small delay spread of adjacent UEs, as shown in FIG. 10.

Channel Gain Estimator Performance with AWGN

To determine whether the modified channel gain estimator Ĝ0(a) is unbiased, simulations measure the normalized linear mean error (bias) mH² defined as:

mH² = E{Ĝ0(a) − H(a)²}/a²  (21)

FIG. 14 shows the channel gain estimation error with and without noise variance removal for both the TU channel (FIG. 14A) and the PA channel (FIG. 14B) when varying the number of SRS users at 20 PRB SRS bandwidth. FIG. 14 is two plots of channel gain estimation error versus signal to noise ratio in dB, with no noise removal and with noise removal. The channel gain estimator Ĝ0(a) is unbiased across the wide SNR range after removing the estimated noise variance. The link-level simulator allows assessment of the performance of the positive channel gain estimators. Because the channel gain is further used for SNR estimation, it is more convenient to express it on the dB scale.
The mean mH²dB and standard deviation σH²dB errors of the channel gain estimations expressed in dB are:

mH²dB = E{Ĝxy(a)dB − (H(a)²)dB}  (22)

σH²dB = √E{(Ĝxy(a)dB − (H(a)²)dB − mH²dB)²}  (23)

where Ĝxy(a) represents the various estimators ĜAbs(a), ĜClip(a) and ĜSelect(a). FIG. 15 illustrates the mean channel gain estimation error (FIG. 15A) and the standard deviation of the channel gain estimation error (FIG. 15B) versus signal to noise ratio in dB for various gain estimation techniques for the TU channel. FIG. 15 includes curves for: no noise removal; calculation using absolute value; selective calculation; clipping the channel gain estimation at −20 dB; clipping the channel gain estimation at −23 dB; and clipping the channel gain estimation at −30 dB. FIGS. 15A and 15B each employ 2 SRS users. FIG. 16 illustrates the mean channel gain estimation error (FIG. 16A) and the standard deviation of the channel gain estimation error (FIG. 16B) versus signal to noise ratio in dB for various gain estimation techniques for the PA channel. FIG. 16 includes curves for: no noise removal; calculation using absolute value; selective calculation; clipping the channel gain estimation at −20 dB; clipping the channel gain estimation at −23 dB; and clipping the channel gain estimation at −30 dB. FIGS. 16A and 16B each employ 2 SRS users. The methods providing the best compromise across both mean and standard deviation errors and across the SNR range are the clipping methods with a clipping threshold of −20 dB or −23 dB.

Further Noise Reduction Techniques

Both the channel and channel gain estimators show rather poor performance at low SNR. This section of the patent application evaluates ways to improve this performance through two noise reduction techniques. The resulting performance on both the TU channel and the PA channel is assessed.
This simulation used an SRS configuration with only 2 SRS users per SRS symbol (minimum co-channel interference) and 20 PRB SRS bandwidth in order to isolate the noise reduction performance. The channel gain estimator in dB scale used a clipping threshold of −20 dB for negative gain avoidance.

Least Mean Square (LMS) Filtering

The least mean square filtering method implements an LMS equalizer on the channel estimates Ĥu before computing the channel gain:

Ĥeq = CLMS·Ĥu  (24)

where CLMS is the NSRS-by-NSRS coefficient matrix minimizing the mean square error (MSE), computed as:

CLMS = Γ⁻¹ξ  (25)

where Γ is the covariance matrix of the sub-carrier samples and ξ is a matrix whose columns are shifted replicas of the frequency domain channel filter coefficients. In the link-level simulator, both Γ and ξ are selected according to the channel model in use. In a practical eNB implementation, different UEs may undergo different channels, so it can be quite complex to track the channel delay and amplitude profile of each UE independently. This patent application uses only the maximum delay spread information from the channel model and scales a sinc function accordingly to model both Γ and ξ, resulting in a common set of coefficients CLMS for all SRS users. FIGS. 17 and 18 compare the performance of the channel estimators with and without LMS filtering for both the TU channel and the PA channel. FIG. 17 plots the normalized mean square error σH² of the channel estimates Ĥ per sub-carrier per antenna in dB versus signal to noise ratio in dB for two SRS users. FIG. 17 includes four curves: TU channel with least mean square (LMS) filtering disabled; TU channel with LMS filtering enabled; PA channel with LMS filtering disabled; and PA channel with LMS filtering enabled. At low SNR, the LMS filter reduces the MSE by up to 3 dB for both the PA and TU channels. For the TU channel, the LMS filter creates an error floor for positive SNR values.
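A sketch of the LMS smoother of equations (24)-(25), with Γ and ξ built from a sinc-shaped frequency correlation as the text suggests. The normalized delay spread and noise level below are assumptions chosen for illustration, not parameters from the source.

```python
import numpy as np

def lms_coefficients(n_srs, delay_spread_norm, noise_var):
    idx = np.arange(n_srs)
    # Sinc model of the channel's frequency-domain correlation, scaled by
    # the maximum delay spread (normalized to the sub-carrier spacing).
    r = np.sinc(np.subtract.outer(idx, idx) * delay_spread_norm)
    gamma = r + noise_var * np.eye(n_srs)  # covariance of the noisy estimates
    xi = r                                 # columns: shifted channel filter replicas
    return np.linalg.solve(gamma, xi)      # C_LMS = Gamma^{-1} Xi  (25)

c_lms = lms_coefficients(n_srs=24, delay_spread_norm=0.05, noise_var=0.5)
h_u = np.ones(24)                          # toy raw per-sub-carrier estimates
h_eq = c_lms @ h_u                         # H_eq = C_LMS H_u  (24)
```

Because the same Γ and ξ are used for all users, CLMS is computed once and reused, matching the common-coefficient simplification described above.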
FIG. 18 is two plots of channel gain estimation mean error (FIG. 18A) and channel gain estimation standard deviation of error (FIG. 18B) versus signal to noise ratio in dB for systems with two SRS users. Each of FIGS. 18A and 18B illustrates four curves: TU channel with LMS filtering disabled; TU channel with LMS filtering enabled; PA channel with LMS filtering disabled; and PA channel with LMS filtering enabled. For the mean error performance (FIG. 18A), the LMS filter improves the mean error by 2 dB and 1.2 dB for the TU and PA channels respectively at low SNR, and improves the standard deviation performance (FIG. 18B) by 1 dB and 0.5 dB for the TU and PA channels respectively. For the TU channel, the LMS filter creates an error floor for positive SNR values. There is an SNR threshold for each channel where a crossover occurs between LMS filtering and no LMS filtering. Thus LMS filtering should only be used below these thresholds: TU channel: <0 dB; and PA channel: <10 dB.

Cyclic Shift Window Shrink

Another noise reduction technique shrinks the cyclic shift window n1(u), . . . , nL(u) when de-multiplexing the user, thus reducing the value of L. Since L is dimensioned to cope with the maximum expected delay spread of the user, reducing L creates a trade-off between the resulting channel estimation distortion and the achieved noise reduction. FIGS. 19 and 20 compare the performance of the channel estimators for various amounts of shrink for both the TU channel and the PA channel. FIG. 19 plots the normalized MSE performance σH² of the channel estimates Ĥ versus signal to noise ratio per sub-carrier per antenna for various window shrink amounts. FIG. 19A is for the TU channel. FIG. 19B is for the PA channel. FIGS. 19A and 19B each show four curves: window shrink 0%; window shrink 40%; window shrink 60%; and window shrink 80%. Different shrink amounts provide optimal noise reduction in different SNR regions. This is summarized in Table 4.
TABLE 4

CHANNEL MODEL: TU
SNR region (dB):             [−20, −8]   ]−8, −5]   ]−5, 0]   ]0, 20]
Cyclic shift window shrink:     80%         60%        40%       0%

CHANNEL MODEL: PA
SNR region (dB):             [−20, −10]  ]−10, 0]   ]0, 8]    ]8, 20]
Cyclic shift window shrink:     80%         60%        40%       0%

Given that the SNR regions differ between the TU channel and the PA channel, this requires that the eNB track both the SNR and the channel profile, or at least the delay spread of each SRS user. At low SNR, shrinking the cyclic shift window by up to 80% reduces the MSE by up to 6 dB for both the PA and TU channels.

FIG. 20 illustrates plots of the mean error of the channel gain estimator versus signal to noise ratio in dB. FIG. 20A is for the TU channel. FIG. 20B is for the PA channel. FIGS. 20A and 20B each show four curves: window shrink 0%; window shrink 40%; window shrink 60%; and window shrink 80%. FIG. 21 illustrates plots of the standard deviation of the channel gain estimator versus signal to noise ratio in dB. FIG. 21A is for the TU channel. FIG. 21B is for the PA channel. FIGS. 21A and 21B each show four curves: window shrink 0%; window shrink 40%; window shrink 60%; and window shrink 80%. For the mean error performance (FIG. 20), an 80% cyclic shift window shrink improves the low-SNR mean error by up to 2.1 dB and 2.5 dB for the TU channel and PA channel respectively. An 80% cyclic shift window shrink improves the standard deviation (FIG. 21) by 1 dB and 1.2 dB for the TU channel and PA channel respectively. This method provides the benefit of low complexity but is sensitive to the granularity of the time samples n1(u), . . . , nL(u) of the user's delay profile, which depends on the SRS bandwidth.
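Assuming the eNB tracks each user's rough SNR region as discussed above, the Table 4 mapping can be sketched as a small lookup. Treating the upper edge of each region as inclusive is an assumption about the boundary convention.

```python
def cyclic_shift_window_shrink(channel, snr_db):
    # Table 4: per-channel list of (upper SNR region edge in dB, shrink).
    regions = {
        "TU": [(-8.0, 0.8), (-5.0, 0.6), (0.0, 0.4), (20.0, 0.0)],
        "PA": [(-10.0, 0.8), (0.0, 0.6), (8.0, 0.4), (20.0, 0.0)],
    }
    for upper_edge, shrink in regions[channel]:
        if snr_db <= upper_edge:
            return shrink
    return 0.0  # beyond the tabulated range: no shrink
```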
It is clear from Table 1 that some SRS bandwidth configurations lead to such a small number L of samples in the cyclic shift window that shrinking this value further will lead to more distortion errors. FIGS. 21, 22 and 23 check the impact of the SRS bandwidth (4, 8 and 20 PRBs) on the cyclic shift window shrink performance. FIG. 21 illustrates plots of the standard deviation of the channel gain estimator error versus signal to noise ratio in dB. FIG. 21A is for the TU channel. FIG. 21B is for the PA channel. FIGS. 21A and 21B each show four curves: window shrink 0%; window shrink 40%; window shrink 60%; and window shrink 80%. FIG. 22 illustrates plots of the mean square error of the channel estimates versus signal to noise ratio for various conditions. FIG. 22A includes six curves for a window shrink of 80%: TU channel with an SRS bandwidth of 20 PRBs; TU channel with an SRS bandwidth of 8 PRBs; TU channel with an SRS bandwidth of 4 PRBs; PA channel with an SRS bandwidth of 20 PRBs; PA channel with an SRS bandwidth of 8 PRBs; and PA channel with an SRS bandwidth of 4 PRBs. FIG. 22B includes 4 curves for an SRS bandwidth of 4 PRBs: TU channel with a window shrink of 0%; TU channel with a window shrink of 80%; PA channel with a window shrink of 0%; and PA channel with a window shrink of 80%. FIG. 23 illustrates plots of the standard deviation of the channel estimate error versus signal to noise ratio for various conditions. FIG. 23A includes six curves for a window shrink of 80%: TU channel with an SRS bandwidth of 20 PRBs; TU channel with an SRS bandwidth of 8 PRBs; TU channel with an SRS bandwidth of 4 PRBs; PA channel with an SRS bandwidth of 20 PRBs; PA channel with an SRS bandwidth of 8 PRBs; and PA channel with an SRS bandwidth of 4 PRBs. FIG. 23B includes 4 curves for an SRS bandwidth of 4 PRBs: TU channel with a window shrink of 0%; TU channel with a window shrink of 80%; PA channel with a window shrink of 0%; and PA channel with a window shrink of 80%.
As can be seen from FIGS. 21A, 22A and 23A, where an 80% shrink is applied at low SNR where it is most useful, the SRS bandwidth has negligible impact on the noise reduction performance for the TU channel. It matters more for the PA channel due to its already small cyclic shift window (Table 3). At 4 PRB SRS bandwidth (FIGS. 21B, 22B and 23B), the low-end SNR PA channel performance is degraded to such an extent that it becomes aligned with the TU channel performance. This should be compared, though, with the 4 PRB SRS performance without cyclic shift window shrink. FIGS. 21B, 22B and 23B show that the noise reduction gain resulting from an 80% cyclic shift window shrink is preserved, compared to that achieved with 20 PRB SRS. There is also better performance in the PA channel with smaller bandwidths at high SNR. This is due to the very limited channel variation of the PA channel across such small bandwidths; the per-subcarrier channel estimation therefore matches the actual channel more easily. Shrinking the cyclic shift window makes the estimator less robust to timing errors, which may require compensating for them before channel estimation. FIGS. 25 and 26 illustrate the impact of a ±0.5 μs timing error on the channel and channel gain estimation performance in the worst-case configuration for this noise reduction technique. FIG. 25 illustrates the mean square error of the channel estimates for 4-PRB SRS bandwidth and 80% cyclic shift window shrink versus signal to noise ratio. FIG. 25 includes four curves: TU channel with no timing error; TU channel with ±0.5 μs timing error; PA channel with no timing error; and PA channel with ±0.5 μs timing error. FIG. 26 illustrates the channel gain estimation mean error (FIG. 26A) and the standard deviation of the channel gain estimation error (FIG. 26B) versus signal to noise ratio. FIGS. 26A and 26B each include four curves: TU channel with no timing error; TU channel with ±0.5 μs timing error; PA channel with no timing error; and PA channel with ±0.5 μs timing error.
FIGS. 25 and 26 use a ±0.5 μs timing error because the granularity of the TA command is expected to set the maximum residual timing error from the closed-loop UL timing synchronization procedure. At low SNR, where the technique is most useful (less than −10 dB, see Table 5), the timing errors have negligible impact on the noise reduction performance for both the TU channel and the PA channel.

Combined Cyclic Shift Window Shrink and LMS Filtering

FIGS. 27 and 28 illustrate whether the gains of the two noise reduction techniques can be accumulated. FIGS. 27 and 28 provide the channel estimation performance when an 80% shrink is applied to the cyclic shift window. FIG. 27 illustrates plots of the mean square error of the channel estimates versus signal to noise ratio with and without least mean square filtering. FIG. 27 illustrates four curves: TU channel without LMS filtering; TU channel with LMS filtering; PA channel without LMS filtering; and PA channel with LMS filtering. FIG. 28 illustrates the mean channel gain estimation error (FIG. 28A) and the standard deviation of the channel gain estimation error (FIG. 28B) versus signal to noise ratio in dB for the TU channel and the PA channel. FIG. 28 includes curves for: TU channel without LMS filtering; TU channel with LMS filtering; PA channel without LMS filtering; and PA channel with LMS filtering. FIGS. 27 and 28 illustrate that enabling or disabling LMS equalization on top of the window shrink does not have any impact on performance. Thus these two techniques are not cumulative and should be used separately.

Noise Reduction Techniques Summary

FIGS. 27 and 28 show that noise reduction techniques should be used selectively depending on the SNR region. Some rough a-priori knowledge of the UE geometry must be assumed. This knowledge can be derived either from long-term SNR tracking for each UE or from a preliminary instantaneous SNR estimation.
Given the higher complexity of the latter option, which requires multiple channel estimation steps (preliminary, then final), the former approach is preferable. The SNR region boundaries are channel or delay spread specific, so the eNB should track each user's delay spread for this purpose. Table 5 summarizes the performance comparison between the two noise reduction techniques and shows that the cyclic shift window shrink outperforms LMS filtering. From a complexity viewpoint, the LMS filter is also more costly. This makes the cyclic shift window shrink the best option for noise reduction.

TABLE 5

NOISE REDUCTION TECHNIQUE:        LMS FILTERING       CYCLIC SHIFT WINDOW SHRINK
Channel model:                    TU        PA        TU        PA
Channel estimation MSE:           3 dB      3 dB      6 dB      6 dB
Channel gain mean error:          2 dB      1.2 dB    2.1 dB    2.5 dB
Channel gain standard deviation:  1 dB      0.5 dB    1 dB      1.2 dB

Channel and Channel Gain Estimation Summary

FIGS. 29 to 31 provide a comprehensive set of channel and channel gain performance plots with TU and PA channels for varying numbers of SRS users and varying bandwidth.
Since one cyclic shift is reserved for noise variance estimation for each SRS comb, the remaining number of multiplexed users per SRS symbol is 2, 6 and 14 with 2, 4 and 8 cyclic shifts per comb respectively. FIG. 29 illustrates the mean square error of the channel estimates versus signal to noise ratio for various numbers of SRS users at 20-PRB SRS bandwidth (FIG. 29A) and for 6 TU channel users and 14 PA channel users (FIG. 29B). FIG. 29A has six curves: TU channel and 2 SRS users; TU channel and 6 SRS users; TU channel and 14 SRS users; PA channel and 2 SRS users; PA channel and 6 SRS users; and PA channel and 14 SRS users. FIG. 29B has 6 curves: TU channel with an SRS bandwidth of 20 PRBs; TU channel with an SRS bandwidth of 8 PRBs; TU channel with an SRS bandwidth of 4 PRBs; PA channel with an SRS bandwidth of 20 PRBs; PA channel with an SRS bandwidth of 8 PRBs; and PA channel with an SRS bandwidth of 4 PRBs. FIG. 30 illustrates the channel gain estimation mean error versus signal to noise ratio for various numbers of SRS users at 20-PRB SRS bandwidth (FIG. 30A) and for 6 TU channel users and 14 PA channel users (FIG. 30B). FIG. 30A has six curves: TU channel and 2 SRS users; TU channel and 6 SRS users; TU channel and 14 SRS users; PA channel and 2 SRS users; PA channel and 6 SRS users; and PA channel and 14 SRS users. FIG. 30B has 6 curves: TU channel with an SRS bandwidth of 20 PRBs; TU channel with an SRS bandwidth of 8 PRBs; TU channel with an SRS bandwidth of 4 PRBs; PA channel with an SRS bandwidth of 20 PRBs; PA channel with an SRS bandwidth of 8 PRBs; and PA channel with an SRS bandwidth of 4 PRBs. FIG. 31 illustrates the channel gain estimation standard deviation versus signal to noise ratio for various numbers of SRS users at 20-PRB SRS bandwidth (FIG. 31A) and for 6 TU channel users and 14 PA channel users (FIG. 31B). FIG. 31A has six curves: TU channel and 2 SRS users; TU channel and 6 SRS users; TU channel and 14 SRS users; PA channel and 2 SRS users; PA channel and 6 SRS users; and PA
channel and 14 SRS users. FIG. 31B has 6 curves: TU channel with an SRS bandwidth of 20 PRBs; TU channel with an SRS bandwidth of 8 PRBs; TU channel with an SRS bandwidth of 4 PRBs; PA channel with an SRS bandwidth of 20 PRBs; PA channel with an SRS bandwidth of 8 PRBs; and PA channel with an SRS bandwidth of 4 PRBs. These plots assume: noise reduction from cyclic shift window truncation with SNR-based selective truncation according to Table 4; and the channel gain estimator in dB scale, with a clipping threshold Gfloor of −20 dB for negative gain avoidance. FIGS. 29 to 31 show that TU channel error floors occur with 14 SRS users or at small bandwidth. This is due to the truncated delay spread (14 SRS users) or to channel band-pass filtering at small bandwidth caused by down-sampling at the de-mapping/IDFT stage. With the PA channel, the delay spread is small enough to prevent strong co-cyclic-shift interference, even with 16 users per symbol and down to 4-PRB bandwidth.

SNR Estimation

This section studies the impact of the noise and channel gain estimators previously described on the signal to noise ratio (SNR) estimation in support of a scheduler. The current description focuses on the SNR. The simulations modeled the thermal noise component. Interference from SRS users in other cells is expected to reflect the good cross-correlation characteristics of EZC sequences and can be approximated as Gaussian noise to first order.

SNR Expressions

The per sub-carrier SNR vector ρsc,p experienced at eNB antenna port p is expressed as:

ρsc,p = Hp²(a)/σN²  (26)

where Hp²(a) is an NSRS-size vector reflecting the channel gain experienced on antenna port p on the NSRS sub-carriers; a² is the averaged received power from the SRS user; and σN² is the variance of the AWGN sub-carrier samples.
This SNR is then combined across antennas to provide the "MRC'ed" per sub-carrier SNR vector ρsc = (ρf1, ρf2, …, ρfNSRS)ᵀ as:

ρsc = Σ_{p=1}^{A} ρsc,p  (27)

where A is the number of receive antennas and f1, f2, …, fNSRS are the sub-carriers allocated to the SRS. Practical MAC schedulers use a SINR frequency granularity larger than per-subcarrier, corresponding to the minimum frequency band of a user's allocation, referred to as a scheduling unit and defined as an integer number NRB of PRBs. There are typically two methods for SINR computation, depending on what type of scheduling unit the scheduler supports: a fixed scheduling unit size, referred to as Fixed Transmission Bandwidth (FTB); or a variable scheduling unit size, referred to as Adaptive Transmission Bandwidth (ATB). For FTB, the SINR is computed directly from per-subcarrier to per scheduling unit (1-step). ATB typically addresses Recursive Maximum Expansion (RME) scheduling algorithms, where different winners can have different allocation sizes depending on the scheduling metric envelope shape. The envelope is computed with per-PRB granularity for the simplest RME algorithm. This leaves the remaining averaging across PRBs to be computed only for the winners. In both cases, the short-term SINR per scheduling unit is computed by averaging ρsc across the sub-carriers of the same scheduling unit, thus providing the per scheduling unit effective SNR vector ρeff-su = (ρeff,s1, ρeff,s2, …, ρeff,sMSRS)ᵀ, where s1, s2, …, sMSRS are the MSRS = ⌊2NSRS/(NscRB·NRB)⌋ scheduling units in the SRS allocation. The averaging method depends on the OFDM access scheme and the type of equalizer used at the physical layer.
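Equations (26) and (27) amount to a per-antenna division followed by a sum across antennas. A minimal NumPy sketch follows; the array shapes and values are illustrative, not from the source.

```python
import numpy as np

def per_subcarrier_snr(channel_gain, noise_var):
    # Equation (26): per-antenna, per-sub-carrier SNR.
    return channel_gain / noise_var

def mrc_combine(snr_per_antenna):
    # Equation (27): sum the per-antenna SNR vectors (axis 0 = antenna ports).
    return np.sum(snr_per_antenna, axis=0)

gains = np.array([[2.0, 4.0, 1.0],     # antenna port 1, three sub-carriers
                  [1.0, 2.0, 3.0]])    # antenna port 2
rho_sc = mrc_combine(per_subcarrier_snr(gains, noise_var=0.5))
```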
With the single carrier property of the UL transmission and when ZF equalization is implemented at the L1 receiver, the effective SINR ρeff,sZF across the NscRB·NRB/2 consecutive sub-carriers of a scheduling unit s is computed as:

ρeff,sZF = [ (2/(NscRB·NRB)) Σ_{f=s·NscRB·NRB/2}^{(s+1)·NscRB·NRB/2} 1/ρf ]⁻¹  (28)

With the same transmission scheme but with MMSE equalization at the receiver, the effective SINR ρeff,sMMSE is computed through harmonic averaging as:

ρeff,sMMSE = [ ( (2/(NscRB·NRB)) Σ_{f=s·NscRB·NRB/2}^{(s+1)·NscRB·NRB/2} ρf/(1+ρf) )⁻¹ − 1 ]⁻¹ = [ (2/(NscRB·NRB)) Σ_{f=s·NscRB·NRB/2}^{(s+1)·NscRB·NRB/2} 1/(1+ρf) ]⁻¹ − 1  (29)

Given that MMSE is the most popular receiver for SC-FDMA, this invention only considers ρeff,sMMSE.

Performance of SNR Estimators

The per-subcarrier SNR estimators ρ̂sc-gen,p and ρ̂sc,p, with genie-aided and real AWGN variance estimation respectively, are:

ρ̂sc-gen,p = Ĥp²(a)/σN²; ρ̂sc-gen = Σ_{p=1}^{A} ρ̂sc-gen,p

ρ̂sc,p = Ĥp²(a)/σ̂N²; ρ̂sc = Σ_{p=1}^{A} ρ̂sc,p  (30)

The channel gain estimate Ĥp²(a) is given by Equation (16) with a clipping threshold Gfloor of −20 dB for negative gain avoidance and noise reduction from cyclic shift window truncation with selective truncation according to Table 4. The noise variance estimate σ̂N² is given by Equation (18).
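The two averaging rules of equations (28) and (29) can be sketched as follows, where `rho` holds the per-sub-carrier SINRs of one scheduling unit (the sample values in the test are illustrative). On a flat chunk both rules reduce to the common SINR value; on a frequency-selective chunk they diverge.

```python
import numpy as np

def effective_sinr_zf(rho):
    # Equation (28): harmonic mean of the per-sub-carrier SINRs.
    return 1.0 / np.mean(1.0 / rho)

def effective_sinr_mmse(rho):
    # Equation (29): MMSE-consistent averaging; the -1 terms make this
    # reduce to the arithmetic mean when the chunk is flat.
    return 1.0 / np.mean(1.0 / (1.0 + rho)) - 1.0
```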
In simulations, the measured mean (bias) and centered standard deviation performance of the above estimators, expressed in dB, are:

mρsc-gen,p = E{(ρ̂sc-gen,p)dB − (ρsc-gen,p)dB}; σρsc-gen,p = √E{[(ρ̂sc-gen,p)dB − (ρsc-gen,p)dB − mρsc-gen,p]²}

mρsc-gen = E{(ρ̂sc-gen)dB − (ρsc-gen)dB}; σρsc-gen = √E{[(ρ̂sc-gen)dB − (ρsc-gen)dB − mρsc-gen]²}

mρsc,p = E{(ρ̂sc,p)dB − (ρsc,p)dB}; σρsc,p = √E{[(ρ̂sc,p)dB − (ρsc,p)dB − mρsc,p]²}

mρsc = E{(ρ̂sc)dB − (ρsc)dB}; σρsc = √E{[(ρ̂sc)dB − (ρsc)dB − mρsc]²}  (31)

FIG. 32 shows the per-subcarrier SNR estimator performance for both TU and PA channels at 20-PRB SRS bandwidth when running 2 SRS users per symbol. FIG. 32 shows the mean signal to noise error (FIG. 32A) and the standard deviation of the signal to noise error (FIG. 32B) versus signal to noise ratio for various conditions with 20 PRB SRS bandwidth and 2 SRS users per symbol. FIGS. 32A and 32B each show eight curves: TU channel SNR per antenna with exact noise; TU channel SNR per antenna with estimated noise; TU channel SNR combined with exact noise; TU channel SNR combined with estimated noise; PA channel SNR per antenna with exact noise; PA channel SNR per antenna with estimated noise; PA channel SNR combined with exact noise; and PA channel SNR combined with estimated noise.

FIG. 32 shows that at low SNR, the per-antenna SNR estimation performance is very much in line with the channel gain performance. This confirms the good performance of the noise variance estimator. At high SNR, the noise variance estimate bias and large standard deviation due to co-channel interference create both a bias and a standard deviation rise in the SNR estimates. SNR estimation in support of a scheduler will rather use noise and interference estimation from the DMRS than from the SRS, because the DMRS is more representative of the noise and interference experienced by the PUSCH. Therefore, this is not a major issue and this invention uses ideal noise estimates in the following SNR performance investigations.
The expected SNR estimation standard deviation improvement when combining the estimates across antennas is 2 to 2.5 dB. The per-chunk SNR estimators ρ̂ch-H and ρ̂ch-A with harmonic and arithmetic averaging are respectively:

ρ̂ch-H = [ (2/(NscRB·NRB)) Σ_{f=s·NscRB·NRB/2}^{(s+1)·NscRB·NRB/2} 1/(1 + ρ̂sc-gen(f)) ]⁻¹ − 1  (32)

ρ̂ch-A = (2/(NscRB·NRB)) Σ_{f=s·NscRB·NRB/2}^{(s+1)·NscRB·NRB/2} ρ̂sc-gen(f)  (33)

From simulations, the mean (bias) and centered standard deviation performance of the above estimators, expressed in dB, are:

mρch-H = E{(ρ̂ch-H)dB − (ρch-H)dB}; σρch-H = √E{[(ρ̂ch-H)dB − (ρch-H)dB − mρch-H]²}

mρch-A = E{(ρ̂ch-A)dB − (ρch-A)dB}; σρch-A = √E{[(ρ̂ch-A)dB − (ρch-A)dB − mρch-A]²}  (34)

FIGS. 33 and 34 show the per-chunk SNR estimator performance for both the TU channel and the PA channel at 20-PRB SRS bandwidth when running 2 SRS users per symbol. FIG. 33 is the mean chunk SNR error versus signal to noise ratio for various chunk averaging schemes. FIG. 33A is for the TU channel. FIG. 33B is for the PA channel. FIGS. 33A and 33B each have four curves: 1 PRB chunk arithmetic averaging; 1 PRB chunk harmonic averaging; 5 PRB chunk arithmetic averaging; and 5 PRB chunk harmonic averaging. FIG. 34 is the standard deviation of the chunk SNR error versus signal to noise ratio for various chunk averaging schemes. FIG. 34A is for the TU channel. FIG. 34B is for the PA channel. FIGS. 34A and 34B each have four curves: 1 PRB chunk arithmetic averaging; 1 PRB chunk harmonic averaging; 5 PRB chunk arithmetic averaging; and 5 PRB chunk harmonic averaging.

FIGS. 33 and 34 illustrate that there is no difference between arithmetic and harmonic averaging on the TU channel for UE geometries below −5 dB and −10 dB for 1-PRB and 5-PRB chunks respectively. There is no difference at all across the SNR range between arithmetic and harmonic averaging on the PA channel. This is due to the flat behavior of the PA channel across the averaged sub-carriers, in which case Equation (29) simplifies to an arithmetic mean.
At high SNR, arithmetic averaging on the TU channel has a bias error of 0.5 dB and 1.4 dB for 1-PRB and 5-PRB chunks respectively, as well as a standard deviation performance that is worse than harmonic averaging by 0.5 dB and 0.9 dB for 1-PRB and 5-PRB chunks respectively. Thus, similarly to what was done for the channel gain estimation, harmonic or arithmetic averaging can be applied selectively depending on the UE's SNR. As for the SNR-based selective truncation, some rough a-priori knowledge of the UE geometry can be assumed sufficient to map the UE to either of the two SNR regions (high/low SNR) per the thresholds noted above. The benefit of this is that the lower complexity arithmetic averaging can be used whenever possible. FIGS. 33 and 34 also show that the chunk-SNR estimation performance improves with the chunk size as more averaging is performed.

Sub-Carrier Decimation

Another important complexity reduction comes from the sub-carrier decimation that can be applied when computing the per-chunk SNR. With 6 SRS sub-carriers per PRB, three decimation factors are possible: 2, 3 and 6. In order to minimize the decimation error, the resulting decimated samples are centered in the PRB, as illustrated in FIG. 35.
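A sketch of choosing decimated sub-carrier indices centered within a 6-sub-carrier PRB follows. The exact index placement is an assumption about the layout FIG. 35 depicts, not a rule stated in the source.

```python
import numpy as np

def centered_decimation_indices(n_sc_per_prb=6, factor=2):
    # Keep every `factor`-th sub-carrier, offset so the retained samples
    # sit as centrally as possible within the PRB (assumed convention).
    keep = n_sc_per_prb // factor
    offset = (factor - 1) // 2
    return np.arange(keep) * factor + offset
```

For example, a factor of 3 keeps two of the six sub-carriers, placed symmetrically one position in from each edge of the PRB.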
FIG. 36 shows the performance of the per-PRB SNR estimator ρ̂ch-H (chunk size = 1 PRB) when sub-carrier decimation is applied during the harmonic averaging, for 20-PRB SRS bandwidth and when running 6 and 14 SRS users per symbol for both the TU channel and the PA channel. FIG. 36A illustrates the mean chunk SNR error versus signal to noise ratio for various conditions. FIG. 36B illustrates the standard deviation of the chunk SNR error versus signal to noise ratio for various conditions. FIGS. 36A and 36B each have 8 curves: TU channel with a sub-carrier decimation factor of 1; TU channel with a sub-carrier decimation factor of 2; TU channel with a sub-carrier decimation factor of 3; TU channel with a sub-carrier decimation factor of 6; PA channel with a sub-carrier decimation factor of 1; PA channel with a sub-carrier decimation factor of 2; PA channel with a sub-carrier decimation factor of 3; and PA channel with a sub-carrier decimation factor of 6. FIG. 36 shows that a decimation factor of 6 (only one sub-carrier per PRB) should be precluded with the TU channel. In all other cases, the performance degradation from decimation does not exceed 0.1 dB. Thus sub-carrier decimation factors of up to 3 and 6 can be applied when computing the per-PRB SNR with the TU and PA channels respectively.
SNR Performance Summary

FIGS. 37 and 38 illustrate a comprehensive set of per-PRB SNR estimation performance plots for the TU channel and the PA channel when varying the number of SRS users and the SRS bandwidth. FIG. 37 illustrates the mean chunk SNR error versus signal to noise ratio for various numbers of SRS users at 20-PRB SRS bandwidth (FIG. 37A) and for 6 TU channel users and 14 PA channel users (FIG. 37B). FIG. 37A has six curves: TU channel and 2 SRS users; TU channel and 6 SRS users; TU channel and 14 SRS users; PA channel and 2 SRS users; PA channel and 6 SRS users; and PA channel and 14 SRS users. FIG. 37B has 6 curves: TU channel with an SRS bandwidth of 20 PRBs; TU channel with an SRS bandwidth of 8 PRBs; TU channel with an SRS bandwidth of 4 PRBs; PA channel with an SRS bandwidth of 20 PRBs; PA channel with an SRS bandwidth of 8 PRBs; and PA channel with an SRS bandwidth of 4 PRBs. FIG. 38 illustrates the standard deviation of the chunk SNR error versus signal to noise ratio for various numbers of SRS users at 20-PRB SRS bandwidth (FIG. 38A) and for 6 TU channel users and 14 PA channel users (FIG. 38B). FIG. 38A has six curves: TU channel and 2 SRS users; TU channel and 6 SRS users; TU channel and 14 SRS users; PA channel and 2 SRS users; PA channel and 6 SRS users; and PA channel and 14 SRS users. FIG. 38B has 6 curves: TU channel with an SRS bandwidth of 20 PRBs; TU channel with an SRS bandwidth of 8 PRBs; TU channel with an SRS bandwidth of 4 PRBs; PA channel with an SRS bandwidth of 20 PRBs; PA channel with an SRS bandwidth of 8 PRBs; and PA channel with an SRS bandwidth of 4 PRBs. One cyclic shift is reserved for noise variance estimation for each SRS comb. The remaining number of multiplexed users per SRS symbol is 2, 6 and 14 with 2, 4 and 8 cyclic shifts per comb respectively.
From the conclusions drawn in the previous sections, the following estimators are assumed: ρ̂ch-H with harmonic averaging on the TU channel for UEs beyond −5 dB SNR; ρ̂ch-A with arithmetic averaging for other UEs and for the PA channel; and a sub-carrier decimation factor of 3.

Timing Offset Estimation

Impact of Timing Errors

It is worth understanding first the impact of timing errors on the estimations performed on the SRS and the resulting performance loss of the per-PRB SNR estimation, which involves the channel gain estimation from the SRS. FIG. 39 illustrates that in the presence of timing errors, the user cyclic shift window n1(u), . . . , nL(u) in Equation (4) and FIG. 7 must be enlarged to account for the maximum expected timing uncertainty. FIG. 39 illustrates the case of four cyclic-shift multiplexed UEs per SRS comb with a 5 μs delay spread TU channel. The top part of FIG. 39 shows a plot of power delay profile versus time sample for four user windows. The bottom part of FIG. 39 shows a plot of the demultiplexed power delay profile versus the same time samples. The negative time offset samples are folded back at the end of the user window. In addition, for narrow channels such as the PA channel in FIG. 12, a timing uncertainty window as low as ±0.5 μs is already larger than the channel delay spread, which makes it impossible to implement the cyclic shift window shrink. This is not the case for the TU channel, for which the noise reduction technique is retained. FIG. 40 illustrates the performance degradation of the per-PRB SNR estimation with no sub-carrier decimation, in the presence of timing errors, for 20-PRB SRS bandwidth and when running 6 and 14 SRS users per symbol for both the TU channel and the PA channel. FIG. 40A is the mean chunk SNR error. FIG. 40B is the standard deviation of the chunk SNR error.
Both FIGS. 40A and 40B illustrate six curves: TU channel with a maximum timing error of ±0.0 μS; TU channel with a maximum timing error of ±0.5 μS; TU channel with a maximum timing error of ±1.0 μS; PA channel with a maximum timing error of ±0.0 μS; PA channel with a maximum timing error of ±0.5 μS; and PA channel with a maximum timing error of ±1.0 μS. FIG. 40 illustrates that the degradation is most severe for the PA channel, with up to 3 dB and 1.7 dB degradation at the low-end SNR for the mean and standard deviation, respectively. This is because the noise reduction technique based on cyclic shift window shrink must be disabled with the PA channel in the presence of non-compensated timing errors. For the TU channel, the noise reduction technique remains active and the performance loss due to timing errors is bounded by 1 dB and 1.5 dB for the mean and standard deviation, respectively. This is restricted to a small SNR region and is mainly due to the fact that the optimized shrink amounts and SNR regions from Table 4 are no longer optimal after adjusting the user cyclic shift window for timing errors as shown in FIG. 39, and should be tuned again. No significant difference is observed on either channel between a timing uncertainty window of ±0.5 μS and ±1.0 μS.

FIG. 41 illustrates the impact of narrowing the SRS bandwidth down to 4 PRBs, which further reduces the user cyclic shift window size, in the presence of timing errors of ±0.5 μS. FIG. 41 illustrates the mean chunk SNR error (FIG. 41A) and the standard deviation of the mean chunk SNR error (FIG. 41B) for various SRS bandwidths for both the TU channel and the PA channel. Both FIGS. 41A and 41B illustrate six curves: TU channel with an SRS bandwidth of 20 PRBs; TU channel with an SRS bandwidth of 8 PRBs; TU channel with an SRS bandwidth of 4 PRBs; PA channel with an SRS bandwidth of 20 PRBs; PA channel with an SRS bandwidth of 8 PRBs; and PA channel with an SRS bandwidth of 4 PRBs. In the worst case, an additional 0.5 dB loss can be seen in FIG. 41.
Timing Offset Estimation

One additional benefit of the time-domain based channel estimation is that it allows implementing a simple timing offset estimator from the concatenated delay profiles sequence y by combining the amplitude delay profiles across antennas and searching for the highest peak in the user's timing offset window:

î_u = argmax_i {p_i};  i ∈ I_τ,u;  p_i = Σ_{a=1..A} |y_{i,a}|²
τ̂_u = (î_u − C_u) T_S    (35)

where: A is the number of antennas; C_u is the cyclic shift of user u; T_S is the sampling period of sequence y; and I_τ,u is the timing offset window of user u, defined as:

I_τ,u = {−N_early, . . . , −1, 0, 1, . . . , N_late}
N_early = ⌈max(0.5 μs, τ_max)/T_S⌉
N_late = ⌈[W_M + max(0.5 μs, τ_max)]/T_S⌉
W_M = min(1 μs, τ)    (36)

where: I_τ,u(N_early+1) = 0 coincides with the first sample of the cyclic shift window of user u; ±τ_max is the maximum expected timing error; W_M is the main energy region within the user delay spread; and τ is the delay spread of the user. FIG. 42 illustrates this design principle of a user's timing offset window. FIG. 42 shows a plot of power delay profile versus time samples. The main energy region is enlarged on both sides by the maximum expected timing offset. For the TU channel, the main energy region is the first 1 μS of the user's cyclic shift window. For the PA channel, the main energy region is the delay spread of the channel, which is 0.9 μS.
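A minimal sketch of the estimator in Equations (35) and (36) follows. The array layout, window construction, and names are assumptions for illustration; a real implementation would also fold negative offsets back into the user window as in FIG. 39:

```python
import numpy as np

def estimate_timing_offset(y, C_u, window, Ts):
    """Eq. (35) sketch.

    y      : (num_samples, A) complex array holding the concatenated
             delay-profile sequence, one column per antenna.
    C_u    : first sample index of user u's cyclic shift window.
    window : absolute sample indices covered by the user's timing
             offset window I_tau_u of Eq. (36).
    Ts     : sampling period of the sequence y.
    """
    y = np.asarray(y)
    # p_i = sum over antennas of |y_{i,a}|^2 (combined amplitude delay profiles)
    p = {i: float(np.sum(np.abs(y[i]) ** 2)) for i in window}
    i_hat = max(p, key=p.get)        # highest peak in the window
    return (i_hat - C_u) * Ts        # tau_hat_u = (i_hat - C_u) * T_S

# Toy check: a dominant path 3 samples after the start of the user's
# cyclic shift window should give tau_hat = 3 * Ts.
rng = np.random.default_rng(0)
A, N, C_u, Ts = 2, 64, 10, 0.26e-6
y = 0.01 * (rng.standard_normal((N, A)) + 1j * rng.standard_normal((N, A)))
y[C_u + 3] += 1.0                    # dominant peak
window = range(C_u - 2, C_u + 8)     # Eq. (36) window, here chosen by hand
print(estimate_timing_offset(y, C_u, window, Ts))  # ≈ 3 * Ts
```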
FIG. 43 is the power delay profiles (PDP) of both the TU and PA channels as they would appear sampled after the IDFT and the cyclic shift demultiplexing in the absence of noise, for 20, 8 and 4-PRB SRS bandwidths. FIG. 43 is the average demultiplexed power delay profile for the TU channel (FIG. 43A) and the PA channel (FIG. 43B) versus delay for various SRS bandwidths. FIG. 43A includes three curves: an SRS bandwidth of 20 PRBs resulting in a 0.63 μS mean delay; an SRS bandwidth of 8 PRBs resulting in a 0.54 μS mean delay; and an SRS bandwidth of 4 PRBs resulting in a 0.95 μS mean delay. FIG. 43B includes three curves: an SRS bandwidth of 20 PRBs resulting in a 0.13 μS mean delay; an SRS bandwidth of 8 PRBs resulting in a 0.093 μS mean delay; and an SRS bandwidth of 4 PRBs resulting in a 0.15 μS mean delay. FIG. 43 illustrates that the narrower the SRS bandwidth, the coarser the power delay profile sampling. This affects the resulting mean delay, as measured from these samples.

FIG. 44 plots the timing estimation mean and standard deviation errors of the described algorithm for both the TU channel and the PA channel when varying the SRS bandwidth. The timing uncertainty of the SRS users is within ±1 μS. FIG. 44A shows the timing offset mean versus signal-to-noise ratio. FIG. 44B shows the timing offset standard deviation versus signal-to-noise ratio. Each of FIGS. 44A and 44B shows six curves: TU channel with an SRS bandwidth of 20 PRBs; TU channel with an SRS bandwidth of 8 PRBs; TU channel with an SRS bandwidth of 4 PRBs; PA channel with an SRS bandwidth of 20 PRBs; PA channel with an SRS bandwidth of 8 PRBs; and PA channel with an SRS bandwidth of 4 PRBs. Six and 14 SRS users are multiplexed per symbol with the TU channel and the PA channel, assuming the reserved cyclic shift per comb for noise estimation. FIG. 44 shows that for 20 and 8-PRB SRS bandwidths, the timing estimation mean converges, as SNR increases, to 0 and 0.35 μs for the PA channel and the TU channel, respectively.
In the latter case, this corresponds to the average delay of the TU channel in the main energy region, so that the estimator can be considered non-biased in the SNR region greater than or equal to −5 dB. Similarly, the standard deviation performance remains steady and below 0.5 μS in the same SNR region and for the same bandwidth configurations. With a 4-PRB SRS bandwidth, both mean and standard deviation performances deteriorate due to the resulting coarse granularity of the PDP sampling. The effect of adjacent users' spill-over on the timing offset window generates false alarms, resulting in wrong timing estimations irrespective of the SNR value. As a result, the following conclusions can be drawn:
The proposed low-complexity timing offset estimation algorithm is non-biased and shows quite steady performance in the SNR region where SNR is greater than or equal to −5 dB. For SNRs below −5 dB, it is recommended to accumulate the PDPs of subsequent SRSs to achieve the steady-state performance of the above SNR region.
The larger the SRS bandwidth, the better the estimation accuracy (standard deviation).
Tracking timing offsets as large as ±1 μS is impractical with an SRS bandwidth as small as 4 PRBs.
FIG. 45 plots the CDF of the timing estimation error from the described algorithm for both the TU and PA channels at −18, −12, −6 and 0 dB Es/N0, when varying the SRS bandwidth. From these curves, we extracted the percentage of timing offset estimates within 0.5 μS of the main peak. This is reported in Table 6. The above conclusions are further confirmed, and it can be measured that in the steady SNR region (SNR greater than or equal to −5 dB) and for 20-PRB and 8-PRB SRS bandwidths, ˜85% and close to 100% of timing offset estimates are within 0.5 μS of the main peak for the TU channel and the PA channel, respectively.
TABLE 6
            −18 dB        −12 dB        −6 dB         0 dB
SRS BW      TU     PA     TU     PA     TU     PA     TU     PA
20 PRBs     66%    76%    87%    97%    92%   100%    93%   100%
8 PRBs      48%    56%    72%    86%    84%    98%    88%   100%
4 PRBs      35%    30%    51%    46%    60%    59%    63%    62%

This patent application describes in detail the design choices for the LTE SRS channel, channel gain, noise variance and timing offset estimators, from theoretical derivations and performance evaluations. In particular, the proposed time-domain based channel estimation with group-UE cyclic shift de-multiplexing is a low-complexity approach that retains the inherent noise reduction performance on channel estimates while allowing the same upfront computation to be shared between the users' channel estimation, timing offset estimation and noise variance estimation. The unbiased channel gain estimation requires estimating and removing the noise variance by means of one reserved cyclic shift per SRS comb. Different noise removal techniques with negative gain avoidance are assessed. Applying a simple clipping threshold of 0.01 provides the best performance compromise across configurations. Further noise reduction techniques are investigated, showing that geometry-based selective cyclic shift window reduction outperforms other approaches such as LMS filtering. Different techniques to derive the per-PRB SNR from the achieved per-antenna per-subcarrier channel gain estimates are evaluated, and it is shown that low-complexity arithmetic averaging can be used on the PA channel but should be restricted to very low SNR (less than −5 dB) on the TU channel, above which harmonic averaging is mandated. An SRS sub-carrier decimation factor per comb of up to 3 allows reducing the complexity of the harmonic averaging computation without noticeable performance degradation. Comprehensive channel gain and SNR performance results obtained from realistic multi-user link-level simulations over a wide SNR range are presented and can be used for further reference in system simulations to model the measurement errors from SRS.
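The noise removal with negative-gain avoidance summarized above can be sketched as a subtract-and-clip operation. Treating the 0.01 clipping threshold as a floor relative to the estimated noise variance is an assumption made for this illustration; the function name and signature are also illustrative:

```python
import numpy as np

def noise_removed_gain(h_pow, noise_var, clip=0.01):
    """Unbiased channel gain sketch: subtract the noise variance
    (estimated on the reserved cyclic shift of the SRS comb) from the
    raw per-subcarrier channel power estimates, then clip from below so
    that noisy subcarriers never yield a negative or zero gain.

    h_pow     : raw |H|^2 estimates, biased upward by the noise.
    noise_var : noise variance from the reserved cyclic shift.
    clip      : clipping threshold (assumed relative to noise_var).
    """
    return np.maximum(h_pow - noise_var, clip * noise_var)

h_pow = np.array([5.0, 1.2, 0.5])   # raw per-subcarrier powers
print(noise_removed_gain(h_pow, noise_var=1.0))  # last value is clipped
```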
Reviewing the impact of timing errors on the above SNR estimator, a simple timing offset estimator is proposed, providing, for SNR greater than or equal to −5 dB and SRS bandwidths greater than or equal to 8 PRBs, more than 85% of timing offset estimates within 0.5 μS of the main peak of the channel. Lower SNRs would require accumulating the power delay profiles of subsequent SRSs to achieve the steady-state performance of the above SNR region, and with a 4-PRB SRS bandwidth, timing offset estimation should be employed with a timing uncertainty smaller than ±1 μS to avoid erroneous estimates due to adjacent cyclic shift users' spill-over.

FIG. 46 is a block diagram illustrating internal details of an eNB 1002 and a mobile UE 1001 in the network system of FIG. 1. Mobile UE 1001 may represent any of a variety of devices such as a server, a desktop computer, a laptop computer, a cellular phone, a Personal Digital Assistant (PDA), a smart phone or other electronic devices. In some embodiments, the electronic mobile UE 1001 communicates with eNB 1002 based on an LTE or Evolved Universal Terrestrial Radio Access Network (E-UTRAN) protocol. Alternatively, another communication protocol now known or later developed can be used. Mobile UE 1001 comprises a processor 1010 coupled to a memory 1012 and a transceiver 1020. The memory 1012 stores (software) applications 1014 for execution by the processor 1010. The applications could comprise any known or future application useful for individuals or organizations. These applications could be categorized as operating systems (OS), device drivers, databases, multimedia tools, presentation tools, Internet browsers, emailers, Voice-Over-Internet Protocol (VOIP) tools, file browsers, firewalls, instant messaging, finance tools, games, word processors or other categories.
Regardless of the exact nature of the applications, at least some of the applications may direct the mobile UE 1001 to transmit UL signals to eNB (base station) 1002 periodically or continuously via the transceiver 1020. In at least some embodiments, the mobile UE 1001 identifies a Quality of Service (QoS) requirement when requesting an uplink resource from eNB 1002. In some cases, the QoS requirement may be implicitly derived by eNB 1002 from the type of traffic supported by the mobile UE 1001. As an example, VOIP and gaming applications often involve low-latency uplink (UL) transmissions while High Throughput (HTP)/Hypertext Transmission Protocol (HTTP) traffic can involve high-latency uplink transmissions. Transceiver 1020 includes uplink logic which may be implemented by execution of instructions that control the operation of the transceiver. Some of these instructions may be stored in memory 1012 and executed when needed by processor 1010. As would be understood by one of skill in the art, the components of the uplink logic may involve the physical (PHY) layer and/or the Media Access Control (MAC) layer of the transceiver 1020. Transceiver 1020 includes one or more receivers 1022 and one or more transmitters 1024. Processor 1010 may send data to and receive data from various input/output devices 1026. A subscriber identity module (SIM) card stores and retrieves information used for making calls via the cellular system. A Bluetooth baseband unit may be provided for wireless connection to a microphone and headset for sending and receiving voice data. Processor 1010 may send information to a display unit for interaction with a user of mobile UE 1001 during a call process. The display may also display pictures received from the network, from a local camera, or from other sources such as a Universal Serial Bus (USB) connector. Processor 1010 may also send a video stream to the display that is received from various sources such as the cellular network via RF transceiver 1020 or the camera.
During transmission and reception of voice data or other application data, transmitter 1024 may be or become non-synchronized with its serving eNB. In this case, it sends a random access signal. eNB 1002 comprises a processor 1030 coupled to a memory 1032, symbol processing circuitry 1038, and a transceiver 1040 via backplane bus 1036. The memory stores applications 1034 for execution by processor 1030. The applications could comprise any known or future application useful for managing wireless communications. At least some of the applications 1034 may direct eNB 1002 to manage transmissions to or from mobile UE 1001. Transceiver 1040 comprises an uplink resource manager, which enables eNB 1002 to selectively allocate uplink Physical Uplink Shared CHannel (PUSCH) resources to mobile UE 1001. As would be understood by one of skill in the art, the components of the uplink resource manager may involve the physical (PHY) layer and/or the Media Access Control (MAC) layer of the transceiver 1040. Transceiver 1040 includes at least one receiver 1042 for receiving transmissions from various UEs within range of eNB 1002 and at least one transmitter 1044 for transmitting data and control information to the various UEs within range of eNB 1002. The uplink resource manager executes instructions that control the operation of transceiver 1040. Some of these instructions may be located in memory 1032 and executed when needed on processor 1030. The resource manager controls the transmission resources allocated to each UE 1001 served by eNB 1002 and broadcasts control information via the PDCCH. Symbol processing circuitry 1038 performs demodulation using known techniques. Random access signals are demodulated in symbol processing circuitry 1038. During transmission and reception of voice data or other application data, receiver 1042 may receive a sounding reference signal from a UE 1001.
The sounding reference signal is processed by receiver 1042 to estimate the channel state, channel gain, noise power and timing error of UE 1001 according to the present invention. In this embodiment, the channel state, channel gain, noise power and timing error calculation is embodied by executing instructions stored in memory 1032 by processor 1030. In other embodiments, the channel state, channel gain, noise power and timing error calculation may be embodied by a separate processor/memory unit, by a hardwired state machine, or by other types of control logic, for example. In response to receiving the sounding reference signal, eNB 1002 may schedule an appropriate set of resources and notify UE 1001 with a resource grant as well as a timing advance command.
11863364

DESCRIPTION OF EXEMPLARY EMBODIMENTS

Hereinafter, the following description will be made while focusing on an NR based wireless communication system. However, the present invention is not limited thereto. The present invention is applicable to another wireless communication system, for example, 3rd generation partnership project (3GPP) long-term evolution (LTE)/LTE-A (advanced) or institute of electrical and electronics engineers (IEEE) systems having the same characteristics to be described below. A 5G system is a 3GPP system including a 5G access network (AN), a 5G core network (CN) and user equipment (UE). The UE may be called by other terms such as a mobile station (MS), a user terminal (UT), a subscriber station (SS), or a wireless device. A 5G AN is an access network including a non-3GPP access network and/or a new generation radio access network (NG-RAN) connected to the 5G CN. The NG-RAN is a wireless access network having a common characteristic of being connected to the 5G CN and supporting at least one of the following options:
1) Independent type new radio (NR).
2) The NR is an anchor having E-UTRA extension.
3) Independent type E-UTRA.
4) An E-UTRA is an anchor having NR extension.

FIG. 1 illustrates an NG-RAN architecture. Referring to FIG. 1, the NG-RAN includes at least one NG-RAN node. The NG-RAN node includes at least one gNB and/or at least one ng-eNB. A gNB/ng-eNB may be called a base station (BS) or an access point. A gNB provides an NR user plane and control plane protocol termination toward the UE. An ng-eNB provides an E-UTRA user plane and control plane protocol termination toward the UE. A gNB is connected with an ng-eNB through an Xn interface. The gNB and the ng-eNB are connected with the 5G CN through the NG interface. In detail, the gNB and the ng-eNB are connected with an access and mobility management function (AMF) through an NG-C interface, and are connected with a user plane function (UPF) through an NG-U interface.
The gNB and/or ng-eNB host the following functions:
Functions for radio resource management: radio bearer control, radio admission control, connection mobility control, dynamic allocation of resources to UEs in both uplink and downlink (scheduling);
Internet protocol (IP) header compression, encryption and integrity protection of data;
Selection of an AMF at UE attachment when no routing to an AMF can be determined from the information provided by the UE;
Routing of user plane data towards UPF(s);
Routing of control plane information towards AMF;
Connection setup and release;
Scheduling and transmission of paging messages;
Scheduling and transmission of system broadcast information (originated from the AMF or operations & maintenance (O&M));
Measurement and measurement reporting configuration for mobility and scheduling;
Transport level packet marking in the uplink;
Session management;
Support of network slicing;
Quality of service (QoS) flow management and mapping to data radio bearers;
Support of UEs in RRC INACTIVE state;
Distribution function for non-access stratum (NAS) messages;
Radio access network sharing;
Dual connectivity;
Tight interworking between NR and E-UTRA.

The AMF hosts the following main functions:
NAS signaling termination;
NAS signaling security;
AS security control;
Inter-CN node signaling for mobility between 3GPP access networks;
Idle mode UE reachability (including control and execution of paging retransmission);
Registration area management;
Support of intra-system and inter-system mobility;
Access authentication;
Access authorization including check of roaming rights;
Mobility management control (subscription and policies);
Support of network slicing;
Session management function (SMF) selection.
The UPF hosts the following main functions:
Anchor point for intra-/inter-radio access technology (RAT) mobility (when applicable);
External protocol data unit (PDU) session point of interconnect to data network;
Packet routing & forwarding;
Packet inspection and user plane part of policy rule enforcement;
Traffic usage reporting;
Uplink classifier to support routing traffic flows to a data network;
Branching point to support multi-homed PDU session;
QoS handling for user plane, e.g. packet filtering, gating, UL/DL rate enforcement;
Uplink traffic verification (service data flow (SDF) to QoS flow mapping);
Downlink packet buffering and downlink data notification triggering.

The SMF hosts the following main functions:
Session management;
UE IP address allocation and management;
Selection and control of UP function;
Configuring traffic steering at the UPF to route traffic to the proper destination;
Control part of policy enforcement and QoS;
Downlink data notification.

In the NR, a plurality of orthogonal frequency division multiplexing (OFDM) numerologies may be supported. The numerologies are mapped to different subcarrier spacings, respectively. For example, numerologies mapped to subcarrier spacings of 15 kHz, 30 kHz, 60 kHz, 120 kHz, and 240 kHz may be supported. Downlink (DL) transmission and uplink (UL) transmission are organized in frames having a length of 10 ms in the NR. One frame includes 10 subframes having a length of 1 ms. Each frame is divided into two half-frames of the same size. Half-frame 0 is configured by subframes 0-4. Half-frame 1 is configured by subframes 5-9. In a carrier, one set of frames is included in the UL and one set of frames is included in the DL. A slot is configured for each numerology in the subframe. For example, in the numerology mapped to a subcarrier spacing of 15 kHz, one subframe includes one slot. In the numerology mapped to a subcarrier spacing of 30 kHz, one subframe includes two slots.
In the numerology mapped to a subcarrier spacing of 60 kHz, one subframe includes four slots. In the numerology mapped to a subcarrier spacing of 120 kHz, one subframe includes eight slots. In the numerology mapped to a subcarrier spacing of 240 kHz, one subframe includes 16 slots. The number of OFDM symbols per slot remains 14. The start point of a slot in the subframe is aligned in time with the start point of an OFDM symbol. In the slot, an OFDM symbol may be classified as a DL symbol, a UL symbol, or a flexible symbol. In the DL slot, it may be assumed that DL transmission occurs only in a DL symbol or a flexible symbol. In the UL slot, the UE may perform UL transmission only in the UL symbol or the flexible symbol.

FIG. 2 illustrates an example of a subframe structure in an NR. The subframe structure of FIG. 2 may be used in a time division duplex (TDD) of the NR in order to minimize transmission delay of data. The subframe structure of FIG. 2 may be called a self-contained subframe structure. Referring to FIG. 2, the first symbol of the subframe includes a DL control channel, and the final symbol includes a UL control channel. Symbols from the second symbol to the thirteenth symbol of the subframe may be used for DL data transmission or UL data transmission. As described above, when DL transmission and UL transmission are sequentially performed in one subframe, the UE may receive DL data and transmit UL hybrid automatic repeat request (HARQ)-acknowledgement (ACK) in one subframe. As a result, the time taken for retransmission upon occurrence of a data transmission error may be reduced. Accordingly, the transfer delay of the final data may be minimized. In such a subframe structure, the base station and the UE may need a gap to switch from a transmission mode to a reception mode or from the reception mode to the transmission mode. To this end, some symbols at the time point of switching from DL to UL in the subframe structure may be configured as a guard period (GP).
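The numerology-to-slot mapping enumerated above follows a power-of-two pattern; a minimal sketch, assuming the 15 kHz × 2^μ relation implied by the listed spacings:

```python
def slots_per_subframe(scs_khz):
    """NR numerology sketch: the listed subcarrier spacings are
    15 * 2^mu kHz, a 1 ms subframe holds 2^mu slots, and every slot
    keeps 14 OFDM symbols regardless of numerology."""
    mu = {15: 0, 30: 1, 60: 2, 120: 3, 240: 4}[scs_khz]
    return 2 ** mu

for scs in (15, 30, 60, 120, 240):
    print(f"{scs} kHz -> {slots_per_subframe(scs)} slot(s)/subframe, "
          f"{14 * slots_per_subframe(scs)} symbols/subframe")
```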
A physical channel in the NR is described. An antenna port is defined such that the channel over which a symbol on the antenna port is conveyed can be inferred from the channel over which another symbol on the same antenna port is conveyed. If the large-scale properties of the channel over which a symbol on one antenna port is conveyed can be inferred from the channel over which a symbol on a different antenna port is conveyed, the two antenna ports have a quasi co-located (QCL) relation to each other. The large-scale properties include at least one of delay spread, Doppler spread, Doppler shift, average gain, average delay, and spatial reception parameter. With respect to each numerology and carrier, a resource grid consisting of a plurality of subcarriers and a plurality of OFDM symbols is defined. The resource grid starts from a specific common resource block indicated by higher layer signaling. There is one resource grid per antenna port, per numerology, and per transmission direction (DL or UL). Per antenna port and per numerology, each element in the resource grid is called a resource element (RE). A resource block (RB) is defined as 12 consecutive subcarriers in the frequency domain. Reference RBs are indexed from 0 upward in the frequency domain. Subcarrier 0 of reference RB 0 is common to all numerologies, and serves as a common reference point for the other RB grids. Common RBs are indexed from 0 upward in the frequency domain for each numerology. Subcarrier 0 of common RB 0 corresponds to subcarrier 0 of the reference RB in each numerology. Physical RBs (PRBs) and virtual RBs are defined within a bandwidth part (BWP), and are indexed from 0 upward within the BWP.
A BWP is defined as a contiguous set of PRBs selected from a contiguous set of common RBs for a given carrier and a given numerology. The UE may be configured with up to 4 BWPs in the DL, and only one DL BWP may be activated at a given time. The UE is not expected to receive a physical downlink shared channel (PDSCH), a physical downlink control channel (PDCCH), a channel state information reference signal (CSI-RS) or a tracking RS (TRS) outside an activated BWP. Further, the UE may be configured with up to 4 BWPs in the UL, and only one UL BWP may be activated at a given time. When the UE is configured with a supplemental UL (SUL), the UE may be configured with up to 4 BWPs in the SUL, and only one UL BWP may be activated at a given time. The UE cannot transmit a physical uplink shared channel (PUSCH) or a physical uplink control channel (PUCCH) outside an activated BWP. In the DL transmission scheme of the NR, closed-loop demodulation reference signal (DM-RS) based spatial multiplexing is supported for the PDSCH. Up to 8 and 12 orthogonal DL DM-RS ports are supported for type 1 and type 2 DM-RS, respectively. Up to 8 orthogonal DL DM-RS ports per UE are supported for single-user multiple-input multiple-output (SU-MIMO), and up to 4 DL DM-RS ports per UE are supported for multi-user MIMO (MU-MIMO). The number of SU-MIMO code words is 1 for 1-4 layer transmission and 2 for 5-8 layer transmission. The DM-RS and the corresponding PDSCH are transmitted using the same pre-coding matrix, and the UE does not need to know the pre-coding matrix in order to demodulate the transmission. The transmitter may use different pre-coder matrices for different parts of the transmission bandwidth, resulting in frequency-selective pre-coding. Further, the UE may assume that the same pre-coding matrix is used across a group of PRBs called a pre-coding RB group.
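The active-BWP constraints above (the UE neither receives nor transmits outside its single activated BWP, which is a contiguous group of common RBs) can be checked with a small helper; the function name, arguments, and common-RB indexing convention are assumptions for illustration:

```python
def allocation_in_active_bwp(alloc_crbs, bwp_start_crb, bwp_num_prbs):
    """Return True if every allocated common RB index lies inside the
    active BWP, modeled as the contiguous common-RB range
    [bwp_start_crb, bwp_start_crb + bwp_num_prbs)."""
    lo, hi = bwp_start_crb, bwp_start_crb + bwp_num_prbs
    return all(lo <= crb < hi for crb in alloc_crbs)

# Hypothetical active DL BWP covering common RBs 100..151 (52 PRBs).
print(allocation_in_active_bwp(range(110, 120), 100, 52))  # True: inside
print(allocation_in_active_bwp(range(148, 155), 100, 52))  # False: spills past the BWP
```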
DL physical layer processing of a transport channel consists of the following steps:
Transport block cyclic redundancy check (CRC) attachment;
Code block segmentation and code block CRC attachment;
Channel coding: low-density parity-check (LDPC) coding;
Physical layer hybrid-ARQ (HARQ) processing and rate matching;
Bit interleaving;
Modulation: quadrature phase shift keying (QPSK), 16 quadrature amplitude modulation (QAM), 64-QAM and 256-QAM;
Layer mapping and pre-coding;
Mapping to the assigned resources and antenna ports.

The UE may assume that at least one symbol with a DM-RS is present in each layer in which the PDSCH is transmitted to the UE. The number of DM-RS symbols and the resource element mapping are configured by higher layers. A TRS may be transmitted on additional symbols in order to aid receiver phase tracking. The PDCCH is used to schedule DL transmissions on the PDSCH and UL transmissions on the PUSCH. The downlink control information (DCI) on the PDCCH includes the following information:
DL assignments including at least modulation and coding scheme, resource assignment and HARQ information associated with the DL shared channel (DL-SCH);
UL scheduling grants including at least modulation and coding scheme, resource assignment and HARQ information associated with the UL shared channel (UL-SCH).
A control channel is formed by an aggregation of control channel elements, and each control channel element consists of a set of resource element groups. Different numbers of control channel elements are aggregated to realize different code rates for the control channel. Polar coding is used for the PDCCH. Each resource element group carrying the PDCCH carries its own DM-RS. QPSK modulation is used for the PDCCH.

FIG. 3 illustrates a time-frequency structure of an SS block.
A synchronization signal and physical broadcast channel (PBCH) block (hereinafter referred to as an ‘SS block’) consists of a primary synchronization signal (PSS) and a secondary synchronization signal (SSS), occupying 1 symbol and 127 subcarriers each, and a PBCH, which spans three symbols and 240 subcarriers but leaves an unused part in the middle of one symbol for the SSS. The transmission periodicity of the SS block may be determined by the network, and the time positions at which the SS block is transmitted are determined by the subcarrier spacing. Polar coding is used for the PBCH. Unless the network configures a different subcarrier spacing for the UE, the UE may assume a band-specific subcarrier spacing for the SS block. A PBCH symbol carries its own frequency-multiplexed DM-RS. QPSK modulation is used for the PBCH. When supported by the network, a wideband may be used in the NR. Further, in the NR, the bandwidth supported by the network may differ from the bandwidth supported by the UE. In this case, there is a need to clearly define how transmission and/or reception is performed between the network and the UE.

FIG. 4 illustrates an example of a system bandwidth and a bandwidth supported by the UE in an NR carrier. It is assumed in FIG. 4 that the bandwidth supported by the network is the system bandwidth. However, according to the required system bandwidth, the network may combine NR carriers. Further, the bandwidth supported by the UE may correspond to the BWP mentioned above. FIG. 4-(a) illustrates a case where the system bandwidth is the same as the bandwidth supported by the UE. FIG. 4-(b) illustrates a case where the system bandwidth differs from the bandwidth supported by the UE.
In FIG. 4-(b), the bandwidth supported by the UE may be less than the system bandwidth, or the bandwidth supported by the UE may be greater than the system bandwidth. FIG. 4-(c) illustrates a case where the UE supports a wideband using a plurality of radio frequency (RF) elements. Accordingly, the system bandwidth may be the same as the bandwidth supported by the UE. The plurality of RF elements may share a baseband element. Alternatively, an individual baseband element may be assigned per RF element. It is assumed in the present specification that a plurality of RF elements may share a baseband element/capability. The above may depend on UE capability.

FIG. 5 illustrates an example of carrier aggregation. If a plurality of NR carriers is aggregated to configure one carrier, the system bandwidth may be changed and the center frequency may be changed. However, the direct current (DC) subcarrier may or may not be changed according to the operation of the network. When the DC subcarrier is changed, the DC subcarrier may be indicated to the UE so that the UE can suitably process the DC subcarrier. Hereinafter, various embodiments of the present invention are described as follows.

1. Sub-Band Configuration

Depending on the synchronization signal (SS) including the primary synchronization signal/secondary synchronization signal/physical broadcast channel (PBCH), the relationship between an anchor sub-band including an SS block and a sub-band may vary. In order to dispose the anchor sub-band, the following options may be considered. The sub-band may correspond to the BWP mentioned above. The anchor sub-band may be called by another name such as an initial BWP.
(1) Option 1: The anchor sub-band may be located at only one of the determined sub-bands. The size of the sub-band may be determined based on the system bandwidth. The anchor sub-band may be located at only one of the sub-bands.
For example, if it is assumed that the system bandwidth is 400 MHz and the size of the sub-band is 100 MHz, the anchor sub-band may be located at one of 4 sub-bands. The SS block may be located at any position in the anchor sub-band. Meanwhile, if different bandwidths are supported by the network in the same frequency band, it may be preferred that the different bandwidths are aligned. For example, when one cell is operated at a bandwidth of 4*100 MHz and another cell is operated at a bandwidth of 400 MHz, a sub-band of 100 MHz may help to align the different bandwidths between cells in the same frequency band. However, under the above alignment, the position of the SS block may be limited.

The sub-band configuration may be defined per frequency range or per frequency band. For example, when a current LTE frequency band is reused as an NR frequency band or shared with an NR frequency band, the number of sub-bands may be 1 and the sub-band size may be the same as the system bandwidth. That is, the sub-band may not be supported in a frequency band equal to or overlapping with an LTE frequency band. Meanwhile, when the NR frequency band is redefined from at least one LTE frequency band, some UEs may not support the system bandwidth. Accordingly, in the frequency band equal to or overlapping with the LTE frequency band, a fixed sub-band size (e.g. 20 MHz or 10 MHz) according to UE minimum bandwidth requirements or a typical UE RF bandwidth may be configured. In this case, the position of the SS block may be limited according to the sub-band size. That is, some synchronization raster entries may not be used for mapping of a synchronization signal. This is because the SS block would otherwise straddle a sub-band boundary (that is, the SS block would not be fully included in one sub-band).
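The Option 1 arrangement described above can be sketched as follows. This is an illustrative sketch only; the function names are not defined by the specification, and the "SS block fully within one sub-band" check reflects the constraint stated above.

```python
def sub_band_grid(system_bw_mhz, sub_band_mhz):
    """Return the (start, end) edges of each sub-band, in MHz from the carrier edge."""
    assert system_bw_mhz % sub_band_mhz == 0
    n = system_bw_mhz // sub_band_mhz
    return [(i * sub_band_mhz, (i + 1) * sub_band_mhz) for i in range(n)]

def anchor_valid(ss_block_start_mhz, ss_block_bw_mhz, sub_band):
    """Under Option 1, the SS block must lie fully inside one sub-band."""
    start, end = sub_band
    return start <= ss_block_start_mhz and ss_block_start_mhz + ss_block_bw_mhz <= end

grid = sub_band_grid(400, 100)  # 400 MHz system bandwidth, 100 MHz sub-bands
print(len(grid))                # 4 candidate positions for the anchor sub-band
```

An SS block starting at 120 MHz with a 20 MHz footprint would, for example, be valid within the (100, 200) sub-band but not within (0, 100), which is why some synchronization raster entries near sub-band boundaries cannot be used.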
Since there is no mapping of the synchronization signal at such a synchronization raster entry, the UE does not need to search the corresponding synchronization raster entry.

(2) Option 2: An anchor sub-band may be configured based on initial synchronization. Based on the SS block, it may be assumed that the center of the SS block is the center of the anchor sub-band. Thus, the anchor sub-band may be implicitly configured. The size of the anchor sub-band may be determined in advance or may be defined by a master information block (MIB) in the SS block. In this case, when the frequencies on which the SS block is transmitted differ between neighbor cells, the sub-bands may not be aligned between the neighbor cells. Further, the subcarrier and RB grids may not be aligned.

(3) Option 3: An anchor sub-band may be configured separately from other sub-bands. That is, a sub-band configuration may be configured based on the system bandwidth or may be pre-configured per frequency range or per frequency band. The anchor sub-band on which the SS block is transmitted may not be associated with the sub-band configuration. Accordingly, the SS block may be transmitted at any position, and the anchor sub-band may be configured to partially or fully overlap with another sub-band.

FIG. 6 illustrates an example of an anchor sub-band configured separately from other sub-bands according to an embodiment of the present invention. Referring to FIG. 6, the UE is configured to support 3 sub-bands. However, the anchor sub-band is configured separately from the three configured sub-bands. In FIG. 6, the anchor sub-band is configured across sub-band 1 and sub-band 2, and an SS block is transmitted on the anchor sub-band. If a sub-band is configured/defined, a group of sub-bands may be indicated to the UE through group common signaling.

2. Configuration of Common Search Space (CSS)

A plurality of analog beams may be configured to transmit one SS block.
After detecting the SS block, it is assumed that the best combination of beams detected from the SS block is used to transmit a control channel. The best combination of beams detected from the SS block may be called a wide beam. Since there may be a plurality of beams in the wide beam, the same information may be transmitted through different beams. For example, if the UE knows the number of beams in the SS block and detects the optimal beam among the plurality of beams in the wide beam, the UE may monitor only the optimal beam to minimize power consumption for control channel monitoring.

If the network acquires information on the optimal beam, the network may configure a CSS and/or a UE-specific search space (USS) and/or a group common SS based on the corresponding information. That is, the network may define a CSI-RS resource in a QCL relationship with the control channel based on the corresponding information. That is, before the CSI-RS configuration, an SS block for control channel monitoring may be implicitly configured for the UE. After the CSI-RS configuration, a QCL CSI-RS resource for control channel monitoring may be indicated to the UE.

3. Initial Access Procedure and Configuration

The present invention describes a method for receiving an SS block including PSS/SSS/PBCH, with regard to the initial access procedure and configuration in NR.

FIG. 7 illustrates an example of reception of an SS block by different UEs according to an embodiment of the present invention. An initial BWP (or anchor sub-band) including an SS block may be changed based on a UE procedure. Referring to FIG. 7, BWP1 including the SS block read by UE1 differs from BWP2 including the SS block read by UE2, and both BWP1 and BWP2 are smaller than the system bandwidth. The centers of the two BWPs are spaced apart from the center of the system bandwidth by different offsets.
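The single-beam monitoring described in the common search space discussion above amounts to selecting the strongest detected beam and restricting control channel monitoring to it. A minimal sketch, with illustrative measurement values (the specification does not define this API):

```python
def best_beam(rsrp_per_beam):
    """Index of the strongest beam detected from the SS block, by RSRP."""
    return max(range(len(rsrp_per_beam)), key=lambda i: rsrp_per_beam[i])

# Per-beam RSRP in dBm for the beams of a wide beam (illustrative values).
rsrp = [-101.5, -96.2, -89.7, -94.0]
beam = best_beam(rsrp)
print(beam)  # the UE monitors only this beam's control channel occasions
```

Monitoring only the selected beam, rather than all beams of the wide beam, is what yields the power saving described above.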
When a control resource set (CORESET) for minimum system information (SI) or remaining minimum SI (RMSI) does not cover the SS block, a default BWP may be configured to include the SS block according to UE capability. That is, if the UE minimum bandwidth is greater than the sum of the RMSI bandwidth and the SS block bandwidth, and the RMSI CORESET and the SS block are contiguously multiplexed by frequency division multiplexing (FDM), the initial BWP may cover both the RMSI CORESET and the SS block. Otherwise, the initial BWP may cover the RMSI CORESET. After the network knows the bandwidth supported by the UE, the network may reconfigure, for the UE, a default BWP capable of including the SS block and the necessary RMSI CORESET bandwidth. When the UE reads the SS block, it may be assumed that the SS block bandwidth is the UE bandwidth.

A PBCH included in the SS block may include at least one of the following pieces of information. However, the following information may be transmitted through RMSI or UE specific signaling as well as the PBCH. In particular, with respect to a secondary cell (SCell), UE specific signaling is needed to transmit the following information.

(1) Carrier bandwidth

Option 1: An MIB transmitted through a PBCH may include information on a carrier bandwidth. The information on the carrier bandwidth may have a size of 3 bits. The information on the carrier bandwidth may include information on a group of carrier bandwidths. For example, 5, 20, 40, 80, or 100 MHz may be indicated for a band below 6 GHz, and 100, 200, or 400 MHz may be indicated for a band above 6 GHz. The actual bandwidth supported by the network may also be indicated. The information on the carrier bandwidth may include information on the potential maximum bandwidth in which the carrier is operated. That is, since the indicated carrier bandwidth is the potential maximum bandwidth, the UE does not need to assume the system bandwidth.
Further, for future forward compatibility, several states and/or reserved fields may be used. The reserved field may indicate an additional maximum system bandwidth. A future UE may assume the sum of a first carrier bandwidth and the additional maximum system bandwidth indicated by the reserved field as the maximum system bandwidth.

Option 2: An MIB transmitted through a PBCH may not include information on a carrier bandwidth. However, the carrier bandwidth may be indicated by SI such as RMSI. For future forward compatibility, at least one field may be used to implicitly convey system information. In order to support flexible network deployment or reconfiguration, no information on the system bandwidth may be indicated. When information on the system bandwidth is not indicated, PRB indexing may be performed based on 1 GHz or a maximum bandwidth such as 400 PRBs. For a future UE/network supporting 400 PRBs or more, PRB indexing may be performed in two groups of 0-399 and 400-X. A common data/control signal may be scheduled in a PRB having an index of 0-399, which is shared with a UE supporting a previous release. Other data/control signals may be scheduled on all PRBs. PRB indexing may be performed from the virtually lowest frequency. With a larger subcarrier spacing, the maximum number of PRBs may be changed. For example, when the maximum system bandwidth is 400 MHz, the maximum number of PRBs based on a subcarrier spacing of 120 kHz is 278, and the maximum number of PRBs based on a subcarrier spacing of 240 kHz is 139.

(2) Offset between a center of an SS block and a center of a system bandwidth

An MIB transmitted through a PBCH may include information on an offset between the center of the SS block and the center of the system bandwidth. Since the center of the SS block differs from the center of the system bandwidth, the above information may be indicated to the UE.
The above information may be included in the PBCH regardless of whether the information on the carrier bandwidth is included in the PBCH. When the information on the carrier bandwidth is included in the PBCH, or the RMSI bandwidth is the same as the PBCH bandwidth, the PBCH may include information on an offset between the center of the SS block and the center of the system bandwidth. Meanwhile, when the system bandwidth is indicated by the RMSI, or the RMSI is not located at the same bandwidth/frequency as the PBCH, the PBCH may instead include information on an offset between the center of the PBCH or the RMSI and the center of the system bandwidth. Further, for PRB indexing, the MIB transmitted through the PBCH may also include information on an offset between the PRB of the lowest index of the SS block and a virtual PRB 0. In detail, the MIB transmitted through the PBCH may indicate an offset between the subcarrier of the lowest index (subcarrier 0) of the SS block and the subcarrier of the lowest index (subcarrier 0) of a common RB.

The information on the offset between the center of the SS block and the center of the system bandwidth may be expressed as a value in units of a channel raster (or synchronization raster). If it is assumed that the channel raster is 100 kHz, the following options may be considered.

Option 1: Use the channel raster, with {6, 8, 9, 10, 10} bits for bandwidths of {5, 20, 40, 80, 100} MHz in a frequency band below 6 GHz.

Option 2: Use a synchronization raster defined by the channel raster and an offset.

Option 3: Use an RB bandwidth defined by the number of subcarriers and an offset. When the gap between two SS blocks equals a multiple of the RB bandwidth based on the numerology of the PSS/SSS/PBCH, offset related information may be omitted.
If it is assumed that the raster is 240 kHz, or a plurality of subcarriers or at least one RB based on the numerology used for the RMSI (or PSS/SSS/PBCH), the following options may be considered.

Option 1: Use the channel raster, with {9, 10, 11} bits for bandwidths of {100, 200, 400} MHz.

Option 2: Use a synchronization raster (e.g. 1440 kHz), with {7, 8, 9} bits for bandwidths of {100, 200, 400} MHz.

Option 3: Use an RB bandwidth defined by the number of subcarriers and an offset. When the gap between two SS blocks equals a multiple of the RB bandwidth based on the numerology of the PSS/SSS/PBCH, offset related information may be omitted.

The information on the offset between the center of the SS block and the center of the system bandwidth may be expressed as a positive or negative value according to whether the center of the system bandwidth is higher or lower than the center of the SS block. Meanwhile, when the information on the carrier bandwidth is included in the PBCH, the information on the offset between the center of the SS block and the center of the system bandwidth may use the maximum number of bits, assuming the maximum bandwidth supported by the carrier.

As described above, the information on the offset between the center of the SS block and/or the RMSI and the center of the system bandwidth, and/or the information on the offset between the PRB (or subcarrier) of the lowest index of the SS block and/or the RMSI and PRB 0 (or subcarrier 0) of the system bandwidth, may be indicated to the UE. Accordingly, the UE may perform common PRB indexing over the system bandwidth as well as PRB indexing within the BWP configured for the UE (i.e. local PRB indexing). The concept of the above local/common PRB indexing is applicable to scrambling of a control signal/data/reference signal (RS) in a BWP of the UE and/or RS generation and/or common data scheduling in an initial CSS.
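The numeric examples above — the maximum PRB counts per subcarrier spacing and the offset field bit widths per raster — both follow from simple raster arithmetic. A sketch (function names are illustrative; rounding up reproduces the counts cited in the text, while a deployed carrier would also account for guard bands):

```python
import math

def max_prbs(bandwidth_khz, scs_khz):
    """Maximum number of PRBs (12 subcarriers each) fitting a bandwidth, rounded up."""
    return math.ceil(bandwidth_khz / (12 * scs_khz))

def offset_bits(bandwidth_khz, raster_khz):
    """Bits needed to enumerate the raster positions that fit within a bandwidth."""
    return math.ceil(math.log2(math.ceil(bandwidth_khz / raster_khz)))

print(max_prbs(400_000, 120), max_prbs(400_000, 240))                # 278 139
print([offset_bits(bw * 1000, 100) for bw in (5, 20, 40, 80, 100)])  # [6, 8, 9, 10, 10]
print([offset_bits(bw * 1000, 240) for bw in (100, 200, 400)])       # [9, 10, 11]
print([offset_bits(bw * 1000, 1440) for bw in (100, 200, 400)])      # [7, 8, 9]
```

The computed bit widths match all three option tables above, which is why a coarser raster (e.g. the 1440 kHz synchronization raster) shrinks the offset field.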
That is, if the UE knows the system bandwidth from the information on the system bandwidth and/or the information on the offset between the center of the SS block and the center of the system bandwidth, scrambling of a control signal/data/RS in a BWP of the UE and/or RS generation and/or common data scheduling in the initial CSS may be performed based on the system bandwidth and common PRB indexing. This means that a sequence for scrambling of a control signal/data/RS and/or RS generation and/or common data scheduling in the initial CSS is generated across all PRBs in the system bandwidth. If the UE does not know the system bandwidth, scrambling of the control signal/data/RS in a BWP of the UE and/or RS generation and/or common data scheduling in the initial CSS may be performed based on the configured bandwidth (i.e. initial BWP) and local PRB indexing. This means that the sequence is generated across the PRBs in the BWP.

If information on an offset for common PRB indexing is provided by the RMSI rather than for the RMSI CORESET, common PRB indexing may be used for scrambling of the control signal/data/RS and/or RS generation and/or common data scheduling. When the RMSI CORESET is shared for monitoring of another radio network temporary identifier (RNTI), local scrambling/PRB indexing may be used for RMSI control signal/data monitoring, and common scrambling/PRB indexing may be used for monitoring other channels (non-RMSI control signals/data). In order to minimize the burden of channel estimation, if a CORESET is configured together with a wideband and the RMSI CORESET is shared with another transmission, local scrambling/PRB indexing may always be used. That is, RS sequence related parameters (e.g. length, offset, and the like) may be configured per CORESET. Such a method may be applicable only to the case where a wideband is configured.
That is, if a wideband is configured, RS sequence related parameters (e.g. length, offset, and the like) may be explicitly or implicitly configured per CORESET. For example, when a wideband is used as a default, local scrambling/PRB indexing may be used with respect to the RMSI CORESET. A similar scheme may be applicable to generation of an RS sequence. With respect to data, different RS sequences may be generated/used according to whether the UE knows the common PRB indexing. For example, an RMSI PDSCH may use an RS sequence based on local PRB indexing, while another PDSCH may use an RS sequence based on common PRB indexing. Alternatively, local scrambling/PRB indexing may be used for transmission of all common control signals. In order to transmit common data, one of local scrambling/PRB indexing and common scrambling/PRB indexing may be used. Common scrambling/PRB indexing may be used to transmit non-common control signals/data such as group common or UE specific signaling. Scrambling and/or DM-RS sequence related parameters/configurations may be set per BWP, and the initial DL/UL BWP may assume local scrambling/PRB indexing.

Scrambling of the control signal/data/RS and/or RS generation and/or common data scheduling in the initial CSS may also be performed based on a maximum system bandwidth. This is for the purpose of future forward compatibility, and the maximum system bandwidth may be defined as K times the actual maximum system bandwidth defined per frequency band or per frequency range. Resource allocation for data scheduling may be performed based on the configured bandwidth (i.e. initial BWP). That is, regardless of common PRB indexing based on the system bandwidth or a potential maximum system bandwidth, resource allocation for data scheduling may be performed based on local PRB indexing.

FIG. 8 illustrates a method for performing PRB indexing by a UE according to an embodiment of the present invention. The present invention described above is applicable to this embodiment.
At step S800, the UE receives, from the network through an SS block, information on an offset between the SS block and the system bandwidth. The information on the offset may include information on an offset between the PRB of the lowest index of the SS block and the PRB of the lowest index of the system bandwidth. In detail, the information on the offset may include information on an offset between subcarrier 0 of the SS block and subcarrier 0 of the system bandwidth. The information on the offset may include information on an offset between the center of the SS block and the center of the system bandwidth. The SS block may further include information on the system bandwidth. The information on the system bandwidth may include information on the potential maximum bandwidth in which the carrier is operated. The SS block may be included in an initial DL BWP. The information on the offset may be expressed as a value in units of a channel raster or a synchronization raster.

At step S810, the UE performs PRB indexing for the system bandwidth based on the information on the offset. That is, the UE may perform common PRB indexing. Scrambling of a control signal, data, and a reference signal may be performed based on the PRB indexing for the system bandwidth. Further, the reference signal may be generated based on the PRB indexing for the system bandwidth.

FIG. 9 illustrates an example of reception of an SS block according to an embodiment of the present invention. FIG. 9-(a) illustrates a system bandwidth, and common PRB indexing is defined for the PRBs included in the system bandwidth. The center of the system bandwidth does not correspond to the center of the SS block. Accordingly, information on an offset between the center of the SS block and the center of the system bandwidth, or information on an offset between the PRB of the lowest index of the SS block and PRB 0 of the system bandwidth, may be indicated to the UE.
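The common PRB indexing of steps S800-S810 amounts to translating a locally indexed PRB by the signaled offset. A minimal sketch (names are illustrative, not from the specification):

```python
def common_prb_index(local_prb, bwp_offset_prbs):
    """Map a PRB index local to the UE's BWP onto the common indexing defined
    over the system bandwidth, given the signaled offset (in PRBs) between
    PRB 0 of the BWP and PRB 0 of the system bandwidth."""
    return bwp_offset_prbs + local_prb

# Example: a BWP whose PRB 0 sits 30 PRBs above PRB 0 of the system bandwidth.
print(common_prb_index(0, 30))   # local PRB 0  -> common PRB 30
print(common_prb_index(12, 30))  # local PRB 12 -> common PRB 42
```

Scrambling and RS generation performed "based on the PRB indexing for the system bandwidth" would then consume the common index rather than the local one, so UEs with different BWPs derive the same sequence on the same physical PRB.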
It is assumed in FIG. 9-(a) that the center of the SS block is aligned with a synchronization raster of 15 kHz. FIG. 9-(b) illustrates the bandwidth configured for the UE, i.e. a BWP, and local PRB indexing is defined for the PRBs included in the BWP. Regardless of common PRB indexing, resource allocation for data scheduling may be performed based on local PRB indexing.

PRB indexing/scrambling for each control signal/data may be as follows.

(1) Cell common or UE group common control signal/data
- PRB indexing/scrambling in a BWP configured for data transmission
- PRB indexing/scrambling in a BWP configured for the CORESET for a control signal, and in a BWP configured for data transmission for data
- PRB indexing/scrambling in a system bandwidth or a maximum bandwidth (e.g. virtual PRBs based on common PRB indexing)
- PRB indexing/scrambling in a configured BWP which is or is not the same as a data bandwidth (e.g. a bandwidth for a sub-band)
- PRB indexing/scrambling based on a system bandwidth or a BWP (e.g. a carrier bandwidth or a maximum bandwidth) for a control signal/data

(2) UE specific control signal/data
- PRB indexing/scrambling in at least a BWP configured for UE specific data and a USS including a dedicated reference signal
- PRB indexing/scrambling based on a system bandwidth or a BWP (e.g. a carrier bandwidth or a maximum bandwidth) for a control signal including a shared reference signal, and PRB indexing/scrambling based on the configured BWP for the remainder

(3) Dedicated reference signal: PRB indexing/scrambling may be performed based on a BWP or an allocated PRB. In case of non-contiguous resource allocation, scrambling or sequence generation may be performed based on the bandwidth between the first PRB and the final PRB of the resource allocation.
Scrambling or sequence generation may also be performed based on a BWP or common PRB indexing in a maximum system bandwidth.

(4) Shared reference signal: PRB indexing/scrambling may be performed based on a system bandwidth, a CORESET using a shared reference signal, or a BWP. Scrambling or sequence generation may be performed based on a BWP or common PRB indexing in a maximum system bandwidth.

(5) Remaining reference signals: PRB indexing/scrambling may be performed based on a system bandwidth, a CORESET using a shared reference signal, or a BWP. Scrambling or sequence generation may be performed based on a BWP or common PRB indexing in a maximum system bandwidth.

For the purpose of future flexibility and potential extensibility, it may be considered that a sequence of a control signal/data/reference signal starts from the center frequency and is indexed out to a maximum bandwidth or a maximum PRB index. The maximum PRB index may be determined in advance or may be indicated by the PBCH/SIB. When considering the maximum PRB index, a PRB index close to the center frequency may be close to max_PRB/2. Otherwise, it may be difficult for UEs having different bandwidths to share the same resource for a control signal/data/reference signal. Common scrambling/PRB indexing may be used for at least shared control signals/data/reference signals, and local scrambling/PRB indexing may be used for UE specific control signals/data/reference signals.

4. Relationship Between Carrier Aggregation (CA) and BWP

For CA and a BWP configuration, two options may be considered.

(1) A carrier may be defined as a default BWP, and a UE may be configured with a default BWP with respect to each carrier. Further, a plurality of BWPs may be configured based on the default BWP. The default BWP may be defined as a default BWP of a carrier based on the SS block.
For example, if time/frequency synchronization (coarse synchronization) is acquired from an SS block of a different carrier, the default BWP of one carrier may be defined as a BWP including the SS block of the different carrier. That is, a BWP of a different frequency band or a different carrier including a synchronization reference such as an SS block may be used as the default BWP of a carrier. Alternatively, the default BWP may be defined as a group of PRBs. The default BWP may or may not include an SS block. When the default BWP does not include an SS block, the default BWP should include a time synchronization reference. Potentially, the default BWP may include a CSI-RS, a beam management RS, or a different tracking RS. After acquiring coarse time/frequency synchronization, the UE may perform additional tracking through a configured RS such as a beam management RS/tracking RS.

Alternatively, a default deactivated SCell may be configured, and the configuration of the SCell may include a configuration of a default deactivated BWP. The default BWP may be configured regardless of the position of the SS block. However, this may limit some measurement related characteristics similar to the primary cell (PCell). Further, the frequency positions of DL and UL (or one of the two in the case of an unpaired spectrum) may be included in the configuration of a carrier.

For activation of the default BWP, the following options may be considered.
- The default BWP may be activated when a carrier is configured. The default BWP may be used for radio resource management (RRM) measurement and basic beam management. Accordingly, the default BWP may be activated when a carrier is configured.
- The default BWP may be associated with a CORESET in a different carrier, or with at least one configured CORESET in the configured default BWP.
- When the default BWP is configured for the SCell, the UE may not assume that one BWP is automatically activated when at least one BWP is additionally configured. That is, activation of at least one of the configured BWPs may be indicated to the UE.
- When the default BWP is configured with at least one CORESET, the monitoring periodicity for each CORESET may be configured differently. More generally, a different monitoring periodicity for a given CORESET may be indicated by downlink control information (DCI) or a medium access control (MAC) control element (CE). Accordingly, before a certain active BWP may be used, or after a BWP is activated and before a carrier is activated, or between a discontinuous reception (DRX) inactive timer and an active timer, a different monitoring periodicity may be supported for the default BWP. If the monitoring periodicity is changed, a corresponding instruction may be transmitted through the same DCI without a BWP change. That is, the monitoring periodicity of a BWP configured with respect to a given BWP may also be indicated upon activation of the BWP. Alternatively, in order to allow a different monitoring periodicity, a separate DCI may be used. A DCI or a MAC CE for changing a beam direction may be used to reconfigure or change a CORESET related parameter. That is, a DCI may dynamically change a group of parameters for the CORESET, including the beam direction, the monitoring periodicity, and scrambling.

(2) A carrier may be defined as an offset to a center frequency position or a reference frequency position, and the PRB of the lowest index therefrom, and may be configured in the UE through an SCell configuration.
Further, a reference numerology used in the SCell may be configured, and the corresponding reference numerology may be used for the offset. Further, a reference to an SS block for synchronization, or to an SS block of a different carrier, may be configured. Upon configuration, the UE assumes that the carrier is deactivated. Further, the UE may be configured with a plurality of BWPs, and the switching mechanism of a single carrier or a PCell may be used among the plurality of BWPs. When at least one BWP in a carrier is activated, it is assumed that SCell activation is performed. The difference from the SCell is that no BWP may be activated at a certain time point, while at least one BWP may be activated. In this regard, since the UE does not monitor the CORESET on an inactive SCell, cross-carrier scheduling is needed to activate at least a first BWP. Accordingly, until an activated BWP can be used in a carrier, cross-carrier scheduling is needed. Thereafter, the UE may rely on the same cross-carrier scheduling. In this case, the reference frequency position may be the frequency position of an SS block when the corresponding carrier includes the SS block, or a virtual or center frequency position to which the UE will attempt retuning for measurement.

Additionally, the UE may be configured with the following information.
- Cell ID: A cell ID may be acquired from an SS block. A reference SS block may use a cell ID different from that used for the SCell. That is, the cell ID may be provided to the UE. In order to obtain coarse time/frequency synchronization, the position of the SS block may be used, and a different cell ID may be used in the SS block. For example, an SS block in a different carrier may become the reference. However, an SS block of a different carrier is such from the perspective of the UE.
From the perspective of the network, the cell ID may appear as an SS block in the same carrier.
- An offset between a reference point and PRB 0: PRB 0 may not be the actual PRB 0 of the carrier. PRB 0 may be selected so that all numerologies supported by the carrier may be aligned at the center of the carrier. That is, the offset may be a multiple of K in terms of PRBs, where K=SC_max/SC_0. SC_max is the maximum subcarrier spacing supported by the carrier, and SC_0 is the subcarrier spacing of the SS block. A PRB grid may be configured from PRB 0, which may not be aligned with the center of the SS block.
- A numerology used in the SCell: Unless indicated otherwise, the corresponding numerology may be used for a control signal and data. The SCell may support a plurality of numerologies. In this case, a default numerology may be configured by the SCell configuration, and another numerology may be additionally configured through RRC signaling.

Based on the above information, a cell may be defined by a combination of a cell ID, a reference point, a reference to the SS block (or a difference from the reference point), and a potential maximum bandwidth.

In summary, there are the following three options for the CA and BWP configuration.

(1) As a first option, an SCell may be configured and the SCell may maintain an inactive state. In the inactive state, the SCell may not have an active BWP before an active BWP is explicitly indicated or the SCell is explicitly activated. Accordingly, the UE does not need to monitor the CORESET in the SCell.

(2) As a second option, if the default BWP includes a CORESET configuration in the same carrier, the SCell may be configured with a default BWP to be activated. That is, for a cell other than the PCell, a CORESET capable of transmitting an activating DCI may be configured per default BWP configuration. If the CORESET is included in the same BWP, the UE may consider that the default BWP is activated when configured. Thereafter, the UE may be switched to another BWP.
If the CORESET is included in another carrier, the SCell maintains the inactive state. Cross-carrier or cross-BWP scheduling may be used to activate a BWP in the SCell from the corresponding different carrier.

(3) As a third option, a CORESET associated with the default BWP should be present. Accordingly, the default state of the default BWP may be an active state. That is, when the SCell is activated, there may be at least one BWP which is automatically activated, and the corresponding BWP may include an associated CORESET configuration in the SCell configuration. The corresponding CORESET may be cross-carrier scheduled by the PCell or another SCell. When the BWP is not configured with a CORESET, whenever the UE needs to retune to the default BWP, the configured/reference CORESET of the default BWP may be used for control channel monitoring. That is, the CORESET configuration for the default BWP may follow one of the following options.
- Explicit CORESET configuration in the default BWP
- A CORESET configured based on a previously configured CORESET in another carrier or another BWP
- No assumption with respect to a CORESET is configured; a PCell CSS or USS may be regarded as a CORESET usable for a carrier or a BWP in the SCell.

The configuration of a BWP may include an associated SS block (assumed to be the SS block for initial access when not given) or a default BWP. The configuration of the BWP may include information on a CORESET that can be monitored by self-BWP scheduling or cross-BWP scheduling with the given BWP. In an SCell configuration, the UE may be configured with at least one BWP, and at least one BWP may be indicated as a default BWP which is automatically activated upon activation. Further, the UE may be configured with a combination of a cell ID of the SCell, a reference point, and an SCell index (if possible, for example, upon cell activation). Moreover, the UE may be configured with a separate CORESET per BWP, or with a CORESET for at least the default BWP.
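Regarding the offset between a reference point and PRB 0 described above, the multiple-of-K constraint (K = SC_max/SC_0) can be checked as follows. A sketch under stated assumptions: the function names are illustrative, and the offset is taken in PRBs of the SS block numerology.

```python
def alignment_factor(sc_max_khz, sc0_khz):
    """K = SC_max / SC_0, e.g. a 120 kHz carrier maximum over a 15 kHz SS block."""
    assert sc_max_khz % sc0_khz == 0
    return sc_max_khz // sc0_khz

def offset_aligned(offset_prbs, sc_max_khz, sc0_khz):
    """True if the reference-point-to-PRB-0 offset is a multiple of K, so that
    every numerology supported by the carrier stays grid-aligned."""
    return offset_prbs % alignment_factor(sc_max_khz, sc0_khz) == 0

k = alignment_factor(120, 15)  # K = 8
print(k, offset_aligned(16, 120, 15), offset_aligned(13, 120, 15))
```

An offset of 16 PRBs satisfies the constraint for K = 8 while 13 does not, which is why PRB 0 may differ from the actual lowest PRB of the carrier.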
In addition, the UE may be separately configured with respect to a measurement target for the SCell. In a primary SCell (PSCell), the same configuration as the SCell configuration may be given in the BWP aspect. For activation, instead of configuring the default BWP, an initial BWP for initial access may be used as the default BWP. When assistance information from the PCell is considered, a default BWP may also be indicated. The UE may assume that the initial access is performed in the default BWP. That is, the default BWP may be indicated for the PSCell, and assistance information for initial access may be located in the default BWP. The default BWP needs to include an associated CORESET in the same carrier. For the purpose of activation of the SCell, the following options may be considered.

(1) A MAC CE for activating at least one SCell may be used, and the default BWP may be automatically activated.

(2) MAC CE activation for activation of at least one BWP may be generated simultaneously per SCell with respect to configured SCells. If at least one BWP is activated, the UE may assume that the SCell is activated. If it is considered that a specific SCell does not perform physical random access channel (PRACH) transmission, the SCell may be activated only when at least one DL BWP is activated. With respect to a carrier allowing PRACH transmission, there is a need to activate at least one UL BWP before the carrier is regarded as an active carrier.

(3) In order to activate at least one BWP in a configured PCell/SCell, a scheduling DCI may be used. In order to activate each BWP, a separate scheduling DCI may be used. In order to allow activation between BWPs regardless of the carrier including the BWP, cross-carrier or cross-BWP scheduling may be configured. That is, for example, if a carrier x includes BWP1 and BWP2, the BWP1 may be activated by a BWP3 in a carrier y, and the BWP2 may be activated by a BWP4 in a carrier z.
If there is a plurality of BWPs, at least one BWP may be cross-carrier or cross-BWP scheduled, and the remaining BWPs may be self-BWP scheduled. That is, separate cross-carrier or cross-BWP scheduling may be supported.

(4) A separate DCI instead of a MAC CE may be used in the option (2).

5. Default BWP

A BWP accessed during an initial access procedure (reception of the SS block, reception of the RMSI, reception of the random access response (RAR), and the like) may be regarded as a default BWP. An RMSI bandwidth may be regarded as a DL default BWP. A RACH bandwidth may be regarded as a UL default bandwidth. The UL default bandwidth may be the same as the DL default bandwidth (in addition to a TX-RX or duplex gap). If a frequency in which the RAR or MSG4 is received is reconfigured, the default BWP may be automatically changed according to the reconfiguration. That is, according to the configuration of the RACH procedure related message/CORESET, the default BWP during the initial access procedure may be switched. For the purpose of load balancing, it may be considered that the default BWP is switched from the initial BWP after connection. In order to support paging of a UE in an idle state, the UE may need to fall back to the initial BWP in which the SS block is firstly acquired. A BWP having an SS block for time/frequency synchronization and SS block based measurement may be configured as a fallback BWP. That is, if the UE is switched to an idle state, the default BWP may become the initial BWP, or a separate fallback BWP for the purpose of fallback may be configured. The BWP may be configured differently per UE for load balancing of paging. Each BWP may include an SS block which may differ from the initially accessed SS block. If the UE is directly configured with another BWP including an SS block which is capable of using a cell ID different from that of the initially accessed SS block, the UE may assume that the two SS blocks are quasi co-located (QCLed).
That is, if the UE is reconfigured with a BWP different from the initially accessed BWP during RRC connection configuration or in an idle state, the UE may assume that the initially accessed SS block and the reconfigured SS block have a QCL relationship. The QCL relationship may be explicitly indicated. The UE may perform reacquisition or an initial access procedure. If a new SS block and the initially accessed SS block do not have the QCL relationship, the UE may perform handover. An initial BWP may be configured to be activated simultaneously with SCell activation. If it is assumed that measurement is performed before activation, an initial BWP may not be associated with an SS block in a SCell. In summary, there is an initial BWP accessed in RRC connection configuration or in an idle state, and the initial BWP may include an SS block at a PCell. The SCell may not include the initial BWP. A PSCell needs to include the initial BWP. The initial BWP may be regarded as the default BWP before reconfiguration. The default BWP may be reconfigured. The reconfigured default BWP may not include an SS block. If the reconfigured default BWP includes an SS block, the UE may take the following into consideration.

- If the new SS block has a QCL relationship with the initial SS block, the UE may switch to the new SS block. This may be performed by an explicit configuration of the QCL relation. If it is indicated that the UE is reconfigured with a default BWP and the new default BWP includes an SS block, the UE may assume that the new BWP has the QCL relationship with the initial BWP.
- If the new BWP does not have the QCL relationship with the initial BWP, the UE may be indicated that the two BWPs do not have the QCL relationship and may perform rate matching with respect to only the new SS block.
- If a new BWP does not include an SS block, the UE may automatically assume that the new BWP has a QCL relationship with the initial BWP or a previous BWP.

6. BWP and SUL Carrier

In NR, a DL carrier may be associated with a UL carrier in a band different from that of the DL carrier. Such a characteristic may be considered for the following reasons.

- The number of UL carriers is smaller than the number of DL carriers. Accordingly, at least one DL carrier may be associated with the same UL carrier.
- There may be a SUL carrier associated with a paired DL/UL spectrum or a non-paired DL/UL spectrum. The DL carrier may be associated with only one UL carrier (i.e., a UL carrier or a SUL carrier in the same band) or with both of the two UL carriers (like a UL CA).

In this case, there is a demand to clearly define BWP configuration/activation.

(1) When at least one DL carrier is associated with one UL carrier

When the UL carrier corresponds to a UL spectrum in a paired DL/UL spectrum, activation/inactivation of the UL carrier may be performed independently. Otherwise, the UL carrier may be changed automatically or simultaneously with a DL carrier in the same frequency band. That is, the DL carrier in the same frequency band becomes a main carrier. Accordingly, a UL BWP may be changed. A switch command of the UL BWP may be transferred only in the main DL carrier. That is, another DL carrier may depend on the switch command in the main DL carrier. However, the UE may fail to receive the switch command of the UL BWP, and in particular ambiguity may occur when another DL carrier schedules PUSCH/PUCCH. To this end, another DL carrier may indicate the UL BWP, and the network may select the same BWP between different DL carriers. If the cell transfers a PUCCH, a PUCCH offset may be changed according to the change of the UL BWP. Accordingly, if different DL carriers indicate different UL BWPs at different times, confusion about the PUCCH resource may be caused.
For example, suppose two UL BWPs are configured and two DL carriers may dynamically indicate a switch of the UL BWP. The first DL carrier indicates the UL BWP to switch from a UL BWP1 to a UL BWP2, and the UE may fail to receive the corresponding command. In this case, if the second DL carrier transmits a PDSCH, it is ambiguous which PUCCH resource is used. A similar ambiguity exists in a case where a DL carrier is mapped to a UL carrier with a ratio of one to one. To this end, the network may monitor both of the two PUCCH resources, or a scheduling DCI for the PDSCH may include PUCCH BWP information as a resource indicator. That is, a scheduling DCI for the PDSCH may be used to switch a UL BWP. Further, when the UL BWP carrying the PUCCH is changed during accumulation of HARQ-ACK, another issue may occur. For example, DL slots n to n+m may be mapped to an HARQ-ACK of a single PUCCH resource, and the UL BWP carrying the PUCCH may be changed in the middle of DL slots n to n+m. In this case, a switch of the UL BWP carrying a PUCCH during accumulation of HARQ-ACK over a plurality of slots may not be allowed. Alternatively, a UL BWP for a new PUCCH during accumulation of HARQ-ACK over a plurality of slots may be used, and a resource selected for a previous UL BWP may be ignored. A DCI of the new UL BWP may include a new resource. Since the UE may fail to receive a switch command of the UL BWP, the following may be considered in this case. First, when a different resource is selected by a DCI different from a previous DCI with respect to the same PUCCH time resource (i.e., among DCIs scheduling a PDSCH onto the same PUCCH time resource), the new resource may be selected. If the UE fails to receive the new resource indication, information on the existing UL BWP may be used. If the UE receives a switch command of the UL BWP after a DCI scheduling the PDSCH, a resource indicated in the corresponding DCI may be used for the new UL BWP. A UL BWP carrying the PUCCH and a resource may be dynamically indicated.
In this case, this may be used to activate the new UL BWP. DCIs indicating different UL BWPs may not be multiplexed in the same PUCCH. A configuration of a new UL BWP may always be used. Meanwhile, the above description is applicable to other cases, including the case of mapping the DL carrier to the UL carrier with a ratio of 1 to 1.

(2) When one DL carrier includes an associated SUL carrier, and only one of the SUL carrier or the UL carrier in the same band as that of the DL carrier may be activated

In order to efficiently support switching of the carrier, a plurality of BWPs may be configured with respect to each UL carrier, and one BWP may be activated/inactivated. For a BWP configuration, a common PRB indexing for the SUL carrier may be performed. For example, information on a center or a reference point of the SUL carrier and information on an offset between the smallest PRB (virtual PRB) and the center or reference point of the SUL carrier may be indicated, and a common PRB indexing for the SUL carrier may be performed based thereon. If the UL BWP is changed, the PUCCH resource may also be changed. It may be assumed that the default UL BWP is the UL BWP used for a RACH procedure. The default BWP may be reconfigured afterward, or the default BWP may be changed according to a PRACH trigger in another carrier or another UL BWP. With respect to each UL BWP, a PRACH resource used for at least a PRACH trigger may be configured. The trigger message may include a BWP index to switch the UL BWP. The UE may perform a RACH procedure at the new initial/default UL BWP afterward. That is, the default UL BWP may be semi-statically or dynamically changed based on the RACH procedure. Necessary information associated with a cell ID used at the SUL carrier and a UL carrier in the same band as that of the DL carrier may be the same as if the SUL carrier and the UL carrier were included in different BWPs of the same carrier.
That is, a UL BWP switch between the SUL carrier and a UL carrier having the same band as that of the DL carrier may be used for switching between the two UL carriers. In order to support more robust system performance, a PUCCH carrier/cell and a PRACH carrier/cell may be included in the same carrier. That is, a default UL BWP in which the UE performs the PRACH and transmits the PUCCH may be configured in the same UL carrier. That is, with respect to at least the PCell, a PUCCH may not be configured in a carrier/cell in which the PRACH is not transmitted. In the case of the SCell, the PUCCH may be configured between the two UL carriers.

(3) When one DL carrier includes an associated SUL carrier, and both the SUL carrier and a UL carrier having the same band as that of the DL carrier may be activated

This case may be regarded as a UL CA including a single DL carrier or a DL CA. In this case, there is a need to support activation of the UL carrier, and activation of the UL carrier may be performed by carrier activation/inactivation. A different carrier may include only a DL carrier, only a UL carrier, or a paired DL/UL carrier. In order to support PRACH transmission on the SUL carrier, upon activation of the carrier, the paired DL/UL carrier and a UL dedicated carrier may be activated. At least one activated UL BWP may be configured in the paired DL/UL carrier and the UL dedicated carrier. The paired DL/UL carrier does not mean a paired spectrum. In the case of a non-paired spectrum, the paired DL/UL carrier is located at the same frequency. After the activation, the UE may transmit a PRACH on the SUL carrier. In the PCell, when the UE starts transmission of the PRACH on the SUL carrier, the SUL carrier may be automatically activated together with the paired UL carrier. Alternatively, upon activation of the carrier, one of the two UL carriers may be selected. Only the UL carrier selected according to an activation message may be activated.
Next, according to an explicit indication, an additional UL carrier may be activated. In the PCell, this may mean that the activated UL carrier is a UL carrier including a UL BWP in which PRACH transmission is initiated. Alternatively, upon activation of the carrier, if a PRACH configuration for a SUL carrier and a non-SUL carrier is given, the UL BWP may be activated in both the SUL carrier and the non-SUL carrier. The above procedure is applicable to an initial UL BWP at the PCell. When there is only one UL carrier to which the PRACH is transmitted (i.e., the second case), if a UL carrier transmitting the PUCCH is configured to be different from the UL carrier transmitting the PRACH, the network may indicate the UL BWP to be activated for PUCCH transmission in a PUCCH carrier configuration. The UL BWP indicated in the PUCCH carrier configuration may be activated. If a UL carrier transmitting the PUCCH is configured to be different from the UL carrier transmitting the PRACH, an initial UL BWP configured by an RMSI or a higher layer may be activated in the PUCCH carrier configuration. The activated UL BWP may be changed by RRC reconfiguration or DCI switching. If a non-paired DL/UL carrier and an SUL carrier are configured in one cell, BWP switching for the SUL carrier may be performed through a UL grant for the SUL carrier. If dynamic PUSCH change is not configured and a SUL carrier is selected as a PUCCH carrier, only DL BWP switching may be possible with respect to a non-paired DL/UL carrier regardless of a BWP pair. There is a need to clearly define whether a PUCCH resource is also adapted when the UL BWP is adapted. To this end, the following may be considered.

- A UL BWP carrying a PUCCH may always be configured based on a UL BWP configuration. When a plurality of UL BWP configurations is provided for UL BWPs including an initial/default UL BWP, different PUCCH resources may be configured per UL BWP configuration.
This may be similar to the case where a CORESET in a DL BWP may be configured per DL BWP.

- A UL BWP carrying a PUCCH may always be configured separately from a UL BWP carrying a PUSCH. The UE may ensure that a full bandwidth including the UL BWP carrying the PUCCH and the UL BWP carrying the PUSCH is included in the UE capability. Accordingly, the UE may be configured/indicated to switch the UL BWP carrying the PUCCH, which may not require switching the PUCCH resource. This is supported in the current CA: the UE is configured with a UL BWP in a SCell having no PUCCH resource, and a PUCCH is transmitted from the PCell. Similarly, in the PCell, the PUCCH and the PUSCH may be configured in different UL BWPs. In this case, the activated UL BWP may be defined as the UL BWP carrying the PUSCH instead of the UL BWP carrying the PUCCH.
- It may also be configured whether each UL BWP includes only a PUCCH, only a PUSCH, only the PUCCH and the PUSCH, or all of the PUSCH/PRACH/sounding reference signal (SRS). That is, which signal is transmitted may be configured in the configured UL BWP, and a plurality of BWPs may be configured.
- A set of PRBs approachable by resource allocation may be configured as well as a UL BWP available for PUCCH/PUSCH transmission. For example, one UL BWP may be configured to have 20 MHz for PUCCH diversity, and scheduling may be achieved at only 5 MHz. In order to reduce scheduling overhead, it may be considered to separately configure a PUSCH PRB zone.

Signaling suggested in the above description may be transmitted through common signaling such as RMSI/on-demand SI (OSI), UE specific signaling, and/or DCI. In some cases, different signaling may be used. In particular, different signaling may be used depending on how a cell is defined.

7. BWP Reconfiguration

When the UE supports only one BWP or the UE is reconfigured through RRC, RRC ambiguity may occur.
In order to minimize RRC ambiguity, the following may be considered.

(1) When an RRC message is transmitted to change the BWP, the corresponding RRC message may include both a DL BWP and a UL BWP, and further include an execution time point of the configuration. Before the execution time point, the network may perform retransmission in order to increase reliability.

(2) In order to minimize ambiguity, the network may consider that a new configuration is executed after receiving an acknowledgment from the UE. In this case, if the network fails to receive the acknowledgment, ambiguity may be caused. The network may retransmit the RRC message in the previous BWP and the currently activated BWP in order to increase reliability.

(3) A new configuration may be executed immediately after the UE receives the corresponding configuration. Alternatively, the new configuration may be executed K slots (or k ms) after the RRC message is scheduled (e.g., 20 ms from the RRC message). Ambiguity may be processed by the network. For example, the network may transmit a plurality of messages and control signals in the previous BWP and the currently activated BWP.

(4) When reconfiguring the activated BWP, a CORESET in which a fallback DCI is scheduled may not be changed. That is, a newly activated BWP may include at least one CORESET shared with a previously activated BWP. In the shared CORESET, resource allocation may be limited to be the same as that of the previous BWP.

8. RAR CORESET

It may be considered to separately configure a CORESET for the RAR different from the RMSI CORESET, taking beam aspects into consideration. If separate CORESETs are configured for the RMSI and the RAR, the RMSI CORESET may be called CORESET 0, and the RAR CORESET may be called CORESET 1. The CORESET 1 of index 1 may be defined as a special CORESET which may be reused after RRC connection. Monitoring of SIB/paging may be reconfigured to the CORESET 1 by an RRC configuration.
The CORESET 1 for the RAR configured in an initial DL BWP may have the following characteristics.

- For a configuration of the CORESET 1, frequency domain information may be configured. When the frequency domain information is not available, the same frequency domain as that of the CORESET 0 may be used for the CORESET 1. However, unlike the CORESET 0, a resource block group (RBG) may be configured based on a common PRB indexing, on the basis of information on a reference PRB 0 signaled in the RMSI. Since a partial PRB group in the first and/or final frequency domain is smaller than a complete 6-PRB group, for convenience, the corresponding fragmented PRBs may not be used for the CORESET 1. When a bitmap is provided, a bitmap of the size including only complete 6-PRB groups in the initial DL BWP may be indicated. Unless indicated otherwise, information on a QCL may be the same as for the CORESET 0. Information on a duration of the CORESET 1 may be explicitly configured.
- Unless explicitly configured otherwise, a resource element group (REG) bundle size and precoder granularity may follow the configuration of the CORESET 0. An interleaver size according to a PRB size reduced due to fragmented PRBs may substantially be 2. The interleaver size may be configured to be aligned.
- A DM-RS sequence may be created based on a common PRB indexing for the CORESET 1.
- The UE may not simultaneously monitor the CORESET 0 and the CORESET 1. Accordingly, after the RRC connection, the UE may be configured with a group of RNTIs monitored from a search space set associated with the CORESET 1. Once the CORESET 1 is configured, the UE may monitor SI and paging in the corresponding CORESET 1. That is, only an initial SIB according to beam sweeping may be scheduled in the CORESET 0, and remaining common data may be scheduled by the CORESET 1.
- Unless indicated otherwise, if the CORESET 1 is configured, the UE may monitor a UE specific RRC message such as Msg 4 in the CORESET 1 instead of the CORESET 0. After RRC connection, this may be reconfigured.
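The RBG rule above (groups of 6 PRBs defined on the common PRB grid, with fragmented edge groups excluded from CORESET 1) can be sketched as follows. This is an illustrative sketch; the function name and arguments are not taken from the document.

```python
def coreset1_rbg_candidates(bwp_start: int, bwp_size: int, rbg_size: int = 6):
    """Complete RBGs (aligned to the common PRB grid) inside an initial DL BWP.

    Groups are defined on common PRB indexing: group g spans PRBs
    [g*rbg_size, (g+1)*rbg_size). Fragmented groups at either BWP edge are
    excluded, mirroring the rule that partial 6-PRB groups are not used for
    CORESET 1. Returns the usable group indices and the bitmap length.
    """
    bwp_end = bwp_start + bwp_size          # exclusive common PRB index
    first_full = -(-bwp_start // rbg_size)  # ceil division: first group fully inside
    last_full = bwp_end // rbg_size         # groups below this index end inside
    groups = list(range(first_full, last_full))
    return groups, len(groups)
```

For example, a BWP starting at common PRB 2 with 20 PRBs (PRBs 2-21) contains only the two complete groups covering PRBs 6-11 and 12-17; the partial groups at both edges are dropped.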
From the perspective of PRB indexing and PRB grouping, the CORESET 1 may be handled differently from the CORESET 0. In this case, a CORESET configured by an SS block and/or an RMSI may be specially handled. It may be preferred to use a local PRB indexing only before a common PRB indexing is allowed. Accordingly, if a PRB 0 is indicated in the RMSI, a CORESET configured by the RMSI and/or UE specific signaling may follow the common PRB indexing. When the CORESET 0 collides with the CORESET 1 by simultaneous monitoring of the same search space, the UE may omit monitoring of the CORESET 0. That is, once the CORESET 1 is configured, the UE may not be requested to monitor the CORESET 0. If the UE is in an idle state, the UE may return to the initial DL BWP. Since a paging search space is associated with the CORESET 0, the UE may monitor a search space associated with the CORESET 0. If the UE starts a RACH procedure in an idle state, the UE may monitor the CORESET 1. UE monitoring may be considered as follows.

- UE in RRC idle state: initial DL BWP and CORESET 0
- UE performing a RACH procedure at a DL BWP: CORESET 1 and, if necessary, CORESET 0 for paging/SI

Before being reconfigured to another BWP or another CORESET, the UE may regard the CORESET 1 as a default CORESET for C-RNTI, semi-persistent scheduling (SPS), or transmit power command (TPC). In this case, a UE specific RNTI or a group specific RNTI may be monitored. Further, without explicit indication of an RNTI after reception of Msg 4, a system information RNTI (SI-RNTI) or a paging RNTI (P-RNTI) may be monitored in the CORESET 1.

FIG. 10 shows a block diagram of a wireless communication system to implement an embodiment of the present invention. A UE 1000 includes a processor 1010, a memory 1020, and a transceiver 1030. The memory 1020 is operatively coupled with the processor 1010 and stores a variety of information to operate the processor 1010.
The transceiver 1030 is operatively coupled with the processor 1010, and transmits and/or receives a radio signal to and from the network node 1100. The processor 1010 may be configured to implement proposed functions, procedures and/or methods described in this description. In detail, the processor 1010 may perform steps S800 to S810 in FIG. 8 or control the transceiver 1030 to perform the steps. A network node 1100 includes a processor 1110, a memory 1120, and a transceiver 1130. The memory 1120 is operatively coupled with the processor 1110 and stores a variety of information to operate the processor 1110. The transceiver 1130 is operatively coupled with the processor 1110, and transmits and/or receives a radio signal to and from the UE 1000. The processors 1010, 1110 may include an application-specific integrated circuit (ASIC), other chipset, logic circuit and/or data processing device. The memories 1020, 1120 may include read-only memory (ROM), random access memory (RAM), flash memory, memory card, storage medium and/or other storage device. The transceivers 1030, 1130 may include baseband circuitry to process radio frequency signals. When the embodiments are implemented in software, the techniques described herein can be implemented with modules (e.g., procedures, functions, and so on) that perform the functions described herein. The modules can be stored in the memories 1020, 1120 and executed by the processors 1010, 1110. The memories 1020, 1120 can be implemented within the processors 1010, 1110 or external to the processors 1010, 1110, in which case they can be communicatively coupled to the processors 1010, 1110 via various means as is known in the art. FIG. 11 illustrates a processor of the UE shown in FIG. 10. The processor 1010 of the UE includes a conversion pre-coder 1011, a subcarrier mapper 1012, an inverse fast Fourier transform (IFFT) unit, and a cyclic prefix (CP) insertion unit.
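The transmit chain just named (conversion pre-coder, subcarrier mapper, IFFT unit, CP insertion unit) corresponds to DFT-spread OFDM. A minimal NumPy sketch follows; the FFT size, CP length, and mapping position are illustrative assumptions, not values from the document.

```python
import numpy as np

def dft_s_ofdm_symbol(data, n_fft=256, cp_len=18, first_sc=0):
    """One DFT-s-OFDM symbol: conversion precoding (DFT spreading),
    localized subcarrier mapping, IFFT, and cyclic-prefix insertion.
    n_fft, cp_len, and first_sc are illustrative, not from the document."""
    m = len(data)
    freq = np.fft.fft(data) / np.sqrt(m)            # conversion pre-coder
    grid = np.zeros(n_fft, dtype=complex)           # subcarrier mapper (localized)
    grid[first_sc:first_sc + m] = freq
    time = np.fft.ifft(grid) * np.sqrt(n_fft)       # IFFT unit
    return np.concatenate([time[-cp_len:], time])   # CP insertion unit
```

The returned symbol has n_fft + cp_len samples, and its first cp_len samples equal its last cp_len samples, which is the defining property of a cyclic prefix.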
In view of the exemplary systems described herein, methodologies that may be implemented in accordance with the disclosed subject matter have been described with reference to several flow diagrams. While, for purposes of simplicity, the methodologies are shown and described as a series of steps or blocks, it is to be understood and appreciated that the claimed subject matter is not limited by the order of the steps or blocks, as some steps may occur in different orders or concurrently with other steps from what is depicted and described herein. Moreover, one skilled in the art would understand that the steps illustrated in the flow diagram are not exclusive, and other steps may be included or one or more of the steps in the example flow diagram may be deleted without affecting the scope of the present disclosure.
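As a concrete illustration of the PRB-grid alignment rule discussed earlier (the offset between the reference point and PRB 0 should be a multiple of K = SC_max/SC_0 so that every supported numerology's PRB grid can align), a minimal check might look like the following; the function name and units are assumptions for illustration only.

```python
def prb0_offset_valid(offset_prbs: int, sc_max_khz: int, sc0_khz: int) -> bool:
    """Check that the offset from the reference point to PRB 0 (counted in
    PRBs of the SS-block numerology SC_0) is a multiple of K = SC_max/SC_0,
    where SC_max is the largest subcarrier spacing the carrier supports."""
    k = sc_max_khz // sc0_khz
    return offset_prbs % k == 0
```

For instance, with a 15 kHz SS-block numerology on a carrier supporting up to 60 kHz, K = 4, so an offset of 8 PRBs is aligned while an offset of 6 PRBs is not.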
11863365

DETAILED DESCRIPTION

FIG. 1 illustrates an embodiment of a wireless communication network 100 including a base station 110 that services one or more mobile terminals 120. The base station 110 includes a baseband processor 130. A parameter generator 140 included in or associated with the baseband processor 130 determines different sets 150 of configuration parameters for sounding signal transmissions for the mobile terminal 120, e.g., as illustrated by Step 200 of FIG. 2. The baseband processor 130 transmits the different sets 150 of configuration parameters to the mobile terminal 120 over a downlink communication channel 152, e.g., as illustrated by Step 202 of FIG. 2. The sets 150 of configuration parameters enable the mobile terminal 120 to generate different sounding signals 160 for different uses by the base station 110, such as channel-quality estimation and timing estimation. The mobile terminal 120 has a baseband processor 170 for receiving the sets 150 of configuration parameters transmitted from the base station 110, e.g., as illustrated by Step 300 of FIG. 3. A sounding signal generator 180 included in or associated with the mobile terminal baseband processor 170 generates different sounding reference signals 160 based on the different sets 150 of configuration parameters, e.g., as illustrated by Step 302 of FIG. 3. The mobile terminal 120 transmits the sounding signals 160 to the base station 110 over an uplink communication link 162. This way, multiple sounding reference signal configurations having different frequency-domain and/or time-domain parameters can be used by the same mobile terminal 120 to generate different sounding reference signals 160. According to one embodiment, one set 150 of the sounding signal configuration parameters causes the mobile terminal 120 to generate a first one of the sounding reference signals 160 with a relatively narrow bandwidth, but high rate in the time domain.
A different set 150 of the sounding signal configuration parameters causes the mobile terminal 120 to generate a second one of the sounding reference signals 160 having a wider bandwidth, but lower time-domain rate. The first sounding signal can be used by the base station 110 for channel-quality estimation while the second sounding signal can be used for timing estimation. Under some conditions, the different sets 150 of configuration parameters may create signal transmission conflicts at the mobile terminal 120 in that different sounding reference signal transmissions may occur within the same subframe or even within the same symbol, e.g., as illustrated in FIG. 4. Different priorities can be established or otherwise defined for the sets 150 of configuration parameters. The priorities allow the mobile terminal baseband processor 170 to determine which set 150 of configuration parameters should be used in the event of a sounding signal transmission collision. The configuration having the highest priority controls when more than one sounding reference signal transmission is expected to occur simultaneously, e.g., as illustrated in FIG. 5 where the second configuration (#2) has the highest priority. The prioritization may be explicit such that each sounding reference signal configuration is explicitly assigned a priority at configuration. Alternatively, the prioritization can be implicit, e.g., depending on the different configuration parameters. According to one embodiment, the configuration having the widest bandwidth (consisting of the largest number of transmitted sub-carriers) is given the highest priority. Other implied priorities may also be implemented by the mobile terminal baseband processor 170. The embodiments described herein provide for the configuration, use and transmission of multiple sounding reference signal configurations to the same mobile terminal 120. The configurations may differ in bandwidth and/or the number of transmitted frequency sub-carriers.
Additionally, or alternatively, the configurations may differ in the spacing between the transmitted sub-carriers (i.e., in the repetition factor), and/or in signal transmission rate. Additionally, or alternatively, the configurations may have different explicit or implied priorities for avoiding conflicting sounding reference signal transmissions expected to occur simultaneously (or just in the same sub-frame). In one embodiment, the base station 110 explicitly signals the configuration priorities to the mobile terminal 120. Of course, other variations are contemplated. Thus, the foregoing description and the accompanying drawings represent non-limiting examples of the methods and apparatus taught herein for the transmission of system information. As such, the present invention is not limited by the foregoing description and accompanying drawings. Instead, the present invention is limited only by the following claims and their legal equivalents.
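The collision-handling rule described above (an explicit priority controls when assigned; otherwise the widest configuration, i.e. the one with the most transmitted sub-carriers, is implied highest) might be resolved as in this sketch. The dict keys and the convention that a lower explicit priority value means higher priority are illustrative assumptions.

```python
def srs_config_to_transmit(colliding_configs):
    """Pick which SRS configuration controls when transmissions collide.

    Each config is a dict with an optional explicit 'priority' (lower value
    = higher priority, an assumed convention) and a 'num_subcarriers' field.
    Explicit priority wins when present on any config; otherwise the widest
    configuration (most transmitted sub-carriers) is implied highest.
    """
    if any('priority' in c for c in colliding_configs):
        return min(colliding_configs, key=lambda c: c.get('priority', float('inf')))
    return max(colliding_configs, key=lambda c: c['num_subcarriers'])
```

With no explicit priorities, a narrowband high-rate configuration colliding with a wideband one yields the wideband configuration, matching the implied-priority embodiment described above.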
11863366

Unless otherwise indicated, the drawings provided herein are meant to illustrate features of embodiments of this disclosure. These features are believed to be applicable in a wide variety of systems including one or more embodiments of this disclosure. As such, the drawings are not meant to include all conventional features known by those of ordinary skill in the art to be required for the practice of the embodiments disclosed herein.

DETAILED DESCRIPTION

In the following specification and the claims, reference will be made to a number of terms, which shall be defined to have the following meanings. The singular forms “a,” “an,” and “the” include plural references unless the context clearly dictates otherwise. “Optional” or “optionally” means that the subsequently described event or circumstance may or may not occur, and that the description includes instances where the event occurs and instances where it does not. Approximating language, as used herein throughout the specification and claims, may be applied to modify any quantitative representation that could permissibly vary without resulting in a change in the basic function to which it is related. Accordingly, a value modified by a term or terms, such as “about,” “approximately,” and “substantially,” are not to be limited to the precise value specified. In at least some instances, the approximating language may correspond to the precision of an instrument for measuring the value. Here and throughout the specification and claims, range limitations may be combined and/or interchanged; such ranges are identified and include all the sub-ranges contained therein unless context or language indicates otherwise.
As used herein, the terms “processor” and “computer” and related terms, e.g., “processing device”, “computing device”, and “controller” are not limited to just those integrated circuits referred to in the art as a computer, but refer broadly to a microcontroller, a microcomputer, a programmable logic controller (PLC), an application specific integrated circuit (ASIC), and other programmable circuits, and these terms are used interchangeably herein. In the embodiments described herein, memory may include, but is not limited to, a computer-readable medium, such as a random access memory (RAM), and a computer-readable non-volatile medium, such as flash memory. Alternatively, a floppy disk, a compact disc-read only memory (CD-ROM), a magneto-optical disk (MOD), and/or a digital versatile disc (DVD) may also be used. Also, in the embodiments described herein, additional input channels may be, but are not limited to, computer peripherals associated with an operator interface such as a mouse and a keyboard. Alternatively, other computer peripherals may also be used that may include, for example, but not be limited to, a scanner. Furthermore, in the exemplary embodiment, additional output channels may include, but not be limited to, an operator interface monitor. Further, as used herein, the terms “software” and “firmware” are interchangeable, and include any computer program stored in memory for execution by personal computers, workstations, clients, and servers. As used herein, the term “non-transitory computer-readable media” is intended to be representative of any tangible computer-based device implemented in any method or technology for short-term and long-term storage of information, such as computer-readable instructions, data structures, program modules and sub-modules, or other data in any device.
Therefore, the methods described herein may be encoded as executable instructions embodied in a tangible, non-transitory, computer readable medium, including, without limitation, a storage device and a memory device. Such instructions, when executed by a processor, cause the processor to perform at least a portion of the methods described herein. Moreover, as used herein, the term “non-transitory computer-readable media” includes all tangible, computer-readable media, including, without limitation, non-transitory computer storage devices, including, without limitation, volatile and nonvolatile media, and removable and non-removable media such as firmware, physical and virtual storage, CD-ROMs, DVDs, and any other digital source such as a network or the Internet, as well as yet to be developed digital means, with the sole exception being a transitory, propagating signal. Furthermore, as used herein, the term “real-time” refers to at least one of the time of occurrence of the associated events, the time of measurement and collection of predetermined data, the time for a computing device (e.g., a processor) to process the data, and the time of a system response to the events and the environment. In the embodiments described herein, these activities and events occur substantially instantaneously. As described herein, signal equalization in the time domain may be performed at a receiver by capturing symbols of the received symbol into an overlapped compound block and circularly convolving the compound block with an inverse channel response to produce an equalized compound block. End portions of the equalized compound block may then be discarded to produce a narrow equalized block, and a plurality of such narrow equalized blocks may further then be cascaded to form a de-ghosted signal stream. Alternatively, the overlapped compound block may be processed in the frequency domain using frequency domain equalization.
In an exemplary embodiment, cyclic prefixes are not sent on OFDM or OFDMA multicarrier transmissions. In alternative embodiments, the present techniques may be implemented on any type of linearly distorted modulation transmission without a cyclic prefix (also known as guard intervals or cyclic extensions), or transmissions using relatively short cyclic prefixes. In the exemplary embodiment, systems and methods herein are configured to transmit OFDM and OFDMA signals through wired and wireless channels without considering, or having to create, cyclic prefixes. In some embodiments, overlapped circular convolutions are performed at the receiver end of the system to eliminate linear distortion, such as echoes. In such embodiments, the overlapped circular convolution may be programmed with equalization coefficients obtained, for example, from pilot subcarriers or from a training-synchronization signal, such as a constant amplitude zero autocorrelation (CAZAC) signal. Each received frequency coefficient may then be corrected by a single complex multiplication to correct the respective magnitude and phase. In at least one embodiment, a Zadoff Chu sequence functions as the training-synchronization signal. In other embodiments, overlapped Fourier transforms and inverse Fourier transforms are performed at distinct signal conversion stages, and the pseudo-prefixes are determined at the receiver end of the system according to predetermined pilot/training signals, or from other signal energy received from the transmitter. For example, irrespective of whether a transmitter creates a new cyclic prefix to serve as a guard interval between transmitted data, the present systems and methods utilize the receiver to determine a pseudo-prefix from random or noise-like portions of the transmission (by performing an autocorrelation), or from training signals, or pilots. 
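The per-subcarrier correction described above can be sketched in NumPy. This is an illustrative toy, not the patented method itself: the subcarrier count, the channel shape, and the use of every bin as a known pilot are assumptions chosen for demonstration. The channel is estimated by dividing received pilot values by their known transmitted values, and each received frequency coefficient is then fixed in magnitude and phase by a single complex multiplication with the inverse channel coefficient.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sc = 64                                             # number of subcarriers (assumed)
tx = np.exp(1j * rng.uniform(0, 2 * np.pi, n_sc))     # unit-magnitude symbols

# A frequency-selective channel: each subcarrier sees its own magnitude/phase distortion.
idx = np.arange(n_sc)
channel = (1.0 + 0.3 * np.cos(2 * np.pi * idx / n_sc)) \
          * np.exp(1j * 0.5 * np.sin(2 * np.pi * idx / n_sc))
rx = tx * channel

# Channel estimate from pilots: every bin is treated as a known pilot here for
# simplicity; in practice only pilot bins are known and the response between
# them is interpolated.
h_est = rx / tx

# Dividing by the estimate is the single complex multiplication (by the
# inverse channel coefficient) that corrects magnitude and phase per bin.
equalized = rx / h_est
```
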
Therefore, the present systems and methods are equally effective for receiving digital transmissions that include, or do not include, cyclic prefixes from the transmitter. Accordingly, by eliminating (or ignoring) the need for cyclic prefixes, the bandwidth efficiency of transmissions may be increased to allow a frequency band to transport more data within a time-frequency resource, and thereby reduce power consumption and increase the battery life of system components. The embodiments herein are also of particular use with respect to OFDM signals, which are sent in blocks, and include a number of harmonics that are orthogonal to each other because of their integer relationship to a fundamental frequency. By varying the phase and the magnitude of the harmonics, information can be transmitted while preserving the orthogonality between each of the harmonics. As described in greater detail below, echoes may affect OFDM signals, but can be corrected in the frequency domain. Conventional systems only correct for the effects of inter-block distortion by using a time domain cyclic prefix. Correction schemes using cyclic prefixes, however, are only fully effective if the delay of the echo is shorter than the duration of the cyclic prefix/guard interval. In multi-carrier transmission schemes though, the length of echo delay can vary from receiver to receiver, or according to the respective distances from reflection points in a signal path. The present embodiments address and solve this multicarrier echo variance problem by pseudo-prefix determination at the receiver end. As described herein, transmission efficiency is increased by eliminating the cyclic prefix from transmitted OFDM or OFDMA blocks. Alternatively, reception efficiency may also be increased by configuring the receiver such that the cyclic prefix may be disregarded, if a CP is included in a digital transmission from the transmitter.
In an exemplary embodiment, equalization is accomplished in the time domain through use of an overlapped circular convolution process, which overlaps a selected block with surrounding information to form a larger compound, or “fat,” block. The compound block is then circularly-convolved with programmed time domain coefficients to remove distortion, and thus equalize the compound block. After equalization, the compound block is “trimmed” into an equalized narrow block, which is then converted into the frequency domain. OFDM symbols may then be determined from the equalized, narrow, converted frequency domain block. In an alternative embodiment, the cyclic prefix is eliminated in the frequency domain through use of an overlapped Fourier transform. In this alternative process, a large compound time domain block of an OFDM carrier is transformed into the frequency domain, and then equalized by a complex multiplication FDE sub-process. This equalized compound frequency domain block is transformed back into the time domain, and then trimmed to form an equalized, time domain, narrow block. In an exemplary embodiment, because the original signal is an OFDM transmission, an additional Fourier transform (e.g., an FFT) is performed on the narrow block so that the OFDM symbols may be determined. Use of cyclic prefixes is known in the art to provide more than only block-to-block isolation. For example, if a received signal is cross-correlated with a delayed copy of itself, a phase shift on a resulting correlation peak will reveal any offset frequency between the transmitter and the receiver, and correction may then be applied. However, as described further below, this additional functionality from the cyclic prefix is advantageously replaced by a constant amplitude zero autocorrelation (CAZAC) function signal, such as a Zadoff Chu sequence.
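As a concrete illustration of the CAZAC property, the following sketch generates an odd-length Zadoff Chu sequence (the root and length here are arbitrary choices, with the root coprime to the length) and checks that it has constant amplitude and a single sharp circular-autocorrelation peak, which is what makes it usable for the timing and frequency-offset functions described above:

```python
import numpy as np

def zadoff_chu(root, length):
    """Odd-length Zadoff Chu sequence: x[n] = exp(-j*pi*root*n*(n+1)/length)."""
    n = np.arange(length)
    return np.exp(-1j * np.pi * root * n * (n + 1) / length)

zc = zadoff_chu(root=5, length=63)

# Constant amplitude: every sample has unit magnitude.
amplitudes = np.abs(zc)

# Zero circular autocorrelation at every non-zero lag: correlating the
# sequence with circular shifts of itself yields one sharp peak at lag 0.
autocorr = np.array([np.abs(np.vdot(zc, np.roll(zc, lag))) for lag in range(63)])
```
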
In some embodiments, the Zadoff Chu sequence is implemented together with channel characterization for further equalization purposes, as well as block timing information. Overlapped Circular Convolution FIG.3is a schematic illustration depicting an exemplary multi-carrier transmission system300. System300includes a first transmitter302having a first transmitting antenna304and a second transmitter306having a second antenna308. First and second transmitters302,306transmit first and second carrier signals310,312, respectively, over communication channels314to a first receiver316and a second receiver318. In the example shown inFIG.3, two receivers and two transmitters are illustrated for simplicity of explanation. In operation, many more transmitters and/or receivers may be implemented within system300. Moreover, the embodiments described herein advantageously operate in the case where multiple receivers are receptive to different carrier signals from multiple transmitters and also in the case where multiple transmitters deliver a number of different carrier signals to a plurality of antennas (e.g., multiple-input/multiple-output (MIMO)). Accordingly, communication channel314may include wired signal paths, wireless signal paths, or a combination of both. In the example shown inFIG.3, first receiver316receives a composite signal320. In at least some embodiments, composite signal320further includes at least one echo322of first carrier signal310(1), reflected off of a reflecting object324. For ease of explanation, potential reflections of first carrier signal310(2) and second carrier signal312are not shown. FIG.4is a schematic illustration depicting an exemplary equalization scheme400for equalizing an OFDM symbol without a cyclic prefix, utilizing overlapped circular convolution. In an exemplary embodiment, equalization scheme400is implemented by a receiver or a processor thereof (e.g., first receiver316,FIG.3), and equalization is performed in the time domain.
A received time domain OFDM carrier signal402includes a series of time domain blocks404,406,408, which are labeled herein as “W” block404, “X” block406, “Y” block408, respectively. Each of W block404, X block406, and Y block408contains potential distortion, and includes OFDM data symbols, but no cyclic prefixes. For ease of explanation, only three blocks are shown in this example; however, OFDM carrier signal402may include significantly more blocks in the series, or be continuous in the time domain. In operation, the receiver/processor selects X block406, and forms a compound block410, which includes all of X block406, an end portion412of W block404, and a beginning portion414of Y block408. In this example, end portion412and beginning portion414are illustrated to be one fraction (k) of W block404and a different fraction (m) of Y block408, respectively. Nevertheless, a person of ordinary skill in the art will understand that end portion412and beginning portion414may constitute larger or smaller portions of the respective origin blocks, equal portions, or portions of unequal length. End portion412thus functions as a pseudo-prefix, in the time domain, for X block406. Pseudo-prefixes function differently from conventional cyclic prefixes. Whereas a conventional cyclic prefix for X block406would require a repetition of information from the trailing portion of the X block itself, the present pseudo-prefix is based on information from the preceding W block404, and therefore requires no repetitive data. In a similar manner, beginning portion414of the following Y block408functions as a “pseudo-suffix” in the time domain for X block406. Pseudo-prefixes and pseudo-suffixes are generally referred to herein as “pseudo-extensions.” In this example, blocks W, X, and Y are all subject to a same linear distortion. 
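The block-overlap bookkeeping just described is simple to express in code. In the following sketch, the block size and the pseudo-extension lengths are arbitrary values chosen only for illustration; the point is that the pseudo-prefix is taken from the end of the preceding W block and the pseudo-suffix from the start of the following Y block, so no data is repeated:

```python
import numpy as np

def form_compound_block(w, x, y, prefix_len, suffix_len):
    """Overlap X with a pseudo-prefix from the end of W and a pseudo-suffix
    from the start of Y. Unlike a cyclic prefix, nothing is retransmitted."""
    return np.concatenate([w[-prefix_len:], x, y[:suffix_len]])

# Illustrative blocks of 8 time domain samples each.
w = np.arange(0, 8, dtype=float)
x = np.arange(8, 16, dtype=float)
y = np.arange(16, 24, dtype=float)

compound = form_compound_block(w, x, y, prefix_len=3, suffix_len=2)
```

Note that the prefix and suffix fractions (k and m in the description) need not be equal, since they are passed independently.
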
In further operation, compound block410is convolved with equalization coefficients416(eight coefficients, C0-C7, are illustrated; however, much larger numbers of coefficients, e.g., 64 or more, may be processed according to this embodiment) to form an equalized convolution block418, which remains in the time domain. That is, each term in compound block410is multiplied by the respective coefficient416depicted beneath it, and the products are summed. A circular shift (to the right, as illustrated inFIG.4) of the top row (i.e., compound block410) is performed, and the process is repeated until the circular convolution is complete. Data from compound block410is thus subjected to one or more circular convolutions. Nevertheless, the size of equalized convolution block418remains the same before and after the circular convolution process(es). After circular convolution, symbols from end portion412and beginning portion414(associated with W block404and Y block408, respectively) are trimmed from equalized convolution block418to form a narrow equalized block420, or X′ block420, which represents a de-ghosted version of X block406. That is, the time domain pseudo-prefix and pseudo-suffix information is removed, and X′ block420may then be converted into the frequency domain, after which the OFDM symbols may be determined (step not illustrated). In an exemplary embodiment, equalization scheme400is implemented sequentially for each of the series of distorted time domain blocks404,406,408. That is, after the formation of X′ block420, equalization scheme400similarly forms a compound block (not shown) from Y block408(e.g., using portions from preceding X block406and a following (Z) block, also not shown), and performs circular convolution thereupon to create an equalized Y′ block (not shown) nominally using the same coefficients.
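A minimal end-to-end sketch of the scheme follows, under assumed values: a single trailing echo at delay 3 with amplitude 0.5, 128-sample blocks, and equalizer taps obtained by truncating the exact inverse channel. The compound block is circularly convolved with the coefficients, and the pseudo-extensions are then trimmed to recover a de-ghosted X block. The circular convolution is done via FFTs purely for brevity; the scheme itself describes a time domain shift-multiply-sum, which is mathematically identical.

```python
import numpy as np

rng = np.random.default_rng(1)
block = 128                     # samples per time domain block (assumed)
prefix, suffix = 96, 16         # pseudo-extension lengths (assumed)

# Clean time domain stream: consecutive blocks W, X, Y with no cyclic prefixes.
s = rng.standard_normal(3 * block)

# Assumed channel: direct path plus one trailing echo (delay 3, amplitude 0.5).
h = np.zeros(4)
h[0], h[3] = 1.0, 0.5
r = np.convolve(s, h)[: s.size]          # linearly distorted received stream

# Programmed equalizer taps: the inverse of h has taps (-0.5)**k at delay 3k.
# Truncating after 31 recursions leaves a residual of 0.5**31, and the
# 91-tap span (3*30 + 1) fits inside the 96-sample pseudo-prefix.
g = np.zeros(91)
g[::3] = (-0.5) ** np.arange(31)

# Compound ("fat") block: pseudo-prefix from W, all of X, pseudo-suffix from Y.
w, x, y = np.split(r, 3)
compound = np.concatenate([w[-prefix:], x, y[:suffix]])

# Circular convolution with the coefficients, then trim the pseudo-extensions.
n = compound.size
equalized = np.real(np.fft.ifft(np.fft.fft(compound) * np.fft.fft(g, n)))
x_deghosted = equalized[prefix : prefix + block]
```
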
In an alternative embodiment, equalization scheme400may be performed on individual blocks of OFDM carrier signal402non-sequentially, that is, Y block408may be equalized before W block404. In at least one embodiment, some blocks of OFDM carrier signal402may be equalized, while equalization is not performed on other blocks according to equalization scheme400. Conventional (non-overlapped) multi-carrier technology of OFDM encounters problems in the implementation of circular convolutions, in that extraneous energy from one block contaminates adjacent blocks. According to the embodiments described herein, this inter-symbol interference is eliminated. According to equalization scheme400, which uses an overlapped circular convolution, linear distortion may be removed from any received signal in the time domain irrespective of whether the received signal contains a CP, and irrespective of the underlying modulation used. The received signal may be viewable in the time domain (e.g., single-carrier or SC-FDMA), or in the frequency domain (e.g., OFDM and OFDMA), or may be spread-spectrum or wavelet based. In some embodiments, pseudo-extensions412,414may be or include quiet time, another type of signal suffering the same linear distortion, an unused cyclic prefix, or a training signal. The signal may even be a baseband signal, such as an audio signal having an echo. In the exemplary embodiment, the pseudo-extensions are longer than the relevant encountered echo and all recursions in the echo solution. That is, the respective pseudo-prefix is longer than the trailing echo(es), and the respective pseudo-suffix is longer than the leading echo(es) plus recursions. FIG.5is a flow chart diagram of an exemplary circular convolution process500for equalization scheme400,FIG.4. In the exemplary embodiment, process500is implemented by a receiver or the processor thereof (e.g., first receiver316,FIG.3), and the equalization similarly occurs in the time domain with an overlapped circular convolution.
Process500begins at step502, in which a digital transmission signal (e.g., OFDM carrier signal402,FIG.4) is received, including a series of sequential OFDM W, X, Y blocks (e.g., time domain blocks404,406,408,FIG.4). In step504, a compound block (e.g., compound block410,FIG.4) is created from the entirety of the X block, an end portion of the W block, and the beginning/lead portion of the Y block. In step506, circular convolution is performed on the compound block and its respective coefficients (e.g., coefficients416,FIG.4). In step508, an equalized narrow block (e.g., narrow block420,FIG.4) is extracted from the compound block by trimming/discarding the W (pseudo-prefix) and Y (pseudo-suffix) portions of the convolved compound block. In step510, the equalized narrow block is converted into the frequency domain (e.g., by an FFT), and the OFDM symbols are read from the frequency-converted narrow block. In step512, steps502through510are optionally repeated for the next block in the sequence. That is, in step512, a compound block is formed from the entirety of the Y block, together with portions of the unequalized X block and “Z” block, to create an equalized Y′ block. Alternatively, in step512, a distorted block other than the Y block may be equalized. If step510is optionally eliminated, and the signal being received is single-carrier, the narrow block will contain the equalized time domain symbols. The present embodiments thus overcome the known problems associated with the use of cyclic prefixes, described above. By implementing the present processing techniques at the receiver end, a digital signal transmission system is able to overcome failures that would occur in a multi-carrier scenario when a cyclic prefix is too short, that is, has a shorter duration than an echo/reflection in the time domain. When the cyclic prefix is too short, the conventional receiver may fail.
In contrast, by implementing a pseudo-prefix that is adjustable in time and adjustable according to echo duration, the receiver according to the present systems and methods will continue to function irrespective of the length—or existence—of the cyclic prefix. Overlapped Fourier Transform FIG.6Ais a schematic illustration depicting an alternative equalization scheme600utilizing an overlapped Fourier transform. Similar to equalization scheme400,FIG.4, equalization scheme600may be implemented in accordance with system300,FIG.3, and eliminates the need for cyclic prefixes, implementing frequency domain equalization for computational efficiency on large transforms. In operation, a receiver (e.g., first receiver316,FIG.3) of scheme600receives a series of OFDM symbols602displayed in the time domain. As described above, in conventional OFDM transmission, the energy transportation problem between blocks of data, also known as inter-block interference (IBI) or “leakage,” is addressed by the use of cyclic prefixes. In single-carrier optical implementations, overlap frequency domain equalization (OFDE) has been proposed to eliminate the cyclic prefixes. However, such single-carrier OFDE implementations have not accounted for the variable length of echoes, and particularly the type of echo variability and dispersion that can occur in a multi-carrier system. The conventional OFDE techniques set arbitrary values to the amount of time domain overlap (CP length) based on anticipated chromatic dispersion, and thus may not obtain leakage-free transforms because the echo may be longer than anticipated. Echo energy, for example, may persist longer than the discarded portion412. 
In contrast to these conventional techniques, scheme600forms a compound block604from OFDM time domain symbols602such that compound block604is of sufficient size to allow an encountered echo (plus significant recursions, if any) to die out within portions of compound block604that may be subsequently discarded after equalization has been performed. Specifically, each compound block604of time domain information includes a data portion606, a pseudo-prefix portion608, and a pseudo-suffix portion610. In the exemplary embodiment, the respective sizes of extension portions608,610may be determined independently from the size of data portion606. Where cyclic prefixes are utilized, the conventional system is required to optimize the trade-off between the package size of the data portion and the amount of the transmission utilized to include the cyclic prefix. According to the present system, on the other hand, this trade-off is eliminated. The pseudo-extensions may be set to any length that is needed to address an encountered echo. The pseudo-extensions may utilize portions, or the entirety, of the data portions of adjacent blocks, and may even include data from more than one adjacent block if such length is necessary. Furthermore, the size/duration of pseudo-prefix portion608may be determined independently from the size of pseudo-suffix portion610. That is, in some cases, a trailing echo might be of a significantly different duration than a leading echo. A receiver configured according to the present embodiment is thus advantageously configured to be capable of dynamically adjusting each extension portion608,610according to the actual echoes encountered in real time for the respective data portion606. 
Extension portions608and610thus form the overlapping regions of the compound block and, although these portions are not themselves cyclic prefixes, these overlapping portions are further capable of advantageously performing the same functional guard purpose as the cyclic prefix, but without requiring additional transmission time. The overlapping portions608,610represent an extension period of time for echoes to die out that need not be cyclic/cyclical. In further operation, after OFDM time domain symbols602have been formed into compound blocks604, compound blocks604are converted into the frequency domain (e.g., by an FFT), to form compound frequency domain blocks612, upon which frequency domain equalization (FDE) is then performed, for example, using a complex multiplication on each subcarrier with an inverse channel coefficient. Equalized compound frequency domain blocks612are then transformed back into the time domain (e.g., by an IFFT) to form equalized compound time domain blocks614. Each equalized compound time domain block614includes equalized data616, an equalized pseudo-prefix618, and an equalized pseudo-suffix620. The respective equalized pseudo-prefixes618and equalized pseudo-suffixes620are cut from the equalized compound time domain blocks614to extract equalized data616. Blocks of equalized data616may then be pasted together to form a single equalized time domain signal622, if desired. The single equalized time domain signal622may then be transformed into the frequency domain to form a composite stream of equalized OFDM data, constituted of individual OFDM blocks624,626, and628. That is, the frequency domain composite stream is made up of individual frequency domain blocks (e.g., block624), which each correspond with a respective data portion (e.g., portion606). In an exemplary embodiment, individual frequency domain blocks624,626, and628are pasted together as one sequenced composite block. 
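The overlapped Fourier transform variant can be sketched under the same kind of assumptions (a toy channel with one trailing echo at delay 3 and amplitude 0.5, and arbitrary block and extension sizes): transform the compound block into the frequency domain, equalize each subcarrier by a single complex multiplication with the inverse channel coefficient, transform back, and cut the equalized pseudo-extensions.

```python
import numpy as np

rng = np.random.default_rng(2)
block = 128                     # data portion size (assumed)
prefix, suffix = 96, 16         # pseudo-extension sizes, set independently

s = rng.standard_normal(3 * block)       # clean stream: blocks W, X, Y
h = np.zeros(4)
h[0], h[3] = 1.0, 0.5                    # assumed channel: one trailing echo
r = np.convolve(s, h)[: s.size]          # distorted received stream

# Compound time domain block: pseudo-prefix, data portion, pseudo-suffix.
w, x, y = np.split(r, 3)
compound = np.concatenate([w[-prefix:], x, y[:suffix]])
n = compound.size

# Overlapped Fourier transform: FFT the compound block, multiply each bin by
# the inverse channel coefficient (FDE), and IFFT back to the time domain.
H = np.fft.fft(h, n)                     # channel frequency response (no nulls here)
equalized = np.real(np.fft.ifft(np.fft.fft(compound) / H))

# Cut the equalized pseudo-prefix and pseudo-suffix to extract equalized data.
x_deghosted = equalized[prefix : prefix + block]
```

Because the pseudo-extensions are longer than the echo and its significant recursions, the echo energy "dies out" inside the portions that are discarded, leaving the data portion de-ghosted.
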
In an alternative embodiment, each individual frequency domain block624,626,628, etc. is processed separately to determine OFDM signals therefrom. In the exemplary embodiment, equalization scheme600considers each compound block604in the order of transmission, and thus repeats the process on an ongoing overlapped basis for each following block of the received OFDM time domain signals. In some embodiments, data portion606is anchored according to the center time of the frequency domain equalization signal (e.g., blocks612), and a de-ghosted time domain signal (e.g., equalized time domain signal622) is formed by combining the equalized-and-transformed odd and even time domain blocks corresponding to data window portions606. That is, after frequency domain equalization, the trimmed odd and even blocks of equalized data can be put back together to form an equalized time domain signal. For single carrier signals, such as pulse amplitude modulation (PAM) signals, the final time domain symbol sequence is considered “clean,” and may then be subject to “slicing” and forward error correction (FEC). For OFDM symbols, the final equalized time domain symbol sequence is converted into the frequency domain before slicing and optionally implementing FEC. In DOCSIS implementations, elimination of the cyclic prefix allows for faster data transfer, which would be approximately a 15% improvement for a symbol period of 20 μs and a cyclic prefix of 3 μs. The techniques described herein may additionally be implemented by the receiver irrespective of what type of signal is, or signals are, received from the transmitter(s). The pseudo-prefixes and pseudo-suffixes are made sufficiently long relative to the duration of the longest echo, and its recursions, and thus the linear distortion from the echo(es) can be completely removed.
As described above, because leading and trailing echoes are not necessarily of equal length, the present embodiments realize the additional advantage of allowing the receiver to dynamically adjust the pseudo-prefix independently of the pseudo-suffix to address the actual distortion encountered, and without sacrificing the size of the data package. In some embodiments, an optional windowing function, such as a raised cosine window, may be placed on pseudo-prefix portion608and/or pseudo-suffix portion610prior to performing FDE. In at least one embodiment, the size of the pseudo-extensions may be set to a predetermined threshold value that represents a duration of the longest echo expected to be encountered among a system including multiple transmissions from different transmitters (OFDMA). The echo may be determined, for example, from the channel response associated with the respective signal path. In an alternative embodiment, the size of one or more pseudo-extensions is set to be greater than a size necessary to remove the distortion from an echo dynamically measured along the path by the receiver during signal characterization. In a further alternative embodiment, the receiver is configured to implement pseudo-extensions of sufficient size to eliminate the distortion from expected echoes on the signal path, and also measure the transmitted signals and dynamically adjust the predetermined size of the pseudo-extensions when encountering an echo having an unexpected length greater than the predetermined threshold. In at least one embodiment, a frequency domain equalizer is programmed to utilize pilot signals for training prior to frequency domain equalization. In this embodiment, magnitude and phase correction are calculated to render the pilot signals sufficient for use.
In one example, continuous pilots are used for synchronization, and timing for the start of a block may be determined by subjecting a set of captured pilots to an IFFT, with a zero value inserted for all data subcarriers, and a resulting time domain impulse response will indicate early or late timing. In at least one alternative embodiment, the time domain signal equalizer is programmed blindly, or using conventional training signals prior to frequency domain equalization. Interpolation of channel response between subcarrier pilots may be used, provided that the pilot spacing is closer than a predetermined minimum value. When the echo duration is longer, for example, the frequency domain ripple will have a shorter period, and the system should therefore implement closer pilot spacing. In some embodiments, as explained further below in greater detail, cyclic prefixes, pilot signals, and training signals may be completely eliminated through a novel use of a constant amplitude zero autocorrelation (CAZAC) function. In the exemplary embodiment, the channel response is determined prior to slicing and FEC. The final equalized and trimmed frequency domain blocks need not be pasted back together (e.g., element622) to determine the OFDM symbols. FIG.6Bis a graphical illustration of a histogram630depicting a distribution of echo duration for an increasing population of receivers. More particularly, histogram630illustrates, for a large population of receivers, the echo time delays that must be addressed for the receivers to sufficiently function.
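The pilot-based timing determination described above can be sketched as follows. The subcarrier count, the pilot spacing, and the unit-valued pilots are assumptions chosen for illustration: the captured pilot bins are kept, zeros are inserted for all data subcarriers, and an IFFT yields an impulse response whose peak position indicates early or late timing.

```python
import numpy as np

n_sc = 64                                 # subcarrier count (assumed)
pilot_bins = np.arange(0, n_sc, 8)        # continuous pilots on every 8th bin
delay = 3                                 # timing offset to be recovered

# A timing offset in the time domain appears as a linear phase ramp across
# the received frequency bins (unit-valued transmitted pilots are assumed).
k = np.arange(n_sc)
rx_bins = np.exp(-2j * np.pi * k * delay / n_sc)

# Keep the captured pilots, insert a zero value for all data subcarriers,
# and IFFT: the resulting impulse response peaks at the timing offset.
pilots_only = np.zeros(n_sc, dtype=complex)
pilots_only[pilot_bins] = rx_bins[pilot_bins]
impulse = np.abs(np.fft.ifft(pilots_only))

# With pilots on every 8th bin, the impulse repeats every n_sc // 8 samples,
# so the offset is resolved modulo that interval; argmax finds the first peak.
estimated_delay = int(np.argmax(impulse))
```

The modulo ambiguity is the time domain counterpart of the pilot-spacing requirement noted above: closer pilot spacing resolves longer timing offsets and echo delays.
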
Histogram630depicts a first echo time interval632, a second echo time interval634, and a third echo time interval636. First time interval632represents a time delay that could be addressed by a typical conventional cyclic prefix. Second time interval634represents the duration of a time delay that could be addressed by a cyclic prefix having a compromise value of increased duration according to the trade-off optimization discussed above. The length of the compromise cyclic prefix might be increased, but at the sacrifice of the size of the payload data in the transmission. Beyond the length of second time interval634, the trade-off between the payload and the guard interval is too great to make further increases in the guard interval practical. Second time interval634thus represents the maximum time delay that a conventional system is capable of addressing utilizing a cyclic prefix. Third time interval636therefore represents the worst-case echo delay that a system is likely to encounter for receivers in the network, and which requires equalization if all of the receivers are to function properly. A significant percentage of receivers in the network (represented by hashed area638inFIG.6B) will encounter echoes beyond the maximum capability, i.e., compromise interval length634, of the conventional system, and will therefore experience equalization difficulties. Systems and methods according to the present embodiments though, are capable of fully equalizing all of the receivers in the network, and without requiring any consideration of the cyclic prefix length, or the trade-off of the cyclic prefix length against the length of the data package. That is, in the conventional system, if a transmitted cyclic prefix is made sufficiently long to equalize the longest echo that can be encountered in the network, the size of the transmitted payload will be too small to make the transmission efficient.
As the length of the cyclic prefix is shortened for better transmission efficiency, the number of receivers that will encounter equalization problems increases. Conventional OFDM/OFDMA systems therefore are required to strike a balance between the number of reception sites and poor transmission efficiency. According to the present systems and methods, on the other hand, this balance may be completely ignored. Although the pseudo-extensions of the present embodiments may perform the functions of cyclic prefixes, the pseudo-extensions are not bound by any of the cyclic prefix limitations. The pseudo-extensions are determined independently from the payload/data package, and may even utilize adjacent blocks of payload data as the pseudo-extensions (but discarded after equalization of the desired block). Some conventional systems have been proposed to eliminate cyclic prefixes from single carrier optical transmissions. These single carrier proposals, however, are unable to cope with channels that fade to zero. That is, division by zero cannot be performed where the channel response is zero. In wireless channels, signals from multiple antennas can be combined to produce a composite signal without frequency response nulls. That is because, if two antennas are spaced apart, it is very unlikely that a complete fade will occur at the same frequency for both antennas. Moreover, conventional techniques of eliminating cyclic prefixes have only been proposed for single carrier transmissions exhibiting chromatic dispersion on fiber optic lines. Fiber optics, however, do not typically encounter discrete echoes, as described above. Furthermore, OFDM transmission is multi-carrier, and may be implemented on both wired and wireless networks, where echoes may be commonly encountered. FIG.7is a flow chart diagram of an exemplary overlapped Fourier transform process700for equalization scheme600,FIG.6A.
In the exemplary embodiment, process 700 is implemented by a receiver or the processor thereof (e.g., first receiver 316, FIG. 3). Process 700 begins at step 702, in which a digital transmission signal (e.g., OFDM carrier signal 602, FIG. 6) is received, including a series of sequential OFDM time domain samples, and formed into an overlapping series of compound time domain blocks (e.g., blocks 604, FIG. 6A). In some embodiments, process 700 is particularly useful if the received time domain symbols are from a single carrier transmission, or from a direct sequence spread spectrum transmission. In an optional embodiment, the narrow blocks, such as data portions 606, are centered in the respective compound block 604, and are of sufficient size to allow echoes to die out (e.g., go to zero) in between bursts. In step 704, the compound time domain blocks are converted into the frequency domain (e.g., by an FFT). In step 706, the frequency domain blocks are equalized using frequency domain equalization. In step 708, the equalized compound frequency domain blocks are converted into the time domain (e.g., by an IFFT) to create equalized compound time domain blocks (e.g., blocks 614, FIG. 6). In step 710, the "early" (e.g., pseudo-prefix 618, FIG. 6A) and "late" (e.g., pseudo-suffix 620, FIG. 6A) time symbol ends are cut/discarded from the equalized compound time domain blocks to extract narrow equalized time domain blocks (e.g., blocks 616, FIG. 6). Step 712 is a decision step. In step 712, a processor of the receiver determines whether the received input symbols are single carrier or multicarrier (OFDM). If the received time domain symbols are determined to be single carrier, process 700 proceeds to step 714, where the narrow equalized time domain blocks are pasted together to form an equalized time domain signal (e.g., signal 622, FIG. 6), which, for single carrier signals (or the equivalent), is then sliced and forward error corrected.
In at least one example of step 714, where the input symbols are, for example, a direct sequence spread spectrum transmission (not shown), step 714 further includes an optional sub-step of de-spreading the symbols. If, however, in step 712, the processor of the receiver determines that the input signals are multicarrier signals (e.g., OFDM/OFDMA), process 700 proceeds to step 716, where the equalized time domain signal is converted into the frequency domain, after which slicing/FEC may be implemented, and/or OFDM symbols are read therefrom. In some embodiments, the narrow blocks may include multiple OFDM transforms, which are separated prior to performing the FFT of step 716. In other embodiments, the narrow blocks may alternatively or additionally include partial OFDM transforms, which are combined to perform the FFT of step 716. In an alternative embodiment, process 700 further includes optional step 718. Step 718 is implemented in the case where the compound time domain blocks form a continuous stream, as opposed to burst mode reception. In such cases, optional step 718 proceeds to the next block in sequential time order and repeats process 700 for that next block. According to process 700, frequency domain equalization can be performed on multiple blocks at the same time, and across block boundaries. Alternatively, or additionally, a large block can also be broken into smaller sub-blocks for separate equalization. According to this advantageous process, a receiver may be configured or programmed to dynamically adjust the size of the compound block according to the length of an encountered echo on the signal path. For example, if a very long echo is encountered in the signal path, the size of the overlapping compound blocks can be enlarged at the receiver end to effectively create longer pseudo-prefixes having a greater duration than the encountered echo.
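The receiver-side flow of steps 702-710 can be sketched compactly. The following stdlib-only Python sketch illustrates the idea under stated assumptions: the block sizes, echo parameters, helper names, and the use of a single-carrier QPSK stream are illustrative choices, not taken from the disclosure, and a naive DFT stands in for the FFT a real receiver would use.

```python
import cmath, random

def dft(x):
    # naive DFT (an FFT would be used in practice)
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

random.seed(1)
W = 32                 # narrow (desired) block width
C = 2 * W              # compound block width: W/2 pseudo-prefix + W + W/2 pseudo-suffix
ECHO_DELAY, ECHO_AMP = 2, 0.25

# transmitted single-carrier QPSK-like stream (illustrative)
tx = [complex(random.choice((-1, 1)), random.choice((-1, 1))) for _ in range(4 * W)]

# channel: direct path plus one discrete echo
rx = [tx[n] + (ECHO_AMP * tx[n - ECHO_DELAY] if n >= ECHO_DELAY else 0)
      for n in range(len(tx))]

# channel frequency response evaluated at the compound-block size
h = [0.0] * C
h[0], h[ECHO_DELAY] = 1.0, ECHO_AMP
H = dft(h)

def equalize_block(start):
    """Steps 702-710: form compound block, FFT, divide by H, IFFT, trim ends."""
    comp = rx[start - W // 2 : start + W + W // 2]          # compound block (step 702)
    eq = idft([Y / Hk for Y, Hk in zip(dft(comp), H)])      # steps 704-708
    return eq[W // 2 : W // 2 + W]                          # step 710: discard pseudo-extensions

recovered = equalize_block(W)   # equalize the second narrow block
err = max(abs(a - b) for a, b in zip(recovered, tx[W:2 * W]))
```

The equalized narrow block matches the transmitted symbols to within a small residual (the residual comes from the tail of the inverse channel filter extending past the pseudo-prefix), whereas the raw received samples are off by the full echo amplitude.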
Such dynamic adjustability is particularly advantageous with respect to terrestrial broadcast signals, which are known to suffer from very long echoes in the signal path. The equalization processes described herein are thus effectively modulation-indifferent with respect to the type of data that is being linearized, and are thus fully and simultaneously adaptable to single carrier, multi-carrier, or spread spectrum transmissions. According to the embodiments herein, the data may be continuous, or formed into individual blocks. Similarly, the size of the transform block may also be dynamically made larger or smaller as desired, and each separate path may be converted by a transform of a different size than that implemented on a different path. This ability to dynamically alter the transform size provides a receiver with superior versatility over conventional OFDE techniques, which implement a one-size-fits-all transform approach. In conventional OFDM transmissions, for example, the FFT length is made large in order to prevent the overhead of the cyclic prefix from becoming too burdensome as a percentage of the total transmission time. However, phase noise becomes more unstable as the length of the transform increases. The present embodiments address this problem by both (i) rendering the transform size dynamically adjustable, and (ii) eliminating the need for cyclic prefix transmission at the transmitter end.
Zadoff-Chu Sequence as a Pseudo-Prefix
FIG. 8 is a graphical illustration of a plot 800 of an exemplary transmitted signal 802 having in-phase (I) components 804 (black) and quadrature (Q) components 806 (gray). Signal 802 may be used, for example, as a demonstration signal. Plot 800 depicts linear voltage over time of captured signal 802. The I and Q components are also sometimes referred to as the real and imaginary samples, respectively, and are orthogonal to one another.
In this example, plot 800 is obtained from a test system using a software-defined radio (SDR, not shown) as a transmitter, which is an Ettus (National Instruments) model B200. Another B200 SDR is used as a receiver (also not shown), which will introduce frequency error due to the different oscillators of the SDRs. The test system receiver captures at 8 million samples per second (i.e., an 8 MHz channel), and transmitted signal 802 includes a group of 15 contiguous OFDM blocks 812, each 64 symbols wide (i.e., subcarrier spacing of 125 kHz). In the example illustrated in FIG. 8, testing was performed at center frequencies of 840 MHz and 2.4 GHz, with bandwidths of both 8 MHz and 16 MHz. In the test system of this example, transmitted signal 802 further includes a first CAZAC function 808 and a second CAZAC function 810. As illustrated in FIG. 8, first and second CAZAC functions 808, 810 are Zadoff Chu sequences placed on either end of the 15 OFDM data sequences 812 therebetween. A Zadoff Chu sequence is a particular type of CAZAC function that has no crest factor, and operates as a constant envelope time domain waveform. In operation, the Zadoff Chu sequence waveforms 808, 810 are captured from the antenna periodically from transmitted signal 802, which then allows the determination of the offset frequency, the start of the OFDM block, and the channel characterization. More particularly, the Zadoff Chu sequences 808, 810 operate as a sync/timing signal to establish an exact offset frequency difference between the transmitter and receiver by a time domain cross-correlation process. Cross-correlation provides exact timing and establishes the precise start of each of the 15 OFDM blocks 812. Accordingly, the use and placement of the Zadoff Chu sequences 808, 810 serves a threefold function: (i) offset frequency measurement; (ii) start of block detection; and (iii) channel response determination for equalization coefficients.
In operation, plot 800 is representative of either a wireless or a wired transmission, and measurement of the frequency differences between the transmitter and the receiver carriers is performed by cross-correlating the time domain Zadoff Chu sequences 808, 810 to produce a cross-correlation with real and imaginary components measured at peak values. Accordingly, a time domain phase error between Zadoff Chu sequences 808, 810 is the arctangent of the imaginary value divided by the real value. The frequency error may then be eliminated by de-rotating the captured complex samples, and then the OFDM block group 812 may be parsed to create 15 complex time domain blocks, which, as described with respect to the embodiments above, may each include the selected OFDM block, part of a previous OFDM block, and part of the subsequent OFDM block. As also described above, such portions of the previous and subsequent blocks may include other OFDM data signals themselves, all or part of the Zadoff Chu sequences 808, 810, or no energy (quiet time), etc. In an alternative embodiment, data sequences 812 include one or more of single carrier blocks, dead-air (quiet) time, direct-sequence spread spectrum, or any other type of mixed signals experiencing the same linear distortion. That is, linear distortion can be removed from any signal transmitted in the block allocated to data sequences 812. Similar to the processes described above (see, e.g., FIGS. 6-7), all 15 of the compound time domain blocks may then be further converted into the frequency domain, equalized, converted back into the time domain, trimmed to become narrow, equalized, time domain blocks, and finally (e.g., for OFDM signals, as described above) converted into the frequency domain, where the component OFDM symbols may be determined.
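The offset-frequency measurement and de-rotation steps can be sketched numerically. In this stdlib-only Python sketch, the root index u=1, the spacing of 200 samples between the two Zadoff Chu copies, and the offset value are all illustrative assumptions; the cross-correlation phase between the two copies yields the fractional frequency offset, which is then removed by de-rotation.

```python
import cmath

Nzc, u = 63, 1   # illustrative root index; 63-point Zadoff Chu sequence
zc = [cmath.exp(-1j * cmath.pi * u * n * (n + 1) / Nzc) for n in range(Nzc)]

T = 200                      # samples between the two ZC copies (illustrative)
tx = [0j] * (T + Nzc)
for i in range(Nzc):
    tx[i] += zc[i]           # first ZC copy (cf. sequence 808)
    tx[T + i] += zc[i]       # second ZC copy (cf. sequence 810)

df = 1.7e-4                  # true fractional frequency offset, cycles/sample (illustrative)
rx = [s * cmath.exp(2j * cmath.pi * df * n) for n, s in enumerate(tx)]

def corr_at(pos):
    # correlate the known ZC against the received samples at `pos`
    return sum(rx[pos + i] * zc[i].conjugate() for i in range(Nzc))

# phase accumulated between the two correlation peaks over T samples
phase = cmath.phase(corr_at(T) / corr_at(0))
df_est = phase / (2 * cmath.pi * T)

# de-rotate the captured complex samples to remove the frequency error
derot = [s * cmath.exp(-2j * cmath.pi * df_est * n) for n, s in enumerate(rx)]
```

Note the unambiguous measurement range: the phase must stay within ±π over the spacing T, i.e., |df| < 1/(2T) in this sketch.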
In this example, frequency domain equalization coefficients may be determined by performing a Fourier transform/FFT on first Zadoff Chu sequence 808, and then performing a complex division by a stored complex coefficient for each symbol (in frequency). In some embodiments, the compound blocks will require more coefficients than the particular frequency domain equalization coefficients provided by the Zadoff Chu sequence (64 in this example), and therefore the equalization solution is interpolated to create at least twice the number (e.g., 128) of frequency domain coefficients. In an exemplary embodiment, the combination of the Zadoff Chu sequence 808 and OFDM block group 812 forms a trained block group (not separately numbered). In this example, a continuous transmission may therefore be created by transmitting a series of such trained block groups. Alternatively, a burst mode transmission may be created by transmitting a single trained block group, followed by a single ZC sequence for frequency offset estimation. In at least one embodiment, a plurality of quiet time symbols (e.g., 8 symbols) are placed on either side of both Zadoff Chu sequences 808, 810, to further prevent energy from some data symbols from contaminating the channel characterization results. In at least some embodiments, because an initial Zadoff Chu sequence (e.g., sequence 808) is effectively repeated (e.g., sequence 810) for a single block of OFDM signals, the CAZAC functions/Zadoff Chu sequences according to plot 800 may perform further advantageous utility as a substitute for the cyclic prefix, or pseudo-prefix, for transmission implementations that still intend to utilize cyclic prefixes. That is, the Zadoff Chu sequence may be utilized as a substitute cyclic prefix (because it is repeated energy), in addition to providing the functionality described above, namely, offset frequency measurement, block start detection, and use as a training signal.
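The coefficient-determination step (FFT of the received training sequence, complex division by stored coefficients, then interpolation to twice the number of coefficients) can be sketched as follows. Assumptions in this Python sketch: root index u=1, an illustrative echo tap, the channel is idealized as circular convolution so the division recovers the response exactly, and simple midpoint interpolation stands in for whatever interpolation a real implementation would use.

```python
import cmath

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

Nzc, u = 63, 1
zc = [cmath.exp(-1j * cmath.pi * u * n * (n + 1) / Nzc) for n in range(Nzc)]
zc.append(zc[-1])            # 64th point repeats the 63rd (radix-2 FFT size)

h = [0j] * 64
h[0], h[3] = 1.0, 0.2        # direct path plus a small echo (illustrative)
# received training sequence; circular channel is an idealization for this sketch
rx = [sum(h[m] * zc[(n - m) % 64] for m in range(64)) for n in range(64)]

ZC, RX, H = dft(zc), dft(rx), dft(h)
H_est = [RX[k] / ZC[k] for k in range(64)]   # complex division by stored coefficients

# interpolate the 64 coefficients to 128 for the double-width compound blocks
H128 = []
for k in range(64):
    nxt = H_est[(k + 1) % 64]
    H128 += [H_est[k], (H_est[k] + nxt) / 2]
```

The division is well conditioned because the Zadoff Chu spectrum is nearly flat, i.e., the stored coefficients have no nulls to divide by.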
Accordingly, implementation of the present Zadoff Chu sequence techniques allows not only for the substitution for other types of transmissions, but also for the improvement of OFDM transmissions that still desire to utilize cyclic prefixes. Therefore, cyclic prefix elimination provides greater data throughput to the transmission system as a whole, and the utilization of the Zadoff Chu sequences instead of the cyclic prefixes provides a more accurate offset frequency estimation. Additionally, utilization of the high-energy, no-crest-factor Zadoff Chu sequences instead of pilot signals provides a cleaner, less noisy channel model. Accordingly, systems and methods according to the present embodiments realize significantly improved performance over conventional OFDM transmission schemes that utilize cyclic prefixes and/or pilots, because the Zadoff Chu sequence has a zero dB crest factor (as compared with the 10-16 dB crest factor of OFDM), and therefore allows a stronger signal that can be used for timing, characterization, and frequency offset, thereby resulting in longer battery life, greater range, and greater throughput of system receivers, which is particularly important for portable handheld devices (e.g., cellular phones, tablets, portable computers, etc.). Moreover, elimination of the cyclic prefix is of particular importance at the transmitter end as well, because it will result in significant power savings, thereby delaying the need for plant upgrades, while allowing for improved service tiers. FIG. 9 is a computer program listing 900 demonstrating exemplary coding for implementing the Zadoff Chu sequence depicted in FIG. 8. According to listing 900, the computer code generates 63 complex Zadoff Chu values. A 64th Zadoff Chu point is created by repeating the 63rd Zadoff Chu value. Accordingly, a radix 2 FFT operation may be implemented as listed.
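The zero dB crest-factor claim can be checked directly. This stdlib-only Python sketch generates the 63 Zadoff Chu values and repeats the last one (per the description of listing 900) and compares the crest factor against that of an OFDM symbol; the root index u=1, the seed, and the QPSK subcarrier loading are illustrative assumptions, and a naive inverse DFT stands in for an IFFT.

```python
import cmath, math, random

Nzc, u = 63, 1
zc = [cmath.exp(-1j * cmath.pi * u * n * (n + 1) / Nzc) for n in range(Nzc)]
zc.append(zc[-1])       # 64th point repeats the 63rd, per listing 900

def crest_db(x):
    # crest factor: peak power over mean power, in dB
    peak = max(abs(s) ** 2 for s in x)
    mean = sum(abs(s) ** 2 for s in x) / len(x)
    return 10 * math.log10(peak / mean)

# OFDM comparison: 64 random QPSK subcarriers converted to the time domain
random.seed(7)
X = [complex(random.choice((-1, 1)), random.choice((-1, 1))) for _ in range(64)]
ofdm = [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / 64) for k in range(64)) / 64
        for n in range(64)]
```

Every Zadoff Chu sample has unit magnitude, so its crest factor is 0 dB; the random-phase summation in the OFDM symbol produces a substantially higher peak-to-average ratio.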
The person of ordinary skill in the art will understand, though, that an FFT operation may be implemented for other base values, or prime numbers, without departing from the scope of the embodiments described. In an exemplary embodiment, computer program listing 900 is based on the C programming language, but other programming languages, including Matlab, may also or alternatively be used. The Zadoff Chu sequences herein represent complex-valued mathematical sequences which, when applied to radio signals, give rise to an electromagnetic signal of constant amplitude. Cyclically shifted versions of the Zadoff Chu sequence, imposed on a transmitted signal, thereby result in zero correlation with one another at the receiver. A generated Zadoff Chu sequence that has not been shifted is referred to as a "root sequence." The Zadoff Chu sequences described herein exhibit a further useful property, namely, that cyclically-shifted versions of the sequences are orthogonal to one another, provided, that is, that each cyclic shift, when viewed within the time domain of the transmitted signal, is greater than the combined propagation delay and multi-path delay-spread of that transmitted signal between the transmitter and receiver. In wireless implementations, such as MIMO, equalization of several signals, received from several antennas, is performed to produce one or more equalized data streams. Such MIMO operations are more advantageously performed utilizing the Zadoff Chu sequence embodiments described herein, since application of the present Zadoff Chu sequences to different signals from the different MIMO antennas will assist in matrix construction.
In the exemplary code listed in FIG. 9, the particular set of values used for Nzc, u, n1, and n2 (e.g., C code) produces a complex signal with a constant magnitude in the time domain, and nearly-constant values in the frequency domain. Accordingly, Zadoff Chu sequences further operate similarly to a useful training signal that can be used in place of the pilots that are conventionally used in OFDM. Because of the constant amplitude property of the Zadoff Chu sequences, the resultant time domain signal of the sequences has a crest factor of 0 dB, as described above. The present embodiments are therefore of particularly advantageous use with respect to power-limited transmitters, such as those found in cellular phones, because the radiated energy for characterizing the signal path is much greater, while the resulting noise contamination of the channel estimate is significantly reduced. FIG. 10 is a graphical illustration depicting a frequency domain spectral plot 1000 of the Zadoff Chu sequence depicted in FIG. 8, and generated according to computer program listing 900, FIG. 9. Plot 1000 depicts magnitude, real, and imaginary values (y-axis) over frequency (x-axis) of captured spectral data of an FFT of a generated time domain Zadoff Chu sequence. Plot 1000 includes a real subplot 1002, an imaginary subplot 1004, and a magnitude subplot 1006. As described above, such Zadoff Chu sequences have ideal autocorrelation functions, low crest factors, and relatively flat spectral energy. That is, magnitude subplot 1006 remains relatively flat over a significant frequency range, thereby rendering the Zadoff Chu sequence particularly advantageous as a channel characterization signal. The Zadoff Chu sequence therefore may be used, according to the embodiments described above, as a substitute for conventional OFDM pilot signals and training signals.
In some embodiments, the spectral energy of the transformed time domain Zadoff Chu sequence can be created from other values of Nzc, including 127, 255, 511, etc., where Nzc = 2^n − 1. As described above, OFDM blocks 814 and 816 were included as portions of the previous and subsequent signal, respectively, and each illustrates the time and shape of OFDM time domain energy in a single block. It may be further noted that the OFDM energy has a large crest factor. FIG. 11 is a flow chart diagram of an exemplary process 1100 for simulating an idealized transmitter-receiver chain for an OFDM transmission having no cyclic prefix (or pilot signals), including any of the embodiments described above. Process 1100 includes a configuration subprocess 1102 and a frequency domain equalization subprocess 1104. Process 1100 begins at step 1106 of configuration subprocess 1102. In step 1106, process 1100 performs system configuration, including without limitation the symbol time (e.g., 20 μs), the FFT size (e.g., 4096), sub-symbol times, and the number of sequential OFDM symbols to simulate (in some cases, process 1100 will discard the first and last simulated OFDM symbols). The modulation order (e.g., 1024 QAM) is also set. In step 1108, process 1100 performs channel configuration, including without limitation the channel type (e.g., AWGN channel with a single echo), signal to noise ratio, echo amplitude relative to the direct signal amplitude, and/or echo time delay (e.g., in seconds). In step 1110, process 1100 calculates subcarrier values, including without limitation the number of unique complex symbol values, the average symbol energy, and/or the average noise energy per subcarrier symbol. In step 1112, process 1100 generates the transmitted signal. The generated transmitted signal includes randomly generated symbols for the real component, and randomly generated symbols for the imaginary component.
In some embodiments, step 1112 includes the further substeps of combining the real and imaginary components, performing an IFFT of each symbol to convert to the time domain, and reorganizing into a one-dimensional time sequence. In step 1114, process 1100 generates the channel, including without limitation a conversion of the echo (in dB) into a linear quantity, and a conversion of the echo delay into a sub-symbol index. In step 1116, process 1100 establishes the channel impulse response. In an exemplary embodiment, establishing the channel impulse response includes substeps of starting with all zero values, adding a 1 value at the zero-lag tap, and adding the echo at a desired or appropriate tap. In step 1118, process 1100 establishes the noise sequence (e.g., AWGN), and in step 1120, process 1100 calculates the received signal. In an exemplary embodiment, step 1120 includes substeps of convolving the transmitted signal with the channel impulse response, trimming the resulting convolution down to its original length, and adding the noise sequence thereto. In step 1122, process 1100 displays the resulting signal as if no equalization had been performed thereupon. In an exemplary embodiment, step 1122 includes the additional substeps of performing an FFT to convert the signal to the frequency domain, and plotting the constellation with associated formatting. In step 1124, process 1100 calculates the average subcarrier symbol error energy. In an exemplary embodiment, step 1124 includes the additional substep of calculating the MER with no equalization. After step 1124, process 1100 proceeds to frequency domain equalization subprocess 1104. Frequency domain equalization subprocess 1104 begins at step 1126. In step 1126, process 1100 pads the channel impulse response to two times the FFT size of the OFDM transmission. In an exemplary embodiment, step 1126 limits the echo delay to no more than two times the FFT size of the OFDM.
In step 1128, process 1100 converts the channel response into the frequency domain. In step 1130, process 1100 generates overlapping FFT blocks. In an exemplary embodiment, step 1130 generates overlapping FFT blocks that are two times the FFT size of the OFDM transmission. In step 1132, the overlapping blocks are converted into the frequency domain. In step 1134, the channel response is equalized. In step 1136, the equalized channel response is converted back into the time domain. In step 1138, the overlapping block portions are discarded. In step 1140, the equalized time sequence is converted into the frequency domain. In step 1142, the resulting constellation is plotted and displayed. In step 1144, additional statistics are displayed with the resulting constellation, including without limitation the MER, which is calculated for each OFDM symbol. According to the example process of FIG. 11, a transmitter-receiver chain for a digital transmission system is successfully simulated to demonstrate implementation, for optimization purposes, of the several embodiments described above. FIG. 12 is a flow chart diagram of an alternative process 1200 for utilizing Zadoff Chu sequences according to the embodiments described above. Process 1200 begins at step 1202, in which the receiver (e.g., receiver 316 or 318, FIG. 3) captures a wireless signal (e.g., signal 800, FIG. 8) and stores captured time domain symbols into a memory (not shown in FIG. 3) of the receiver. In step 1204, process 1200 determines the frequency offset between the transmitter and receiver from the detected Zadoff Chu sequences (two Zadoff Chu sequences in the example depicted in FIG. 8), and cross-correlates the Zadoff Chu sequences to produce a peak, the phase between the real and imaginary components of which identifies the offset frequency. In step 1206, the captured symbols are de-rotated in the time domain to remove the frequency error from the captured symbols.
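The MER comparison at the heart of the simulation (steps 1122-1124 with no equalization, versus steps 1128-1144 after frequency domain equalization) can be sketched in miniature. This stdlib-only Python sketch simplifies the chain to a single 64-subcarrier OFDM symbol, an idealized circular echo channel with no noise, and a one-tap frequency domain equalizer; all parameters are illustrative assumptions, not the configuration values of process 1100.

```python
import cmath, math, random

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

N = 64
random.seed(3)
# step 1112: random QPSK subcarrier symbols, converted to the time domain
X = [complex(random.choice((-1, 1)), random.choice((-1, 1))) for _ in range(N)]
x = idft(X)

# steps 1114-1116: impulse response with a 1 at the zero-lag tap plus one echo
h = [0j] * N
h[0], h[5] = 1.0, 0.2
# step 1120, idealized: circular channel, no noise
rx = [sum(h[m] * x[(n - m) % N] for m in range(N)) for n in range(N)]

RX, H = dft(rx), dft(h)

def mer_db(est):
    # steps 1124/1144: modulation error ratio vs. the transmitted constellation
    sig = sum(abs(s) ** 2 for s in X)
    err = sum(abs(a - b) ** 2 for a, b in zip(est, X))
    return 10 * math.log10(sig / err)

mer_raw = mer_db(RX)                              # no equalization (step 1122)
mer_eq = mer_db([RX[k] / H[k] for k in range(N)]) # after FDE (steps 1128-1134)
```

With a 0.2-amplitude echo the unequalized MER lands near 14 dB (the echo contributes 4% error energy per subcarrier), while the equalized MER is limited only by numerical precision.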
That is, the received complex time domain samples are de-rotated to remove the frequency offset. In step 1208, the first of the two Zadoff Chu sequences is processed to determine the channel response. In at least one example of step 1208, process 1200 further interpolates the determined channel response in the frequency domain. In step 1210, process 1200 utilizes the timing of the cross-correlation peak of the Zadoff Chu sequences to identify, or locate, the boundaries of each of the 15 OFDM blocks (e.g., blocks 812, FIG. 8). In step 1212, overlapped compound blocks are formed (e.g., 15 in the example illustrated in FIG. 8). In at least one example of step 1212, the first and last blocks of the compound blocks include portions of the respective Zadoff Chu sequences, which may be later discarded as needed pseudo-extensions, according to the embodiments described above. In step 1214, the compound blocks (e.g., 15) are converted into the frequency domain by a Fourier transform. In step 1216, frequency domain equalization is performed on each of the 15 compound blocks, and then the FDE-equalized blocks are converted into the time domain by an inverse Fourier transform. In step 1218, overlapping portions of the equalized, compound time domain blocks are discarded to extract 15 narrow, equalized blocks in the time domain. In step 1220, the extracted narrow blocks are converted into the frequency domain by a Fourier transform, after which the symbols thereof may be sliced and read. Similar to the embodiments described above, upon completion of step 1220, process 1200 may return to step 1202, in which processing is repeated on the next captured burst, or on the next group of previously captured symbols, stored in the receiver memory. As also described above, in an optional embodiment, process 1200 may be performed out of sequence, and/or in near simultaneity, on two or more symbols stored in the memory.
According to the advantageous systems and methods herein, receiver-based processing techniques may be implemented such that the transmitter may eliminate cyclic prefixes from multi-carrier digital signal transmissions. These techniques are thus applicable for equalizing/de-ghosting not only OFDM and OFDMA signals, but also for a variety of other digital transmissions, including without limitation SC-FDMA, single-carrier transmissions, spread spectrum signals, MIMO, and wavelet-based signals. The techniques of the present embodiments are also particularly applicable to transmission systems that utilize pre-distortion, such as upstream DOCSIS 3.1. That is, in such transmission systems, the cyclic prefix may be entirely eliminated from the pre-distorted transmission, which would not generally be processed by the CMTS receiver. DOCSIS 3.0, for example, utilizes pre-distortion for single carrier transmissions, and DOCSIS 3.1 utilizes pre-distortion for OFDMA transmissions. In conventional examples of these transmission systems, the transmissions are pre-distorted at the cable modem (CM), and after passing through the linear distortion of an upstream cable network, arrive at a CMTS receiver fully equalized. In one illustrative example, a human eye lens pre-distorts an image such that the image will be in generally perfect focus on the eye retina. In further examples, pre-distortion coefficients are determined in a training process, referred to as “ranging,” using pilots. According to the embodiments herein, for OFDMA transmissions, the CP may be eliminated at the CM end, thereby providing up to an additional 25% more upstream throughput. Further to this example, no additional equalization processing would be necessary at the CMTS receiver, utilizing the techniques described herein. 
As described above, the several embodiments use different types of overlapped extensions as pseudo-prefixes/pseudo-suffixes, and these pseudo-extensions may include one or more of several other types of transmissions, such as training signals, pilots, signals with other modulation formats, quiet time, unused or too-short cyclic prefixes, or CAZAC functions/sequences. These pseudo-extensions are multi-functional, and may substitute for cyclic prefixes or other types of guard transmissions and pilot signals. Systems and methods according to the present embodiments represent further significant improvements over conventional transmission schemes by providing dynamically adaptive equalization schemes that allow for different transform sizes to be applied to different signals traveling along different signal paths of varying lengths. That is, the size of an FFT is adjustable for longer echo/reflection delays, as opposed to the shorter path of the direct signal. Similar advantages apply to the dynamically adjustable lengths of the pseudo-extensions as well. In the case of OFDMA transmissions where multiple transmitters, having different respective signal paths, contribute to a composite received signal, the frequency domain symbols of each transmitter may be equalized with frequency domain coefficients specifically configured to correct for the signal path of that respective transmitter. An example of relevant equalization processing is described below with respect to FIGS. 13A-13B. FIG. 13A is a flow chart diagram of an alternative process 1300 for operating a receiver, e.g., first receiver 316, FIG. 3, that receives approximately simultaneous signals (e.g., composite signal 320) from two different signal paths, e.g., first carrier signal 310(1) and second carrier signal 312(1). The respective signal paths of carrier signals 310, 312 may be different for a number of reasons, such as due to a reflector in one path (e.g., reflecting object 324) that is not in the other path.
In this example, first transmitter 302 and second transmitter 306 are OFDMA transmitters, and no cyclic prefixes are included in the respective carrier signals. FIG. 13B is a graphical illustration of alternative frequency plots 1302 utilizing an odd-and-even subcarrier scheme 1304 and an upper-and-lower frequency band scheme 1306, respectively. In the embodiments illustrated in FIGS. 13A and 13B, receiver 316 is pre-programmed to know which subcarriers (A and B, respectively) each OFDMA transmitter 302, 306 is using. In subcarrier scheme 1304, first transmitter 302 utilizes odd-numbered subcarriers A and second transmitter 306 utilizes even-numbered subcarriers B. In alternative subcarrier scheme 1306, first transmitter 302 utilizes the lower half of the OFDMA frequency band, and second transmitter 306 utilizes the upper half of the OFDMA frequency band. Referring back to FIGS. 3 and 13A, process 1300 begins at step 1308. In step 1308, receiver 316 forms a combined (e.g., from composite signal 320) compound overlapped block that includes subcarriers from at least two separate transmitters, that is, first transmitter 302 and second transmitter 306, in this example. In step 1310, the combined compound overlapped block is converted from the time domain into the frequency domain (e.g., by a Fourier transform or an FFT), and the respective frequency domain subcarriers A and B are separated according to which transmitter sent the respective subcarriers. The separated frequency domain subcarrier symbols are then processed similarly, but separately, as follows. In step 1312, FDE is applied to the symbols from both of the A and B subcarriers using equalization coefficients corresponding to each of the respective signal paths 310(1) and 312(1). In step 1314, the equalized A and B symbols are converted, separately, into the time domain (e.g., by an inverse Fourier transform or IFFT).
In an embodiment, for the A subcarriers, a value of zero may be inserted for all subcarriers from B, and similarly, for the B subcarriers, a value of zero may be inserted for all subcarriers from A. In step 1316, still in the time domain, the overlapped portions (pseudo-extensions) of the equalized and converted composite A and B blocks are discarded to create two separate narrow A and B blocks. Step 1318 is an optional step, which may be implemented in the case where the processed symbols are from an OFDMA transmission. In an example of step 1318, the narrow A and B blocks are transformed again into the frequency domain (e.g., by a Fourier transform or FFT) and the symbols are read. The techniques of FIGS. 13A-B are further advantageous with respect to SC-FDMA transmissions that do not utilize cyclic prefixes. In such cases, where the input signal is SC-FDMA, optional step 1318 would not be needed. The present embodiments are particularly useful in the case of multiple transmissions of different types. That is, for example, where first transmitter 302 implements an OFDMA transmission and second transmitter 306 implements an SC-FDMA transmission, receiver 316 is configured to receive a composite signal containing different A and B subcarriers, and then separately process the respective symbols thereof. In this example, optional step 1318 would be implemented to read the symbols of the narrow A blocks (OFDMA), but would not need to be implemented for the narrow B blocks (SC-FDMA). This could happen, for example, if the B blocks came from a battery-powered device, and the A blocks came from a device connected to the AC power supply, which was not therefore power-constrained. Thus, in the case of a composite signal having multiple transmissions, which may or may not individually include cyclic prefixes, the cyclic prefixes become irrelevant. The significant factor will be the duration of the reflections (e.g., the longest echo) among the multiple transmissions.
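The odd-and-even separation of scheme 1304 with per-path equalization coefficients (steps 1308-1312) can be sketched numerically. In this stdlib-only Python sketch, the two channel impulse responses, the 32-subcarrier size, and the circular-channel idealization are all illustrative assumptions; each transmitter's subcarriers are divided by the coefficients of that transmitter's own signal path.

```python
import cmath, random

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

N = 32
random.seed(5)
# transmitter 302 loads the odd subcarriers (A); transmitter 306 the even ones (B)
A = {k: complex(random.choice((-1, 1)), random.choice((-1, 1))) for k in range(1, N, 2)}
B = {k: complex(random.choice((-1, 1)), random.choice((-1, 1))) for k in range(0, N, 2)}

def tx_time(sym):
    # inverse transform of a sparse subcarrier loading
    return [sum(v * cmath.exp(2j * cmath.pi * k * n / N) for k, v in sym.items()) / N
            for n in range(N)]

def channel(x, h):
    # idealized circular channel for this sketch
    return [sum(h[m] * x[(n - m) % len(x)] for m in range(len(h))) for n in range(len(x))]

h1, h2 = [1.0, 0.3], [1.0, 0.0, -0.2]   # two distinct signal paths (illustrative)
rx = [a + b for a, b in zip(channel(tx_time(A), h1), channel(tx_time(B), h2))]

# step 1310: transform the composite block and separate A from B by subcarrier index
RX = dft(rx)
H1 = dft(h1 + [0.0] * (N - len(h1)))
H2 = dft(h2 + [0.0] * (N - len(h2)))

# step 1312: equalize each set with its own path's coefficients
A_est = {k: RX[k] / H1[k] for k in A}
B_est = {k: RX[k] / H2[k] for k in B}
```

Because each transmitter occupies disjoint subcarriers and a linear channel does not move energy between subcarriers, the composite block separates cleanly in the frequency domain, and each symbol set is recovered with its own path's coefficients.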
More importantly, where cyclic prefixes are implemented, the duration of a cyclic prefix for a particular transmission would not be expected to change for the particular carrier in which it is implemented. Echoes and reflections, on the other hand, are subject to change from path to path, and over time within a single path. According to the present embodiments though, a receiver can be configured to dynamically adjust, in real-time, the length of the overlap/pseudo-extensions according to the conditions that are actually encountered. U.S. Pat. No. 5,886,749 describes a system where repeated energy (an NTSC horizontal sync signal) is used to enable an overlapped transform with FDE. The present embodiments advantageously utilize a periodically repeated ZC sequence instead of the NTSC sync signal to more reliably enable an overlapped transform with FDE. Nevertheless, as described above, the present embodiments avoid the need for repeated (e.g., cyclic) energy. The present systems and methods advantageously utilize energy adjacent to the target block, such as an adjacent block itself, which does not affect the equalized target block because the adjacent signal portions are discarded after processing. Further advantages of the present embodiments become readily apparent in the case where one transmitter's signal arrives early or late relative to the other transmitter's signal. If there were a timing offset between the start of the A block and the B block, this timing offset would be automatically corrected by the FDE subprocess, because the time shift will appear as a rotation in the frequency domain correction coefficients. The present embodiments offer still further advantages with respect to the use of the present Zadoff Chu sequences as training signals. That is, equalization of conventional OFDM or OFDMA blocks is more effectively executed by substituting CAZAC functions/Zadoff Chu sequences for pilot subcarriers.
According to the present embodiments, even when the substitute (ZC) training signal is shorter in duration than the compound block that is intended to be transformed and equalized in the frequency domain, the frequency domain coefficients may nevertheless be interpolated in order to equalize the overlapped frequency domain extensions. In some embodiments, in the case of a series of OFDMA block transmissions, the present systems and methods advantageously allow for use of the cyclic prefix (or quiet time) on the first block of the first transmitter, so that the first transmitter is protected from a previous signal echo from another signal path. Successive blocks from this first transmitter may then advantageously omit the use of the cyclic prefix, thereby saving significant transmission time for substantive data. The present embodiments that eliminate cyclic prefixes from digital transmission signals are still further useful beyond simply reducing transmission time to improve system efficiency. Indeed, the present embodiments are particularly valuable for implementations where the length of echoes on a signal path renders the use of a cyclic prefix impractical, thereby limiting the types of transmission schemes that may be utilized on such signal paths. For example, in the case of a particular signal path having a duration of 50 μs for its longest echo, and for a maximum allowed overhead of 10%, the OFDM symbol period would have to be greater than 500 μs. A 500 μs symbol, however, would require an inordinately expensive precision local oscillator. Such precision local oscillators become even more costly at very high frequencies, such as millimeter wave frequencies (e.g., 60 GHz). According to the present systems and methods though, by eliminating the need for cyclic prefixes, OFDM transmission technology may be implemented on this exemplary signal path without requiring such costly hardware outlays.
Duobinary Modulation for OFDM Transmission

Duobinary modulation is a transmission scheme for transmitting N baud using a bandwidth of less than N/2 Hz. However, since the minimum bandwidth required of a transmitted pulse is N/2 Hz, adjacent duobinary pulses experience ISI. A data communication system that implements duobinary modulation includes a duobinary encoder, which implements the duobinary code from original symbols, and a duobinary decoder, which recovers the original symbols from the duobinary signal. The duobinary decoding process is prone to error propagation, since the estimate of a given sample relies on the estimate of the previous sample. One conventional technique to mitigate this error propagation implements a precoder before the duobinary encoder at the transmitter. However, when duobinary coding is applied in the frequency domain, conventional encoding that is designed for sequential time-domain duobinary signals is known to experience compromised efficiency. Accordingly, it is desirable to create an improved encoder design for use with frequency domain duobinary OFDM transmissions. In an exemplary embodiment of duobinary transmission, a data communication system has a total number N of OFDM subcarriers. Because of the duobinary modulation scheme, the number of independent subcarriers carrying original symbols will then be N−1. At the transmitter side of the communication system, the duobinary encoding can be expressed as:

y = Ax  (Eq. 1)

where x = (x_1, x_2, \ldots, x_{N-1})^T represents a vector including the N−1 original or precoded symbols, y = (y_1, y_2, \ldots, y_N)^T represents a vector including the N duobinary OFDM subcarriers, and A is an N×(N−1) matrix. Matrix A can be further expressed according to:

A = \begin{pmatrix} I_{N-1} \\ 0 \end{pmatrix} + \begin{pmatrix} 0 \\ I_{N-1} \end{pmatrix}  (Eq. 2)

where I_{N-1} constitutes an (N−1)×(N−1) identity matrix, and 0 denotes a 1×(N−1) row of zeros. In one example, where the total number of subcarriers is 4, three of the subcarriers are independent, and will contain QPSK symbols 1+j, 1−j, and −1+j.
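The encoding of (Eq. 1) and (Eq. 2) can be sketched directly in code; the helper name `duobinary_matrix` is illustrative only, and the input vector is the three-symbol QPSK example above.

```python
import numpy as np

def duobinary_matrix(N):
    # A is N x (N-1): I stacked over a zero row, plus a zero row over I, per (Eq. 2)
    A = np.zeros((N, N - 1))
    A[: N - 1, :] += np.eye(N - 1)
    A[1:, :] += np.eye(N - 1)
    return A

x = np.array([1 + 1j, 1 - 1j, -1 + 1j])   # the three independent QPSK symbols
y = duobinary_matrix(4) @ x               # y = Ax, per (Eq. 1)
print(y)  # → (1+j, 2, 0, −1+j)
```

Each interior output subcarrier is the sum of two adjacent input symbols, which is exactly the duobinary summing that later produces the reduced-sidelobe spectrum.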
Thus, the duobinary encoding that corresponds to (Eq. 1) can be expressed as:

y = Ax = \begin{pmatrix} 1 & 0 & 0 \\ 1 & 1 & 0 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 1+j \\ 1-j \\ -1+j \end{pmatrix} = \begin{pmatrix} 1+j \\ 2 \\ 0 \\ -1+j \end{pmatrix}  (Eq. 5)

FIG. 14 is a graphical illustration depicting a comparative overlay 1400 of an OFDM sinc function implementing a duobinary technique according to the present embodiments. In the example illustrated in FIG. 14, one subcarrier 1402 is shown, and represents a sinc function (sampling function) in the frequency domain. Implementing a duobinary operation on subcarrier 1402, a copy 1404 of subcarrier 1402 is obtained, but having a frequency shift of pi (π). A combination subcarrier 1406 is then obtained by combining subcarrier 1402 with copy 1404. Combination subcarrier 1406 can then be seen to have significantly reduced sidelobes, as compared with subcarrier 1402 and copy 1404, because the respective sidelobe values of subcarrier 1402 and copy 1404 substantially cancel each other out at all sidelobes. This example is particularly illustrative of how the present embodiments implement duobinary modulation on OFDM transmissions to significantly reduce the OOB leakage in comparison with a conventional OFDM transmission. The significant improvement realized by the present embodiments over the conventional techniques is further illustrated with respect to FIG. 15. Duobinary OFDM represents an innovative modulation technique that has a characteristic of low adjacent channel interference with respect to neighboring frequencies. As described further below with respect to FIGS. 17A-B, the energy of a conventional single symbol is spread between two adjacent symbols. In the time domain, this spread creates an OFDM symbol with a half-cosine envelope. In the frequency domain though, the spectrum is rectangular, with low OOB interference. In an exemplary embodiment, this duobinary OFDM symbol is transmitted without a cyclic prefix, and then demodulated with a CP-elimination receiver (described above).
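The sidelobe cancellation depicted in FIG. 14 can be checked numerically. This brief sketch (illustrative only) models each subcarrier as a frequency-domain sinc and sums it with a copy offset by one subcarrier spacing; away from the main lobes, the sidelobe decay improves from roughly 1/f to 1/f².

```python
import numpy as np

f = np.linspace(-8, 8, 1601)     # frequency axis in units of subcarrier spacing
s1 = np.sinc(f)                  # one subcarrier (sinc sampling function)
s2 = np.sinc(f - 1.0)            # duobinary copy, shifted by one spacing
comb = s1 + s2                   # combination subcarrier

# sidelobe energy well away from the two main lobes is reduced by the combination
mask = (f < -2) | (f > 3)
print(np.sum(comb[mask] ** 2) < np.sum(s1[mask] ** 2))  # → True
```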
In an alternative embodiment, the duobinary OFDM symbol may be demodulated by a conventional OFDM receiver, and a CP can be optionally added. In the case where a CP is added to the duobinary OFDM symbol, the time domain waveform envelope takes on the shape of a "fish" (described further below with respect to FIGS. 17A-B), and some of the OOB performance will then be compromised by the abrupt drop of the "tail" of the fish. Duobinary processing is conventionally done with a filter having an impulse response extending over two symbols. In at least one embodiment, OOB leakage of the OFDM transmission may be reduced utilizing raised cosine tapering in the time domain; however, this alternative method will result in an increased transmission time. Raised cosine tapering is used, for example, in the DOCSIS 3.1 modulation standard. FIG. 15 is a graphical illustration depicting a comparative overlay 1500 of a duobinary OFDM block transmission 1502 with a conventional OFDM block transmission 1504. As can be seen from overlay 1500, an amount of adjacent channel interference 1506 (dark shaded area) from duobinary OFDM block transmission 1502 is significantly reduced with respect to an amount of adjacent channel interference 1508 (light shaded area) of conventional OFDM. The present duobinary techniques thus provide a dramatic improvement with respect to conventional OFDM transmissions. In the example shown in FIG. 15, the vertical scale per division is 10 dB. In some embodiments, the crest factor (see FIG. 17A, below) may be reduced by interleaving the upper and lower sidebands in the time domain. That is, while the lower sideband is going to zero, the upper sideband is cresting, and vice versa. Additionally, the principles described herein are not limited to only the example shown, but may also be advantageously implemented for modulation formats without uniform subcarrier amplitudes, including but not limited to 64-QAM.
Furthermore, the duobinary modulation techniques of the present embodiments represent only one example of the new modulation method that is created by interchanging the time and frequency axes of signal duals. In an exemplary embodiment, adaptation techniques from single carrier to multicarrier utilize a 90 degree rotation in a time-frequency plot. A duality between time and frequency exists, which can be observed from the discrete Fourier transform and discrete inverse Fourier transform equations (see e.g., Eqs. 12 and 13, below). Differences between such equations typically involve only a scale factor and a negative sign in front of the complex exponential. Substantively, the equations are quite similar. Given a set of transform pairs, it is often difficult to identify which graph of a plotted signal is a time domain plot, and which is a frequency domain plot. For example, with a single carrier signal, such as pulse amplitude modulation (PAM), each symbol is considered to be short in time, but wide in bandwidth, with the next symbol occurring sequentially in time. An OFDM subcarrier, on the other hand, is considered to be narrow in frequency, but long in duration. Additionally, many OFDM subcarriers operate simultaneously in time. An illustrative comparison of these two exemplary signal types is shown below with respect to FIG. 16. FIG. 16 is a graphical illustration depicting a block time-frequency plot 1600. In this example, plot 1600 is illustrated as a 32×32 block of 32 PAM single carrier symbols 1602 (a single PAM symbol illustrated for ease of explanation) extending in the time direction (vertical) and 32 multicarrier OFDM symbols 1604 (a single OFDM symbol also illustrated). That is, all 32 PAM symbols 1602 are time domain symbols in the row direction, and can be transmitted in time over the duration of a single OFDM symbol 1604.
Similarly, all 32 OFDM symbols 1604 are frequency domain symbols in the column direction, and can fit within the bandwidth of a single PAM symbol 1602. As can therefore be seen from FIG. 16, a 90 degree rotation of plot 1600 (visually, about the plot "center point") demonstrates the present modulation technique that effectively turns the PAM transmission into an OFDM transmission, and vice versa. By this modulation technique, a 32×1 symbol can be mathematically rotated to become a 1×32 symbol, and vice versa. This principle may be expanded, for example, to rotate a plurality of time division multiple access (TDMA) single carrier sequential transmissions, from a plurality of transmitters, to effectively obtain a plurality of OFDMA simultaneous transmissions from the plurality of transmitters. Furthermore, although OFDM (without time domain tapering, e.g., raised cosine) exhibits OOB energy splatter, this characteristic is analogous to the sin(x)/x response (in time) of PAM signals if the channel rolloff factor (alpha) is small, or zero (the "brick wall"). Therefore, according to this embodiment, an OFDM transmission utilizing time domain tapering will perform an operation analogous to a PAM transmission utilizing a roll-off factor. This rotational illustration of the present modulation techniques demonstrates still further advantages that may be realized over conventional duobinary transmissions, as illustrated below with respect to FIGS. 17A-B. FIG. 17A is a graphical illustration depicting a time domain plot 1700 of a duobinary block transmission, and FIG. 17B is a graphical illustration depicting a frequency domain plot 1702 of the duobinary block transmission depicted in FIG. 17A. As illustrated in FIG. 17A, time domain plot 1700 generally includes an envelope shaped like a half-cosine. When time domain plot 1700 is rotated 90 degrees, a "conventional" duobinary result is produced, and the time axis can then be relabeled as "frequency."
This rotational modulation technique, when performed on a conventional duobinary transmission, is referred to herein as "FD duobinary" or "duobinary OFDM." As illustrated in FIG. 17B, frequency domain plot 1702 is substantially flat, and exhibits an abrupt drop of OOB energy, which is desirable to reduce interference with the neighboring channels. In some embodiments, the present FD duobinary modulation techniques may be implemented with respect to an OFDM transmission utilizing cyclic prefixes. An optional cyclic prefix 1704 is illustrated in FIG. 17A, and represents the front portion of the envelope of time domain plot 1700 being utilized as a CP. When cyclic prefix 1704 is so utilized, the sine-shaped envelope of the time domain duobinary signal resembles the shape of a fish, with cyclic prefix 1704 observable as the "tail" of the "fish." As described further below, the use of cyclic prefixes with the present FD duobinary modulation techniques is optional, and unnecessary when implementing a "no-CP receiver" according to the embodiments described above. When a cyclic prefix is not so implemented, the "fishtail" of time domain plot 1700 disappears, but the frequency domain plot 1702 improves, as the abrupt drop of the fishtail causes some OOB leakage. As can be seen with respect to frequency domain plot 1702, three different power levels are visible: (i) the peak subcarrier level (flat portion); (ii) an intermediate subcarrier level (drop off portions); and (iii) a zero-power subcarrier level (where the energy drops to the origin). In conventional time domain duobinary implementations, the occupied bandwidth of a QPSK signal, relative to partial response signaling (PRS, or 9-PRS), is greater according to the channel roll-off factor, alpha. For DOCSIS single carrier modulations, this bandwidth increase is approximately 5% greater for the downstream transmission and approximately 25% greater for the upstream transmission.
According to the present FD duobinary techniques though, if the number of subcarriers is the same, with equal subcarrier spacing, the occupied bandwidth will remain the same, and not experience this increase. This bandwidth advantage occurs as a result of the FD duobinary techniques producing the more abrupt drop of OOB energy (see e.g., FIG. 15), which thereby enables closer carrier spacing. The FD duobinary modulation techniques of the present embodiments demonstrate a time-frequency swapping technique that advantageously relates OFDM and single carrier modulation transmissions as duals of one another, after the respective time and frequency axes are rotated and "relabeled." This time-frequency swapping technique allows new modulation candidates to be created, such as duobinary (e.g., PRS) OFDM, which has valuable properties for the cable plant. Conventional modulation techniques are used at carrier frequencies to send digital data over a distance, either by wires, wireless, or optically. Three known modulation techniques include single carrier, multi-carrier, and code division multiple access (CDMA). All three techniques have been used on cable networks at one time or another. The orthogonality between signals is a property that allows one signal, which includes a plurality of symbols, to be clearly received without interference from the symbols of another signal. This orthogonality can be expressed, for example, according to the following equation:

\sum x(n) \cdot y(n) = 0  (Eq. 7)

where the variables x and y are orthogonal over a range if the sum of all x·y products is equal to zero over a range where y ≠ x. Where the signals represent complex numbers, for example, (Eq. 7) would consider the sum of all x·conj(y) products. Different modulation techniques are known to achieve orthogonality by other means or calculations.
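The conjugate form of (Eq. 7) can be verified for any two distinct harmonics of the same fundamental; a brief numpy sketch:

```python
import numpy as np

N = 32
n = np.arange(N)
x = np.exp(2j * np.pi * 2 * n / N)   # 2nd harmonic subcarrier
y = np.exp(2j * np.pi * 5 * n / N)   # 5th harmonic subcarrier

# complex form of (Eq. 7): the sum of x * conj(y) over one block vanishes
print(abs(np.sum(x * np.conj(y))))  # → 0 (to within floating-point error)
```

The same check with y = x instead yields N, which is why the receiver's correlation (or FFT) isolates each subcarrier without interference from the others.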
A comparative example of impulse and resulting spectral responses is illustrated below with respect to FIGS. 18A-D, for a single carrier QPSK transmission and a duobinary 9-PRS transmission. FIG. 18A is a graphical illustration depicting an impulse response 1800 of the single carrier transmission. FIG. 18B is a graphical illustration depicting a spectral response 1802 of the single carrier transmission depicted in FIG. 18A. FIG. 18C is a graphical illustration depicting an impulse response 1804 of a conventional duobinary transmission (e.g., an impulse response of 1.0 for two symbols, in this example). FIG. 18D is a graphical illustration depicting a spectral response 1806 of the duobinary transmission depicted in FIG. 18C. As illustrated in FIG. 18A, a basic modulation technique, such as BPSK, can be created by connecting a periodic series of positive or negative impulses to a lowpass filter (not shown) having a sin(x)/x impulse response (see also FIG. 14, above). A raised cosine frequency response on the modulated signal may then be produced therefrom, as represented by spectral response 1802 of FIG. 18B. The abruptness of the frequency domain roll-off, as illustrated in FIG. 18B, represents the "alpha" factor, and is affected by damping applied to the sin(x)/x waveform. In contrast, duobinary modulation employs a different impulse response, as illustrated by duobinary impulse response 1804 of FIG. 18C. In comparison with impulse response 1800 of the single carrier transmission (FIG. 18A), the duobinary impulse response can be seen to last over two symbol periods, as opposed to one symbol period, as illustrated in FIG. 18C. Moreover, the frequency domain of the resultant spectral response 1806, FIG. 18D, has a cosine shape, as opposed to the raised cosine shape of spectral response 1802 of the single carrier transmission.
FIG. 19A is a graphical illustration of a constellation 1900 depicting relative power calculations for error thresholds of the QPSK single carrier transmission depicted in FIGS. 18A-B, and FIG. 19B is a graphical illustration of a constellation 1902 depicting relative power calculations for error thresholds of the 9-PRS duobinary transmission depicted in FIGS. 18C-D. The 9-PRS signal may be produced, for example, by passing a two-level complex (i.e., I and Q) signal through a duobinary filter (not shown in FIG. 19B). As illustrated in FIG. 19A, the QPSK signal has four equally probable states, whereas, as illustrated in FIG. 19B, the 9-PRS signal has a single state (middle of plot) having a probability of 0.25, four high-power corner states with a combined probability of 0.25 (i.e., 0.0625 each), and four intermediate power levels between the corners having a combined probability of 0.5 (i.e., 0.125 each). Thus, if the voltage difference between points A and B on constellation 1902 is assumed to be 1.0 V, the power of the 9-PRS constellation will be 0.25×0 + 0.5×1.0² + 0.25×1.414² = 1 watt. In comparison, if the voltage difference on the QPSK constellation 1900 is set to be 0.707 V between points C and D, the QPSK power will also be 1 watt. Accordingly, a noise vector that would be required to make a slicing error on the QPSK signal will be 0.707 V, whereas 0.5 V would be required for the 9-PRS signal, resulting in a difference of 3 dB between the two constellations. This difference is realized for the duobinary transmission over the QPSK signal by taking advantage of the fact that not all states are equally probable for the duobinary signal, despite the fact that both signals have the same RF power. FIG. 20 is a graphical illustration depicting a diagram 2000 of single carrier voltage versus time. Diagram 2000 is comparable to overlay 1400, FIG. 14, and impulse response 1800, FIG. 18A.
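The state probabilities and the 1-watt power figure above can be reproduced by enumerating the duobinary sums of equiprobable QPSK inputs. This is a sketch under the stated assumptions; per-axis levels of ±0.5 are chosen so adjacent constellation points are 1.0 V apart.

```python
import numpy as np
from itertools import product

levels = (-0.5, 0.5)  # per-axis QPSK levels before the duobinary filter

# duobinary filter output: the sum of two successive inputs, per axis (I and Q)
states = [(a + b) + 1j * (c + d)
          for a, b, c, d in product(levels, repeat=4)]

power = np.mean([abs(s) ** 2 for s in states])                   # average RF power
center = sum(1 for s in states if s == 0) / len(states)          # zero-power state
corners = sum(1 for s in states if abs(s.real) == 1
              and abs(s.imag) == 1) / len(states)                # high-power corners

print(round(power, 6), center, corners)  # → 1.0 0.25 0.25
```

The remaining probability of 0.5 falls on the four intermediate states, matching the 9-PRS distribution recited for FIG. 19B.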
In cable systems, single carrier modulation, e.g., 64-QAM and 256-QAM, has been used extensively on downstream signal paths, and upstream signal paths have utilized advanced time division multiple access (ATDMA), which is essentially a burst-mode single carrier transmission technique. Single carrier modulation includes a time series of voltage impulses (symbols) that have been filtered to limit interference with other frequency bands. Diagram 2000 thus illustrates five different sin(x)/x impulses with uniform time shifts. That is, five different symbols represented on diagram 2000 have a same value, but are shifted in time with respect to one another. In operation, the symbols of diagram 2000 may have any positive or negative values, and/or have real-only values or include complex values. The vertical lines appearing in diagram 2000 represent five separate sampling instants. The five separate waveforms are not considered to interfere with each other because, at each sampling instant, a particular symbol reaches its peak value (1), while the other symbols are passing through zero. Accordingly, orthogonality is maintained throughout diagram 2000. In further operation, the system represented by diagram 2000 may be optimized to remove linear distortions, such as echoes, prior to sampling (e.g., using an adaptive equalizer). Without such optimization, the responses from the other respective symbols may not be zero at the particular sampling instant. In such instances, the non-zero symbols may contribute distortion energy to the selected symbol. FIG. 21 is a graphical illustration depicting a timing diagram 2100 for a spread spectrum signal. Direct sequence spread spectrum (DSSS) technology has been used in military applications to hide or mask communications and radar signals by making the signals appear noise-like.
A related technology, known as synchronous code division multiple access (S-CDMA), has been used in cable transmission systems to provide upstream noise immunity, and for multiple access implementations. In an exemplary embodiment, timing diagram 2100 includes a low speed data input 2102, a pseudo-noise (PN) sequence 2104, an output 2106, and a chip rate 2108. In the example illustrated in FIG. 21, multiple orthogonal codes are assigned to one or more users, and simultaneous transmissions may thus occur on different codes without interference. In an exemplary embodiment, the DSSS/S-CDMA technique of timing diagram 2100 further utilizes equalized signals to prevent loss of orthogonality between codes. In operation, low speed data input 2102 is clocked against high-speed PN sequence 2104 (e.g., by use of an exclusive-OR gate, shown below with respect to FIG. 22) to produce output 2106, which appears noise-like. Further operation of timing diagram 2100 is explained further below with respect to FIG. 22. FIG. 22 is a schematic illustration depicting an exemplary block diagram of a system 2200 for direct sequence spread transmission and reception. System 2200 includes a transmitter portion 2202 and a receiver portion 2204, and is configured to implement operation of timing diagram 2100, FIG. 21. In operation of system 2200, a random PN sequence (i.e., PN sequence 2104) is created using a plurality of cascading shift registers 2206 and an exclusive-OR gate 2208. The clocking rate of the several components represents chip rate 2108. A signal to be transmitted, i.e., output 2106, is generated by a data source inverting or not inverting, i.e., through implementation of exclusive-OR gate 2208, the pseudo-random output (PN sequence 2104) of shift registers 2206. At receiver portion 2204, a PN generator 2210 synchronizes with transmitter portion 2202, and also uses the same code (PN sequence 2104) to reproduce the signal of low speed data input 2102.
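The XOR spreading and despreading of FIGS. 21-22 can be sketched as follows. This is an illustrative model only: the 4-bit register taps and seed are hypothetical, and the receiver here despreads simply by comparing each received chip segment against the locally generated PN sequence.

```python
def lfsr_pn(taps, state, n_chips):
    # Fibonacci LFSR: cascading shift registers with exclusive-OR feedback
    chips = []
    for _ in range(n_chips):
        chips.append(state[-1])
        feedback = 0
        for t in taps:
            feedback ^= state[t]
        state = [feedback] + state[:-1]
    return chips

pn = lfsr_pn(taps=[0, 3], state=[1, 0, 0, 1], n_chips=15)  # 15-chip PN sequence
data = [0, 1, 1, 0]                                        # low speed data input
tx = [bit ^ chip for bit in data for chip in pn]           # spread: XOR at the chip rate

# receiver: a synchronized PN generator reproduces the low speed data
rx_bits = [0 if tx[i * 15:(i + 1) * 15] == pn else 1 for i in range(len(data))]
print(rx_bits)  # → [0, 1, 1, 0]
```

A 0 bit transmits the PN sequence unchanged and a 1 bit transmits its complement, so the noise-like output carries the data at one bit per 15 chips.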
Where system 2200 includes S-CDMA DOCSIS functionality, each circular time shift by shift registers 2206 (excluding the initial chip, which would not be shifted) produces another basis function that is orthogonal to all other shifts. In at least one embodiment, system 2200 further implements a pre-coder (not shown in FIG. 22), as described above, to eliminate error propagation at receiver portion 2204, as well as the FD duobinary OFDM techniques also described above. In some embodiments, receiver portion 2204 is configured to consider only one received symbol at a time. Referring back to FIGS. 2A-B, OFDM is used in DOCSIS 3.1 technology, as well as in many wireless standards. Some OFDM implementations obtain orthogonality through further utilization of different subcarriers that are all harmonics of the same fundamental. FIG. 2A is an illustrative example of an OFDM waveform with only four such subcarriers. Each of the harmonically-related subcarriers depicted in the illustration has a different magnitude and phase value from each other. When all four subcarriers are combined (i.e., summed) for transmission, the result is a single composite signal. However, orthogonality allows the original subcarriers to be separated at the receiver, typically using an FFT. Accordingly, FIG. 2B is an illustrative example of the same OFDM signal of FIG. 2A, but shown in the frequency domain. FIG. 23 is a graphical illustration depicting a plot 2300 of a received OFDM signal in the frequency domain. Specifically, plot 2300 represents a spectral plot of an OFDM signal that is affected by a deep frequency-selective fade. This fading phenomenon occurs frequently in wireless channels where a sum of echo components cancels the signal entirely, or at least at some subcarrier frequencies (e.g., the second harmonic carrier HC2, in the example illustrated in FIG. 23).
Accordingly, the techniques and principles of the present embodiments are particularly applicable to such OFDM environments, where faded subcarriers that are essentially lost in the noise floor may be recovered using forward error correction (FEC) operations. FIG. 24 is a graphical illustration of a plot 2400 depicting time and frequency relationships between common transmission impairments. Plot 2400 is particularly illustrative when considered together with time-frequency plot 1600, FIG. 16, above. Plot 2400 illustrates the relative effects from common transmission impairments in both the time domain and the frequency domain. As illustrated in FIG. 24, plot 2400 enables the visual understanding of the various effects from the different impairments on the different types of modulation. For example, plot 2400 illustrates how random thermal, or Gaussian, noise is present at all frequencies and all times, and thus there is no present modulation technique that has a relative advantage to address additive white Gaussian noise (AWGN). The maximum data capacity in a channel having AWGN can be determined, for example, according to the Shannon-Hartley Theorem. If the noise is not white noise (i.e., the spectrum of the noise is not flat), the maximum capacity of the channel may be determined according to the "water-pour" method of transmit power distribution. In cable systems, a non-flat SNR may result from cable loss varying with frequency, as well as from nonlinear distortion products, which are random noise-like if the distortion was created by digital carriers. Plot 2400 further illustrates that burst noise, on the other hand, occurs locally in time, but often has a wide spectrum. Single carrier modulation, with FEC, may therefore be effective to address burst noise in order to correct corrupted time domain symbols.
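Why burst noise is wideband can be seen from a single corrupted time-domain sample: its FFT places equal magnitude in every frequency bin. A brief illustrative sketch:

```python
import numpy as np

N = 64
burst = np.zeros(N)
burst[10] = 1.0                 # one corrupted time-domain sample (an impulse)
spectrum = np.fft.fft(burst)

# the burst's energy appears at equal magnitude in every frequency bin
print(np.allclose(np.abs(spectrum), 1.0))  # → True
```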
However, it should be noted that a typical OFDM receiver will perform an FFT on the corrupted sequence, and thus spread the burst noise contamination to all related frequency domain symbols. In contrast, plot 2400 illustrates that a continuous wave (CW) interference source may be continuous in time, but relatively localized in frequency. OFDM modulation techniques having FEC might be utilized to repair such localized damage to a limited number of subcarriers. However, use of single carrier modulation with a CW interferer will affect all symbols, thereby turning a constellation point (see e.g., FIGS. 19A-B) into a "donut" shape. Plot 2400 still further illustrates that a deep frequency-selective fade may be addressed in a similar manner as with CW interference, namely, by OFDM modulation with FEC. There are other important considerations for selecting a modulation technique for a particular RF signal path, such as (i) tolerance to frequency offsets and tolerance to phase noise, both of which increase the cost of local oscillators, and (ii) peak-to-average power ratio, which makes transmitters consume more power, thereby decreasing battery life. Additionally, some receiver designers are known to further implement a number of design tricks, sometimes referred to as "secret sauces," to mitigate the effects of impairments, such as noise cancelers. In consideration of the embodiments described above, a signal E for transmission includes a plurality of individual component symbols e_n, and may be represented as follows:

E = [e_1, e_2, e_3, e_4, \ldots, e_j]  (Eq. 8)

For this signal E, a modulation matrix C may be formulated. The modulation matrix C includes rows and columns, and is optimally configured such that the respective rows are orthogonal to one another.
Accordingly, in this mathematical representation, each of the three modulation techniques described above can be seen as essentially representing merely a different set of row functions, which are also referred to as orthogonal basis functions. For single carrier modulation, the modulation matrix C may constitute simply an identity matrix having a single diagonal row of 1s and 0s elsewhere (see e.g., Eq. 5, above). For a DSSS signal modulation, the respective rows may be provided from a Walsh matrix, implementing a single circular shift between rows. The modulation matrix C may also include complex components, which may be represented as follows:

C = \begin{pmatrix} c(1,1) & c(1,2) & \cdots & c(1,k) \\ c(2,1) & c(2,2) & \cdots & c(2,k) \\ c(3,1) & c(3,2) & \cdots & c(3,k) \\ \vdots & \vdots & \ddots & \vdots \\ c(j,1) & c(j,2) & \cdots & c(j,k) \end{pmatrix}  (Eq. 9)

The general principle of orthogonality between rows may then be restated, for all rows where x ≠ y, according to:

\sum_{n=1}^{k} c(x,n) \cdot c(y,n) = 0  (Eq. 10)

Accordingly, the unmodulated signal E may be transmitted as a modulated signal F by multiplying the input sequence of the unmodulated signal E by the modulation matrix C, represented by:

F = E \cdot C = [f_1, f_2, f_3, f_4, \ldots, f_j]  (Eq. 11)

With respect to OFDM modulation, the respective rows of the modulation matrix C may also represent complex exponentials (e.g., sine and cosine waves), where the first row would represent, for example, the first harmonic, the second row would represent the second harmonic, etc., as illustrated below with respect to FIG. 25. FIG. 25 is a graphical illustration depicting an OFDM modulation matrix 2500. In the example illustrated in FIG. 25, modulation matrix 2500 is an eight-row matrix of sines and cosines for an OFDM modulation, where the cosine waves are depicted as solid lines, and the sine waves are depicted as dashed lines. Further in this example, row X[1] is depicted to form an upper sideband, and row X[7] is depicted to form a matching lower sideband.
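The row-orthogonality of (Eq. 10) and the modulation of (Eq. 11) can be sketched with a Walsh-Hadamard matrix built by the Sylvester construction; this is an illustrative example, and the 8-symbol vector is arbitrary.

```python
import numpy as np

def walsh(n):
    # Sylvester construction of an n x n Walsh-Hadamard matrix (n a power of 2)
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

C = walsh(8)
print(np.allclose(C @ C.T, 8 * np.eye(8)))      # rows orthogonal, per (Eq. 10) → True

E = np.array([1, -1, 1, 1, -1, 1, -1, -1.0])    # symbols e1..e8
F = E @ C                                       # modulated signal, per (Eq. 11)
print(np.allclose(F @ C.T / 8, E))              # receiver recovers E → True
```

Swapping the Walsh rows for complex exponential rows turns the same matrix product into an (I)DFT, which is the OFDM case described above.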
The innovative principles of the embodiments described above advantageously coordinate the time and frequency relationships between transmitted symbols to effectively render single carrier modulation and multicarrier OFDM modulation as separate aspects of the same modulation technique, wherein the separate aspects are distinguished by a 90-degree rotation in the time-frequency plot. The duality between time and frequency according to the present embodiments can be further observed in consideration of the following discrete Fourier transform (DFT) and inverse discrete Fourier transform (IDFT) equations:

f[k] = \frac{1}{N} \sum_{n=0}^{N-1} F[n] e^{+j\frac{2\pi}{N}nk}  (Eq. 12)

F[n] = \sum_{k=0}^{N-1} f[k] e^{-j\frac{2\pi}{N}nk}  (Eq. 13)

That is, the essential differences between these two equations are the scale factor and the polarity of the sign in front of the complex exponential. The equations are otherwise very similar. Observing only a plotted set of transform pairs, for example, it would be difficult to identify which plot represents time and which represents frequency, as illustrated above in time-frequency plot 1600, FIG. 16. Without knowing the rotational frame of reference, that is, which axis represents time and which axis represents frequency, the PAM transmission is essentially indistinguishable from the OFDM transmission. In some cases, non-orthogonal signals are utilized for communication transmissions. For example, a non-orthogonal spread-spectrum signal may operate in the presence of other signal types by taking advantage of spreading gain. In such instances, some level of interference will be experienced with the non-orthogonal signal; however, such interference is optimally kept within tolerable levels. According to the rotational principles of the modulation techniques described above, the same symbol may be observed as either a 32×1 symbol or as a 1×32 symbol, depending on the rotational frame of reference.
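The near-symmetry of (Eq. 12) and (Eq. 13) can be checked with numpy's FFT pair, whose conventions match the equations, including the 1/N factor on the inverse:

```python
import numpy as np

rng = np.random.default_rng(0)
F = rng.normal(size=8) + 1j * rng.normal(size=8)   # frequency domain symbols

f = np.fft.ifft(F)                       # (Eq. 12): IDFT, with the 1/N factor
print(np.allclose(np.fft.fft(f), F))     # (Eq. 13) recovers F → True

# the duality: an IDFT is a conjugated DFT, up to the 1/N scale factor
print(np.allclose(f, np.conj(np.fft.fft(np.conj(F))) / 8))  # → True
```

The second check makes the textual point concrete: apart from conjugation (the sign of the exponent) and scale, the forward and inverse transforms are the same operation.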
It should be noted, though, that dispersion, or other linear distortion, occurs along the time axis, but not along the frequency axis. Dispersion along the frequency axis would instead indicate non-linear distortion. Thus, according to the embodiments described herein, performing a 90-degree rotation on a particular sequence is effectively the same as performing an FFT on that sequence. Similarly, performance of a −90-degree rotation on the particular sequence effectively accomplishes an IFFT on that sequence. Thus, the rotational operations of the present embodiments may be implemented using FFTs and IFFTs (or DFTs and IDFTs, respectively). Referring back to FIGS. 17A-B, the desirable characteristic of a gentle rise and fall in transmit power level is a result of duobinary modulation summing each subcarrier with an adjacent subcarrier, where the adjacent subcarrier has a same magnitude component. The duobinary OFDM modulation techniques of the present embodiments are of particular advantage with respect to very narrow bandwidth OFDM transmissions having a relatively small number of subcarriers, e.g., Internet of Things (IoT), ham radio operation, etc. The techniques described herein effectively resolve the spectral splatter experienced in such environments, which causes adjacent channel interference. The present embodiments are also of particular use in signaling operations that utilize a small number of bits in a narrow bandwidth, such as "acks" or acknowledgements. The present embodiments may also be implemented with respect to conventional pre-coding techniques (see e.g., "Digital Telephony," 3d Ed., by John Bellamy), which allow a symbol to be decoded without referencing a preceding symbol (e.g., correlative level encoding). The duobinary OFDM techniques herein are described, by way of example, for 2-level I and Q signals (QPSK), which are illustrated to be converted into a 3×3 constellation.
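The constellation counts follow directly from the 2-tap duobinary sum applied per axis: 2-level I and Q yield a 3×3 (9-point) constellation, and 4-level I and Q (16-QAM) yield 7 levels per axis, hence 49 points. A minimal counting sketch (the helper function and level sets are illustrative, not from the embodiments):

```python
import itertools

def duobinary_constellation(levels):
    """Constellation produced when the 2-tap duobinary sum (current symbol
    plus previous symbol) is applied independently to the I and Q axes."""
    axis = sorted({a + b for a, b in itertools.product(levels, repeat=2)})
    return {complex(i, q) for i in axis for q in axis}

# 2-level I and Q (QPSK): 3 levels per axis -> a 3x3, 9-point constellation
assert len(duobinary_constellation([-1, 1])) == 9

# 4-level I and Q (16-QAM): 7 levels per axis -> a 7x7, 49-point constellation
assert len(duobinary_constellation([-3, -1, 1, 3])) == 49
```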
The person of ordinary skill in the art, though, will understand that these examples are provided for illustration purposes, and are not intended to be limiting. According to the principles described herein, the present embodiments may be implemented with respect to other modulation orders. A 16-QAM modulation order, for example, when duobinary-filtered, will create a 49-point constellation. The duobinary modulation techniques of this description may also utilize impulse responses other than those with 2 adjacent symbols. For example, an impulse response according to the present embodiments may include more than 2 symbols. In other instances, where two symbols are utilized, the 2 symbols need not be on adjacent subcarriers. As described above, time-frequency duals are conventionally known, but the modulation techniques thereof are conventionally considered to be separate from one another. That is, OFDM is considered to be the dual of the single carrier, and OFDMA is considered to be the dual of single carrier TDMA, but the conventional modulation techniques therefor are considerably different from one another.

Windowed OFDM to Reduce ISI and OOB Emissions

As described above, a raised cosine function may be produced on the spectral response of some of the embodiments. In an exemplary embodiment, an alternative process applies a time domain window function, such as a Hamming window, to reduce ISI and OOB emissions. FIG. 26A is a graphical illustration depicting a Hamming window function 2600 in the time domain. FIG. 26B is a graphical illustration depicting an inverse Hamming function 2602 in the time domain. FIG. 26C is a graphical illustration depicting an implementation of a Hamming window on a time domain waveform 2604. In the exemplary embodiment, the time domain window function utilizes Hamming window functions according to the examples shown in FIGS. 26A-C.
In operation, the time domain window function is applied to an OFDM or OFDMA waveform, and the windowed waveform therefrom is then transmitted. The implementation of the window function will create some ISI, but reduce the OOB emissions. However, the window function may be selected to cancel the created ISI in the frequency domain with a frequency domain convolution, as described below with respect to FIGS. 27A-B. Implementation of the windowing/window function operation may be performed at either or both of the transmitter and the receiver of the system. FIG. 27A is a graphical illustration depicting a Hamming impulse response 2700 in the frequency domain. FIG. 27B is a graphical illustration depicting an inverse convolution impulse response 2702 in the frequency domain. In operation, by reversing the Hamming window (i.e., inverting in the time domain, for example, by inverse Hamming function 2602, FIG. 26B), any noise-plus-interference that occurs may be amplified where the transmitted signal is weak. In an exemplary embodiment, this problem may be alleviated by implementing a maximum likelihood estimation for the frequency domain symbols, based upon the knowledge of what the original Hamming impulse response (e.g., impulse response 2700) looks like, as illustrated below with respect to FIGS. 28A-B. In this exemplary embodiment, ISI created utilizing Hamming impulse response 2700 may be corrected through frequency domain convolution with an inverse filter H(f), representing inverse convolution impulse response 2702. FIG. 28A is a graphical illustration depicting an unequalized constellation 2800 on which a Hamming window (e.g., Hamming window function 2600, FIG. 26A) has been implemented. FIG. 28B is a graphical illustration depicting an equalized constellation 2802 on which a Hamming window has been implemented.
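The windowing-plus-correction principle can be sketched numerically. For a periodic Hamming window, the frequency-domain impulse response (cf. impulse response 2700) has exactly three taps, 0.54 at k=0 and −0.23 at k=±1, so both the ISI created by the window and its exact cancellation can be verified. The block size, QPSK loading, and the time-domain form of the inverse correction used here are illustrative assumptions, not the claimed implementation:

```python
import numpy as np

N = 64
rng = np.random.default_rng(1)
# QPSK frequency-domain symbols for one OFDM block (assumed example)
X = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)
x = np.fft.ifft(X)                       # time-domain OFDM waveform

# Periodic Hamming window applied in the time domain
n = np.arange(N)
w = 0.54 - 0.46 * np.cos(2 * np.pi * n / N)
y = x * w                                # windowed (transmitted) waveform

# Time-domain multiplication is circular convolution in the frequency
# domain; the window's spectrum has only three nonzero taps, which is
# exactly the ISI the window creates between adjacent subcarriers.
Y = np.fft.fft(y)
assert np.allclose(Y, 0.54 * X - 0.23 * np.roll(X, 1) - 0.23 * np.roll(X, -1))

# The created ISI can be removed exactly by the inverse window (the time
# domain counterpart of convolving with the inverse response in frequency).
X_eq = np.fft.fft(y / w)
assert np.allclose(X_eq, X)
```

Note the "division-by-zero" caveat discussed below does not arise here because the Hamming window never reaches zero (its minimum is 0.08), unlike a raised cosine window.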
Unequalized constellation2800demonstrates the effect of the created ISI and OOB emissions on the transmission from applying only the time domain window function without further correction. Equalized constellation2802demonstrates how the created ISI and OOB emissions can be corrected through the coordinated frequency domain equalization techniques described above, which implement the inverse impulse response of the selected time domain window function, or Hamming window. FIG.29Ais a graphical illustration depicting an implementation of a raised cosine voltage function on a time domain waveform2900.FIG.29Bis a graphical illustration depicting an unequalized constellation2902on which a raised cosine function has been implemented.FIG.29Cis a graphical illustration depicting a constellation2904, after equalization, on which a raised cosine function has been implemented.FIGS.29A-Cillustrate, by way of comparison, the significant improvement in windowing presented by the present techniques, as compared with conventional raised cosine operations. The resulting constellations produced according to the present techniques may be much more carefully controlled than with implementation of the raised cosine window, which, although exhibiting more uniform power, will also experience the “division-by-zero” problem in its inverse time response. According to these embodiments, ISI damage to the waveform can be more easily repaired, and performed with significantly improved precision. These advantageous techniques have particular applicability to cable transmission operations, where, for example, one OFDM signal may be cresting, while another OFDM signal is passing through a minimum power. OFDM signals, for example, constitute orthogonal basis functions, which include sines and cosines, each having an integer number of cycles. The basis functions must be orthogonal to prevent energy leakage into neighboring signals. 
The present embodiments advantageously implement a time domain Hamming function on the time domain waveform of the OFDM signal, but in the frequency domain, utilize a Hamming impulse response, and inverse impulse response for convolution. In an exemplary embodiment, the Hamming window is implemented on a QPSK OFDMA transmission exhibiting ISI. Implementation of the time domain windowing operation/Hamming window results in significant reduction in an amount of leakage from adjacent blocks, while also reducing OOB splatter. In at least one embodiment, implementation of the Hamming window is made more uniform through utilization of a second carrier out of phase. Although specific features of various embodiments may be shown in some drawings and not in others, this is for convenience only. In accordance with the principles of the systems and methods described herein, any feature of a drawing may be referenced or claimed in combination with any feature of any other drawing. For example, the following list of example claims represents only some of the potential combinations of elements possible from the systems and methods described herein. a(i). 
A signal equalizing receiver, configured to: capture a plurality of OFDM symbols transmitted over a signal path adding linear distortion to the plurality of OFDM symbols; form the plurality of captured OFDM symbols into an overlapped compound data block, wherein the compound data block includes at least one pseudo-extension in addition to payload data from at least one of the plurality of OFDM symbols; process the overlapped compound data block with one of (i) a circular convolution having an inverse channel response in the time domain, and (ii) a frequency domain equalization in the frequency domain, to produce an equalized compound block; discard at least one end portion of the equalized compound block to produce a narrow equalized block, wherein the at least one end portion corresponds with the at least one pseudo-extension, and wherein the narrow equalized block corresponds with the payload data; and cascade two or more narrow equalized blocks to form a de-ghosted signal stream of the plurality of OFDM symbols, wherein the plurality of OFDM symbols includes one or more of an OFDM transmission and an OFDMA transmission, wherein the plurality of OFDM symbols includes one or more of a cyclic prefix and no cyclic prefix, and wherein a length of the at least one pseudo-extension is different than a length of the cyclic prefix. a(ii). 
A digital transmission receiver having a processor and a memory, configured to: receive a digital signal transmission from a signal path including a plurality of data blocks having linear distortion; determine, from the signal path of the digital signal transmission, a duration of at least one reflection of the digital signal transmission on the digital signal path; attach a pseudo-extension to a first data block of the plurality of data blocks, wherein the length of the pseudo-extension in the time domain is greater than the duration of the at least one reflection; process the first data block, together with the pseudo-extension attached thereto, to remove linear distortion from the first data block; and discard the processed pseudo-extension from the processed first data block after the linear distortion has been removed. b(ii). The receiver of claim a(ii), wherein the signal path includes a first signal subpath and second signal subpath different from the first signal subpath. c(ii). The receiver of claim b(ii), wherein the first signal subpath is a wired subpath and the second signal subpath is a wireless subpath. d(ii). The receiver of claim b(ii), wherein the first signal subpath is a direct signal path and the second signal subpath is an indirect signal path. e(ii). The receiver of claim b(ii), wherein the reflection is transmitted along the second signal subpath. f(ii). The receiver of claim b(ii), wherein the second signal subpath is longer than the first signal subpath. g(ii). The receiver of claim b(ii), wherein the receiver is further configured to receive (i) a real component of the digital signal transmission from the first signal subpath, and (ii) an imaginary component of the digital signal transmission from the second signal subpath. h(ii). The receiver of claim a(ii), wherein the receiver is further configured to process the first data block using an overlapped circular convolution process. i(ii). 
The receiver of claim a(ii), wherein the receiver is further configured to process the first data block using an overlapped Fourier transform process. j(ii). The receiver of claim a(ii), wherein the pseudo-extension comprises at least one of a pseudo-prefix obtained from a second data block preceding the first data block, and a pseudo-suffix obtained from a third data block succeeding the first data block. k(ii). The receiver of claim a(ii), wherein the receiver is further configured to process the digital signal transmission from one or more digital transmission schemes, including orthogonal frequency-division multiplexing (OFDM), orthogonal frequency-division multiple-access (OFDMA), data over cable service interface specification (DOCSIS), multiple input/multiple output (MIMO), and single-carrier frequency-division multiple-access (SC-FDMA). a(iii). A digital transmission system, comprising: a transmitter configured to transmit orthogonal frequency-division multiplexing (OFDM) symbols having no cyclic prefix attached thereto; a receiver for receiving the transmitted OFDM symbols from the transmitter; and a signal path for communicating the transmitted OFDM symbols from the transmitter to the receiver, wherein the OFDM symbols received by the receiver include linear distortion from the signal path, and wherein the receiver is configured to process the received OFDM symbols and linear distortion using an overlapped circular convolution function to produce equalized OFDM symbols. a(iv). 
A digital transmission system, comprising: a transmitter configured to transmit orthogonal frequency-division multiplexing (OFDM) symbols having no cyclic prefix attached thereto; a receiver for receiving the transmitted OFDM symbols from the transmitter; and a signal path for communicating the transmitted OFDM symbols from the transmitter to the receiver, wherein the OFDM symbols received by the receiver include linear distortion from the signal path, wherein the receiver is configured to process the received OFDM symbols and linear distortion by an overlapped Fourier transform function to produce equalized OFDM symbols, and wherein the overlapped Fourier transform function is configured to (i) overlap individual ones of the distorted OFDM symbols with overlapped time energy from respectively adjacent ones of the distorted OFDM symbols, (ii) transform the overlapped individual distorted OFDM symbols into distorted frequency domain symbols, (iii) perform complex multiplication of the distorted frequency domain symbols by equalization coefficients to equalize the distorted frequency domain symbols, (iv) remove the overlapped time energy from a time domain component of the equalized frequency domain symbols, and (v) produce undistorted frequency domain symbols from a frequency domain component of the time domain component with the overlapped time energy removed. a(v). 
A digital transmission system, comprising: a transmitter configured to transmit (i) a series of orthogonal frequency-division multiplexing (OFDM) symbols having no cyclic prefix attached thereto, and (ii) at least one constant amplitude zero autocorrelation (CAZAC) waveform sequence; a receiver for receiving the transmitted series of OFDM symbols and the CAZAC sequence from the transmitter; and a signal path for communicating the transmitted series of OFDM symbols and CAZAC sequence from the transmitter to the receiver, wherein the series of OFDM symbols and the CAZAC sequence are received by the receiver with linear distortion from the signal path, and wherein the receiver is configured to utilize the received CAZAC sequence as a reference signal for equalizing the received series of OFDM symbols. b(v). The system of claim a(v), wherein the CAZAC sequence comprises at least one Zadoff Chu sequence. c(v). The system of claim b(v), wherein the at least one Zadoff Chu sequence comprises a first Zadoff Chu sequence preceding the series of OFDM symbols in the time domain and a second Zadoff Chu sequence succeeding the series of OFDM symbols in the time domain. d(v). The system of claim a(v), wherein the receiver is further configured to determine from the received CAZAC sequence at least one of a channel characterization, an offset frequency, and a start of one or more of the OFDM symbols in the time domain. a(vi).
A method of equalizing a transmitted digital signal, comprising the steps of: receiving, in the time domain, a sequential series of first, second, and third data blocks of the transmitted digital signal; forming a compound block in the time domain from the second data block including an end portion of the first data block and a leading portion of the third data block; performing circular convolution on the compound block using a set of equalization coefficients to equalize the compound block in the time domain; extracting from the equalized compound block a narrow block corresponding to equalized time domain data of the second data block; converting the narrow block from the time domain into frequency domain data; and reading frequency domain symbols relating to the second data block from the converted narrow block. b(vi). The method of claim a(vi), further comprising a step of forming a compound block in the time domain from the third data block including an end portion of the second data block and a leading portion of a fourth data block immediately succeeding the third data block. c(vi). The method of claim a(vi), wherein the transmitted digital signal is an orthogonal frequency-division multiplexing (OFDM) signal, and wherein the frequency domain symbols are OFDM symbols. a(vii).
A method of equalizing a digital signal transmitted over a signal path, comprising the steps of: receiving, in the time domain, a sequential series of time domain samples of the transmitted digital signal; forming the received sequential series of time domain samples into a separate sub-series of overlapping compound time domain blocks, wherein each compound time domain block of the sub-series includes a pseudo-prefix comprising information from an immediately preceding block; determining an echo delay on the signal path; converting the compound blocks into the frequency domain to form compound frequency domain blocks; equalizing the compound frequency domain blocks to form equalized frequency domain blocks; converting the equalized frequency domain blocks into the time domain to form equalized time domain compound blocks; discarding, from the equalized time domain compound blocks, overlapping time domain energy portions corresponding to respective equalized pseudo-prefixes, to form narrow equalized blocks; pasting the narrow equalized blocks together to form a composite equalized time domain signal; and converting the composite equalized time domain signal into the frequency domain and reading equalized frequency domain symbols therefrom. b(vii). The method of claim a(vii), wherein the digital signal is an orthogonal frequency-division multiplexing (OFDM) signal, and wherein the equalized frequency domain symbols are OFDM symbols. c(vii). The method of claim a(vii), wherein the step of determining comprises one of (i) performing signal characterization for the signal path and (ii) assigning a pre-determined threshold value for the echo delay. a(viii).
A method of modulating, by a transmitter, a series of input digital symbols of a first modulation scheme, comprising the steps of: receiving a sequential series of samples of the digital symbols in a first domain of the first modulation scheme, wherein the first domain is one of the time domain and the frequency domain; determining a dual of the first modulation scheme, wherein the dual has a second modulation scheme in a second domain that is different from the first domain, and wherein the second domain comprises the other of the time domain and the frequency domain; applying a 90 degree rotational operation to the second modulation scheme to generate a rotational modulation format; modulating the series of digital symbols with the generated rotational modulation format; and outputting the modulated series of digital symbols to a receiver. b(viii). The method of claim a(viii), wherein the first modulation scheme comprises a single carrier modulation scheme. c(viii). The method of claim b(viii), wherein the single carrier modulation scheme comprises quadrature amplitude modulation. d(viii). The method of claim b(viii), wherein the single carrier modulation scheme comprises pulse amplitude modulation. e(viii). The method of claim d(viii), wherein the second modulation scheme comprises orthogonal frequency division multiplexing modulation. f(viii). The method of claim b(viii), wherein the orthogonal frequency division multiplexing modulation comprises partial response signaling. g(viii). The method of claim a(viii), wherein the first modulation scheme comprises orthogonal frequency division multiple access modulation, and wherein the second modulation scheme comprises time division multiple access modulation. h(viii). The method of claim a(viii), wherein the first modulation scheme comprises a multicarrier modulation format, and wherein the second modulation scheme comprises a single carrier modulation format. i(viii). 
The method of claim a(viii), wherein the first modulation scheme comprises one of a spread spectrum modulation format and a code division multiple access format. j(viii). The method of claim a(viii), further comprising a step of equalizing the series of digital symbols prior to the step of modulating. k(viii). The method of claim a(viii), further comprising a step of precoding the series of digital symbols prior to the step of modulating. l(viii). The method of claim a(viii), wherein the generated rotational modulation format comprises one of binary phase shift keying and duobinary modulation. m(viii). The method of claim a(viii), wherein the step of applying comprises one of a Fourier transform operation and an inverse Fourier transform operation. a(ix). A digital transmission system, comprising: a transmitter configured to transmit an input series of complex symbols; a duobinary encoder disposed within the transmitter, and configured to filter the input series of complex symbols and output a partial response signaling (PRS) signal; a converter disposed within the transmitter, and configured to convert the PRS signal output into the time domain; and a receiver for receiving the time domain-converted PRS signal from the transmitter over a signal path. b(ix). The system of claim a(ix), wherein the transmitter further comprises a pre-coder configured to pre-code the complex symbols prior to filtering by the duobinary encoder. c(ix). The system of claim a(ix), wherein the converter is further configured to perform at least one of a fast Fourier transform, an inverse fast Fourier transform, a discrete Fourier transform, and an inverse discrete Fourier transform. d(ix). The system of claim a(ix), wherein the signal path comprises at least one of a cable network, a wired transmission line, a wireless path, and a fiber optic line. e(ix). 
The system of claim a(ix), wherein the complex symbols comprise orthogonal frequency division multiple access symbols transmitted in an upstream direction of the digital transmission system. f(ix). The system of claim a(ix), wherein the converter is further configured to construct a duobinary OFDM signal for transmission by splitting a subcarrier of the input series of complex symbols to appear over two adjacent frequency domain subcarriers. a(x). A method of modulating, by a transmitter, an input digital signal transmission, comprising the steps of: receiving the input digital signal having a first time-frequency order on the time-frequency axis; rotating the time-frequency axis by 90 degrees; modulating the input digital signal according to the rotated time-frequency axis; and outputting the modulated digital signal to a receiver. Some embodiments involve the use of one or more electronic or computing devices. Such devices typically include a processor, processing device, or controller, such as a general purpose central processing unit (CPU), a graphics processing unit (GPU), a microcontroller, a reduced instruction set computer (RISC) processor, an application specific integrated circuit (ASIC), a programmable logic circuit (PLC), a programmable logic unit (PLU), a field programmable gate array (FPGA), a digital signal processing (DSP) device, and/or any other circuit or processing device capable of executing the functions described herein. The methods described herein may be encoded as executable instructions embodied in a computer readable medium, including, without limitation, a storage device and/or a memory device. Such instructions, when executed by a processing device, cause the processing device to perform at least a portion of the methods described herein. The above examples are exemplary only, and thus are not intended to limit in any way the definition and/or meaning of the terms processor and processing device.
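As one rough, non-limiting sketch of how the overlapped equalization recited above (e.g., in claims a(ii), a(vi), and a(vii)) might be encoded as executable instructions, the following fragment equalizes a stream of blocks transmitted with no cyclic prefix. The channel taps, block length, and pseudo-prefix length are illustrative assumptions only:

```python
import numpy as np

rng = np.random.default_rng(2)
h = np.array([1.0, 0.0, 0.25])   # assumed signal path: direct path plus one echo
L = 256                           # payload (narrow block) length
P = 32                            # pseudo-prefix length, chosen longer than the echo delay
num_blocks = 8

tx = rng.standard_normal(num_blocks * L)       # block stream with NO cyclic prefix
rx = np.convolve(tx, h)[: tx.size]             # linear distortion from the signal path

H = np.fft.fft(h, P + L)                       # channel response at the compound size
rx_pad = np.concatenate([np.zeros(P), rx])     # stream is silent before the first block

narrow_blocks = []
for b in range(num_blocks):
    # Compound block = pseudo-prefix (trailing samples of the preceding
    # block) + the payload block itself.
    compound = rx_pad[b * L : b * L + P + L]
    # Frequency-domain equalization (equivalently, circular convolution
    # with the inverse channel response in the time domain).
    equalized = np.fft.ifft(np.fft.fft(compound) / H).real
    # Discard the end portion corresponding to the pseudo-extension.
    narrow_blocks.append(equalized[P:])

de_ghosted = np.concatenate(narrow_blocks)     # cascade the narrow equalized blocks
assert np.allclose(de_ghosted, tx, atol=1e-6)
```

Because the pseudo-prefix exceeds the echo delay (and the decaying tail of the inverse channel response), the circular-convolution wraparound falls entirely within the discarded portion, leaving the cascaded narrow blocks de-ghosted.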
This written description uses examples to disclose the embodiments, including the best mode, and also to enable any person skilled in the art to practice the embodiments, including making and using any devices or systems and performing any incorporated methods. The patentable scope of the disclosure is defined by the claims, and may include other examples that occur to those skilled in the art. Such other examples are intended to be within the scope of the claims if they have structural elements that do not differ from the literal language of the claims, or if they include equivalent structural elements with insubstantial differences from the literal language of the claims.
11863367

DETAILED DESCRIPTION OF THE INVENTION

Nowadays, information is transported or stored in analog or digital form. Digital appeared more recently with computing and tends to replace analog in transport and storage. Data in multimedia codecs, which operate in blocks, are transported and stored in digital form. Although the carriers and physical media for communication have remained analog, the underlying modulations are digital, with no connection to the codec-related blocks. While the envelopes containing the codecs may carry data, the codecs themselves do not. Among other things, this invention proposes an alternative using analog or digital modulations to transport and store multimedia codec data, and data in general, in order to communicate using ultra-narrow bandwidths and to increase storage capacities. Codec-related blocks are taken into account for maximum efficiency. Phases that had been ignored in some cases to improve compression efficiency are reintroduced to play another very important role. This invention presents an additional processing method applied to multimedia codecs (audio, image and video compression methods based solely on the FFT, Fast Fourier Transform, using the largest points as the foreground and the most energetic bands as the background, able to use only a field of local peaks, with or without phases, and able to ignore the phases in the background), characterized in that phases are added and used to carry data or the values of the displacements of the points or local peaks. This method comprises adding phases and using them to transport the values of the displacements of the local points and peaks in order to reduce the bandwidths, and adding phases in the background and using them to transport the points of the foreground in order to reduce the crest factor. This invention is based on three French patents and one French patent application. The first patent has a filing date of Aug.
3, 2006, filing number: FR0607091, publication number: FR2891100. To compress audio, voice and music frames, FFT is used, the largest points (foreground) and the most energetic bands (background). A field of local peaks only can be used. In the general case, there is a frame overlap of 50% or less. The phases of the foreground points are taken into account in the general case. For low quality voice, the local peaks can be used without phase and without frame overlap. The background bands are coded with less precision than the foreground points. One can be satisfied with a sign bit for the phases, or even simply ignore the phases. If we take enough points in the foreground, the background looks like a low amplitude white noise. To encode the phases, we choose a precision, for example four bits per phase (the phases will be between 0 and 15 by applying a simple rule of three), six bits per phase (the phases will be between 0 and 63), eight bits per phase (the phases will be between 0 and 255). It should be noted that these compressed frames can be used as simple signatures towards richer frames, thus of higher quality. The second patent has a filing date of Jun. 21, 2012, filing number: FR1201759, publication number: FR2992458. The compression methods in this patent allow for more compression of audio frames by taking advantage of successive and non-successive redundancies. For successive redundancies, no frame is transmitted, the receiver repeats the last received frame until a repetition credit is exhausted or until it receives a new frame. For non-successive redundancies, the transmitter only sends the number of a similar frame located behind. It should be noted that these compression methods allow very low latency times while having very high average compression rates. It should also be noted that for images and video, in k-space, the further away from the center, the more redundant the lines tend to be. 
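The frame-repetition and back-reference scheme of this second patent can be sketched as follows. The message format and the search window are illustrative assumptions, and real frames would be compared with a similarity tolerance rather than exact equality:

```python
def encode_stream(frames, window=8):
    """Redundancy coding sketch: emit nothing new for a frame identical to
    the previous one (the receiver repeats it), emit a back-reference for a
    similar earlier frame within `window`, and otherwise emit the frame."""
    sent, history = [], []
    for f in frames:
        if history and f == history[-1]:
            sent.append(("repeat",))           # successive redundancy
        else:
            ref = next((i for i in range(1, min(window, len(history)) + 1)
                        if history[-i] == f), None)
            sent.append(("ref", ref) if ref else ("frame", f))
        history.append(f)
    return sent

def decode_stream(sent):
    out = []
    for msg in sent:
        if msg[0] == "repeat":
            out.append(out[-1])                # repeat the last received frame
        elif msg[0] == "ref":
            out.append(out[-msg[1]])           # non-successive redundancy
        else:
            out.append(msg[1])
    return out

frames = ["A", "A", "B", "A", "C", "C", "B"]
assert decode_stream(encode_stream(frames)) == frames
```

In a real transmitter the "repeat" case would send nothing at all (complete silence) until a repetition credit is exhausted, which is what reduces both emissions and radiation.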
These compression methods can still be applied to magnitudes. Since similar consecutive frames are not transmitted, at the level of the transmission protocols nothing need be modulated (complete silences), so that not only the emissions are reduced, but also the electromagnetic radiation, where radio is used. The third patent has a filing date of Mar. 4, 2014, filing number: FR1400535, publication number: FR3018385. The compression methods in this patent allow for further compression of audio frames using a global codebook, with the same codebook used on the transmitter and receiver sides. Two databases are generated with position vectors (frequencies) and magnitude vectors, using a partitioning algorithm. Only the largest local peaks are used and phases are ignored. Any frame is represented by two numbers pointing to the nearest position vector and the nearest magnitude vector. The bases can be generated with vectors of 16 to 32 elements, and a search can be made with only the first elements, for example the first four to eight elements. Note that the codebook can be used only on the receiver side (unilateral codebook). The transmitter sends a vector of magnitudes and a vector of positions, or only reduced vectors. The receiver finds the right codes and then the right vectors. With the unilateral version, the receiver can modify its codebook at any time while remaining compatible with the transmitter. If we take a number of first elements which allows us to generate a codebook of reasonable size and which allows us to have all the possibilities (for example, with the first four elements and relative positions on four bits, there are 65536 possible combinations), we can have direct access to richer vectors (for example, to the vectors of positions). By establishing a correspondence with codes, symbols or words, one can significantly increase the transmission rates of pure data.
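The codebook search using only the first few elements of each vector can be sketched as below. The codebook size, vector length, and random contents are illustrative assumptions; a real codebook would come from a partitioning algorithm run over training frames:

```python
import numpy as np

def nearest_code(vector, codebook, first=4):
    """Find the nearest codebook entry using only the first `first`
    elements, as a cheap pre-search over position or magnitude vectors."""
    d = np.sum((codebook[:, :first] - vector[:first]) ** 2, axis=1)
    return int(np.argmin(d))

rng = np.random.default_rng(4)
codebook = rng.standard_normal((256, 16))    # assumed 256-entry codebook of 16-element vectors
query = codebook[37] + 0.01 * rng.standard_normal(16)   # a frame close to entry 37
assert nearest_code(query, codebook) == 37
```

The transmitter would send only the resulting code (here, one byte per vector); the receiver indexes the same (bilateral) codebook, or its own richer (unilateral) codebook, to recover full vectors.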
It should also be noted that these codes or vectors can be used as simple signatures to richer, and therefore higher quality, frames, or even to the original frames used to generate the databases. The patent application was filed on Mar. 29, 2016, filing number: FR1600516, publication number: FR3049799. It concerns the compression of images and videos, particularly medical images (conventional radiography, CT or X-ray scans, magnetic resonance imaging or MRI, etc.), line by line, using FFT and the two planes mentioned above. The algorithms are still applicable, but the phases are no longer secondary as in audio. The phases contain, for example, the details (images) or the rigid displacements (video). We can also compress k-spaces of dimension greater than 2 (3D, 4D, etc.): these compression methods are one-dimensional (1D), but can be applied to the lines of k-spaces with dimensions greater than two. As these compression methods are one-dimensional, to reduce the calculations they can be applied to the rows or columns of an intermediate space obtained by applying an FFT on one dimension, for example on each row. In 2D, for example, the second FFT, which would be applied to each column to obtain the components of the k-space, is not applied. Most of the vital signs are quasi-stationary signals, consisting mainly of local peaks in the frequency domain (heart pulses, lung pulses, etc.). Vital signs can be compressed by our codecs and can benefit from the compression methods described in this paper. We can mention as vital signs: the ElectroCardioGram (ECG), the ElectroMyoGram (EMG), the Arterial Blood Pressure (ABP), the PhotoPlethysmoGram (PPG). The ElectroencephaloGram (EEG) is more complex and must be taken into account by the largest points and the most energetic bands.
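The largest-points (foreground) selection with phase coding by a "rule of three", as described for the first patent, can be sketched for one frame as follows. The frame length, the number of retained points, and the toy test frame are illustrative assumptions, and the background bands are omitted here:

```python
import numpy as np

def compress_frame(frame, num_points=20, phase_bits=4):
    """Keep the `num_points` largest FFT points (foreground) and quantize
    each phase to `phase_bits` bits by a simple rule of three."""
    spectrum = np.fft.rfft(frame)
    idx = np.argsort(np.abs(spectrum))[-num_points:]     # largest points
    mags = np.abs(spectrum[idx])
    levels = 2 ** phase_bits                             # e.g., 4 bits -> phases 0..15
    codes = np.round((np.angle(spectrum[idx]) % (2 * np.pi))
                     * levels / (2 * np.pi)).astype(int) % levels
    return idx, mags, codes

def decompress_frame(n, idx, mags, codes, phase_bits=4):
    levels = 2 ** phase_bits
    spectrum = np.zeros(n // 2 + 1, dtype=complex)
    spectrum[idx] = mags * np.exp(2j * np.pi * codes / levels)
    return np.fft.irfft(spectrum, n)

# A toy "voice-like" frame: two sinusoids plus weak noise
n = 512
t = np.arange(n)
frame = (np.sin(2 * np.pi * 13 * t / n)
         + 0.5 * np.sin(2 * np.pi * 40 * t / n + 0.7)
         + 0.01 * np.random.default_rng(3).standard_normal(n))

idx, mags, codes = compress_frame(frame)
rebuilt = decompress_frame(n, idx, mags, codes)
```

Only the positions, magnitudes, and 4-bit phase codes of twenty points need be stored or transmitted, yet the rebuilt frame stays close to the original.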
In order to be able to store the displacements in the associated phases when the phases (in the FFT sense) are already used, the sine amplitudes and cosine amplitudes are used after the selection of the local points or peaks. The amplitudes behave like magnitudes with sign, and knowledge of the two amplitudes allows the phase to be determined. Variations of the two amplitudes in the same proportions have no influence on the phases. The magnitudes and amplitudes can undergo large (logarithmic) variations without significant consequences on quality. Both analog and digital modulations are concerned. Where applicable, with our processes and methods, analog modulations give the best performance in terms of power consumption and information rate. In order to understand our approach and our work, we begin by briefly recalling some generalities useful for what follows. Most of the general information in this document is taken from the online encyclopedia Wikipedia. A modem is a device that transforms a digital signal (a sequence of bits) into an analog signal with properties that facilitate its transmission on a given channel. The transmission is generally carried out by modulating a sinusoidal carrier, whose amplitude, phase or frequency is modified at the rate of the signal to be sent. On the other side of the transmission chain, the receiving modem detects the modifications made to the carrier and deduces the modulating signal. If the signal to be transmitted is analog, we speak of amplitude, frequency or phase modulation; if it is digital, we speak rather of amplitude-, frequency- or phase-shift keying, because the changes of amplitude, frequency or phase are abrupt and discrete. An analog signal is a continuous signal that can take an infinite number of values, whereas a digital signal is a discrete (discontinuous) signal, reduced to a succession of "0" and "1".
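The relationship between (magnitude, phase) and (cosine amplitude, sine amplitude) described above can be checked with a few lines of Python (standard library only; function names are ours):

```python
import math

def to_amplitudes(magnitude, phase):
    """Split a spectral point into cosine and sine amplitudes
    (magnitudes with sign)."""
    return magnitude * math.cos(phase), magnitude * math.sin(phase)

def to_polar(cos_amp, sin_amp):
    """Recover magnitude and phase from the two amplitudes."""
    return math.hypot(cos_amp, sin_amp), math.atan2(sin_amp, cos_amp)

a, b = to_amplitudes(2.0, 0.75)
m, p = to_polar(a, b)
# Scaling both amplitudes in the same proportion leaves the phase unchanged.
_, p_scaled = to_polar(10.0 * a, 10.0 * b)
```

This is the property exploited throughout: the two amplitudes can undergo common (even logarithmic) distortions without affecting the recovered phase.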
Analog modulations:
AM, Amplitude Modulation.
FM, Frequency Modulation.
PM, Phase Modulation.
Many complex schemes combining analog modulations have been developed for specific needs. Thus, the analog modulation of two carriers in quadrature is used for the transmission of the color components on the subcarrier of the PAL system, and simultaneous phase and amplitude modulation is used in the NTSC system. The amplitude modulation index is the measure of the change in amplitude relative to the amplitude of the unmodulated carrier. It is normally between 0 and 1 (between 0% and 100%) to avoid overmodulation, and transmission systems generally incorporate a limiter circuit to avoid such an overrun.
Digital modulations:
ASK, Amplitude-Shift Keying.
FSK, Frequency-Shift Keying.
PSK, Phase-Shift Keying.
The most commonly used forms of PSK are BPSK (or 2-PSK: two possible phase values), QPSK (or 4-PSK: four possible phase values) and DPSK (differential PSK: the information is contained not in an absolute phase value, but in the phase shift between two successive signals). In digital modulation, the parameters of the carrier, amplitude or angle (argument), are switched between several discrete values according to the binary codes to transmit. In APK (Amplitude-Phase Keying, or QAM, Quadrature Amplitude Modulation), the phase and the amplitude both take different discrete values. AFSK (or Audio FSK) is a variant of FSK in which the carrier is an audible signal, with a frequency below a few kilohertz. In this way, the modulated signal can be transmitted by an installation designed to carry voice or music, for example a telephone or radio link. In the latter case, the signal is modulated a second time during transmission.
This is one of the techniques used in underwater acoustic communication, and also a type of modulation used by radio amateurs for packet radio and the APRS (Automatic Packet Reporting System). Although in decline, analog modulations (amplitude, frequency or phase modulation) can be used advantageously by our procedures and methods. We give here some additional details about these modulations that are useful for a quick understanding of this document. In analog modulation, the carrier or subcarriers are modulated in proportion to the signal to be transmitted, by modifying the amplitude or the argument of the sine wave. There are several variants, including two-sideband amplitude modulation and single-sideband amplitude modulation. Two-sideband amplitude modulation is derived directly from the multiplication of the carrier wave by the signal; it is used in broadcasting (the GO, PO and OC bands, i.e. the French long-, medium- and short-wave bands). Frequency modulation is a modulation mode consisting of transmitting a signal by modulating the frequency of a carrier signal. Frequency modulation (FM) makes it possible to restore the continuous component of the signal; it is used in high-fidelity broadcasting (the "FM" band), in satellite television broadcasting, and in analog image transmission (radio facsimile, Slow Scan Television or SSTV). Phase modulation (PM) is used in VHF and UHF radiotelephony. Since phase modulation preceded by filtering is equivalent to frequency modulation, it is also another way of frequency modulating in radiotelephony. Analog versions of QAM (Quadrature Amplitude Modulation) are typically used to modulate two analog signals onto a single carrier. For example, QAM is used in the PAL and NTSC television systems, where the different channels it provides carry the chrominance (color) components. In radio applications, a system known as C-QUAM (Compatible QUadrature Amplitude Modulation) is used for AM stereo radio.
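A minimal sketch of two-sideband amplitude modulation with a modulation index (all parameter names are ours, chosen for illustration):

```python
import math

def am_modulate(message, carrier_freq, sample_rate, index=0.8):
    """Two-sideband AM: s(t) = (1 + index * m(t)) * cos(2*pi*f*t),
    with |m(t)| <= 1. An index above 1 (over 100%) causes overmodulation."""
    out = []
    for n, m in enumerate(message):
        t = n / sample_rate
        out.append((1.0 + index * m) * math.cos(2 * math.pi * carrier_freq * t))
    return out

# With index <= 1 the envelope (1 + index*m) never goes negative,
# so the message can be recovered by simple envelope detection.
msg = [math.sin(2 * math.pi * 100 * n / 8000) for n in range(80)]
signal = am_modulate(msg, carrier_freq=1000, sample_rate=8000, index=0.8)
envelope_ok = all(1.0 + 0.8 * m >= 0 for m in msg)
```

Keeping the index below 1 plays the role of the limiter circuit mentioned above: the envelope stays non-negative and no overmodulation occurs.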
In digital, QAM (Quadrature Amplitude Modulation) is a technique that employs a combination of phase and amplitude modulation (by switching). It is widely used by modems to enable them to offer high bit rates. In a QAM constellation, the distance of a point from the origin indicates the amplitude, and its angle indicates the phase shift. Each of the channels defined by DMT (Discrete Multi-Tone) multiplexing in ADSL (Asymmetric Digital Subscriber Line) is modulated in QAM on a maximum of 15 bits; 32768 combinations of amplitudes and phase shifts are therefore used. Pulse Code Modulation (PCM) quantizes each sample with a step size (which may not be linear over the entire range) and encodes it as a digital value with a certain number of bits per sample; this always introduces quantization noise. PCM is a digital representation of an electrical signal resulting from a digitization process: the signal is first sampled, then each sample is quantized independently of the other samples, and each of the quantized values is converted into a digital code. The independent processing of each sample means that there is no encryption or data compression. In audio, our processes and methods receive PCM data as input. PAM (Pulse Amplitude Modulation) coding uses the physical amplitude of the sample as the final modulation. It is an analog modulation technique (the amplitude used for modulation is the actual sampled value, not the closest approximation used in PCM, although it can be bounded), and the number of possible pulse amplitudes in analog PAM is theoretically infinite. Digital PAM reduces the number of pulse amplitudes to a power of two. Some versions of the Ethernet communication standard are an example of the use of PAM (100 BASE-T, 100 BASE-T4, . . . ). Pulse amplitude modulation has also been developed for the control of light-emitting diodes (LEDs), particularly for lighting applications. PAM coding is also used in PCM.
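As a toy illustration of a digital QAM constellation, the sketch below maps 4-bit groups to amplitude/phase pairs. The mapping is a hypothetical, non-gray-coded 16-point constellation, not the mapping used by any real modem standard:

```python
import cmath

LEVELS = (-3, -1, 1, 3)  # hypothetical 16-QAM levels (no gray coding)

def qam16_map(bits4):
    """Map a 4-bit integer to a constellation point: two bits select
    the in-phase level, two bits the quadrature level."""
    i = LEVELS[(bits4 >> 2) & 0b11]
    q = LEVELS[bits4 & 0b11]
    return complex(i, q)

point = qam16_map(0b1101)
amplitude = abs(point)       # distance of the point from the origin
phase = cmath.phase(point)   # angle = phase shift of the carrier
```

The distance from the origin and the angle of each point are exactly the amplitude and phase shift described above.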
The main transmission channels are wireless channels, wire channels and optical channels. Our procedures and methods are very general and apply to all these communication channels; they can also be applied to underwater communications. They can use a dedicated network or an existing network, and can even use one or more channels of an existing network. We also give some generalities to better understand our approach. The traditional telephone network or PSTN (Public Switched Telephone Network) uses a pair of copper wires. A PCM (Pulse Code Modulation) frame is a 2.048 Mbits/s frame comprising 32 time slots (TS), 30 of which are intended for users (TS 0 and 16 are reserved for the service). Each time slot receives the equivalent of one sample of digitized sound, i.e. 8 bits; the entire frame therefore contains 256 bits. The PCM frame was developed for the time switching of digitized telephone channels. It allows 30 digitized telephone channels to be multiplexed on the same pair. Subsequently, the 30 digital channels of the PCM frame were used to transmit all kinds of digital data (FAX, X25 data, video, etc.). The PCM frame allows the transmission of 30 digital channels, the signaling for the 30 channels and the synchronization of all the information. The bandwidth required to transmit the human voice so that it can be correctly understood is 300-3400 Hz. Sampling is, after filtering, an operation carried out on the signal to be transmitted in order to perform the analog-to-digital conversion. It consists in substituting for the original signal a series of instantaneous values taken from the signal at regularly spaced instants: at precise, regularly spaced moments, a sample of the signal is taken that is representative of its amplitude. On reception, to recover the original signal, the samples are filtered by a low-pass filter at 4000 Hz.
Shannon's theorem shows that the original signal cannot be reconstituted correctly unless the sampling frequency is greater than twice the highest frequency of the signal to be transmitted. For the PCM frame, the sampling frequency is 8000 Hz. Bit rate of a voice channel, quantized on 8 bits, i.e. 256 levels: 8000 × 8 = 64 kbps. Frame duration: 1000000/8000 = 125 microseconds. Almost all metropolitan, regional, long-distance and submarine networks today are based on fiber, which means that they can already scale to meet the voracious growth of data center interconnections by taking advantage of the latest optical transmission technologies. Optical fiber has become the primary medium for high-speed transmission. Modulation techniques consist of converting electrical signals into optical signals. Two main techniques are possible: direct modulation and external modulation. The first is simple but unsuitable for high data rates and long transmission distances; external modulation is the answer to this constraint. Amplitude-shift keying is applied by varying the amplitude of the signal according to the bits to be coded; analog amplitude modulation is also applicable. It should be noted that amplitude modulation is the only one that can be used on optical fibers, because the equipment currently in use is not able to apply any other modulation to light waves. On the other hand, this modulation is rarely used on other media, because it causes a deterioration in the signal-to-noise ratio. Direct modulation: the principle of direct modulation is simple: in digital modulation, to transmit a "1" the laser diode is turned on, and for a "0" it is turned off. This type of modulation is only used for data rates lower than about 5 Gb/s, beyond which it is no longer possible to modulate the laser diode directly, and an external modulator must be used. External modulation: for data rates above approximately 5 Gb/s, direct modulation of the laser is no longer possible.
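The PCM frame figures quoted above are easy to verify:

```python
sampling_rate = 8000          # Hz, twice the 4000 Hz low-pass cutoff
bits_per_sample = 8           # 256 quantization levels
slots = 32                    # 30 user channels + TS 0 and TS 16

voice_rate = sampling_rate * bits_per_sample      # bps for one voice channel
frame_bits = slots * bits_per_sample              # bits in one PCM frame
frame_duration_us = 1_000_000 / sampling_rate     # one frame per sampling period
aggregate_rate = frame_bits * sampling_rate       # bps for the whole frame
```

The aggregate rate of 256 bits every 125 microseconds is exactly the 2.048 Mbits/s of the PCM frame.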
The laser diode works in continuous mode, and an external device placed in front of it allows the light to be blocked or passed depending on whether a "1" or a "0" is to be transmitted (in digital modulation). The methods described herein can be used with laser diodes in continuous or non-continuous mode, in direct or external modulation, in digital or analog modulation. In the case of continuous-mode operation with an external device, the use of analog modulation can be considered. Amplitude modulation is also found with LED (Light-Emitting Diode) bulbs, which allow information to be transmitted at very high rates. A light-emitting diode, abbreviated as LED, is an optoelectronic component capable of emitting light when an electric current flows through it. LED bulbs are starting to replace traditional bulbs: they consume much less energy, run cooler, offer more colors, and have many other advantages. The main problem with an incandescent lamp is that most of the energy is lost as heat. With LEDs, the situation is different: the radiation emitted is no longer produced by the temperature but by the material itself. Li-Fi or Light Fidelity is a wireless communication technology based on the use of light with a wavelength between 480 nm, or 670 THz (blue), and 650 nm, or 460 THz (red). While Wi-Fi uses a radio part of the electromagnetic spectrum outside the visible spectrum, Li-Fi uses the visible (optical) part of the electromagnetic spectrum. The principle of Li-Fi is based on the coding and sending of data via the amplitude modulation of light sources (a scintillation imperceptible to the eye), according to a well-defined and standardized protocol. Li-Fi differs from laser, optical fiber and IrDA (infrared) communication by its protocol layers, which are suitable for wireless communication up to about 10 meters: slightly more than low-power Bluetooth, and less than high-power Bluetooth or Wi-Fi.
Solar Li-Fi will use a solar cell instead of the photodiode classically used in Li-Fi technology. This technology will take advantage of the wide availability of solar cells in IoT products, such as cars and street lights. An innovative wireless communication device that uses solar cells not only to power itself but also as a receiver of light-transmitted data could herald a revolution in the quest for Internet access in remote areas. The major drawback of Li-Fi at present is that it is unidirectional: while it can send information to a user, it cannot receive it, unlike Wi-Fi. Our processes and methods are compatible with transmissions by Li-Fi or by solar Li-Fi. Of course, the data of the codecs of this document can be transported or stored with existing general processes and methods. The objective is to be able to transport or store the data of these codecs in other ways, with or without additional data, using existing infrastructures or creating new ones. For transport, these processes are optimized for the Internet of Things (IoT), targeting long transmission distances. In the field of IoT, there are currently two main types of networks: the Sigfox network and the LoRaWAN network. These two networks are also called LPWAN (Low Power Wide Area Network) networks. UNB (Ultra Narrow Band) networks use very narrow bandwidths (generally less than 1 kHz) to reach very long communication distances (for example 5 km in the city and more than 25 km in the countryside). By using very low data rates, very little power is required to transmit over long distances. Sigfox is an example of a UNB network. Its protocol allows equipment to send only 140 messages per day. This technology has a very long range (over 40 km), and the consumption of a Sigfox chip is 1000 times lower than that of a GSM chip. Sigfox objects therefore have a lifespan that can exceed 10 years of autonomy.
Its drawback, on the other hand, is its very low data rate: it only allows a few kilobytes to pass. Specifically, an object equipped with a Sigfox chip can transmit 140 messages of 12 bytes each per day. During transmission, the bit rate is 100 bps in Europe and 600 bps in the United States; in reception, the bit rate is 600 bps. The LoRaWAN protocol is a communication protocol for the Internet of Things that uses a proprietary CSS (Chirp Spread Spectrum) modulation technique called LoRa. This protocol is intended to be simple, inexpensive to implement, and energy efficient rather than enabling high data rates. The target of LoRaWAN is clearly long-range communication at low cost and low power. With the processes in this document, the use of phases allows for very narrow bandwidths, and the use of analog or digital modulation block by block (taking a whole frame into account) makes it possible to have very low communication rates, and thus very low consumption. We describe below the processes used to reduce bandwidths and to reduce consumption. Local peaks without phase: the selected local peaks are moved closer together so that they are contiguous, and the displacement of each local peak is stored in the associated phase. A number of pairs of values (magnitudes, phases) are obtained and transformed into sine and cosine amplitudes. In order to reduce the bandwidths, we synthesize a new low-frequency time-domain signal using an inverse FFT. This low-frequency, reduced-bandwidth signal can be sent in several ways, for example:
Directly, after a DAC (Digital to Analog Converter) conversion.
After a DAC conversion and with analog modulation.
With digital modulation.
A suitable sampling frequency and number of bits per sample are chosen. For example, if we choose between 12 and 16 local peaks for a complete frame, we will have between 24 and 32 values to find. If we transmit 32 frames per second in analog, only 32 frame transmissions per second are needed.
If we transmit 32 frames per second in digital, we transmit between 32 × 24 = 768 samples and 32 × 32 = 1024 samples per second, with between 8 and 16 bits per sample. At the receiving end, the reverse operations are performed: ADC (Analog to Digital Converter) conversion if a DAC was used, FFT, recovery of the amplitudes, then recovery of the magnitudes and phases, and finally positioning of the local peaks back in their place. Generating a very narrow bandwidth signal with the local points and peaks, ignoring the phases, comprises associating new phases with the points, grouping the points so as to have a very narrow bandwidth, putting the displacements or the original positions of the points in the associated phases, and synthesizing a new, relatively low-frequency, time-domain signal using an inverse FFT. Points with phases, local peaks with phases: in order to take into account the real phases (in the FFT sense) of the points and local peaks, the magnitudes and phases are transformed into sine and cosine amplitudes; these amplitudes, which will be used to recover the phases, can be considered as magnitudes with sign and are used to generate two separate low-frequency signals, each of which can be considered as a signal without phase. The previous procedure is applied to each signal. If the communication media allow it, analog QAM modulation can be used to transmit both signals at the same time. Taking into account the actual phases (in the FFT sense) of the local points and peaks comprises transforming the magnitudes and phases into sine amplitudes and cosine amplitudes, considering these amplitudes as magnitudes with sign, generating two separate low-frequency signals, each considered as a signal without phase, transmitting these signals, and recovering the FFT phases from the received amplitudes.
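The peak-packing bookkeeping described above can be sketched minimally in Python (names are ours; a real implementation would scale the displacements into a valid phase range before the inverse FFT):

```python
def pack_peaks(peaks):
    """Move selected local peaks so they are contiguous, storing each
    peak's displacement in the associated phase slot.
    `peaks` is a list of (original_bin, magnitude) pairs, sorted by bin."""
    packed = []
    for new_bin, (orig_bin, mag) in enumerate(peaks):
        displacement = orig_bin - new_bin   # carried in the phase
        packed.append((new_bin, mag, displacement))
    return packed

def unpack_peaks(packed):
    """Receiver side: put each peak back in its original place."""
    return [(new_bin + disp, mag) for new_bin, mag, disp in packed]

peaks = [(5, 0.9), (17, 0.4), (42, 0.7), (100, 0.2)]
packed = pack_peaks(peaks)
assert unpack_peaks(packed) == peaks
```

After packing, the spectrum occupies only as many contiguous bins as there are peaks, which is why the synthesized time-domain signal has a very narrow bandwidth.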
Background bands: with the background bands, with or without the phase sign, there is only one low-frequency signal to be generated per band; there are no displacements of the points, even though there are some null points due in particular to the absence of the foreground points, and the data are carried directly in the phases associated with the points. The bands are transmitted one after the other, after FFT inversion. Transporting the foreground data and additional pure data in the background bands comprises leaving the points in place, even if there are some null points due in particular to the absence of the foreground points, ignoring the phases in the FFT sense or taking only the sign of the phases in the bands, and transporting the data directly in the reintroduced phases associated with the points. In order to increase the data transmission capacity, the zero points in the background corresponding to the points of the foreground can be replaced by points of any magnitude, in particular random, but smaller than the smallest of the magnitudes of the points in the foreground. During decompression, these points can be recognized and ignored for the media because they correspond to non-zero points in the foreground. We describe the principle of power reduction below. The electrical power required to provide a pulse is equal to the pulse energy divided by the pulse duration. By lengthening the transmission times, the electrical power required is reduced. This is possible with low-frequency frame transmissions. Analog modulations, if they can be used, provide the best performance. There are two main types of modulation in LPWAN: modulation with an ultra-narrow band, which consists of transmitting signals in the narrowest possible frequency band (the Sigfox network, for example);
and modulation with a spread spectrum, which consists of spreading the spectrum over a large frequency band with very low transmission power (the LoRaWAN network, for example). Frequency-Hopping Spread Spectrum (FHSS) is a technique for transmitting signals by radio waves that alternately uses several channels (subcarriers) distributed in a frequency band according to a pseudo-random sequence known to the transmitter and receiver. Direct-Sequence Spread Spectrum (DSSS) is a technique used in satellite communications, wireless networks and more specifically the version of Wi-Fi defined by the IEEE 802.11b standard. The purpose of DSSS is, on the one hand, to make signals occupying a frequency band, such as a speech signal, more resistant to jamming and interference encountered during transmission, and on the other hand to allow several devices to share the same carrier frequency (code division multiple access). The processes of this document are compatible with frequency-hopping spread spectrum, by varying the starting frequency of the low-frequency signal, and with direct-sequence spread spectrum, by using the background bands simultaneously, with possible spacings between the bands. Using frequency-hopping spread spectrum comprises varying the starting frequency of the low-frequency signal, and using direct-sequence spread spectrum comprises using all the background bands simultaneously, with possible spacings between the bands. Our processes are designed for multimedia or vital-sign data. Although our processes can make do with a small bandwidth, with these types of data the frame rates per day are much higher than on networks like Sigfox. For example, it is possible to go from 8000 samples per second to 32 frames per second in analog modulation, or 768 samples per second in digital modulation, on a pair of copper wires. For simple transfers of information or multimedia messages, frame transmissions can be chained as tightly as possible.
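The FHSS idea of a pseudo-random hop sequence shared by both ends, applied here to the starting frequency of the low-frequency signal, can be sketched as follows (seed and channel counts are illustrative assumptions):

```python
import random

def hop_sequence(seed, channels, hops):
    """Pseudo-random subcarrier sequence. Transmitter and receiver
    seed the generator identically, so both compute the same hops."""
    rng = random.Random(seed)
    return [rng.randrange(channels) for _ in range(hops)]

tx = hop_sequence(seed=42, channels=16, hops=8)
rx = hop_sequence(seed=42, channels=16, hops=8)
assert tx == rx  # both ends agree on the starting frequency of each frame
```

Each entry selects the starting frequency (channel) for one UNB frame, so successive frames hop across the band without any coordination traffic.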
Our processes can also concern sensor data and allow data rates similar to those of networks like Sigfox. It is necessary to record the data regularly, or to interpolate, so as to have regularly spaced data. We then apply the compression methods of this document (FFT, foreground and background), and store the service data in the background phases before transmission. For example, if we have sensor data every two, five, or ten seconds, after ten minutes we will have FFT buffers of 300, 120, or 60 points. It is possible to reserve the entire background (magnitudes and phases) for pure data transport and the foreground for multimedia data transport. Similarly, between two multimedia frames, the entire foreground and the entire background can be used to transport pure data. It is possible to use some of our techniques (inverse FFTs and intermediate FFTs to reduce bandwidths and bandwidth requirements) with other transformations that do not use phase, such as the DCT (Discrete Cosine Transform), the MDCT (Modified Discrete Cosine Transform), or the real DWT (Discrete Wavelet Transform). Creating and transmitting UNB frames and moving back to the starting domain comprises using transforms such as the DCT (Discrete Cosine Transform), the MDCT (Modified Discrete Cosine Transform), and the real DWT (Discrete Wavelet Transform). High-speed communications are also supported by our processes, using techniques similar to those of OFDM (Orthogonal Frequency-Division Multiplexing). OFDM is a method of coding digital signals by orthogonal frequency division in the form of multiple subcarriers. Since OFDM is a block transmission system, a guard interval is generally introduced between the blocks; this eliminates interference between successive blocks in the presence of multipath channels. If the subchannels have a sufficiently narrow frequency band, they are frequency non-selective.
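The sensor buffer sizes quoted above follow directly from the recording window:

```python
window_seconds = 10 * 60                 # ten minutes of sensor data
intervals = (2, 5, 10)                   # seconds between samples
buffer_sizes = [window_seconds // i for i in intervals]  # FFT buffer lengths
```

The three buffer lengths (300, 120 and 60 points) are simply the window divided by the sampling interval.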
The following techniques or protocols use OFDM:
IEEE 802.11 a and g (WLAN).
IEEE 802.16a (WiMAX).
ADSL (Asymmetric Digital Subscriber Line).
DAB (Digital Audio Broadcasting).
DVB-T (Digital Video Broadcasting).
The new-generation mobile networks (LTE, 4G) use a variant called OFDMA (Orthogonal Frequency-Division Multiple Access). OFDMA is a data multiplexing and coding technique used mainly in 4th-generation mobile telephone networks. This radio coding combines frequency and time multiplexing, i.e. "frequency division multiple access" (FDMA) and "time division multiple access" (TDMA). It is used in particular in 4G LTE, LTE Advanced and Mobile WiMAX (IEEE 802.16e) cell phone networks. Below is an example of data for an OFDM network at 54 Mbits per second (WLAN):
Symbol duration: 4 microseconds.
Number of data subcarriers: 48.
Number of bits per subcarrier: 6 (64-QAM).
Number of bits per OFDM symbol: 6 × 48 = 288.
Number of useful bits per symbol: (3/4) × 288 = 216 bits.
Bit rate = 216 bits / 4 microseconds = 54 Mbps.
The symbol duration of 4 microseconds is composed of a frame transmission time (TFFT = 3.2 microseconds) and a frame separation time (TG = 0.8 microseconds). For WiMAX, the TG/TFFT ratio can be 1/4, 1/8, 1/16 or 1/32. With OFDM, at high speed, frequency-selective (multipath) channels are countered by extending the transmission time: delays become negligible. With our processes, it is possible to take advantage of these transmission times to transmit an entire frame in the time domain in modulation, especially analog. The bands, with their very homogeneous frequencies, are naturally ready for OFDM. The bands may concern the background only. The bands can also involve a mixture with points of the foreground, after selection of the points (in this case, no data is transported). Finally, it is possible to divide the foreground into bands made up of points with phase, and to keep the background in bands made up of points without phase or with only the sign of the phase (in this case, the background can transport data).
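The 54 Mbps WLAN figures quoted above check out numerically:

```python
subcarriers = 48
bits_per_subcarrier = 6               # 64-QAM
coding_rate = 3 / 4
symbol_duration_us = 4                # 3.2 us FFT + 0.8 us guard interval

bits_per_ofdm_symbol = subcarriers * bits_per_subcarrier   # raw bits per symbol
useful_bits = coding_rate * bits_per_ofdm_symbol           # after channel coding
bit_rate_mbps = useful_bits / symbol_duration_us           # Mbps
```

The guard interval is already included in the 4-microsecond symbol duration, so the 54 Mbps figure is the net rate.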
The foreground and background bands do not contain the same frequencies. To match a band to an OFDM frequency or subcarrier, a global (uniform) shift of the frequencies of the band to the frequency or subcarrier is performed. The frequency of the center of a band can be chosen as the frequency of the subcarrier if we take a contiguous zone of bands. Note that if no data is carried, the quality of the background can be improved by adding a phase to the points; there is no additional transmission cost compared to data transport, or if analog QAM is used. Note also that the magnitudes and amplitudes can be logarithmic, with no noticeable effect on quality, and that if the sine amplitudes and cosine amplitudes undergo the same distortions, there is no effect on the phases. The crest factor can be reduced significantly. As in OFDMA, Orthogonal Frequency-Division Multiple Access, the bands are divided between several frequencies or several subcarriers, a single inverse FFT is used, and a single signal is transmitted for the entire frame. Transmitting the entire frame with background bands comprises distributing the bins of the bands between several frequencies or several subcarriers, as in OFDM (Orthogonal Frequency-Division Multiplexing), and making a single inverse FFT. An ADSL link uses the PSTN but uses several frequencies, called subcarriers, at the same time. ADSL uses a frequency band between 0 Hz and about 1.1 MHz, divided into 255 intervals of 4.3125 kHz. ADSL 2+ uses a frequency band from 0 Hz to about 2.2 MHz, divided into 511 intervals of 4.3125 kHz. VDSL uses up to 30 MHz (30a) and is normalized to 17 MHz (17a). The guard band between two subchannels is 300 Hz. Our processes are compatible with ADSL links. With protocols such as uXDT (Ultrasonic Cross-Device Tracking), it is possible to take advantage of a television broadcast to send non-audio messages (uBeacons) to mobiles located next to the television via ultrasound.
With our methods, the transmission rates of uXDT can be increased considerably if one wants, at all costs, to transmit in the 18-20 kHz range in order to use existing hardware. It is even possible to transmit voice, image or video on one or several frequencies or subcarriers. One of the main techniques planned for the 5G cell phone standard is based on F-OFDM (Filtered-OFDM): the subcarriers are grouped into subgroups which are modulated and synchronized independently, an inverse FFT is performed on each subgroup, and the resulting time-domain signals are added together to form the final signal to be transmitted. With the processes of this document, techniques similar to F-OFDM (Filtered-OFDM) are used, especially when multiplexing several sources or several types of media. VHS (Video Home System) refers to a standard for recording video signals on magnetic tape developed in the late 1970s. VHS began to decline in the early 2000s, and the gradual cessation of analog television broadcasts in favor of DTT (Digital Terrestrial Television) in many countries precipitated its disappearance. On a VHS tape, signals are recorded using frequency modulation; everything in VHS is analog. The LaserDisc was the first optical storage medium (initially for video) to be commercialized. Although it offered good sound and image quality, the LaserDisc had little success. Nevertheless, it is from LaserDisc technology that several optical storage media were developed, notably CDs and DVDs, which have enjoyed considerable success. The LaserDisc is an analog medium (for video); it uses frequency modulation by means of pits engraved on the disc. The methods of this document can also be applied to data storage, and both analog and digital modulations can be used. Transmissions, including frame-by-frame transmissions, can easily be secured by scrambling the displacements, magnitudes or amplitudes, and by scrambling the band order where bands exist.
In order to scramble the data, it is sufficient to scramble the phases carrying the data, by scrambling the displacements. At low bit rates, to scramble a UNB frame, it is sufficient to scramble the phases that bring the points closer together. We can start by scrambling the order of the points or local peaks; strong interference is obtained by scrambling the magnitude or amplitude values. At high bit rates, to scramble a frame similar to an OFDM or F-OFDM frame, it is sufficient to scramble the band order as well. In low-speed decoding, with UNB frames, a minimal, simple and automatic means of verification is available: the phases (which represent the displacements of the local points or peaks needed to form very narrow beams) must be increasing, and if only local peaks are used, their original positions must not be contiguous. Although the accuracy of the magnitudes is less important than the accuracy of the positions (contained in the phases), the importance of the accuracy of the positions decreases as the values of the positions increase. There is some tolerance to distortion in time-domain transmissions, and losses of multimedia frames may be tolerable. For pure data, or if security is enabled, losses are not tolerable: error detection and correction mechanisms must be implemented. Points in a band can be reserved, or points can be added to a band or a UNB frame, to carry a checksum that allows these bands or frames to be recognized in the noise. Retransmissions are possible and compatible with multimedia data transmissions. Between two multimedia frames, data frames can be inserted that can use all the local points or peaks, as well as the magnitudes and phases. Note that the basic compression algorithms of this document may require non-multimedia data. For example: if not all the bands of the background, or a contiguous zone of bands, are used, the band number must be specified; a simple number k can be sent to indicate that the current frame is identical to the frame k positions back.
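The minimal verification described above (the displacements carried by the phases must be increasing) is a one-liner in Python (we assume strictly increasing displacements, which holds when the original peaks are not contiguous):

```python
def displacements_valid(phases):
    """UNB-frame sanity check: the displacements carried by the phases
    must be strictly increasing once the peaks have been packed."""
    return all(a < b for a, b in zip(phases, phases[1:]))

assert displacements_valid([5, 16, 40, 97])       # plausible frame
assert not displacements_valid([5, 16, 12, 97])   # scrambled or corrupted
```

A receiver without the descrambling key therefore fails this check immediately, while a legitimate receiver can use it as a cheap integrity test.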
In video, intermediate frames can be encoded by value difference and can use lossless compression. The background does not contain any points of the foreground. In addition, decimations can be applied. For example, the simple decimation consists in keeping, out of every two points, the larger of the two, whether it is located on the left or on the right. From the simple decimation, the double decimation consists in keeping the larger of the two remaining points. More generally, the decimation of order D consists in keeping one point out of every 2 to the power D. It takes D bits to encode the local non-zero positions after decimation. We can perform a combination of several types of decimation: for example no decimation in the center (k-space) or on the left, then simple decimation, then double decimation, etc. The non-zero points after decimation are called non-decimable points. We can also implement, for the coding of positions in the background, a decimation using several tracks and interlaced pulses. Several tracks are chosen for the whole background or for a part of the background; each track must contain one or few non-zero points, each position being found in one and only one track. For example, if we choose five tracks of length 8, with only one possible bit per track, we need 3 bits per track to know the position of the non-decimable point. Possible positions for the tracks:
Track 1: 1 6 11 16 21 26 31 36
Track 2: 2 7 12 17 22 27 32 37
Track 3: 3 8 13 18 23 28 33 38
Track 4: 4 9 14 19 24 29 34 39
Track 5: 5 10 15 20 25 30 35 40
Each non-zero bit corresponds to the point of greatest magnitude of the track. In order to be able to use the phases in the FFT sense for certain points of the background to increase the quality, or in order to increase the quantity of pure data to be transported, we implement the virtual decimation in the background bands, which consists in using the decimable points to transport the data.
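The order-D decimation described above (keeping, from every block of 2^D consecutive points, only the largest one, so that D bits suffice for its local position) can be sketched as follows; the function name is an illustrative assumption.

```python
import numpy as np

def decimate(points, order):
    """Order-D decimation: from every block of 2**order consecutive
    points, keep only the one with the largest magnitude."""
    step = 2 ** order
    kept = []
    for start in range(0, len(points), step):
        block = points[start:start + step]
        local = int(np.argmax(np.abs(block)))   # needs `order` bits
        kept.append((start + local, block[local]))  # (position, value)
    return kept

# Simple decimation (order 1): one survivor per pair, left or right.
assert decimate([1, 5, 2, 3], 1) == [(1, 5), (3, 3)]
```

The surviving points are the non-decimable points; in the virtual decimation of the text, the discarded (decimable) positions are then reused to carry pure data.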
The non-decimable points must contain the magnitudes and the possible phases of the codec; the phases and possibly the magnitudes of the decimable points must contain the pure data; furthermore, the magnitudes of the decimable points must not be greater than those calculated. Implementing the virtual decimation in the background bands comprises using the non-decimable points to contain the magnitudes and the possible phases of the codec, comprises using the phases and possibly the magnitudes of the decimable points to contain the data, and comprises ensuring that the magnitudes of the decimable points are not greater than those calculated. The error correction and redundancy bits can be placed in the background phases or in the decimable points of the background. The methods in this document can be applied to blockchain technologies. Blockchain technologies are implemented, based on media (audio, image and video); the local points or peaks are not moved, and the background is used to store the data, which are the hashes from the mining algorithms. The foreground and background (without the phases) give a preview of a block in the chain. The original media itself is compressed with the codec of one's choice, for example in JPEG or PNG for images. The hashes are performed on the documents compressed with the codec of one's choice. To ensure a true chain, the background is divided into two parts: one part contains the hashes of the documents compressed with the codec of one's choice, the other part contains the hashes of the previous complete block of the chain. The original media must not change, but their locations may change. The addresses are put in the metadata associated with the blocks.
With blockchain technologies, in order to minimize the computations, semi-decentralized techniques are implemented. The current block is divided into three parts: a left part which is issued from the validation of the previous block, a central part which contains the useful information, and a right part which will be issued from the validation of the current block. The person who validates the current block (the miner) has at his disposal the central sub-block compressed with the codecs of this document (sub-block A), the central sub-block compressed with the codec of his choice (sub-block B) and a key provided by the system. The miner performs, in a more or less automated way, the following operations:
- He provides a media of the same type as the blockchain media.
- He compresses a block composed of the left sub-block, the compatible media he has provided and the central sub-block, with the codec of his choice.
- He encrypts the compressed data of this block with the key provided by the system.
- He applies the hash algorithms on this new encrypted block.
- He distributes the hash data in the background of sub-block A to obtain sub-block A1.
- He performs an inverse FFT on the sub-block A1 to get the sub-block C.
The sub-block C, right part of the current block, will be the left part of the next block to be validated. We use blocks based on audio, image or video; the three parts are of the same nature. If audio or video is used, a frame-by-frame correspondence is established between the parts and each frame is worked on as described above. In order to take text documents into account, they must first be converted into images. Our processes are compatible with those based on very heavy and dissuasive computations, totally decentralized. By giving a more important role to the central system, we obtain semi-decentralized processes. The calculations can then be made much lighter.
The system cannot be attacked because, for each validated block, there is:
- the signature of the system in the form of a key, which can vary depending on the miner and the block;
- the signature of the miner in the form of the provided media.
The effects of these signatures are propagated throughout the chain. Several more advanced uses are possible with the processes of this document, in order to carry out transmissions over long distances. A digital watermark is applied to UNB (Ultra Narrow Bandwidth) frames in order to recognize them in the noise over very long distances. The very narrow bandwidths make them less sensitive to noise. This watermarking can be combined with spread spectrum techniques. Instead of doing an inverse FFT and transporting the UNB frames via analog or digital modulations, we consider the magnitudes and the phases containing the displacements of the points as OFDM FFT bins, place them in OFDM subcarriers, and use a known crest factor reduction technique, or simply optimize the phase oppositions, or duplicate the symbols on other subcarriers, with or without interlacing, and optimize the phase oppositions. Transporting UNB frames as OFDM FFT bins, instead of with analog or digital modulations, comprises considering the magnitudes and the phases containing the displacements of the points as OFDM FFT bins and placing them in OFDM subcarriers, and comprises using a known crest factor reduction technique, or simply optimizing the phase oppositions, or duplicating the symbols on other subcarriers, with or without interlacing, and optimizing the phase oppositions. The foreground points are transported into the reintroduced background phases, with the foreground data including magnitudes, positions and phases, or sine and cosine amplitudes and positions. The sign of the phase from the compression can be retained, or the sign of the phase can be alternated or optimized.
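The effect of alternating the phase sign across subcarriers, mentioned just above, can be illustrated with a small NumPy experiment. The parameters below (64 subcarriers, unit magnitudes, a common phase of pi/4) are arbitrary assumptions chosen only to make the crest-factor difference visible.

```python
import numpy as np

def crest_factor(x):
    """Peak amplitude over RMS amplitude (linear, not dB)."""
    return np.max(np.abs(x)) / np.sqrt(np.mean(np.abs(x) ** 2))

n, phi = 64, np.pi / 4
mags = np.ones(n)
signs = np.where(np.arange(n) % 2 == 0, 1.0, -1.0)  # +, -, +, -, ...

fixed = np.fft.ifft(mags * np.exp(1j * phi))           # same phase everywhere
alternated = np.fft.ifft(mags * np.exp(1j * phi * signs))

# Alternating the sign splits the time-domain energy into two smaller
# peaks instead of one large one, lowering the crest factor.
assert crest_factor(alternated) < crest_factor(fixed)
```

A more sophisticated optimization, as the text notes, searches over sign patterns rather than fixing a strict alternation, at the cost of more computation per frame.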
Transporting the foreground points into the reintroduced background phases comprises the transport of the values of the magnitudes, positions and phases, or the values of the sine amplitudes, cosine amplitudes and positions. One of the most important problems with OFDM transmissions is called the Peak-to-Average Power Ratio (PAPR). Current techniques for reducing the crest factor include Tone Reservation (TR) and Active Constellation Extension (ACE). By using our techniques, we can reduce the crest factor considerably. By alternating the sign of the phase in the background (plus sign for the first point, minus sign for the second, plus sign for the third, minus sign for the fourth, etc.), or by using a more sophisticated optimization, a large reduction in the crest factor is obtained. In order to take into account cases where there are not enough non-zero bands in the background to carry the data, filler bands are inserted and this additional information (filler band or not) is also stored in the background phases. In this advanced use case, instead of the zero points in the background, which correspond to the points in the foreground, we can put points or white noise of low magnitudes. In the last advanced use presented in this document, we will discuss additional processing methods applied to the output of a variable length encoding such as Huffman encoding, or run-length encoding (RLE), characterized in that a correspondence is established between the output of the variable length encoding or RLE encoding and the output of a FFT-based encoding, using a foreground composed of the largest points and a background composed of the most energetic bands. The goal is to use processes from this document, including transporting the foreground points through the background bands for long distance transmissions. Huffman coding is a lossless data compression algorithm that uses a variable length code to represent a source symbol.
It is usually used in the second stage of compression, once the media-specific redundancy has been revealed by other algorithms (such as JPEG compression for images, MPEG for video and MP3 for audio). Lossless compression algorithms, such as those used for file compression, also use Huffman coding. For example, LZH (Lha) and deflate (ZIP, gzip, PNG) combine a dictionary compression algorithm (LZ77) with Huffman entropy coding. With Huffman coding, the least frequent points use the most bits while the most frequent points use the fewest bits. With Huffman encoding, the less frequent points, each using the most bits, are mapped to the foreground points, while the most frequent points, each using the fewest bits, are mapped to the background points. The modeling of the output of the Huffman encoding as a FFT codec comprises matching the less frequent points, each using the most bits, to the foreground points, comprises matching the most frequent points, each using the fewest bits, to the background points, comprises matching the number of repetitions, up to a limit, to the phases for the foreground points, and comprises matching the number of repetitions, up to a limit, to the phases for the background points, or ignoring the repetitions in the background. The RLE coding is well suited to black and white scanned documents: instead of coding one bit per point, there is a counter indicating how many white or black points follow each other. RLE coding is also used for Group 3 and Group 4 faxes. The RLE coding is based on the repetition of consecutive elements. The basic principle consists in coding a first element giving the number of repetitions of a value and then completing it with the value to be repeated.
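Basic run-length encoding as just described can be sketched in a few lines; the function name is an illustrative assumption. The closing comment reflects how the document then maps the RLE output onto an FFT-style output.

```python
def rle_encode(bits):
    """Basic run-length encoding: collapse the input into a list of
    (value, run-length) pairs in order of appearance."""
    runs = []
    for b in bits:
        if runs and runs[-1][0] == b:
            runs[-1][1] += 1       # extend the current run
        else:
            runs.append([b, 1])    # start a new run
    return [tuple(r) for r in runs]

# In the FFT mapping of this document, the value to repeat becomes a
# magnitude and the run length (capped at a limit) becomes a phase.
assert rle_encode("WWWBBW") == [("W", 3), ("B", 2), ("W", 1)]
```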
With the output of the variable-length encoding, the least frequent points are matched to the points in the foreground of the FFT encoding output, the number of successive repetitions up to a certain limit being contained in the phase, and the other points are matched to the background of the FFT encoding. We do not take into account the successive repetitions in the background. Finally, we alternate the signs of the phases in the background. The least frequent points are chosen first and the maximum number of these points is chosen so that all the points in the foreground can be completely transported by the background. If the foreground points are transported into the background, on reception, the phases are first decoded to get the foreground, then the background points are found. If there are few non-zero points or if all the points are zero in the background, it may be preferable to use a reconciliation of the points and local peaks in order to have a UNB frame. With the output of the RLE (Run-Length Encoding) encoding, the values to be repeated are matched to the magnitudes and the number of repetitions, up to a certain limit, to the phases. Either two UNB frames are transmitted, or the background is completely generated or completed and the foreground points are transported in the background bands. The modeling of the output of the RLE encoding as a FFT codec comprises matching the values to be repeated to the magnitudes and matching the number of repetitions, up to a certain limit, to the phases. FIGS. 1, 2, 3a, 3b and 4 show an example of maximum reconciliation of the points (no zero FFT bin between points). FIGS. 5a, 5b and 6 show an example of moderate reconciliation of the points (a few zero FFT bins between points). The phases of the displaced points contain the original positions of the points, relative to the first point. We apply a simple rule of three to get the phases.
For example, if we count the displacements from the first point, the phase of the first point is zero, and the other phases are calculated by the formula: phase = −pi + 2*pi*(n−n0)/(N/2−n0), where n = position of the point, n0 = position of the first point, N = number of points of the FFT buffer and pi = 3.14159… FIGS. 7, 8, 9, 10, 11 and 12 show an example of the transport of the foreground points into the background phases. It is assumed that there are 29 points in all, the first 3 points of greatest magnitude forming the foreground, the other 26 points forming the background. The initial phases of the background are zero, the phases of the foreground are: −pi/4, 0 and pi/4. Furthermore, the positions are assumed to be multiples of 10 bins. We choose to transport each point of the foreground with 24 bits: 8 bits for the positions, 8 bits for the magnitudes and 8 bits for the phases. The positions are encoded on 8 bits (values between 0 and 255). The magnitudes are encoded on 8 bits using the logarithm. The magnitude codes are given by the formula: code = CodeMaxi*log(Mag)/log(MagMaxi), where CodeMaxi = maximum number depending on the number of coding bits (255 in our case), Mag = calculated magnitude, MagMaxi = maximum possible magnitude, depending on the number of points in the FFT buffer and the number of bits per point in the time domain. If there are 16 bits per point in the time domain and N points in the FFT buffer, MagMaxi is calculated by the formula: MagMaxi = 32767*N. The phase codes on 8 bits are obtained by applying a simple rule of three, by matching the values between −pi and pi to values between 0 and 255. To transport the whole foreground, 3*24 = 72 bits are needed. We choose for the phases of the background an 8-PSK modulation (Phase-Shift Keying), thus 3 bits per phase. The background can contain: 26*3 = 78 bits.
This is sufficient in this example, but if necessary we can add 3 points of small magnitude at the corresponding positions of the foreground points to increase the transport capacity. We can also use relative positions or relative magnitudes (relative to the previous point) to decrease the number of bits per foreground point. The codes of the positions to be transported are: 55, 80, 100, or in binary: 00110111, 01010000, 01100100. The codes of the magnitudes to be transported are: 221, 226, 223, or in binary: 11011101, 11100010, 11011111. The codes of the phases to be transported are: 97, 128, 160, or in binary: 01100001, 10000000, 10100000. This corresponds to the following bits: 001 101 110 101 000 001 100 100 110 111 011 110 001 011 011 111 011 000 011 000 000 010 100 000. The following correspondences between bits and phases are used: "000": 0.0, "001": pi/4, "011": 2*pi/4, "010": 3*pi/4, "110": 4*pi/4, "111": −3*pi/4, "101": −2*pi/4, "100": −pi/4. FIGS. 13b, 13c and 13d show that the amplitudes of the cosine waves and the amplitudes of the sine waves can be treated as signed magnitudes. FIGS. 14a and 14b show how frequency-hopping spread spectrum (FHSS) can be implemented by changing the starting position of the low frequency signal. FIG. 15 shows how direct-sequence spread spectrum (DSSS) can be implemented by leaving some band positions unoccupied. FIGS. 16b and 16c show examples of decimation in bands. FIG. 16d shows how data can be transported instead of decimated points. The chosen magnitudes must remain lower than the non-decimated magnitudes. FIGS. 17b and 17c show the influence of a simple optimization of the sign of the phase (random sign) on the resulting signal in order to control the maximum amplitude of the signal. FIG. 18 shows a DCT-2 transformation of a signal with 5 main frequencies. The UNB methods of this document can be applied to transformations such as DCT by considering their amplitudes as signed magnitudes.
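The phase and magnitude coding formulas from the worked example above can be written out directly. The helper names are assumptions; the constants follow the example (8-bit codes, 16-bit time-domain samples, first-point phase fixed at zero by convention).

```python
import math

def displacement_phase(n, n0, N):
    """Phase encoding the original position n of a displaced point,
    relative to the first point n0, for an N-point FFT buffer:
    phase = -pi + 2*pi*(n - n0) / (N/2 - n0), with phase(n0) = 0."""
    if n == n0:
        return 0.0
    return -math.pi + 2 * math.pi * (n - n0) / (N / 2 - n0)

def magnitude_code(mag, mag_maxi, bits=8):
    """Logarithmic magnitude code: CodeMaxi * log(Mag) / log(MagMaxi)."""
    code_maxi = 2 ** bits - 1
    return round(code_maxi * math.log(mag) / math.log(mag_maxi))

N = 1024
mag_maxi = 32767 * N            # 16-bit samples, N-point buffer
assert magnitude_code(mag_maxi, mag_maxi) == 255   # full scale maps to 255
assert abs(displacement_phase(N // 2, 10, N) - math.pi) < 1e-9
```

The last bin (n = N/2) maps to +pi, so the usable phase range (−pi, pi] covers all positions from the first point to the end of the half-spectrum.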
FIG. 19 shows a table with several examples of mapping of a binary RLE output to a FFT output. Depending on your needs, you can use the appropriate RLE (binary or more) to FFT mapping, in particular transporting the foreground into the background phases. We have chosen four phases for the successive repetitions, 0, PI/4, 2PI/4, 3PI/4, for 1, 2, 3 and 4 repetitions, where PI = 3.14159265359, B = black pixel and W = white pixel. FIG. 20 shows a table with an example of mapping of a Huffman output to a FFT output. The Huffman coding can also be used to obtain a FFT output. We have chosen two phases for the successive repetitions, 0 and PI/2 for 1 and 2 repetitions. Several types of applications are possible with the processes and methods of this document. We give below six simple examples.
First Example: Data from a sensor to be transmitted. We have a reading every three seconds and we must transmit every ten minutes. We therefore have FFT buffers of 200 points to consider. We take FFT buffers of 200 or 256 points and we put the service data plus an exact value of the reading in the background phases.
Second Example: Voice communications or vital sign monitoring, using ultra-narrow bandwidth and analog amplitude modulation, or analog frequency modulation, or analog QAM. 12 to 16 local peaks with phase are enough to have a good quality. We can be satisfied with sending 30 to 60 frames per second to respect the constraints of real time.
Third Example: Broadcasting multimedia data, using multiple frequencies or subcarriers, with analog or digital modulations, with independent analog or digital signal transmissions. The frequencies or subcarriers must be well separated. The foreground is also divided into bands. Pure data can be transported in the background using the phases. Each band is associated with a frequency or subcarrier.
Fourth Example: Broadcasting multimedia data, using techniques similar to OFDM. Intermediate subcarriers are used.
Each band (foreground or background) is associated with a subcarrier by shifting the band to the subcarrier. The frequency at the center of the band can be chosen as the subcarrier frequency if a contiguous band area is taken. Pure data can be transported in the background by using the phases. iFFTs (inverse FFTs) are performed in order to have a single time domain signal to send. A DAC (digital-to-analog) converter is used to send an analog signal. On the receiver side, on receiving the signal, after ADC (analog-to-digital) conversion, FFTs are carried out to recover the subcarriers.
Fifth Example: Pressing multimedia data onto optical discs, using analog frequency modulations, as for the LaserDisc (pits engraved on the disc). Writing multimedia data to magnetic tapes, using analog frequency modulations, as for VHS (waveforms on tape).
Sixth Example: Creating a blockchain network, based on images. Semi-decentralized system, with a central authority playing an important role, but not controlling everything.
The processes and methods of this document are intended to be used in underwater communications, in satellite communications, in radar communications, in wireline communications, in wireless communications, with the dial-up telephone network, with the mobile network, with ADSL, with optical fiber, with LED bulbs, etc.:
- where there is a need for a thin beam for very long distance communications at low speed, low power and low radiation;
- where there is a need for high speed communications using techniques similar to OFDM or F-OFDM.
11863368 | DESCRIPTION OF EMBODIMENTS The following describes the embodiments of the present disclosure with reference to accompanying drawings. An embodiment of this application provides a method for subscribing to event streams. In this method, a first device can subscribe to a plurality of event streams of a second device at a time. The second device may report, to the first device based on the subscription, data of one or more event streams that is required by the first device. Specifically, the first device generates a first message used to subscribe to event streams, where the first message includes a group identifier, and the group identifier corresponds to a plurality of event streams; and the first device sends the first message to the second device, to obtain data of the plurality of event streams corresponding to the group identifier. After receiving the first message, the second device generates, based on the group identifier included in the first message, a subscription corresponding to the plurality of event streams. FIG. 1 is a schematic diagram of a network scenario according to an embodiment of this application. In the network scenario shown in FIG. 1, a control device 101 can perform subscription setting for a device 102 and a device 103. For example, the control device 101 is a client on which a network management protocol is run. The device 102 and the device 103 are both servers on which a network management protocol is run. A communication protocol used between the control device 101 and the device 102 is a network management protocol, and may be specifically a network configuration protocol (NETCONF) or a representational state transfer configuration protocol (RESTCONF). A communication protocol used between the control device 101 and the device 103 is the same as the communication protocol used between the control device 101 and the device 102.
The device 102 and the device 103 may be any network device that supports the foregoing network management protocol, for example, a router, a switch, or a server. FIG. 2 is a schematic flowchart of a method for subscribing to event streams according to Embodiment 1 of this application. The method for subscription provided in this embodiment of this application is now described by using interaction between the control device 101 and the device 102 as an example with reference to FIG. 1 and FIG. 2. S201: The control device 101 and the device 102 perform capability negotiation with each other. For example, the control device 101 and the device 102 establish a session that is based on a network management protocol. The device 102 sends a capability identifier to the control device 101, where the capability identifier is used to identify that the second device supports subscribing to a plurality of event streams. For example, the device 102 sends a hello message to the control device 101, and the hello message includes the capability identifier. When the network management protocol is NETCONF, the capability identifier included in the hello message may use the following form:
Netconf bulk-stream-sn capability:
urn:ietf:params:netconf:capability:bulk-stream-sn:1.0
When the network management protocol is RESTCONF, the capability identifier included in the hello message may use the following form:
Restconf bulk-stream-sn capability:
urn:ietf:params:restconf:capability:bulk-stream-sn:1.0
Netconf bulk-stream-sn capability represents a capability of bulk event streams under NETCONF. Restconf bulk-stream-sn capability represents a capability of bulk event streams under RESTCONF. bulk-stream-sn:1.0 represents supporting subscribing to bulk event streams. The bulk event streams may also be referred to as a plurality of event streams.
An event stream is a group of continuous events that are sorted in a chronological order and that are converged in some cases, for example, system restart related parameters, node configuration information, node status information, alarm events, and delays. By using an example in which the network management protocol is NETCONF, the hello message uses the following format:
<hello xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <capabilities>
    <capability>urn:ietf:params:netconf:base:1.1</capability>
    <capability>urn:ietf:params:netconf:capability:startup:1.0</capability>
    <capability>urn:ietf:params:netconf:capability:bulk-stream-sn:1.0</capability>
  </capabilities>
  <session-id>4</session-id>
</hello>
For example, after receiving the capability identifier of the device 102, the control device 101 may learn that the device 102 supports subscribing to a plurality of event streams. The control device 101 may determine the plurality of event streams based on a configuration or a requirement. The control device 101 packages the plurality of event streams into one group, and configures a corresponding group identifier. The control device 101 obtains a correspondence, where the correspondence includes the group identifier and the plurality of event streams corresponding to the group identifier. The group identifier is an index of bulk subscription models. The correspondence may be represented by using the bulk subscription models. The bulk subscription models are models that subscribe to a plurality of event streams by using YANG models. The control device 101 may send the correspondence to the device 102. A model that is related to the correspondence and that is in the bulk subscription models may be represented as follows:
module: ietf-bulk-subscription
  +--rw groups
     +--rw group* [group-id]
        +--rw group-id    string
        +--rw stream*     string
where module: ietf-bulk-subscription represents that the data model is the bulk subscription model. group-id represents the group identifier.
stream* represents the plurality of event streams. For example, a requirement based on which the control device 101 packages the plurality of event streams may be a service requirement. The service requirement may correspond to a plurality of event streams. For example, when the service requirement is service assurance, the event stream may be an alarm event. The alarm event may be specifically a device alarm, a communication alarm, a processing error alarm, or the like. When the service requirement is service assurance, the event stream may alternatively be a threshold, a delay, QoS, or the like of a service parameter. When the service requirement is fault diagnosis, the event stream may be a response time, a cyber attack, CPU usage, or the like. S202: The control device 101 generates the first message, where the first message includes the group identifier. For example, the control device 101 subscribes to the plurality of event streams on the device 102 by sending the first message, to obtain the data of the plurality of event streams corresponding to the group identifier. The first message may use the following format to send the group identifier, to subscribe to the plurality of event streams corresponding to the group identifier:
augment /sn:subscriptions/sn:subscription/sn:target:
  +--:(stream-group)
     +--rw group-id?    -> /groups/group/group-id
augment /sn:establish-subscription/sn:input/sn:target:
  +--:(stream-group)
     +-- group-id?      -> /groups/group/group-id
where augment /sn:subscriptions/sn:subscription/sn:target represents a configured subscription. augment /sn:establish-subscription/sn:input/sn:target represents a dynamic subscription. The dynamic subscription may be implemented through remote procedure call (RPC). Optionally, the first message further includes bulk notification models. The bulk notification models are models used by the device 102 to report, to the control device 101, the data corresponding to the plurality of event streams.
The bulk notification model may use the following form:
module: ietf-bulk-notification
  augment-structure /nm:message/nm:message-header:
    +--rw message-type    identityref
    +--rw group-id?       string
where ietf-bulk-notification represents that the data model is the bulk notification model. group-id represents the group identifier. Optionally, the first message further includes a message identifier, so that when a second message that includes a subscription identifier and that is sent by the device 102 is received, it is determined, based on the message identifier carried in the second message, that the subscription identifier corresponds to the group identifier. S203: The control device 101 sends the first message to the device 102. For example, the control device 101 may send the first message to the device 102 by using the network management protocol. S204: The device 102 generates the second message, where the second message includes the subscription identifier. For example, the device 102 generates the subscription based on the group identifier in the first message and obtains the subscription identifier. The subscription identifier may be a randomly generated identifier. Subscription generation is to enable, by issuing a configuration command or a configuration invoking command, the device to report data based on a data model. The subscription identifier included in the second message generated by the device 102 may use the following form:
+--ro output
   +--ro id                             subscription-id
   +--ro replay-start-time-revision?    yang:date-and-time {replay}?
where output represents a reply. id represents the subscription identifier (subscription-id). Optionally, the second message further includes the message identifier. The device 102 sends the message identifier and the subscription identifier to the control device 101, so that the control device 101 determines, based on the message identifier, that the subscription identifier corresponds to the group identifier.
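The exchange in S201 to S204 can be modeled with a small sketch: a group identifier indexes several event streams, and establishing a subscription for a group returns one subscription identifier covering all of them. All class, method, and stream names here are illustrative assumptions, not part of the NETCONF or YANG models.

```python
import itertools

class EventStreamSubscriber:
    """Toy model of bulk subscription: one group identifier maps to
    several event streams; one subscription identifier per group."""
    def __init__(self):
        self.groups = {}          # group-id -> list of event streams
        self.subscriptions = {}   # subscription-id -> group-id
        self._ids = itertools.count(1)

    def register_group(self, group_id, streams):
        """Store the correspondence sent by the control device."""
        self.groups[group_id] = list(streams)

    def establish_subscription(self, group_id):
        """Generate the subscription and return its identifier,
        as carried in the second message."""
        sub_id = next(self._ids)
        self.subscriptions[sub_id] = group_id
        return sub_id

    def streams_for(self, sub_id):
        """Event streams whose data the third message will carry."""
        return self.groups[self.subscriptions[sub_id]]

dev = EventStreamSubscriber()
dev.register_group("service-assurance", ["alarms", "delay", "qos"])
sid = dev.establish_subscription("service-assurance")
assert dev.streams_for(sid) == ["alarms", "delay", "qos"]
```

The point of the design is visible even in this toy: one round trip establishes reporting for every stream in the group, rather than one subscription exchange per stream.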
S205: The device 102 sends the second message to the control device 101. For example, the device 102 may send the second message to the control device 101 by using the network management protocol. S206: The device 102 generates a third message, where the third message includes the subscription identifier and data of N event streams. For example, the device 102 determines, based on the correspondence in the received bulk subscription model and the group identifier in the first message, the plurality of event streams subscribed by the control device 101. The device 102 periodically obtains the data corresponding to the N event streams, or obtains the data of the N event streams when states corresponding to the N event streams are changed. N is an integer greater than or equal to 1. The plurality of event streams corresponding to the group identifier include the N event streams. To save network resources and improve a response speed, the N event streams may be the plurality of event streams corresponding to the group identifier. The device 102 generates the third message based on the subscription identifier and the data of the N event streams. The third message may use the following format to report the subscription identifier and the data of the N event streams:
structure message
  +--ro message!
     +--ro message-header
     |  +--ro message-time                 yang:date-and-time
     |  +--ro message-id                   uint32
     |  +--ro message-generator-id?        string
     |  +--ro notification-count?          uint16
     +--ro notifications*
     |  +--ro notification-header
     |  |  +--ro notification-time         yang:date-and-time
     |  |  +--ro yang-module?              yang:yang-identifier
     |  |  +--ro subscription-id*          uint32
     |  |  +--ro notification-id?          uint32
     |  |  +--ro observation-domain-id?    string
     |  +--ro notification-contents?
     |  +--ro notification-footer!
     |     +--ro signature-algorithm       string
     |     +--ro signature-value           string
     |     +--ro integrity-evidence?       string
     +--ro message-footer!
        +--ro signature-algorithm          string
        +--ro signature-value              string
        +--ro integrity-evidence?          string
where structure message represents the format of the message. subscription-id represents the subscription identifier. The data of the N event streams may be carried in notification-contents. The device 102 may report the data of the N event streams through bulk reporting and notification. Optionally, the third message further includes the group identifier, so that the control device 101 determines, based on the group identifier, requirements or configurations to which the data is related. The third message may carry the group identifier by using the following format:
module: ietf-bulk-notification
  augment-structure /nm:message/nm:message-header:
    +--rw message-type    identityref
    +--rw group-id?       string
where group-id represents the group identifier. message-header represents a message header. ietf-bulk-notification represents bulk notification. augment-structure /nm:message/nm:message-header represents that the group identifier is carried in the message header of structure message. S207: The device 102 sends the third message to the control device 101. For example, the device 102 may send the third message to the control device 101 by using the network management protocol. In the method provided in Embodiment 1 of this application, the device 102 can obtain, based on the group identifier sent by the control device 101, the data of the N event streams corresponding to the group identifier. The N event streams are event streams subscribed by the control device 101. The data of the plurality of event streams is obtained through subscription at a time, thereby saving network resources. Embodiment 2 FIG. 3 is a schematic flowchart of a method for subscribing to event streams according to Embodiment 2 of this application. The devices in the network scenario shown in FIG. 1 may alternatively use the method provided in the embodiment corresponding to FIG. 3. The method provided in Embodiment 2 differs from the method provided in Embodiment 1 in content of S301 to S303.
For the content of S201 to S207 in Embodiment 2, refer to the corresponding content of S201 to S207 in Embodiment 1. Details are not described herein again. The following describes the content of S301 to S303 with reference to FIG. 1 and FIG. 3.

S301: The control device 101 generates a policy.

For example, the control device 101 generates the policy based on the data of the N event streams that is reported by the device 102, where the policy is used to indicate the device 102 to perform an action corresponding to the N event streams.

Take as an example a case in which the plurality of event streams subscribed to by the control device 101 are alarm events caused by faults: a fault of a module 1 on the device 102 causes an alarm event 1, a module 2 on the device 102 causes an alarm event 2 due to invocation of the faulty module 1, and a module 3 on the device 102 causes an alarm event 3 due to invocation of the module 2. The modules on the device 102 may be hardware or software. This is not limited in this embodiment of this application. The device 102 may collect statistics on the foregoing alarm events, and determine, based on a statistical quantity and a preset threshold, whether to notify a user. The device 102 reports the alarm event 1, the alarm event 2, and the alarm event 3 to the control device 101. The control device 101 learns, based on the alarm event 1, the alarm event 2, and the alarm event 3, that the alarm event 2 and the alarm event 3 are caused by the alarm event 1, for example, based on the time at which each alarm is generated and/or the invoking relationship between the modules. The control device 101 determines that the alarm event 1 is a valid alarm, and that the alarm event 2 and the alarm event 3 are invalid alarms. The policy is to confirm the alarm event 1 and clear the alarm event 2 and the alarm event 3.

S302: The control device 101 sends the policy to the device 102. For example, the control device 101 sends the policy to the device 102 by using the network management protocol.
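The root-cause reasoning of S301 can be sketched in a few lines. This is a hypothetical helper, not the control device's actual algorithm: it assumes the causal relations between alarms have already been derived (for example, from alarm timestamps and module invocation relationships), and it confirms root-cause alarms while clearing derived ones.

```python
def derive_alarm_policy(alarms, caused_by):
    """Sketch of the S301 policy decision (illustrative helper).

    alarms: list of alarm names reported by the device.
    caused_by: dict mapping a derived alarm to the alarm that caused it.
    Alarms with no known cause are treated as root causes and confirmed;
    alarms caused by another alarm are treated as invalid and cleared.
    """
    confirm = [a for a in alarms if a not in caused_by]  # root-cause alarms
    clear = [a for a in alarms if a in caused_by]        # secondary alarms
    return {"confirm": confirm, "clear": clear}

policy = derive_alarm_policy(
    ["alarm1", "alarm2", "alarm3"],
    {"alarm2": "alarm1", "alarm3": "alarm2"},
)
# policy == {"confirm": ["alarm1"], "clear": ["alarm2", "alarm3"]}
```

In the example above, alarm 2 and alarm 3 are transitively caused by alarm 1, so only alarm 1 is confirmed, matching the policy described for S301.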
S303: The device 102 performs an action according to the policy.

For example, the device 102 performs the corresponding action according to the policy. Using the foregoing alarm events as an example, the device 102 clears the alarm event 2 and the alarm event 3, and confirms the alarm event 1. Optionally, the device 102 may further notify the user of the alarm event 1 after confirming the alarm event 1, to efficiently clear the fault.

In the method provided in Embodiment 2 of this application, the control device 101 may determine the policy based on the data of the N event streams that is reported by the device 102, so that the device 102 adjusts the action for the event streams, thereby improving a response speed and troubleshooting efficiency.

The control device in this embodiment of this application may include a cloud controller and a cloud analyzer, or the control device may be a cloud analyzer, or the control device may be a device that integrates the functions of the cloud controller and the cloud analyzer. The messages, identifiers, data, and the like in the method provided in this embodiment of this application may all be expressed by using the data models in the YANG models.

FIG. 4 is a schematic diagram of a structure of an apparatus for subscribing to event streams according to an embodiment of this application. The apparatus 400 provided in the embodiment corresponding to FIG. 4 is described from a perspective of a logical structure. The apparatus 400 provided in the embodiment corresponding to FIG. 4 may be the control device 101 in Embodiment 1 or Embodiment 2. A second device in the embodiment corresponding to FIG. 4 may be the device 102 in Embodiment 1 or Embodiment 2. With reference to FIG. 4, the following describes the structure of the apparatus provided in this embodiment of this application.

The apparatus 400 includes a generation module 401 and a first sending module 402. The generation module 401 is configured to generate a first message used to subscribe to event streams.
The first message includes a group identifier, and the group identifier corresponds to a plurality of event streams. The first sending module 402 is configured to send the first message to the second device, to obtain data of the plurality of event streams corresponding to the group identifier. The generation module 401 is configured to support the apparatus 400 in performing step S202 in Embodiment 1 or Embodiment 2. The first sending module 402 is configured to support the apparatus 400 in performing step S203 in Embodiment 1 or Embodiment 2.

Optionally, the apparatus further includes a first receiving module 403. The first receiving module 403 is configured to receive a second message sent by the second device, where the second message includes a subscription identifier, and the subscription identifier is used to identify a subscription generated based on the plurality of event streams. The first receiving module 403 is configured to support the apparatus 400 in performing S205 in Embodiment 1 or Embodiment 2.

Optionally, the apparatus further includes a second receiving module 404. The second receiving module 404 is configured to receive a third message sent by the second device, where the third message includes the subscription identifier and data of at least one event stream in the plurality of event streams. The second receiving module 404 is configured to support the apparatus 400 in performing S207 in Embodiment 1 or Embodiment 2. Optionally, the third message further includes the group identifier.

Optionally, the apparatus further includes an obtaining module 405 and a second sending module 406. The obtaining module 405 is configured to obtain a policy based on the data of the at least one event stream, where the policy is used to indicate the second device to perform an action corresponding to the at least one event stream. The second sending module 406 is configured to send the policy to the second device.
The obtaining module 405 is configured to support the apparatus 400 in performing S301 in Embodiment 2. The second sending module 406 is configured to support the apparatus 400 in performing S302 in Embodiment 2.

Optionally, the apparatus further includes a third receiving module 407. The third receiving module 407 is configured to receive a capability identifier sent by the second device, where the capability identifier is used to identify that the second device supports subscribing to a plurality of event streams. The third receiving module 407 is configured to support the apparatus 400 in performing S201 in Embodiment 1 or Embodiment 2.

Optionally, the apparatus further includes a third sending module 408. The third sending module 408 is configured to send a correspondence to the second device, where the correspondence includes the group identifier and the plurality of event streams. The third sending module 408 is configured to support the apparatus 400 in performing S201 in Embodiment 1 or Embodiment 2.

The first receiving module 403 and the second receiving module 404 in this embodiment of this application may be one receiving unit. The first sending module 402, the second sending module 406, and the third sending module 408 may be one sending unit.

In the apparatus provided in this embodiment of this application, the first message used to subscribe to event streams that is generated by the generation module 401 can enable the second device that receives the first message to report, based on the subscription, the data of the plurality of event streams, helping save network resources.

FIG. 5 is a schematic diagram of a structure of an apparatus for subscribing to event streams according to an embodiment of this application. The apparatus 500 provided in the embodiment corresponding to FIG. 5 is described from a perspective of a logical structure. The apparatus 500 provided in the embodiment corresponding to FIG. 5 may be the device 102 in Embodiment 1 or Embodiment 2.
A first device in the embodiment corresponding to FIG. 5 may be the control device 101 in Embodiment 1 or Embodiment 2. With reference to FIG. 5, the following describes the structure of the apparatus provided in this embodiment of this application.

The apparatus 500 includes a first receiving module 501 and a first generation module 502. The first receiving module 501 is configured to receive a first message used to subscribe to event streams that is sent by the first device, where the first message includes a group identifier, and the group identifier corresponds to a plurality of event streams. The first generation module 502 is configured to generate, based on the first message, a subscription corresponding to the plurality of event streams. The first receiving module 501 is configured to support the apparatus 500 in performing S203 in Embodiment 1 or Embodiment 2. The first generation module 502 is configured to support the apparatus 500 in performing S204 in Embodiment 1 or Embodiment 2.

Optionally, the apparatus further includes a first obtaining module 503 and a first sending module 504. The first obtaining module 503 is configured to obtain a subscription identifier, where the subscription identifier is used to identify the subscription generated based on the plurality of event streams. The first sending module 504 is configured to send a second message to the first device, where the second message includes the subscription identifier. The first obtaining module 503 is configured to support the apparatus 500 in performing S204 in Embodiment 1 or Embodiment 2. The first sending module 504 is configured to support the apparatus 500 in performing S205 in Embodiment 1 or Embodiment 2.

Optionally, the apparatus further includes a second obtaining module 505, a second generation module 506, and a second sending module 507. The second obtaining module 505 is configured to obtain data of at least one event stream in the plurality of event streams.
The second generation module 506 is configured to generate a third message based on the data of the at least one event stream, where the third message includes the subscription identifier and the data of the at least one event stream. The second sending module 507 is configured to send the third message to the first device. The second obtaining module 505 and the second generation module 506 are configured to support the apparatus 500 in performing S206 in Embodiment 1 or Embodiment 2. The second sending module 507 is configured to support the apparatus 500 in performing S207 in Embodiment 1 or Embodiment 2. Optionally, the third message further includes the group identifier.

For example, the second obtaining module 505 is specifically configured to: determine, based on the group identifier, the plurality of event streams corresponding to the group identifier; and periodically obtain data of one or more event streams in the plurality of event streams, or obtain the data of the one or more event streams in the plurality of event streams after states of the one or more event streams are changed.

Optionally, the apparatus further includes a third sending module 508. The third sending module 508 is configured to send a capability identifier to the first device, where the capability identifier is used to identify that the second device supports subscribing to a plurality of event streams. The third sending module 508 is configured to support the apparatus 500 in performing S201 in Embodiment 1 or Embodiment 2.

Optionally, the apparatus further includes a second receiving module 509 and a processing module 510. The second receiving module 509 is configured to receive a policy sent by the first device, where the policy is used to indicate the second device to perform an action corresponding to the at least one event stream. The processing module 510 is configured to perform the action according to the policy.
The second receiving module 509 is configured to support the apparatus 500 in performing S302 in Embodiment 2. The processing module 510 is configured to support the apparatus 500 in performing S303 in Embodiment 2.

Optionally, the apparatus further includes a third receiving module 511. The third receiving module 511 is configured to receive a correspondence sent by the first device, where the correspondence includes the group identifier and the plurality of event streams. The third receiving module 511 is configured to support the apparatus 500 in performing S201 in Embodiment 1 or Embodiment 2.

The first receiving module 501, the second receiving module 509, and the third receiving module 511 in this embodiment of this application may be one receiving unit. The first sending module 504, the second sending module 507, and the third sending module 508 may be one sending unit. The first obtaining module 503 and the second obtaining module 505 may be one obtaining unit.

In the apparatus provided in this embodiment of this application, the first generation module 502 generates, based on the group identifier in the first message, the subscription corresponding to the plurality of event streams, so that the data of the plurality of event streams can subsequently be reported at a time, thereby helping save network resources.

FIG. 6 is a schematic diagram of a structure of an apparatus for subscribing to event streams according to an embodiment of this application. The apparatus 600 provided in the embodiment corresponding to FIG. 6 may be the apparatus 400 provided in the embodiment corresponding to FIG. 4. The apparatus 600 provided in the embodiment corresponding to FIG. 6 is described from a perspective of a hardware structure. The apparatus 600 provided in the embodiment corresponding to FIG. 6 may implement the function of the control device 101 in Embodiment 1 or Embodiment 2.
The apparatus 600 provided in the embodiment corresponding to FIG. 6 includes a processor 601, a memory 602, a communication bus 604, and a communication interface 603. The processor 601, the memory 602, and the communication interface 603 are connected by using the communication bus 604. The memory 602 is configured to store a program. The processor 601 performs, according to executable instructions included in the program read from the memory 602, the method steps performed by the control device 101 in Embodiment 1 or Embodiment 2. The processor 601 may perform negotiation and communication with a second device, namely, the device 102 in Embodiment 1 or Embodiment 2, by using the communication interface 603.

The communication interface 603 is configured to support the apparatus 600 in performing S201, S203, S205, and S207 in Embodiment 1 or Embodiment 2. The communication interface 603 is further configured to support the apparatus 600 in performing S302 in Embodiment 2. The processor 601 is configured to support the apparatus 600 in performing S202 in Embodiment 1 or Embodiment 2. The processor 601 is further configured to support the apparatus 600 in performing S301 in Embodiment 2. The memory 602 is configured to not only store program code and data, but also buffer the group identifier, the subscription identifier, and the data of the N event streams in Embodiment 1 or Embodiment 2.

FIG. 7 is a schematic diagram of a structure of an apparatus for subscribing to event streams according to an embodiment of this application. The apparatus 700 provided in the embodiment corresponding to FIG. 7 may be the apparatus 500 provided in the embodiment corresponding to FIG. 5. The apparatus 700 provided in the embodiment corresponding to FIG. 7 is described from a perspective of a hardware structure. The apparatus 700 provided in the embodiment corresponding to FIG. 7 may implement the function of the device 102 in Embodiment 1 or Embodiment 2.
The apparatus 700 provided in the embodiment corresponding to FIG. 7 includes a processor 701, a memory 702, a communication bus 704, and a communication interface 703. The processor 701, the memory 702, and the communication interface 703 are connected by using the communication bus 704. The memory 702 is configured to store a program. The processor 701 performs, according to executable instructions included in the program read from the memory 702, the method steps performed by the device 102 in Embodiment 1 or Embodiment 2. The processor 701 may perform negotiation and communication with a first device, namely, the control device 101 in Embodiment 1 or Embodiment 2, by using the communication interface 703.

The communication interface 703 is configured to support the apparatus 700 in performing S201, S203, S205, and S207 in Embodiment 1 or Embodiment 2. The communication interface 703 is further configured to support the apparatus 700 in performing S302 in Embodiment 2. The processor 701 is configured to support the apparatus 700 in performing S204 and S206 in Embodiment 1 or Embodiment 2. The processor 701 is further configured to support the apparatus 700 in performing S303 in Embodiment 2. The memory 702 is configured to not only store program code and data, but also buffer the group identifier, the subscription identifier, the data of the N event streams, and the policy in Embodiment 1 or Embodiment 2.

An embodiment of this application provides a system for subscribing to event streams. The system includes a first device and a second device. The apparatus 400 for subscribing to event streams or the apparatus 600 for subscribing to event streams may be disposed on the first device. The apparatus 500 for subscribing to event streams or the apparatus 700 for subscribing to event streams may be disposed on the second device. The first device may perform the actions performed by the control device 101 in Embodiment 1 or Embodiment 2.
The second device may perform the actions performed by the device 102 in Embodiment 1 or Embodiment 2.

A general-purpose processor mentioned in the embodiments of this application may be a microprocessor, or the processor may be any conventional processor. Steps of the method disclosed with reference to the embodiments of the present disclosure may be directly implemented by a combination of hardware and a software module in the processor. When the method is implemented by using software, code that implements the foregoing functions may be stored in a computer-readable medium. The computer-readable medium includes a computer storage medium. The storage medium may be any available medium accessible to a computer. The following is used as an example but is not limiting: the computer-readable medium may be a random access memory (RAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or another compact disc storage, a magnetic disk storage medium or another magnetic storage device, or any other medium that can be used to carry or store expected program code in an instruction or data structure form and can be accessed by a computer. The computer-readable medium may be a compact disc (CD), a laser disc, a digital video disc (DVD), a floppy disk, or a Blu-ray disc.

The embodiments in this specification are all described in a progressive manner. For same or similar parts in the embodiments, refer to each other. Each embodiment focuses on a difference from the other embodiments. Especially, a system embodiment is basically similar to a method embodiment, and therefore is described briefly. For related parts, refer to the descriptions in the method embodiments.
11863369

MODE FOR CARRYING OUT THE INVENTION

Hereinafter, embodiments for implementing the present disclosure (hereinafter referred to as embodiments) will be described with reference to the accompanying drawings. Note that, in this specification, the description of "and/or" means that both "and" and "or" can be taken. Furthermore, in this specification and the drawings, components having substantially the same functional configuration are denoted by the same reference numerals, and redundant explanations are omitted.

The description will be given in the following order.

1. First embodiment of data processing system
2. Description of event data
3. Sending control of event data and original data
4. Accumulation processing of event data
5. Identification of data based on topic ID
6. Sending control in case where event type is Notify
7. Accumulation processing of event data
8. Reservation and assignment of event path
9. Negotiation of event class
10. Second embodiment of data processing system
11. Sending control of event data and original data
12. Negotiation of event class
13. Immediate transfer and delayed transfer
14. Configuration example of cloud computing

<1. First Embodiment of Data Processing System>

FIG. 1 is a block diagram illustrating a configuration example of a first embodiment of a data processing system to which the present disclosure is applied.

A data processing system 1 of FIG. 1 includes a sensor 11 and an event producer 12. The sensor 11 is a sensor device that detects a state of some kind, and supplies sensor data that is a detection result to the event producer 12. Examples of the sensor 11 include an acceleration sensor, a gyro sensor, a magnetic sensor, an odor sensor, an atmospheric pressure sensor, a temperature sensor, a humidity sensor, a wind speed sensor, an optical sensor (an RGB sensor, an IR sensor, and the like), a GPS sensor, and the like, for example, as used as an Internet of Things (IoT) sensor.
The event producer 12 is an application that transfers data acquired by the sensor 11 to a network. The event producer 12 uses data supplied from the sensor 11 as source data (hereinafter also referred to as original data or simply an original), and generates, as event data, a change amount of the original after a certain time point. Generally, the event data has a very small data amount with respect to the original data.

For example, in a case where the sensor 11 is an image sensor (an RGB sensor) that receives RGB light, the original data can be a captured image obtained by imaging, and the event data can be luminance data indicating a change amount in luminance value from a previously obtained captured image. Furthermore, for example, in a case where the sensor 11 is a sensor device that detects the state of a product DB that stores prices of a plurality of products, the price of the product itself is the original data, and a change amount in the price is the event data.

The data processing system 1 also includes an event consumer 14 and an event path manager 15. The event producer 12 sends the original data and the event data to the event consumer 14 via an event path 13. The event path 13 is a virtual path (a communication path) in a network; the virtual path is assigned by the event path manager 15 and is for providing the original data and the event data to the event consumer 14.

The event consumer 14 is an application that utilizes the data acquired by the sensor 11. The event consumer 14 performs predetermined data processing, for example, data analysis processing, recognition processing, or the like, by using the original data and the event data sent from the event producer 12 via the event path 13. By referring to a topic DB 17, the event consumer 14 can select the event data and the original data necessary for itself from among the data to be transferred on the network.
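The producer's role of emitting only the change amount can be illustrated with the product-price example above. This is a minimal sketch; the function name and the dictionary representation of the original data are assumptions for illustration only.

```python
def make_event(prev_original, new_original):
    """Sketch: an event producer emits only the change amount (the event data).

    prev_original / new_original: dicts of item -> price (the original data).
    Returns only the entries whose value changed, as deltas.
    """
    return {
        k: new_original[k] - prev_original.get(k, 0)
        for k in new_original
        if new_original[k] != prev_original.get(k, 0)
    }

delta = make_event({"item-A": 100, "item-B": 250}, {"item-A": 120, "item-B": 250})
# only the changed entry is reported: {"item-A": 20}
```

Because the unchanged entries are dropped, the event data is typically much smaller than the original, which is the property the text relies on to save network resources.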
The event path manager 15 reserves a virtual path connecting the event producer 12 and the event consumer 14 from paths in the network in response to a request from the event producer 12, and assigns the virtual path as the event path 13.

The data processing system 1 further includes a topic manager 16 and the topic database (DB) 17. The topic manager 16 assigns a topic ID and notifies the event producer 12. A topic represents observation target data to be notified as the event data by the event producer 12, and the topic ID is identification information for identifying the topic. Furthermore, the topic manager 16 assigns an event class to every topic ID assigned by itself, and stores the event class in the topic DB 17. The event class indicates the priority of the virtual path when the event path manager 15 assigns the event path 13. The event class is represented by, for example, a class ID, which is identification information for identifying the class. For example, the priority is determined in advance for every class ID such that the class ID of No. 25 is a class having high priority and the class ID of No. 35 is a class having low priority, and the event class is designated by the class ID.

Before transferring the event data, the event producer 12 stores, in the topic DB 17 in advance, a set of the topic ID notified from the topic manager 16 and topic description information (Topic Description) describing the topic. Furthermore, the event producer 12 requests the event path manager 15 to assign the event path 13 with an event class designated.

The topic DB 17 stores, for every topic, a set of the topic ID and the topic description information registered by the event producer 12. Furthermore, the event class of each topic ID registered by the topic manager 16 is also stored in the topic DB 17 in association with the topic ID. The topic DB 17 is referred to by the event consumer 14 for the event consumer 14 to select necessary event data.
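The registration flow above (topic manager assigns a topic ID and event class; the producer then stores the topic description) can be sketched with an in-memory stand-in. The class structure, method names, and ID format are assumptions for illustration, not the system's actual interfaces.

```python
import itertools

class TopicManager:
    """Illustrative in-memory stand-in for the topic manager and topic DB."""

    def __init__(self):
        self._ids = itertools.count(1)
        # topic_id -> {"event_class": class ID, "description": Topic Description}
        self.topic_db = {}

    def assign_topic_id(self, event_class):
        # The topic manager assigns a topic ID and an event class per topic.
        topic_id = f"topic-{next(self._ids)}"
        self.topic_db[topic_id] = {"event_class": event_class, "description": None}
        return topic_id

    def register_description(self, topic_id, description):
        # The event producer stores (topic ID, topic description) before transfer.
        self.topic_db[topic_id]["description"] = description

mgr = TopicManager()
tid = mgr.assign_topic_id(event_class=25)  # class ID No. 25: high priority
mgr.register_description(tid, "luminance changes from Sensor-ID-1")
```

A consumer could then scan `mgr.topic_db` and select topics by description, mirroring how the event consumer refers to the topic DB to pick the data it needs.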
The event path 13 connecting the event producer 12 and the event consumer 14 of FIG. 1 is one P2P (peer-to-peer) connection that transmits the original data, which is based on the sensor data acquired by the sensor 11, and the event data. Although not illustrated, the data processing system 1 includes a plurality of sensors 11. Further, for each of the plurality of sensors 11, the event producer 12 that transfers the sensor data into the network and the event consumer 14 that uses the sensor data are connected by P2P. The event path manager 15 assigns the event path 13 for each P2P connection. The topic manager 16 manages the topic IDs for all the topics to be transferred in the network. The topic DB 17 stores information regarding all the topics to be transferred in the network.

As described above, various sensor devices can serve as the sensor 11 of the data processing system 1 in FIG. 1. Hereinafter, a case will be described as an example in which the sensor 11 is an extended DVS including a synchronous image sensor (FIS) and an asynchronous DVS.

The synchronous image sensor is a sensor that captures an image in synchronization with a vertical synchronization signal, and outputs frame data that is image data of one frame (screen) at the period of the vertical synchronization signal. The DVS is a sensor that outputs event data indicating an occurrence of an event asynchronously in accordance with the timing of the occurrence of the event, with a luminance change in a pixel as the event.

The DVS will be briefly described. The DVS is a sensor including a pixel that photoelectrically converts an optical signal and outputs a pixel signal, and is configured to output a temporal luminance change of the optical signal as an event signal (event data) on the basis of the pixel signal. Such an event sensor is also referred to as an event-based vision sensor (EVS).
While the synchronous image sensor captures an image in synchronization with a vertical synchronization signal and outputs frame data that is image data of one frame (screen) at the period of the vertical synchronization signal, the DVS outputs the event data only at a timing when an event occurs. Therefore, it can be said that the DVS is an asynchronous (or address control) camera. In the following description, a synchronous image sensor that outputs frame-based image data at a predetermined period (frame rate) is referred to as an FIS in order to be distinguished from the DVS.

In the DVS, for example, a voltage signal corresponding to a logarithmic value of the amount of received light incident on each pixel is detected as a pixel signal. Then, the DVS outputs "+1", representing a luminance change in the positive direction, in a case where the change value of the logarithmic luminance represented by the pixel signal changes to be brighter exceeding a predetermined threshold value c, and outputs "−1", representing a luminance change in the negative direction, in a case where the change value changes to be darker exceeding the predetermined threshold value c.

The event data is represented, for example, in the following form called the address-event representation (AER) format.

e = (x, y, p, t)   (1)

In Equation (1), "x, y" represents the coordinates of the pixel in which a luminance change has occurred. The time t of the event is a time stamp indicating the time when the event occurs, and is represented by, for example, a count value of a counter based on a predetermined clock signal in the sensor. It can be said that the time stamp corresponding to the timing at which the event has occurred is time information indicating the (relative) time at which the event has occurred, as long as the interval between events is maintained as it was at the time of occurrence of the events.
The polarity p represents the direction of a luminance change in a case where a luminance change (a light amount change) exceeding the predetermined threshold value c occurs as an event, and indicates whether the luminance change is a change in the positive direction (hereinafter also referred to as positive) or a change in the negative direction (hereinafter also referred to as negative). The polarity p of the event is, for example, represented as "+1" in the case of positive, and represented as "−1" in the case of negative.

Here, it is assumed that there is a time-series sequence e_n = (x, y, p_n, t_n) (n = 1, 2, 3, . . . , N(x, y)) of N events detected by the DVS in the pixel (x, y) on a two-dimensional coordinate space. Assuming that L is a luminance image and there is no noise, the initial luminance value, that is, the luminance value of the pixel (x, y) in an initial state in which no event has occurred, is L_0(x, y). Furthermore, since the threshold value of the logarithmic luminance for the occurrence of a positive or negative event is c, the logarithmic luminance increases by c when p is positive, and the logarithmic luminance decreases by c when p is negative. In this case, using a luminance image L_{n-1}(x, y) at a time t_{n-1} and an event e_n at a time t_n, a luminance image L_n(x, y) at the time t_n can be obtained by the following Equation (2).

L_n(x, y) = L_{n-1}(x, y) × exp(c)   (when p_n > 0)
L_n(x, y) = L_{n-1}(x, y) × exp(−c)  (when p_n < 0)   (2)

However, in practice, as the logarithmic luminance change c decreases, the probability that the DVS will pick up noise increases, and thus some noise correction is required. As described above, the DVS outputs only the position coordinates of the pixel in which the luminance change is detected, the polarity, and the time information.
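The per-pixel update of Equation (2) can be sketched directly. This is a minimal noise-free sketch; the sparse dict representation of the luminance image and the threshold value c = 0.2 are arbitrary choices for illustration.

```python
import math

def update_luminance(L_prev, event, c=0.2):
    """Apply one DVS event to a luminance image, per Equation (2).

    L_prev: dict (x, y) -> luminance L_{n-1}(x, y).
    event: (x, y, polarity, t) in the AER format e = (x, y, p, t).
    c is the logarithmic-luminance threshold (0.2 is an arbitrary example).
    """
    x, y, p, _t = event
    L = dict(L_prev)
    # Positive polarity multiplies linear luminance by exp(c), negative by exp(-c).
    factor = math.exp(c) if p > 0 else math.exp(-c)
    L[(x, y)] = L_prev[(x, y)] * factor
    return L

L0 = {(0, 0): 100.0}
L1 = update_luminance(L0, (0, 0, +1, 0.001))  # brighter: multiply by exp(c)
L2 = update_luminance(L1, (0, 0, -1, 0.002))  # darker: multiply by exp(-c)
```

A positive event followed by a negative event at the same pixel cancels out (exp(c) × exp(−c) = 1), which matches the interpretation of events as increments of ±c in logarithmic luminance.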
Since only a net change (difference) of the position coordinates, the polarity, and the time information is generated and output, there is no redundancy in the information amount of the data, and the DVS has high temporal resolution on the order of μsec. Since the information amount is small, power consumption is lower than that of a frame-based image sensor, there is no unnecessary processing load even in the case of processing the data, and the processing time can be shortened. Since high-speed and low-delay data output is possible, an accurate time at which the event has occurred can be obtained.

Note that the sensor 11, which is the extended DVS including the FIS that outputs frame data and the DVS that outputs the event data, may have a mode in which the FIS and the DVS are provided in one device and adjusted so as to have the same imaging range, or a configuration may be adopted in which the FIS and the DVS are provided as different devices arranged adjacent to each other and adjusted so as to have the same imaging range. Furthermore, one sensor may be used in which each pixel can output both the event data of the DVS and the image frame data of the FIS. Examples of the sensor in which each pixel can output both the event data of the DVS and the image frame data of the FIS include, for example, the dynamic and active-pixel vision sensor (DAVIS) disclosed in "Brandli et al., A 240×180 130 dB 3 us latency global shutter spatiotemporal vision sensor, IEEE JSSC, 2014".

The sensor 11 notifies the event producer 12 of an event e_n = (x, y, p_n, t_n) corresponding to the luminance change amount between the luminance image L_{n-1}(x, y) at the time t_{n-1}, which serves as the original data, and the luminance image L_n(x, y) at the time t_n.

<2. Description of Event Data>

Next, the event data to be transferred into the network by the event producer 12 will be described with reference to FIG. 2.
As illustrated in FIG. 2, the event data is transferred to the event path 13 reserved in a cloud 21 (network), and includes at least a topic ID, an original reference, and an event type. The event path 13 is assigned by the topic manager 16 in accordance with the topic ID, and is a path according to the event class. The topic ID (Topic ID) is observation data identification information for identifying observation target data to be notified as the event data. The event type indicates the type of the event data, and stores either Update, which includes data of a change amount, or Notify, which does not include data of a change amount but notifies only of the fact that there has been a change. In a case where the event type is Update, the original reference (Original Ref) stores a reference address that refers to the original data before the luminance change expressed in Update is applied. Whereas, in a case where the event type is Notify, the original reference stores a reference address that refers to the updated original data after the change phenomenon (the luminance change) expressed in Notify has occurred. The original data in a case where the sensor 11 is the extended DVS is the luminance image L_n, that is, set data in which the pixel value L_n(x, y) is set for all the pixels (x, y). Therefore, the original data in a case where the event type is Update is the luminance image L_{n-1} before the luminance change at a time t_n is applied, and the original data in a case where the event type is Notify is the luminance image L_n after the luminance change at the time t_n has occurred. By default, the event class is set by the topic manager 16 in accordance with a topic, that is, observation target data to be notified as the event data. However, depending on the type of the sensor 11, the event class may be fixedly set in advance, such as in a case where a priority specific to the sensor 11 is designated.
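As a hedged sketch (the field names and types below are assumptions for illustration; the description specifies the fields, not a concrete encoding), the event data just described can be modeled as:

```python
from dataclasses import dataclass
from enum import Enum
from typing import List, Optional, Tuple

class EventType(Enum):
    UPDATE = "Update"  # carries change-amount data; Original Ref points to pre-change data
    NOTIFY = "Notify"  # no change-amount data; Original Ref points to post-change data

@dataclass
class EventData:
    topic_id: str                 # identifies the observation target data
    original_ref: str             # reference address of the original data
    event_type: EventType
    # AER-format events (x, y, p, t); present only when event_type is UPDATE
    update: Optional[List[Tuple[int, int, int, float]]] = None

ev = EventData(topic_id="Sensor-ID-1",
               original_ref="ref://sensor-1/L_7",   # hypothetical address
               event_type=EventType.UPDATE,
               update=[(10, 20, +1, 0.008)])
```

The same structure covers both event types: a Notify event simply leaves the update field empty, matching the format of FIG. 15.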
Alternatively, the event class may be set through negotiation with the event consumer 14, which is the data transfer destination. Regardless of the method by which the event class is set, the event class can be changed in accordance with a traffic situation or the like in the network. FIGS. 3 to 6 illustrate data examples to be adopted as the topic ID. FIG. 3 illustrates an example in which a sensor ID (sensor identification information) for globally uniquely identifying the sensor 11 is adopted as the topic ID. In the example of FIG. 3, "Sensor-ID-1", which is the sensor ID, is used as it is as the topic ID. FIG. 4 illustrates an example in which a combination of a sensor ID for globally uniquely identifying the sensor 11 and a ROI-ID (object identification information) for identifying a specific object region (a region of interest: ROI) in an image that is sensor data is adopted as the topic ID. In the example of FIG. 4, the sensor ID is "Sensor-ID-1", and "ROI-ID-1" and "ROI-ID-2" for identifying object regions ROI are individually assigned to two objects, a car and a person, in an image generated by the sensor 11. "Sensor-ID-1/ROI-ID-1" is set as the topic ID for the car object, and "Sensor-ID-1/ROI-ID-2" is set as the topic ID for the person object. The ROI-ID for identifying an object region ROI is only required to be unique within the sensor. FIG. 5 illustrates an example in which an object ID (global object identification information) for globally uniquely identifying an object detected in an image that is sensor data is adopted as the topic ID. In FIG. 5, in an image generated by the sensor 11, "Object-ID-1" is assigned to a car object and "Object-ID-2" is assigned to a person object so as to be globally unique. The object ID is information that enables identification of a target object across a plurality of the sensors 11. "Object-ID-1" is set as the topic ID for the car object, and "Object-ID-2" is set as the topic ID for the person object.
FIG. 6 illustrates an example in which a query ID (query identification information) for identifying a query for observation target data is adopted as the topic ID. The query ID for identifying the query for observation target data is set by the event consumer 14 (or an event subscriber 116 in FIG. 29), which is the side using the data. For example, in a case where a query such as "a trajectory of a man around 60 years old who is moving suspiciously at an ATM near Osaki Station" is set as a query for specifying an event by the event consumer 14, the query ID, which is query identification information for identifying the query, is assigned by the topic manager 16 and is notified to the event consumer 14 and the sensor 11. The query ID is determined so as to be globally unique regardless of the sensor 11. In the example of FIG. 6, "Query-Token-ID-1" is assigned as the query ID of the above-described query and is set as the topic ID.

<3. Sending Control of Event Data and Original Data>

Next, with reference to the flowchart of FIG. 7, sending control of event data and original data acquired by the sensor 11 will be described. Note that the description in and after FIG. 7 refers to a predetermined pixel (x, y) among the plurality of pixels included in the sensor 11, but all the control described below can be executed for the entire luminance image L by applying it to each of all the pixels of the sensor 11. First, in step S1, the sensor 11 executes "Capture Original". Specifically, the sensor 11 acquires the luminance image L_{n-1}(x, y) at a time t_{n-1} as original data. In step S2, the sensor 11 executes "Detect Update & Send Update". Specifically, the sensor 11 detects the luminance image L_n(x, y) at a time t_n, and supplies event data e_n = {x, y, p_n, t_n} indicating the luminance change between the luminance image L_{n-1}(x, y) at the time t_{n-1} and the luminance image L_n(x, y) at the time t_n to the event producer 12, together with the luminance image L_{n-1}(x, y) at the time t_{n-1}, which is the original data.
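As a brief illustration of the four topic ID schemes of FIGS. 3 to 6 (the helper function below is not part of the described embodiment; the ID strings follow the examples in the figures):

```python
def make_topic_id(sensor_id=None, roi_id=None, object_id=None, query_id=None):
    """Build a topic ID using one of the four schemes illustrated in FIGS. 3-6."""
    if query_id:                      # FIG. 6: globally unique query ID
        return query_id
    if object_id:                     # FIG. 5: globally unique object ID
        return object_id
    if sensor_id and roi_id:          # FIG. 4: sensor ID + per-sensor ROI-ID
        return f"{sensor_id}/{roi_id}"
    return sensor_id                  # FIG. 3: sensor ID used as-is

assert make_topic_id(sensor_id="Sensor-ID-1") == "Sensor-ID-1"
assert make_topic_id(sensor_id="Sensor-ID-1", roi_id="ROI-ID-1") == "Sensor-ID-1/ROI-ID-1"
assert make_topic_id(object_id="Object-ID-2") == "Object-ID-2"
assert make_topic_id(query_id="Query-Token-ID-1") == "Query-Token-ID-1"
```

Note that only the ROI-ID needs to be combined with the sensor ID, since it is unique only within one sensor, whereas object IDs and query IDs are globally unique on their own.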
The event producer 12 acquires the event data e_n and the luminance image L_{n-1}(x, y). The event producer 12 executes "Generate Update Event" in step S3, and executes "Send Update Event" in step S4. Specifically, the event producer 12 generates event data and sends it to the event consumer 14. In this processing, the event e_n supplied as a luminance change amount from the sensor 11 is sent as it is to the event consumer 14 as the event data. The processing of steps S2 to S4 is repeatedly executed a plurality of times as necessary, in some cases. In step S5, the event consumer 14 executes "Request Original". Specifically, the event consumer 14 requests the event producer 12 for the original data by designating the original reference (Original Ref) included in the event data e_n. Upon receiving the request for the original data from the event consumer 14, the event producer 12 executes "Send Original" in step S6. That is, the event producer 12 sends the luminance image L_{n-1}(x, y) at the time t_{n-1} as the original data to the event consumer 14. In step S7, the event consumer 14 acquires the luminance image L_{n-1}(x, y) at the time t_{n-1} as the original data sent from the event producer 12, and executes "Update Original & Process Updated Original". Specifically, the event consumer 14 executes a process of recovering the luminance image L_n(x, y) at the time t_n according to Equation (2) described above. In a case where no luminance change has occurred in the pixel (x, y), the luminance image becomes L_n(x, y) = L_{n-1}(x, y) * 1 with c = 0. Moreover, the event consumer 14 performs predetermined application processing using the updated luminance image L_n(x, y), for example, image rendering, analysis processing, or the like. The above processing of steps S1 to S7 is repeatedly executed. The event producer 12 erases the original data and the event data sent in the past at an appropriate timing, such as after elapse of a predetermined time after sending of the original data.
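A hedged sketch of the request/response exchange of steps S2 to S7 (the class names and in-memory storage are assumptions for the example; an actual deployment would carry these messages over the reserved event path):

```python
import math

class EventProducer:
    """Holds original data keyed by reference address and forwards events."""
    def __init__(self):
        self.store = {}  # original_ref -> luminance image (dict of pixel -> value)

    def send_update(self, original_ref, original, event):
        self.store[original_ref] = original  # keep original for later "Request Original"
        return {"original_ref": original_ref, "event_type": "Update", "update": event}

    def send_original(self, original_ref):   # "Request Original" handler (steps S5/S6)
        return self.store[original_ref]

class EventConsumer:
    def recover(self, producer, event_data, c):
        """Fetch the pre-change original, then apply Equation (2) (step S7)."""
        original = dict(producer.send_original(event_data["original_ref"]))
        x, y, p, t = event_data["update"]
        original[(x, y)] *= math.exp(c if p > 0 else -c)
        return original

producer, consumer = EventProducer(), EventConsumer()
ev = producer.send_update("ref-1", {(0, 0): 100.0}, (0, 0, +1, 0.001))
recovered = consumer.recover(producer, ev, c=0.2)
```

The point of the flow is that the bulky original travels only on demand: the producer sends lightweight event data first, and the consumer pulls the original via its reference address only when needed.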
According to the sending control of the event data and the original data described above, by sending the event data and acquiring the original data as necessary, it is possible to reduce network traffic and efficiently perform transfer processing on a large amount of data. The sending control described with reference to FIG. 7 is an example in which the event producer 12 sends the original data after receiving the request from the event consumer 14. However, there is also a method of unilaterally sending the original data, for example, periodically sending the original data at a fixed period. FIG. 8 is another example of sending control of event data and original data, and is a flowchart illustrating the sending control in a case of periodically sending the original data. In this processing, first, in step S21, the sensor 11 executes "Capture Original". That is, the sensor 11 acquires the luminance image L_{n-1}(x, y) at a time t_{n-1} as the original data. In step S22, the sensor 11 executes "Detect Update & Send Update". That is, the sensor 11 detects the luminance image L_n(x, y) at a time t_n, and notifies the event producer 12 of event data e_n = {x, y, p_n, t_n} indicating the luminance change between the luminance image L_{n-1}(x, y) at the time t_{n-1} and the luminance image L_n(x, y) at the time t_n, together with the luminance image L_{n-1}(x, y) at the time t_{n-1}, which is the original data. The event producer 12 acquires the event data e_n and the luminance image L_{n-1}(x, y). In step S23, the event producer 12 executes "Send Original". That is, the event producer 12 sends the luminance image L_{n-1}(x, y) at the time t_{n-1}, which is the original data, to the event consumer 14. The event consumer 14 receives the luminance image L_{n-1}(x, y). The event producer 12 executes "Generate Update Event" in step S24, and executes "Send Update Event" in step S25.
The processing in steps S24 and S25 is similar to the processing in steps S3 and S4 in FIG. 7, and the event e_n at the time t_n notified as the luminance change amount from the sensor 11 is sent as it is to the event consumer 14 as the event data. The processing of steps S22 to S25 is repeatedly executed a plurality of times as necessary, in some cases. In step S26, the event consumer 14 acquires the event data e_n at the time t_n sent from the event producer 12, and executes "Update Original & Process Updated Original". That is, the event consumer 14 uses the event data e_n at the time t_n and the luminance image L_{n-1}(x, y) at the time t_{n-1} acquired before the event data e_n to execute the process of recovering the luminance image L_n(x, y) at the time t_n with Equation (2) described above. Subsequently, the event consumer 14 performs predetermined application processing using the updated luminance image L_n(x, y), for example, image rendering, analysis processing, or the like. The event producer 12 erases the original data and the event data sent in the past at an appropriate timing, such as after elapse of a predetermined time after sending of the original data. The above processing of steps S21 to S26 is repeatedly executed. In a case where the event consumer 14 receives the original data redundantly, the redundant original data is discarded.

<4. Accumulation Processing of Event Data>

In the data sending control described with reference to FIG. 7, in "Generate Update Event" and "Send Update Event" executed by the event producer 12, a process of transferring the event data as it is from the sensor 11 to the event consumer 14 is executed. However, in practice, traffic increases if the above-described sending control is performed for every single pixel. Therefore, a process is executed in which the event producer 12 accumulates update data to some extent before sending to the event consumer 14, or the event consumer 14 accumulates update data to some extent before requesting the original data.
In other words, as illustrated on the left side of FIG. 9, the process of "Generate Update Event" executed by the event producer 12 can include accumulation processing ("Accumulate Update") of accumulating a plurality of pieces of event data, which is update data, corresponding to a luminance change amount. Alternatively, as illustrated on the right side of FIG. 9, the process of "Request Original" executed by the event consumer 14 can include accumulation processing ("Accumulate Update") of accumulating a plurality of pieces of event data, which is update data, corresponding to a luminance change amount. As levels (hereinafter referred to as accumulation levels) at which the event producer 12 or the event consumer 14 completes the process of accumulating the update data to some extent, for example, the following five levels (1) to (5) are considered.

Accumulation level (1) is a level at which sending is immediately performed in any case after one piece of net event data subjected to noise removal is generated. At this Accumulation level (1), the event producer 12 transfers the acquired event data as it is, as illustrated in FIG. 7.

Accumulation level (2) is a level at which sending is performed after accumulating until the event data of a plurality of pixels reaches a value equal to or more than a predetermined threshold value determined in advance.

Accumulation level (3) is a level at which sending is performed after accumulating until a region (a rectangular region or an edge region) of an object of a subject can be extracted.

Accumulation level (4) is a level at which sending is performed after accumulating until an object of a subject can be recognized (subjected to content understanding or classification) and identified to be an observation target.
Accumulation level (5) is a level at which sending is performed after accumulating until at least one object of a subject can be recognized as a cluster (a set of pixels) and a trajectory of the object can be tracked (the object starts to move). Furthermore, there are various variations according to the level and the combination of AI processing. These Accumulation levels (1) to (5) can be set for each topic ID. The process of accumulating a plurality of pieces of event data can also be divided between the event producer 12 side and the event consumer 14 side. (Case 1) in FIG. 10 illustrates a sending control example in which the event producer 12 side performs "Generate Update Event" and "Send Update Event" after accumulating one or more pieces of event data at any of Accumulation levels (1) to (5) described above. Specifically, in step S41, the event producer 12 accumulates one or more pieces of event data at any of Accumulation levels (1) to (5) described above. Then, the event producer 12 executes "Generate Update Event" in step S42, and executes "Send Update Event" in step S43. In step S44, the event consumer 14 executes "Request Original" to request the event producer 12 for the original data. (Case 2) in FIG. 10 illustrates a sending control example in which the event consumer 14 side performs "Request Original" after accumulating a plurality of pieces of event data at any of Accumulation levels (2) to (5) described above. Specifically, in step S51, when update data is generated in one pixel by acquiring event data from the sensor 11, the event producer 12 executes "Generate Update Event" in step S52, and executes "Send Update Event" in step S53. In step S54, the event consumer 14 receives the event data from the event producer 12 and accumulates a plurality of pieces of event data at any of Accumulation levels (2) to (5) described above. Then, in step S55, the event consumer 14 executes "Request Original" and requests the event producer 12 for the original data.
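A minimal sketch of Accumulation level (2), i.e., buffering events until a predetermined count threshold is reached before sending (the class and threshold semantics are assumptions; the description does not prescribe a concrete implementation):

```python
class UpdateAccumulator:
    """Accumulation level (2): hold events until the buffered count reaches
    a predetermined threshold, then release them as one batch to send."""
    def __init__(self, threshold):
        self.threshold = threshold
        self.buffer = []

    def add(self, event):
        self.buffer.append(event)
        if len(self.buffer) >= self.threshold:
            batch, self.buffer = self.buffer, []
            return batch          # ready to "Send Update Event"
        return None               # keep accumulating

acc = UpdateAccumulator(threshold=3)
acc.add((0, 0, +1, 0.001))        # buffered, nothing sent yet
acc.add((1, 0, -1, 0.002))        # still buffering
batch = acc.add((2, 1, +1, 0.003))  # threshold reached: batch of 3 released
```

The same buffering logic can run on either side, matching (Case 1) and (Case 2) of FIG. 10: on the producer before "Send Update Event", or on the consumer before "Request Original".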
(Case 3) in FIG. 10 illustrates a sending control example in which the event consumer 14 side performs "Request Original" after accumulating a plurality of pieces of event data at any of Accumulation levels (3) to (5) described above. Specifically, in step S61, the event producer 12 accumulates one or more pieces of event data at Accumulation level (1) or (2) described above. Then, the event producer 12 executes "Generate Update Event" in step S62, and executes "Send Update Event" in step S63. In step S64, the event consumer 14 receives event data from the event producer 12 and accumulates a plurality of pieces of event data at any of Accumulation levels (3) to (5) described above. Then, in step S65, the event consumer 14 executes "Request Original" and requests the event producer 12 for the original data. For example, assume that, after the luminance image L_1(x, y) at a time t_1 is acquired by the event producer 12, the event consumer 14 collectively acquires 10 pieces of event data e_2 to e_11 from a time t_2 to a time t_11. Thereafter, when the event consumer 14 executes "Request Original" to request the event producer 12 for the original data, the luminance image L_1(x, y) at the time t_1 is sent as the original data from the event producer 12. FIG. 11 illustrates a data format of event data in a case where the event type is Update. As described with reference to FIG. 2, the event data includes the topic ID (Topic ID), the original reference (Original Ref), and the event type. In a case where the event type is Update, one or more pieces of event data are further stored as update data (Update). The topic ID (Topic ID) stores, for example, a sensor ID, a combination of the sensor ID and a ROI-ID, an object ID, a query ID, or the like. In the example of FIG. 11, "Sensor-ID-1", which is the sensor ID for globally uniquely identifying the sensor 11, is stored. In the event type, Update is stored.
As specific data of the update data, one or a plurality of pieces of the event data e is stored according to the AER format of Equation (1) described above. Alternatively, a plurality of pieces of the event data e may be stored in a compressed AER format obtained by compressing the plurality of pieces of the event data e in the AER format. The compressed AER format will be described. When k pieces of event data e_{n-k+1}, . . . , e_{n-2}, e_{n-1}, e_n at times t_{n-k+1}, . . . , t_{n-2}, t_{n-1}, t_n of the same coordinates (x, y) are collectively expressed, (x, y, p_{n-k+1}, . . . , p_{n-2}, p_{n-1}, p_n, t_{n-k+1}, . . . , t_{n-2}, t_{n-1}, t_n) is obtained. However, if the time Δt = t_n − t_{n-k+1} over the k events is too large, the delay is increased. Therefore, the time Δt and the number of pieces k are dynamically determined so that the delay is minimized as much as possible and falls within the temporal resolution requirement of an event requested by an application (the event consumer 14). In a case where q pieces of data having a positive polarity p and r pieces of data having a negative polarity p are generated among the k pieces of event data e_{n-k+1}, . . . , e_{n-2}, e_{n-1}, e_n in the range of the time Δt, the change amount in logarithmic luminance in the range of the time Δt becomes (q − r) = s, and thus the k events can be expressed as compressed-e(x, y, s, t_n) according to the compressed AER format. As described above, in a case where the time resolution is allowed to be coarse within the range of the time Δt, the number of events to be transferred can be reduced by compressing the plurality of pieces of event data e into the compressed AER format. Note that the range of the time Δt may be divided into a plurality of sections, and a plurality of pieces of compressed data compressed-e(x, y, s, t_n) may be stored. In the event data, information ("Update Format") indicating whether the stored update data is in the AER format or the compressed AER format is also stored.
Assuming that the original data acquired by the event consumer 14 executing "Request Original" is the luminance image L_{n-k+1}(x, y) at a time t_{n-k+1}, and the compressed event data acquired in the compressed AER format is compressed-e(x, y, s, t_n), the luminance image L_n(x, y) at the time t_n can be obtained by the following equation on the premise that there is no noise.

L_n(x, y) = L_{n-k+1}(x, y) * exp(c * s)

By accumulating event data up to a predetermined accumulation level through the accumulation processing before sending, it is possible to further reduce network traffic and efficiently perform transfer processing on a large amount of data.

<5. Identification of Data Based on Topic ID>

Next, identification of data by the event consumer 14 based on the topic ID will be described. As described with reference to FIG. 11, the event data is transferred including the topic ID. As described with reference to FIGS. 3 to 6, the topic ID is formed by, for example, the sensor ID, a combination of the sensor ID and the ROI-ID, the object ID, the query ID, or the like. In a case where the topic ID is formed by the sensor ID, the event data to be acquired is all the event data issued by the sensor 11. In a case where the topic ID is a combination of the sensor ID and the ROI-ID, or the object ID, event data issued by the sensor 11 can be identified for each object region ROI in an image or for each object. Therefore, the event consumer 14 can extract and process only the event data necessary for itself on the basis of the topic ID, with reference to the topic DB 17. That is, the event consumer 14 can examine the event data it receives to confirm only the event data that it really needs, acquire the original data of the source through "Request Original", and recover the latest original data by applying the update data.
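As a hedged sketch of the compressed AER format and the recovery equation above (the function names are assumptions; per the text, k same-pixel events with q positive and r negative polarities compress to a net change s = q − r):

```python
import math

def compress_aer(events):
    """Collapse k AER events (x, y, p, t) at the same pixel into
    compressed-e (x, y, s, t_n), where s is the net polarity sum q - r."""
    x, y = events[0][0], events[0][1]
    assert all(e[0] == x and e[1] == y for e in events), "same-pixel events only"
    s = sum(e[2] for e in events)          # s = q - r
    t_n = events[-1][3]                    # time of the latest event in the range
    return (x, y, s, t_n)

def recover_luminance(L_orig, compressed, c):
    """L_n(x, y) = L_{n-k+1}(x, y) * exp(c * s)"""
    x, y, s, _ = compressed
    return L_orig * math.exp(c * s)

# Four events at pixel (3, 5): q = 3 positive, r = 1 negative, so s = 2.
events = [(3, 5, +1, 0.01), (3, 5, +1, 0.02), (3, 5, -1, 0.03), (3, 5, +1, 0.04)]
ce = compress_aer(events)
L_n = recover_luminance(100.0, ce, c=0.2)
```

Four events shrink to one tuple, at the cost of losing the individual timestamps within Δt; this is the trade-off the text describes between delay, temporal resolution, and traffic.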
For example, the latest trajectory of only a target object can be tracked by applying only the event data related to the movement of the object of interest within the entire image of a certain sensor 11. In the topic DB 17, a set of the topic ID and topic description information is stored for every topic. In a case where the topic ID is formed by the sensor ID, various types of attribute information of the sensor 11, for example, position information such as the latitude and longitude of the sensor 11, and information such as an image-capturing direction (view port), are stored in the topic description information. In a case where the topic ID is formed by a combination of the sensor ID and the ROI-ID, information regarding the object region ROI, for example, the region position and size, the object type (a person, a car, or the like), and the like are stored in the topic DB 17 as the topic description information at the time point when the sensor 11 recognizes the object region ROI. In a case where the topic ID is formed by the object ID, the topic manager 16 manages the topic DB 17. Specifically, the topic manager 16 assigns a globally unique ID to an object as the object ID, and stores the topic ID and the topic description information in the topic DB 17. Furthermore, the topic manager 16 notifies the event producer 12 connected to the sensor 11 capturing the relevant object to store the object ID assigned by the topic manager 16 into the topic ID of the event data related to the object. Also in a case where the topic ID is formed by a query ID associated with a query for observation target data, the topic manager 16 manages the topic DB 17. Specifically, the topic manager 16 sets a search query (for example, "a trajectory of a man around 60 years old who is moving suspiciously at an ATM near Osaki Station") from the event consumer 14 as the topic description information, assigns the query ID, and stores the topic description information in the topic DB 17.
Furthermore, the topic manager 16 notifies the event producer 12 connected to the sensor 11 capturing the object corresponding to the query to store the query ID assigned by the topic manager 16 into the topic ID of the event data related to the object. In this manner, by enabling the topic ID to be assigned to each sensor 11 or to each observation target object within one sensor 11, the event consumer 14 can, for example, determine a region of no interest in the entire luminance image L from the sensor 11 on the basis of the topic description information or the like in the topic DB 17, and determine a portion that does not need to be updated to the latest state on the basis of the topic ID. As a result, unnecessary update processing can be omitted. Furthermore, after all the event data issued by the sensor 11 is acquired, only necessary objects may be extracted on the basis of the topic ID. However, the event consumer 14 may also notify the event producer 12 of a necessary topic ID in advance, to cause the event producer 12 to transfer only the event data related to the necessary object. As a result, event transfer itself can be made efficient, and traffic can be reduced. FIG. 12 illustrates a flowchart of sending control including a process in which the event consumer 14 notifies the event producer 12 of a necessary topic ID in advance. In this processing, first, in step S81, the event consumer 14 executes topic registration processing ("Register Topic") to notify the event producer 12 of the necessary topic ID. The event producer 12 receives the notified topic ID from the event consumer 14. The processing of steps S82 to S85 is similar to steps S1 to S4 of the flowchart of FIG. 7, respectively. However, there is a difference in that, among the event data acquired from the sensor 11, the event producer 12 sends only the event data of the topic ID notified from the event consumer 14 to the event consumer 14.
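A minimal sketch of the producer-side topic filtering just described (the registration API is an assumption; per FIG. 12, the consumer registers topic IDs in advance and the producer then forwards only matching event data):

```python
class FilteringProducer:
    """Forwards only event data whose topic ID was registered in advance."""
    def __init__(self):
        self.registered = set()

    def register_topic(self, topic_id):
        # "Register Topic" (step S81): consumer declares the topics it needs.
        self.registered.add(topic_id)

    def send(self, event_data):
        # Forward only events on registered topics; drop the rest at the source.
        if event_data["topic_id"] in self.registered:
            return event_data
        return None

p = FilteringProducer()
p.register_topic("Sensor-ID-1/ROI-ID-1")
kept = p.send({"topic_id": "Sensor-ID-1/ROI-ID-1", "event_type": "Update"})
dropped = p.send({"topic_id": "Sensor-ID-1/ROI-ID-2", "event_type": "Update"})
```

Filtering at the producer, rather than discarding at the consumer, is what makes the event transfer itself efficient and reduces traffic, as the text notes.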
After step S85, similarly to the sending control in FIG. 7, the event producer 12 sends the original data to the event consumer 14 in a case where "Request Original" is notified from the event consumer 14. The event consumer 14 executes the process of recovering the luminance image L, and performs predetermined application processing using the updated luminance image L. FIG. 13 illustrates a data example of a topic registration request ("Topic Registration Request") notified from the event consumer 14 to the event producer 12 in the topic registration processing of step S81. As illustrated in FIG. 13, as the topic registration request, the event consumer 14 can notify of the level of the accumulation processing performed by the event producer 12, that is, Accumulation levels (1) to (5), together with the topic ID corresponding to the object or the like necessary for itself. In a case where a plurality of topics is assigned to event data issued by one sensor 11, the accumulation level can be designated for each topic.

<6. Sending Control in Case where Event Type is Notify>

In the above description, the processing in a case where the event type is Update has been described. Next, processing in a case where the event type is Notify will be described. For example, in a case where the content of an update change amount is very large, or the like, the event producer 12 sends event data whose event type is Notify, and notifies the event consumer 14 only of the fact that the update has occurred. FIG. 14 is a flowchart for explaining sending control of event data and original data in a case where the event type is Notify. This processing corresponds to the processing of FIG. 7 in a case where the event type is Update. First, in step S101, the sensor 11 executes "Capture Original". Specifically, the sensor 11 acquires the luminance image L_{n-1}(x, y) at a time t_{n-1} as original data. In step S102, the sensor 11 executes "Detect Update & Send Update".
That is, the sensor 11 detects the luminance image L_n(x, y) at a time t_n, and notifies the event producer 12 of event data e_n = {x, y, p_n, t_n} indicating the luminance change between the luminance image L_{n-1}(x, y) at the time t_{n-1} and the luminance image L_n(x, y) at the time t_n, together with the luminance image L_{n-1}(x, y) at the time t_{n-1}, which is the original data. The event producer 12 acquires the event data e_n and the luminance image L_{n-1}(x, y). The event producer 12 executes "Generate Notify Event" in step S103, and executes "Send Notify Event" in step S104. Specifically, the event producer 12 generates event data whose event type is Notify and sends it to the event consumer 14. The event data in a case where the event type is Notify will be described later. The processing of steps S102 to S104 is repeatedly executed a plurality of times as necessary, in some cases. In step S105, the event consumer 14 executes "Request Updated Original". Specifically, the event consumer 14 requests the event producer 12 for the updated original data to which the update change amount has been applied. Upon receiving the request for the updated original data from the event consumer 14, the event producer 12 executes "Send Updated Original" in step S106. Specifically, the event producer 12 sends the luminance image L_n(x, y) at the time t_n to the event consumer 14 as the updated original data. In step S107, the event consumer 14 acquires the luminance image L_n(x, y) at the time t_n, which is the updated original data sent from the event producer 12, and executes "Process Updated Original". That is, the event consumer 14 performs predetermined application processing using the acquired updated luminance image L_n(x, y) at the time t_n, for example, image rendering, analysis processing, or the like. FIG. 15 illustrates a data format of event data in a case where the event type is Notify. FIG. 15 corresponds to the data format of the event data of FIG. 11 in a case where the event type is Update.
In a case where the event type is Notify, the event data includes a topic ID (Topic ID), an original reference (Original Ref), and an event type. The topic ID (Topic ID) stores, for example, a sensor ID, a combination of the sensor ID and a ROI-ID, an object ID, a query ID, or the like. In the example of FIG. 15, "Sensor-ID-1", which is the sensor ID for globally uniquely identifying the sensor 11, is stored. The original reference (Original Ref) stores a reference address of the updated original data, that is, a reference address corresponding to the luminance image L_n(x, y) at the time t_n in the processing example of FIG. 14. In the event type, Notify is stored. As described above, in a case where the event type is Notify, the event data of the update change amount (Event-Type = 'Update') is not sent to the event consumer 14, and only the updated original data is sent to the event consumer 14. According to the sending control of the event data and the original data in the case where the event type is Notify described above, the event producer 12 sends the event data indicating the fact that there has been a change, and the event consumer 14 acquires the original data as necessary. As a result, network traffic can be reduced, and transfer processing can be efficiently performed on a large amount of data.

<7. Accumulation Processing of Event Data>

The accumulation processing of event data in a case where the event type is Notify will be described. In a case where the event type is Notify, the event data of the update change amount is not sent to the event consumer 14, and thus the event consumer 14 side cannot perform the accumulation processing. Therefore, in a case where the accumulation processing is performed, as illustrated in FIG. 16, the process of accumulating one or more pieces of event data at any of Accumulation levels (1) to (5) is performed only on the event producer 12 side.
In other words, among (Case 1), (Case 2), and (Case 3) in FIG. 10 in the case where the event type is Update, only processing corresponding to (Case 1) is enabled. FIG. 17 illustrates a sending control example in which the accumulation processing is performed in a case where the event type is Notify. Specifically, in step S121, the event producer 12 accumulates one or more pieces of event data at any of Accumulation levels (1) to (5) described above. Then, the event producer 12 executes "Generate Notify Event" in step S122, and executes "Send Notify Event" in step S123. In step S124, the event consumer 14 executes "Request Updated Original" to request the event producer 12 for the original data. Here, in a case where the level of the accumulation processing is (3) to (5) among Accumulation levels (1) to (5) described above, and accumulation is performed as individualized data that has been individualized in accordance with a requirement of an application by AI processing or the like, the individualized data can be passed as it is to the event consumer 14 instead of the updated original data. FIG. 18 is a format obtained by extending the data format in the case where the event type illustrated in FIG. 15 is Notify, and illustrates a data format of the event data in which the individualized data can also be passed to the event consumer 14 in addition to the updated original data. In this data format, an individualized data reference (Transformed Original Ref) is stored in addition to the topic ID, the original reference, and the event type. In the individualized data reference, a reference address of the individualized data is stored. Specifically, in a case where the accumulation level is Level (3), the reference address of individualized data representing a rectangular region of an object of a subject or an edge region of the object is stored.
In a case where the accumulation level is Level (4), the reference address of the individualized data representing content recognition or classification of the object of the subject is stored. In a case where the accumulation level is Level (5), the reference address of the individualized data representing a trajectory of the object of the subject is stored.

FIG. 19 is a flowchart for explaining sending control of event data and original data in a case where the event consumer 14 designates and acquires the individualized data from the event producer 12.

First, in step S141, the sensor 11 executes "Capture Original". That is, the sensor 11 acquires a luminance image Ln-1(x, y) at a time tn-1 as the original data. In step S142, the sensor 11 executes "Detect Update & Send Update". That is, the sensor 11 detects a luminance image Ln(x, y) at a time tn, and notifies the event producer 12 of event data en = {x, y, pn, tn} indicating a luminance change between the luminance image Ln-1(x, y) at the time tn-1 and the luminance image Ln(x, y) at the time tn, together with the luminance image Ln-1(x, y) at the time tn-1, which is the original data. The event producer 12 acquires the event data en and the luminance image Ln-1(x, y).

The event producer 12 executes "Generate Notify Event" in step S143, and executes "Send Notify Event" in step S144. Specifically, the event producer 12 generates and sends event data whose event type is Notify, to the event consumer 14. This event data is sent in the data format of FIG. 18. The processing of steps S142 to S144 is repeatedly executed a plurality of times as necessary, in some cases.

In step S145, the event consumer 14 executes "Request Transformed Original". Specifically, the event consumer 14 requests the event producer 12 for individualized data by designating a reference address of the individualized data reference (Transformed Original Ref).
Upon receiving the request for the individualized data from the event consumer 14, the event producer 12 executes "Send Transformed Original" in step S146. Specifically, the event producer 12 sends the individualized data corresponding to the accumulation processing level to the event consumer 14. In step S147, the event consumer 14 acquires the individualized data sent from the event producer 12, and executes "Process Transformed Original". That is, the event consumer 14 performs predetermined application processing using the acquired individualized data.

As described above, the event consumer 14 can acquire the individualized data itself to perform the application processing, or can acquire the updated original data to perform the application processing, similarly to the case where the event type is Update. By sending event data after accumulating it at a predetermined accumulation level through the accumulation processing, it is possible to further reduce network traffic and efficiently perform transfer processing on a large amount of data.

<8. Reservation and Assignment of Event Path>

Next, reservation and assignment of an event path will be described. FIG. 20 is a flowchart of event path reservation processing performed by the event path manager 15. In this processing, first, in step S161, the event producer 12 generates an event path request for requesting reservation of an event path, and sends the event path request to the event path manager 15 (Request Event Path). The event path manager 15 receives the event path request from the event producer 12 in step S162, and reserves an event path between the event producer 12 and the event consumer 14 on the basis of the event path request.

FIG. 21 illustrates details of the event path request sent from the event producer 12 to the event path manager 15. The event path request stores an event class and a parameter. For example, a class ID, which is identification information for identifying a class (priority), is stored in the event class.
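The pull-based exchange of steps S141 to S147 — the producer sends only a lightweight Notify, and the consumer fetches data by reference on demand — can be sketched as follows. The in-memory store, the reference strings, and the class and method names are illustrative assumptions rather than the patent's interfaces.

```python
class EventProducer:
    """Holds updated original data and individualized data, addressable by reference."""
    def __init__(self):
        self.store = {}

    def generate_notify_event(self, original, transformed):
        # "Generate Notify Event": retain the data locally and return an event
        # that carries only reference addresses (FIG. 18 format).
        self.store["ref/original"] = original
        self.store["ref/transformed"] = transformed
        return {"event_type": "Notify",
                "original_ref": "ref/original",
                "transformed_original_ref": "ref/transformed"}

    def send_data(self, ref):
        # "Send Updated Original" / "Send Transformed Original" on request.
        return self.store[ref]

class EventConsumer:
    def process_transformed_original(self, producer, event):
        # "Request Transformed Original": dereference the individualized data
        # instead of receiving it with every notification.
        return producer.send_data(event["transformed_original_ref"])

producer = EventProducer()
event = producer.generate_notify_event(original=[[0, 1], [1, 0]],
                                       transformed={"bbox": (4, 2, 10, 8)})
result = EventConsumer().process_transformed_original(producer, event)
```

The design choice here mirrors the text: data moves over the network only when a consumer actually asks for it, which is what keeps Notify-type traffic low.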
The parameter includes items indicating quality of data transfer, such as QoS Class, Bitrate, and Start/Stop/Duration. The QoS Class includes an ID for designating a (set) class of transport QoS such as a range of delays, maximum jitter, and a maximum error rate. The Bitrate includes a band bps, a guaranteed bit rate GBR (Guaranteed BitRate), and a maximum bit rate MBR (Maximum BitRate). The Start/Stop/Duration includes a start absolute time to use the path, an end absolute time (including designation of immediate use for using immediately after resources are assigned), a duration, and the like.

By reserving network resources in advance with the event path request, it is possible to predict a maximum delay or the like before event data is transferred. The event path manager 15 dynamically reserves network resources that can establish an event path satisfying the conditions indicated by the event path request, in the network between the event producer 12 and the event consumer 14.

FIG. 22 is a flowchart for explaining event path reservation processing including the timings of reserving the event path illustrated in FIG. 20 and sending the event data in FIG. 7. In the example of FIG. 22, after executing "Generate Update Event" in step S181, the event producer 12 sends an event path request for requesting reservation of an event path to the event path manager 15 in step S182 (Request Event Path). The processing in step S181 is the same as the processing in step S3 in FIG. 7, and the processing in step S182 is the same as the processing in step S161 in FIG. 20. Then, in step S183, the event path manager 15 reserves an event path between the event producer 12 and the event consumer 14 on the basis of the event path request. After the event path is reserved, an ACK is sent together with a designated event class from the event path manager 15 to the event producer 12.
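The event path request of FIG. 21 bundles an event class with these transfer-quality parameters. A minimal sketch of that structure follows; the field names, units, and example values are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class QoSClass:
    """Transport QoS set: delay range, maximum jitter, maximum error rate."""
    delay_range_ms: tuple
    max_jitter_ms: float
    max_error_rate: float

@dataclass
class Bitrate:
    """Band, guaranteed bit rate (GBR), and maximum bit rate (MBR)."""
    band_bps: int
    gbr_bps: int
    mbr_bps: int

@dataclass
class EventPathRequest:
    """Event path request: event class (priority) plus transfer-quality parameters."""
    event_class: str   # class ID identifying the class (priority)
    qos: QoSClass
    bitrate: Bitrate
    start: float       # absolute start time (0.0 here stands for immediate use)
    stop: float        # absolute end time
    duration: float

req = EventPathRequest(
    event_class="class-1",
    qos=QoSClass(delay_range_ms=(1, 10), max_jitter_ms=2.0, max_error_rate=1e-6),
    bitrate=Bitrate(band_bps=10_000_000, gbr_bps=4_000_000, mbr_bps=10_000_000),
    start=0.0, stop=0.0, duration=60.0,
)
```

Such a request gives the event path manager everything it needs to reserve resources before any event data flows, which is what makes the maximum delay predictable.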
Upon receiving the ACK from the event path manager 15, the event producer 12 executes "Send Update Event" of sending event data to the event consumer 14 in step S184. In "Send Update Event", the event data may be sent with an event class added.

Note that, as described above, the event producer 12 may send the event data first instead of sending the event data after waiting for the ACK from the event path manager 15. In this case, when the event path manager 15 fails to reserve the event path, a NAK for the event data is returned to the event producer 12.

The above-described processing in FIG. 22 is processing for reserving a necessary event path every time "Generate Update Event" occurs on the event producer 12 side. Alternatively, the event path manager 15 may collectively reserve event paths to some extent at all times before the event producer 12 executes a series of "Generate Update Event" operations, and assign a necessary event path one after another from those event paths. In the case where the event paths are collectively reserved in advance, the event paths are reserved on the basis of a result of estimating network traffic between the event producer 12 and the event consumer 14, on the basis of past statistical data or the like of the sensor 11.

FIG. 23 is a flowchart for explaining event path reservation processing in a case where an event path is assigned after being reserved in advance. In the example of FIG. 23, in step S201, the event path manager 15 estimates network traffic between the event producer 12 and the event consumer 14, and reserves event paths between the event producer 12 and the event consumer 14. The event producer 12 executes "Generate Update Event" in step S202, and thereafter sends an event path request to the event path manager 15 in step S203. In step S204, the event path manager 15 assigns an event path between the event producer 12 and the event consumer 14 from the event paths reserved in advance, and sends an ACK together with the designated event class to the event producer 12.
In step S205, the event producer 12 receives the ACK from the event path manager 15, and executes "Send Update Event" to send event data to the event consumer 14. As described above, in a case where event paths are reserved in advance and an event path is assigned from among them, it is possible to reduce the overhead of establishing network resources one after another that can occur in reservation.

<Modification of Event Path Request>

When sending the event path request to the event path manager 15 in step S182 of FIG. 22 or step S203 of FIG. 23, the event producer 12 may send the event path request in the format illustrated on the left side of FIG. 24. The event path request illustrated on the left side of FIG. 24 is a modification of the event path request illustrated in FIG. 21. Compared with the event path request of FIG. 21, a topic ID and an event type are added. Upon receiving the event path request of FIG. 24, the event path manager 15 shares the topic ID and the event type with the event consumer 14 when reserving the event path between the event producer 12 and the event consumer 14. As a result, when the event producer 12 executes "Send Update Event" to send event data to the event consumer 14, as illustrated on the right side of FIG. 24, it is sufficient to store and send only the update data (event data e), and the data amount of individual event data can be minimized.

<Transport Stack Constituting Event Path>

Next, a transport stack configuration constituting an event path will be described. A stack configuration in a case where event data is transferred through an event path of a user plane on an optical network system is as illustrated in FIG. 25.
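The two reservation styles of FIG. 22 and FIG. 23 — reserving per request versus assigning from a pool reserved in advance — can be contrasted with a sketch like the following. The pool sizing from a traffic estimate and all identifiers are assumed placeholders, not the patent's interfaces.

```python
class EventPathManager:
    """Reserves event paths in advance and assigns them on request (FIG. 23 style)."""
    def __init__(self, estimated_concurrent_events):
        # Reserve a pool sized from an estimate of network traffic
        # (e.g., past statistical data of the sensor).
        self.pool = [f"path-{i}" for i in range(estimated_concurrent_events)]

    def request_event_path(self, event_class):
        # Assign a pre-reserved path: no per-event resource setup overhead.
        if self.pool:
            path = self.pool.pop()
            return {"ack": True, "event_class": event_class, "path": path}
        # Pool exhausted: the caller would fall back to on-demand
        # reservation (FIG. 22 style) or receive a NAK.
        return {"ack": False}

manager = EventPathManager(estimated_concurrent_events=2)
first = manager.request_event_path("class-1")
second = manager.request_event_path("class-1")
third = manager.request_event_path("class-1")   # nothing left in the pool
```

The trade-off matches the text: pre-reservation spends resources ahead of time in exchange for removing setup latency from each "Generate Update Event".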
The transport stack includes, from the lowermost layer side, a Fiber layer (space division multiplexing (SDM) or mode division multiplexing (MDM) within one fiber), a WDM layer of wavelength division multiplexing (WDM), and a TDM layer of time division multiplexing (TDM), in this order, and further includes, on top of those, an IP packet layer or a non-IP layer as a transport of an upper layer.

The event path is basically established in a connection-oriented manner. That is, at the time of connection setup, a virtual path is formed between a sending side and a receiving side by a generalized multi-protocol label switch (GMPLS) (network resources are reserved). In the example described above, the event producer 12 serves as the sending side, and the event consumer 14 serves as the receiving side. A virtual path that satisfies each requirement is reserved in association with the event class (a distribution requirement such as priority) of the event path request sent by the sending side. For reserving the virtual path, the resource reservation protocol (RSVP)-traffic engineering (TE) extension or the like for the GMPLS, exchanged on a control plane, is used.

Note that FIG. 25 illustrates an example of the stack configuration on an optical network system, but the event path is formed by reserving a virtual path that satisfies a predetermined transfer requirement even in a case where a stack configuration on a wireless network system is adopted.

<9. Negotiation of Event Class>

While reserving or assigning an event path, in a case where the event path manager 15 cannot reserve an event path that satisfies a requirement designated in an event path request from the event producer 12, which is the sending side, or in a case where a state in which the event path is unavailable to be reserved continues, the event path manager 15 performs negotiation for adjusting the event class with the event consumer 14.

FIG. 26 illustrates a flowchart of event data sending control including negotiation of an event class.
In step S221, the event path manager 15 performs a process of reserving or assigning an event path, and determines whether or not an event path satisfying the requirement designated in the event path request is unavailable to be reserved, or whether or not such a state continues. In a case where it is determined in step S221 that the event path is unavailable to be reserved or such a state continues, the processing proceeds to step S222, and the event path manager 15 performs negotiation of an event class change with the event consumer 14 (Negotiate Delivery Requirements), for example, negotiating (requesting) whether the value of the QoS Class parameter of the event path request is allowed to be changed.

When consent that the value of the QoS Class is allowed to be changed is obtained in the negotiation with the event consumer 14 in step S222, in step S223, the event path manager 15 relaxes the delay constraint, and sends an event class change instruction (Change QoS Class) for changing the value of the QoS Class to the event producer 12 (Modify Event Class).

The negotiation of the event path manager 15 with the event consumer 14 may be performed one after another whenever the event class needs to be changed in some cases, or may be determined automatically once the adaptation requirement has been learned to some extent in other cases. Whether negotiation of the event path manager 15 is required one after another or is determined automatically can be designated by the event consumer 14 side prior to or during transfer of the event data.

A series of processing in steps S224 to S228 is the same as the above-described step S2 in FIG. 7, steps S181 to S184 in FIG. 22, or steps S202 to S205 in FIG. 23, and thus a description thereof is omitted.
Furthermore, for example, in a case where an event path that satisfies the requirement designated by the event path request is unavailable to be reserved, or in a case where such a state continues, the frequency of outputting event data may be reduced by changing a parameter of the sensor 11. FIG. 27 illustrates a flowchart of event data sending control including negotiation for changing a parameter of the sensor 11.

In step S241, the event path manager 15 performs a process of reserving or assigning an event path, and determines whether or not an event path satisfying the requirement designated in the event path request is unavailable to be reserved, or whether or not such a state continues. In a case where it is determined in step S241 that the event path is unavailable to be reserved or such a state continues, the processing proceeds to step S242, and the event path manager 15 negotiates (requests) with the event consumer 14 whether a parameter of the sensor 11, for example, the threshold value c for a logarithmic luminance change, is allowed to be changed (Negotiate Delivery Requirements).

When consent that the threshold value c of the logarithmic luminance change is allowed to be changed is obtained in the negotiation with the event consumer 14 in step S242, in step S243, the event path manager 15 sends an instruction to decrease the bit rate (Reduce Bitrate) to the event producer 12 (Modify Event Class). When receiving the instruction to decrease the bit rate, the event producer 12 notifies the sensor 11 of an instruction to increment the threshold value c of the logarithmic luminance change (Increment log-intensity threshold value c) in step S244. When the sensor 11 acquires the instruction to increment the threshold value c of the logarithmic luminance change from the event producer 12, the sensor 11 makes a change to increment the threshold value c of the logarithmic luminance change.
As a result, the occurrence frequency of events detected in the processing of steps S245 to S249 decreases. The processing in steps S245 to S249 is the same as that in step S2 in FIG. 7 described above, steps S181 to S184 in FIG. 22, or steps S202 to S205 in FIG. 23, and thus a description thereof will be omitted.

Conversely, the event path manager 15 may increase the frequency of outputting event data in a case where improvement of network traffic is found. FIG. 28 illustrates a flowchart of event data sending control including negotiation for changing a parameter of the sensor 11 in a direction of improving the bit rate.

In step S261, the event path manager 15 determines whether or not network traffic has been improved. In a case where it is determined in step S261 that network traffic has been improved, the processing proceeds to step S262, and the event path manager 15 sends, to the event producer 12, a parameter for the sensor 11 for increasing the bit rate (Increase Bitrate), for example (Modify Event Class). When receiving the instruction to increase the bit rate, the event producer 12 notifies the sensor 11 of an instruction to decrement the threshold value c of the logarithmic luminance change (Decrement log-intensity threshold value c) in step S263. As a result, the detection sensitivity increases, and the event class is changed so as to satisfy the initial requirement designated in the event path request. After step S263, the same processing as steps S245 to S249 in FIG. 27 is executed, and detection of events with use of the changed threshold value c of the logarithmic luminance change and sending of the event data are executed.

<10. Second Embodiment of Data Processing System>

In the data processing system 1 of the first embodiment, a mode has been adopted in which the event producer 12 and the event consumer 14 are connected by a Peer to Peer (P2P) connection.
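The bidirectional adjustment of FIGS. 27 and 28 — incrementing the logarithmic-luminance threshold c to lower the event rate under congestion, and decrementing it to restore sensitivity when traffic improves — can be sketched as follows. The step size and class names are illustrative assumptions.

```python
class Sensor:
    """Event-based sensor whose event rate depends on the log-intensity threshold c."""
    def __init__(self, c=0.2, step=0.05):
        self.c = c
        self.step = step

    def increment_threshold(self):
        # "Increment log-intensity threshold value c": fewer luminance changes
        # exceed c, so the event occurrence frequency (and bit rate) decreases.
        self.c += self.step

    def decrement_threshold(self):
        # "Decrement log-intensity threshold value c": detection sensitivity
        # increases, raising the event output frequency again.
        self.c = max(0.0, self.c - self.step)

class EventProducer:
    """Relays the event path manager's bit-rate instructions to the sensor."""
    def __init__(self, sensor):
        self.sensor = sensor

    def modify_event_class(self, instruction):
        if instruction == "Reduce Bitrate":
            self.sensor.increment_threshold()
        elif instruction == "Increase Bitrate":
            self.sensor.decrement_threshold()

sensor = Sensor(c=0.2)
producer = EventProducer(sensor)
producer.modify_event_class("Reduce Bitrate")    # congestion: c rises
producer.modify_event_class("Increase Bitrate")  # traffic improved: c returns
```

The point of routing the instruction through the event producer, as in steps S244 and S263, is that the manager never touches the sensor directly; it only issues bit-rate-level instructions.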
However, a case is also assumed in which there are a large number of event consumers 14 and sensor data generated by the sensor 11 is transferred to the large number (plurality) of event consumers 14. In the second embodiment, a configuration of the data processing system 1 in such a case will be described. Note that, in the second embodiment, it is not always necessary to provide a plurality of event consumers 14, and it is needless to say that a case where there is one event consumer 14 is also included.

FIG. 29 is a block diagram illustrating a configuration example of the second embodiment of a data processing system to which the present disclosure is applied. In order to reduce the application execution load of a sensor device, the data processing system of the second embodiment introduces a broker that mediates data distribution in a cloud, and constitutes a distribution system with a publish/subscribe model.

Here, the publish/subscribe model is a type of asynchronous messaging paradigm, and is programmed such that a sender (publisher) of a message sends the message without assuming a specific receiver (subscriber). A published message is divided into topics when published. The publisher has no knowledge regarding the subscriber. The subscriber side designates a topic of interest and receives only messages belonging to that topic. The subscriber has no knowledge of the publisher. The publish/subscribe model basically has a low degree of coupling between the publisher side and the subscriber side, and thus has good scalability and can cope with a dynamic network configuration.

In the second embodiment, a data distributor corresponding to the event producer in the first embodiment is referred to as an event publisher, and a data user corresponding to the event consumer in the first embodiment is referred to as an event subscriber.
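As a minimal illustration of the publish/subscribe paradigm described above (not the patent's protocol), a broker can route messages by topic so that publisher and subscriber need no knowledge of each other:

```python
from collections import defaultdict

class Broker:
    """Mediates distribution: subscribers register topics; publishers stay anonymous."""
    def __init__(self):
        self.subscriptions = defaultdict(list)   # topic ID -> subscriber inboxes

    def register_subscription(self, topic_id, inbox):
        # The subscriber designates a topic of interest in advance.
        self.subscriptions[topic_id].append(inbox)

    def publish(self, topic_id, message):
        # Deliver only to subscribers of this topic; the publisher does not
        # assume any specific receiver.
        for inbox in self.subscriptions[topic_id]:
            inbox.append(message)

broker = Broker()
inbox_a, inbox_b = [], []
broker.register_subscription("Sensor-ID-1", inbox_a)
broker.register_subscription("Sensor-ID-2", inbox_b)
broker.publish("Sensor-ID-1", {"event": "update"})
```

Because neither side holds a reference to the other, publishers and subscribers can join or leave independently, which is the loose coupling and scalability the text attributes to the model.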
The data processing system 1 of FIG. 29 includes a sensor 111, an event publisher 112, a broker 114, an event subscriber 116, and an event path manager 117. The event publisher 112 and the broker 114 are connected by an event path 113, and the broker 114 and the event subscriber 116 are connected by an event path 115. Note that, although not illustrated, the data processing system 1 in FIG. 29 also includes a topic manager 16 and a topic DB 17, similarly to the first embodiment.

The sensor 111 may be implemented by a sensor device alone, or may be integrally incorporated as part of an edge device on which the event publisher 112 connected to the sensor device is executed. The edge device is implemented by, for example, a data processing device such as a server device. Hereinafter, the sensor 111 and the event publisher 112 are collectively referred to as a sensor/edge 211.

A cloud 221 includes a plurality of nodes and a network connecting the nodes. The network includes a communication network or a communication path of any communication standard, such as, for example, the Internet, a public telephone network, a wide-area communication network for a wireless mobile body such as a so-called 4G line or 5G line, a wide area network (WAN), a local area network (LAN), a wireless communication network that performs communication conforming to the Bluetooth (registered trademark) standard, a communication path for short-range wireless communication such as near field communication (NFC), a communication path for infrared communication, or a communication network of wired communication conforming to a standard such as high-definition multimedia interface (HDMI (registered trademark)) or universal serial bus (USB). Each node constituting the cloud 221 includes, for example, a network connection device such as a sensor device, a router, a modem, a hub, a bridge, a switching hub, a base station control device, a switch, or a server.
An application to be executed on the network connection device serving as a node functions as the broker 114, the event subscriber 116, the event path manager 117, the topic manager 16, or the topic DB 17. The cloud 221 includes an edge cloud 222 arranged on the edge side close to the sensor/edge 211 that injects sensor data into the network, and a center cloud 223 arranged in the core network other than that. The edge cloud 222 includes, for example, a base station and the like in a case where the network is a mobile phone communication network.

The sensor 111 and the event publisher 112 correspond to the sensor 11 and the event producer 12 in the first embodiment. The sensor 111 is a sensor device that detects a state of some kind, and supplies generated original data to the event publisher 112. The event publisher 112 is an application that obtains the original data from the sensor 111 and sends the original data and event data to the broker 114 via the event path 113. The event publisher 112 and the broker 114 are connected in a one-to-one relationship.

The event publisher 112 transfers all acquired event data to the broker 114, and the broker 114 handles requests from the event subscribers 116 and sends the event data. Furthermore, not only the event data but also the original data is similarly transferred from the event publisher 112 to the broker 114, and the broker 114 handles requests from the event subscribers 116. The broker 114 is located in the edge cloud 222, and the event subscriber 116 is located in the center cloud 223.

The broker 114 sends the original data or the event data, sent from the event publisher 112 via the event path 113, to the event subscriber 116 via the event path 115. The broker 114 and the event subscribers 116 are connected in a one-to-many (plural) relationship.
Similarly to the topic registration processing ("Register Topic") in which the event consumer 14 notifies the event producer 12 of a necessary topic ID in the first embodiment, the broker 114 acquires and grasps a topic ID of interest to the event subscriber 116 in advance through the topic registration processing ("Register Subscription") performed by the event subscriber 116. Then, in a case where event data of interest to the event subscriber 116 is sent from the event publisher 112, the broker 114 appropriately reserves the event path 115 and transfers the event data to the event subscriber 116.

The event path manager 117 is arranged in either the edge cloud 222 or the center cloud 223, and reserves and secures an event path similarly to the event path manager 15 of the first embodiment. Reservation and assignment of the event path 113 between the event publisher 112 and the broker 114 and of the event path 115 between the broker 114 and the event subscriber 116 are performed individually. The event path 113 between the event publisher 112 and the broker 114 is reserved in response to a request from the event publisher 112, and the event path 115 between the broker 114 and the event subscriber 116 is reserved in response to a request from the broker 114.

<11. Sending Control of Event Data and Original Data>

Next, with reference to a flowchart of FIG. 30, sending control of event data and original data by the data processing system 1 of FIG. 29 will be described.

First, in step S301, the event subscriber 116 executes topic registration processing ("Register Subscription") of notifying the broker 114 of a necessary topic ID. The broker 114 receives the topic ID notified from the event subscriber 116. In step S302, the event path manager 117 reserves the event paths 113 and 115. That is, the event path manager 117 estimates network traffic between the event publisher 112 and the broker 114, and reserves the event path 113 between the event publisher 112 and the broker 114.
Furthermore, the event path manager 117 estimates network traffic between the broker 114 and the event subscriber 116, and reserves the event path 115 between the broker 114 and the event subscriber 116.

The sensor 111 executes "Capture Original" in step S311, and executes "Send Original" in step S312. That is, the sensor 111 acquires a luminance image Ln-1(x, y) at a time tn-1 as original data, and sends it to the broker 114. The broker 114 receives the luminance image Ln-1(x, y) from the sensor 111, and stores it in an internal storage unit. The luminance image Ln-1(x, y) may be sent to the broker 114 via the event publisher 112.

In step S313, the sensor 111 executes "Detect Update & Send Update". Specifically, the sensor 111 detects a luminance image Ln(x, y) at a time tn, and supplies event data en = {x, y, pn, tn} indicating a luminance change between the luminance image Ln-1(x, y) at the time tn-1 and the luminance image Ln(x, y) at the time tn, to the event publisher 112. The event publisher 112 obtains the event data en.

In step S314, the event publisher 112 executes "Generate Update Event" for generating event data. Subsequently, in step S315, the event publisher 112 executes "Request Event Path", and sends an event path reservation request to the event path manager 117 and the broker 114. The broker 114 also executes "Request Event Path", and sends an event path reservation request to the event path manager 117. In step S317, the event path manager 117 individually assigns the event path 113 between the event publisher 112 and the broker 114 and the event path 115 between the broker 114 and the event subscriber 116, from among the event paths reserved in advance.

When the assignment of the event path 113 and the event path 115 is completed, the event publisher 112 executes "Send Update Event" in step S318. Specifically, the event publisher 112 sends the event data generated in step S314 to the broker 114 via the event path 113.
The broker 114 receives the event data from the event publisher 112, and sends it to the large number of event subscribers 116 via the event path 115. The processing of steps S313 to S318 is repeatedly executed a plurality of times as necessary, in some cases.

In step S319, the event subscriber 116 executes "Request Original". Specifically, the event subscriber 116 requests the broker 114 for the original data by designating the original reference (Original Ref) included in the event data en. The broker 114 executes "Send Original" on the basis of "Request Original" from the event subscriber 116, and sends the luminance image Ln-1(x, y) at the time tn-1 as the original data to the event subscriber 116.

In step S320, each of the large number of event subscribers 116 acquires the luminance image Ln-1(x, y) at the time tn-1 as the original data sent from the broker 114, and executes "Update Original & Process Updated Original". That is, the event subscriber 116 executes a process of recovering the luminance image Ln(x, y) at the time tn, and executes predetermined application processing using the updated luminance image Ln(x, y).

As described above, in a case where sensor data is distributed to a large number of event subscribers 116, the event publisher 112 sends the original data and the event data to the broker 114, and the broker 114 distributes the original data and the event data in response to requests from the large number of event subscribers 116. This configuration can reduce the application execution load of the sensor/edge 211.

<12. Negotiation of Event Class>

The event classes that are set in each of the event path 113 between the event publisher 112 and the broker 114 and the event path 115 between the broker 114 and the event subscriber 116 are basically the same. However, the event class is changed in accordance with the traffic situation between the individual broker 114 and event subscriber 116, in some cases.
FIG. 31 illustrates a flowchart of event class change control in which the event path manager 117 performs negotiation of an event class.

In step S341, in a case where it is determined that an event path satisfying the requirement designated in an event path request is unavailable to be reserved or such a state continues, the event path manager 117 individually negotiates (requests) an event class change with the broker 114 or the event subscriber 116 (Negotiate Delivery Requirements). That is, the event path manager 117 negotiates with the broker 114 for the event path 113, and negotiates with the event subscriber 116 for the event path 115.

As a result of the negotiation in step S341, when consent for the event class change is obtained, the event path manager 117 sends an event class change instruction to the event publisher 112 or the broker 114 (Modify Event Class) in step S342. In other words, the event class change instruction is sent to the event publisher 112 for an event class change of the event path 113, and to the broker 114 for an event class change of the event path 115.

Here, the bit rate is adjusted such that the bit rate of the event path 113 between the event publisher 112 and the broker 114 is greater than the maximum value of the bit rates of the (plurality of) event paths 115 between the broker 114 and the event subscribers 116. That is, adjustment is performed such that the requirement between the event publisher 112 and the broker 114 is stricter than the requirement between the broker 114 and the plurality of event subscribers 116.

FIG. 32 illustrates a flowchart of event class change control in a case where overall optimization is achieved by reflecting the event class of the event path 115 in the event class of the event path 113.
By reflecting the event class negotiated between the broker 114 and the plurality of event subscribers 116 in the event class between the event publisher 112 and the broker 114, overall optimization can be achieved in some cases.

Specifically, in step S361, in a case where it is determined that the requirement designated in an event path request for the event path 115 cannot be satisfied or such a state continues, the event path manager 117 negotiates (requests) an event class change with the event subscriber 116 (Negotiate Delivery Requirements). As a result of the negotiation in step S361, when consent for the event class change is obtained, the event path manager 117 sends an event class change instruction, for example, for lowering the bit rate, to the broker 114 (Modify Event Class) in step S362.

In step S363, the broker 114 determines whether or not the maximum value of the bit rates of the plurality of event paths 115 between the broker 114 and the event subscribers 116 has stabilized at a predetermined value (judge if stable). In a case where the broker 114 determines that the bit rate is stable, the broker 114 sends an event class change instruction for decreasing the bit rate in correspondence with the maximum value of the bit rates, to the event path manager 117 (Modify Event Class). In step S364, the event path manager 117 sends the event class change instruction to the event publisher 112 on the basis of the event class change instruction from the broker 114 (Modify Event Class).

For example, in a case where the bit rates between the broker 114 and the plurality of event subscribers 116 are stable at 5 Mbps, 4 Mbps, and 2 Mbps, it is possible to minimize overall traffic by changing the bit rate between the event publisher 112 and the broker 114, which has been 10 Mbps, to 6 Mbps.
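The numerical example above can be reproduced with a short calculation: the publisher-to-broker bit rate only needs to stay slightly above the maximum of the stabilized subscriber bit rates. The 1 Mbps margin below is an assumption chosen to match the 5 Mbps maximum becoming 6 Mbps; the function name is illustrative.

```python
def optimized_upstream_bitrate_mbps(subscriber_bitrates_mbps, margin_mbps=1):
    """Set the event path 113 (publisher -> broker) bit rate slightly above the
    maximum of the stabilized event path 115 (broker -> subscriber) bit rates."""
    return max(subscriber_bitrates_mbps) + margin_mbps

# Subscribers stable at 5, 4, and 2 Mbps: the publisher-to-broker path,
# previously 10 Mbps, can be reduced to 6 Mbps without starving any subscriber.
new_rate = optimized_upstream_bitrate_mbps([5, 4, 2])
```

The broker never needs more inflow than its fastest outflow, so any upstream capacity beyond that maximum (plus a small safety margin) is wasted reservation.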
That is, by changing the event class while keeping the requirement between the event publisher 112 and the broker 114 slightly stricter than the requirement between the broker 114 and the plurality of event subscribers 116, unnecessary network resource consumption can be reduced.

<13. Immediate Transfer and Delayed Transfer>

In the control example described above, an example has been described in which one event publisher 112 distributes event data to a plurality of event subscribers 116 via one broker 114. However, there is a case where the broker 114 acquires event data having different topic IDs from a plurality of event publishers 112 (for example, a case where one sensor 111 has one topic ID, or the like), and there is a case where the broker 114 acquires event data with multiple different topic IDs from one event publisher 112 (for example, a case where topic IDs corresponding to a plurality of object regions ROI are included in an image of one sensor 111, or the like).

Event data of different topic IDs acquired by the broker 114 has different priorities (event classes). Thus, in a case where there is no margin in the event paths 115 between the broker 114 and the respective event subscribers 116 and resource contention occurs, the broker 114 appropriately selects and executes, according to each priority, immediate transfer in which event data is immediately transferred, and delayed transfer (cache transfer) in which event data is temporarily cached and then transferred.

FIG. 33 is a flowchart illustrating a control example of the immediate transfer and the delayed transfer. Steps S381 and S382 in FIG. 33 are a control example in a case where the immediate transfer is performed. Specifically, in step S381, the event publisher 112 executes "Send Update Event" to send event data to the broker 114. In step S382, the broker 114 receives the event data from the event publisher 112, and determines the event class of the event data.
Then, in a case where the event class of the received event data is an event class with a severe delay requirement or the like, in other words, an event class with high priority, the broker 114 immediately sends the event data to the event subscriber 116. Steps S391 to S393 in FIG. 33 are a control example in a case where the delayed transfer is performed. Specifically, in step S391, the event publisher 112 executes "Send Update Event" to send event data to the broker 114. In step S392, the broker 114 receives the event data from the event publisher 112 and determines the event class of the event data. Then, in a case where the event class of the received event data is an event class with a lax delay requirement or the like, in other words, an event class with low priority, the event data is temporarily stored (cached) in an internal storage unit. Then, in step S393, after waiting for a certain period of time, the broker 114 sends the temporarily stored event data to the event subscriber 116. As described above, the broker 114 can appropriately select and execute the immediate transfer and the delayed transfer of received event data on the basis of the priority indicated by the event class. Network and calculation resources can be preferentially assigned to event data having high urgency, or event data with low priority can be cached.

<14. Configuration Example of Cloud Computing>

The method and the system described in this specification, including the data processing system and the network control method described above, can be implemented using computer programming or engineering techniques, including computer software, firmware, hardware, or a combination or subset thereof. FIG. 34 illustrates a block diagram of a computer in which various embodiments described in this specification can be implemented. The present disclosure can be implemented as a system, a method, and/or a computer program.
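The immediate/delayed transfer selection of FIG. 33 described above can be sketched roughly as follows. The event classes are reduced to a two-level priority, and the class, callable, and delay names are all illustrative assumptions, not the patent's implementation:

```python
import time
from collections import deque

HIGH_PRIORITY = 0  # strict delay requirement -> immediate transfer
LOW_PRIORITY = 1   # lax delay requirement    -> delayed (cache) transfer

class Broker:
    """Minimal sketch of the broker's transfer selection (steps S381-S393)."""

    def __init__(self, send, cache_delay_s=1.0):
        self.send = send              # callable delivering to a subscriber
        self.cache = deque()          # internal storage for low-priority events
        self.cache_delay_s = cache_delay_s

    def on_update_event(self, event_data, event_class):
        """Step S382/S392: classify received event data by its event class."""
        if event_class == HIGH_PRIORITY:
            self.send(event_data)     # immediate transfer
        else:
            self.cache.append((time.monotonic(), event_data))

    def flush(self):
        """Step S393: delayed transfer of cached events whose wait elapsed."""
        now = time.monotonic()
        while self.cache and now - self.cache[0][0] >= self.cache_delay_s:
            _, event_data = self.cache.popleft()
            self.send(event_data)
```

In this sketch, high-priority event data is forwarded synchronously, while low-priority event data waits in the cache until `flush` runs, which is one way the broker could preferentially assign resources to urgent data.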
The computer program may include a computer-readable storage medium, and computer-readable program instructions that cause one or more processors to execute aspects of the embodiments are recorded on the computer-readable storage medium. The computer-readable storage medium can be a tangible device that can store instructions for use in an instruction execution device (a processor). The computer-readable storage medium may be, for example, but not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of those devices. More specific examples of the computer-readable storage medium include each (and suitable combinations) of the following: a flexible disk, a hard disk, a solid state drive (SSD), a random access memory (RAM), a read only memory (ROM), an erasable and programmable read only memory (EPROM) or a flash memory (Flash), a static random access memory (SRAM), a compact disk (CD or CD-ROM), a digital versatile disc (DVD), and a card type or a stick type memory. The computer-readable storage medium as used in the present disclosure is not to be construed as being a transitory signal itself, such as a radio wave or other freely propagating electromagnetic wave, an electromagnetic wave propagating through a waveguide or other transmission medium (for example, a light pulse through an optical fiber cable), or an electrical signal sent over a wire. Computer-readable program instructions of the present disclosure may be downloaded from the computer-readable storage medium to a suitable computing or processing device, or may be downloaded to an external computer or external storage, for example, via a global network such as the Internet, a local area network, a wide area network, and/or a wireless network. 
The network includes a copper transmission line, an optical communication fiber, wireless transmission, a router, a firewall, a switch, a gateway computer, an edge server, and/or the like. A network adapter card or a network interface in a computing device or a processing device can receive the computer-readable program instructions from the network, and transfer and store the computer-readable program instructions on the computer-readable storage medium in the computing device or the processing device. The computer-readable program instructions for executing the processes of the present disclosure include machine language instructions and/or microcode, and these are compiled or interpreted from source code written in any combination of one or more programming languages, including an assembly language, Basic, Fortran, Java, Python, R, C, C++, C#, or similar programming languages. The computer-readable program instructions can be executed completely on a user's personal computer, notebook computer, tablet, or smartphone, and can also be executed completely on a remote computer or computer server, or any combination of these computing devices. The remote computer or computer server may be connected to a user's device via a computer network, such as a local area network, a wide area network, or a global network (for example, the Internet). In order to implement aspects of the present disclosure, there is also an embodiment in which, for example, an electronic circuit including a programmable logic circuit, a field-programmable gate array (FPGA), or a programmable logic array (PLA) uses information from the computer-readable program instructions to configure or customize the electronic circuit, and executes the computer-readable program instructions.
Aspects of the present disclosure are described in this specification with reference to flowcharts and block diagrams of a method, a device (a system), and a computer program according to an embodiment of the disclosure. It will be understood by those skilled in the art that each block of the flowcharts and the block diagrams, and combinations of blocks in the flowcharts and the block diagrams, can be implemented by computer-readable program instructions. The computer-readable program instructions capable of executing the system and the method described in the present disclosure are used by one or more processors (and/or one or more cores in a processor) of a general purpose computer, a special purpose computer, or other programmable devices to manufacture a device. By executing the program instructions via a processor of a computer or other programmable device, a system for implementing the functions described in the flowcharts and the block diagrams of the present disclosure is created. These computer-readable program instructions may also be stored in a computer-readable storage medium that can instruct a computer, a programmable device, and/or other devices to function in a specific method. Accordingly, the computer-readable storage medium storing the instructions is an article of manufacture including instructions for implementing aspects of the functions specified in the flowcharts and the block diagrams of the present disclosure. The computer-readable program instructions may also be loaded onto a computer, other programmable device, or other device, and execute a series of operational steps on the computer, other programmable device, or other device, to generate a processing result of the computer. By the program instructions being executed on the computer, other programmable device, or other device, the functions specified in the flowcharts and the block diagrams of the present disclosure are implemented.
FIG. 34 is a functional block diagram of a network system 800 in which one or a plurality of computers, servers, and the like are connected via a network. It should be noted that the hardware and software environments shown in the embodiment of FIG. 34 are shown as an example of providing a platform for implementing software and/or a method according to the present disclosure. As illustrated in FIG. 34, the network system 800 may include, but is not limited to, a computer 805, a network 810, a remote computer 815, a web server 820, a cloud storage server 825, and a computer server 830. In one embodiment, multiple instances of one or more of the functional blocks illustrated in FIG. 34 are used. FIG. 34 illustrates a more detailed configuration of the computer 805. Note that the functional blocks illustrated in the computer 805 are illustrated to establish exemplary functions, and not all such functions are illustrated. Furthermore, although detailed configurations of the remote computer 815, the web server 820, the cloud storage server 825, and the computer server 830 are not illustrated, they may include configurations similar to the functional blocks illustrated for the computer 805. As the computer 805, it is possible to use a personal computer (PC), a desktop computer, a laptop computer, a tablet computer, a netbook computer, a personal digital assistant (PDA), a smartphone, or any other programmable electronic device capable of communicating with other devices on the network 810. The computer 805 includes a processor 835, a bus 837, a memory 840, a non-volatile storage 845, a network interface 850, a peripheral interface 855, and a display interface 865. Each of these functions may be implemented as an individual electronic subsystem (an integrated circuit chip or a combination of a chip and an associated device) in one embodiment, and some functions may be combined and implemented as a single chip (system on chip, or SoC) in another embodiment.
The processor 835 can be one or more single- or multi-chip microprocessors, such as, for example, one designed and/or manufactured by Intel Corporation, Advanced Micro Devices, Inc. (AMD), Arm Holdings (Arm), or Apple Computer. Examples of the microprocessor include Celeron, Pentium, Core i3, Core i5, and Core i7 manufactured by Intel Corporation; Opteron, Phenom, Athlon, Turion, and Ryzen manufactured by AMD; and Cortex-A, Cortex-R, and Cortex-M manufactured by Arm. The bus 837 can employ a high speed parallel or serial peripheral interconnection bus of a proprietary or industry standard, such as, for example, ISA, PCI, PCI Express (PCI-e), or AGP. The memory 840 and the non-volatile storage 845 are computer-readable storage media. The memory 840 can employ any suitable volatile storage device, such as a dynamic random access memory (DRAM) or a static RAM (SRAM). For the non-volatile storage 845, it is possible to adopt one or more of a flexible disk, a hard disk, a solid state drive (SSD), a read only memory (ROM), an erasable and programmable read only memory (EPROM), a flash memory, a compact disc (CD or CD-ROM), a digital versatile disc (DVD), a card type memory, or a stick type memory. Furthermore, a program 848 is a set of machine readable instructions and/or data. This set is stored in the non-volatile storage 845 and is used to create, manage, and control specific software functions explained in detail in the present disclosure and described in the drawings. Note that, in a configuration in which the memory 840 is much faster than the non-volatile storage 845, the program 848 can be transferred from the non-volatile storage 845 to the memory 840 before being executed by the processor 835. Via the network interface 850, the computer 805 can communicate and interact with other computers via the network 810.
For the network 810, a configuration can be adopted including wired, wireless, or optical fiber connection by, for example, a local area network (LAN), a wide area network (WAN) such as the Internet, or a combination of LAN and WAN. In general, the network 810 includes any combination of connections and protocols that support communication between two or more computers and associated devices. The peripheral interface 855 can input and output data to and from other devices that can be locally connected to the computer 805. For example, the peripheral interface 855 provides a connection to an external device 860. As the external device 860, a keyboard, a mouse, a keypad, a touch screen, and/or other suitable input devices are used. The external device 860 may also include a portable computer-readable storage medium, such as, for example, a thumb drive, a portable optical disk or magnetic disk, or a memory card. Software and data for implementing an embodiment of the present disclosure, for example, the program 848, may be stored on such a portable computer-readable storage medium. In such an embodiment, the software may be loaded onto the non-volatile storage 845, or alternatively may be loaded directly onto the memory 840 via the peripheral interface 855. The peripheral interface 855 may use an industry standard, such as RS-232 or universal serial bus (USB), to connect with the external device 860. The display interface 865 can connect the computer 805 to a display 870, and there is a mode in which the display 870 is used to present a command line or a graphical user interface to a user of the computer 805. The display interface 865 can use one or more of dedicated connections or industry standards such as video graphics array (VGA), digital visual interface (DVI), DisplayPort, and high-definition multimedia interface (HDMI) (registered trademark) to connect to the display 870.
As described above, the network interface 850 provides communication with other computers and storage systems or devices external to the computer 805. The software programs and data described in this specification can be downloaded via the network interface 850 and the network 810, for example, to the non-volatile storage 845 from the remote computer 815, the web server 820, the cloud storage server 825, and the computer server 830. Moreover, the system and the method of the present disclosure can be executed by one or more computers connected to the computer 805 via the network interface 850 and the network 810. For example, in one embodiment, the system and the method of the present disclosure are executed by the remote computer 815, the computer server 830, or a combination of multiple interconnected computers on the network 810. Data, data sets, and/or databases employed in the embodiments of the system and the method of the present disclosure can be downloaded from and stored on the remote computer 815, the web server 820, the cloud storage server 825, and the computer server 830. Here, in this specification, the processing performed by the computer according to the program need not necessarily be performed in chronological order in the order described as the flowchart. That is, the processing executed by the computer according to the program includes processing executed in parallel or individually (for example, parallel processing or processing by an object). Furthermore, the program may be processed by one computer (processor), or may be distributed and processed by a plurality of computers. Moreover, the program may be transferred to a remote computer to be executed. Moreover, in this specification, a system means a set of a plurality of components (devices, modules (parts), and the like), and it does not matter whether or not all components are in the same housing.
Therefore, a plurality of devices housed in separate housings and connected via a network, and a single device with a plurality of modules housed in one housing are both systems. Furthermore, for example, a configuration described as one device (or processing unit) may be divided and configured as a plurality of devices (or processing units). On the contrary, a configuration described above as a plurality of devices (or processing units) may be collectively configured as one device (or processing unit). Furthermore, as a matter of course, a configuration other than the above may be added to a configuration of each device (or each processing unit). Moreover, as long as a configuration and an operation of the entire system are substantially the same, a part of a configuration of one device (or processing unit) may be included in a configuration of another device (or another processing unit). Note that the present embodiment is not limited to the above-described embodiment, and various modified examples can be made without departing from the scope of the present disclosure. The effects described in this specification are merely examples and are not limited, and effects other than those described in this specification may be present. Note that the present disclosure can have the following configurations. (1) A network control method including: by a network connection device, sending, to a data processing device prior to original data, event data indicating a change amount at a predetermined time point of sensor data that is the original data generated by a sensor device, and sending the original data to the data processing device on the basis of a request from the data processing device for the original data. 
(2) The network control method according to (1) above, in which the data processing device executes a process of recovering the original data at the predetermined time point on the basis of the original data acquired by sending a request for the original data to the network connection device and on the basis of the event data acquired previously. (3) The network control method according to (1) or (2) above, in which the network connection device executes accumulation processing of accumulating a plurality of pieces of the event data at a predetermined accumulation level, and sends the event data after the accumulation processing to the data processing device prior to the original data. (4) The network control method according to any one of (1) to (3) above, in which the data processing device acquires a plurality of pieces of the event data at a predetermined accumulation level, then sends a request for the original data to the network connection device, and acquires the original data. (5) The network control method according to (3) or (4) above, in which the predetermined accumulation level is any of: a unit of a predetermined value or more determined in advance; a unit that allows extraction of a region of an object included in the original data; a unit that allows recognition of the object; and a unit that allows tracking of a trajectory of the object. (6) The network control method according to any one of (3) to (5) above, in which the predetermined accumulation level is set for every piece of observation data identification information for identifying observation target data to be notified as the event data. (7) The network control method according to any one of (1) to (6) above, in which the event data is sent including observation data identification information for identifying observation target data to be notified as the event data. 
(8) The network control method according to (7), in which the observation data identification information is any of: sensor identification information for identifying the sensor device; the sensor identification information, and object identification information for identifying an object on the sensor device basis; global object identification information for globally uniquely identifying the object; or query identification information for identifying a query for the observation target data. (9) The network control method according to (8) above, in which the data processing device determines reception of the event data on the basis of the observation data identification information. (10) The network control method according to (8) or (9) above, in which the data processing device notifies the network connection device of the observation data identification information, and the network connection device sends, to the data processing device, only the event data including the observation data identification information notified from the data processing device. (11) The network control method according to any one of (1) to (10) above, in which the event data includes at least observation data identification information for identifying observation target data to be notified as the event data, a reference address of the original data, and an event type. (12) The network control method according to (11) above, in which the event type is either Update including data of the change amount, or Notify that does not include data of the change amount and notifies only of a fact that there has been a change, and the reference address of the original data is a reference address of the original data before a change amount at the predetermined time point is applied in a case where the event type is the Update, and is a reference address of the original data after a change amount at the predetermined time point is applied in a case where the event type is the Notify. 
(13) The network control method according to (12) above, in which, in a case where the event type is the Notify, the event data includes a reference address of individualized data obtained by individualizing a plurality of pieces of the event data at a predetermined accumulation level. (14) The network control method according to (13) above, in which the individualized data is data of any of: a unit that allows extraction of a region of an object included in the original data; a unit that allows recognition of the object; and a unit that allows tracking of a trajectory of the object. (15) The network control method according to any one of (1) to (14) above, in which, when sending the event data, the network connection device sends an event path request for requesting reservation of a virtual path with the data processing device. (16) The network control method according to (15) above, in which identification information for identifying priority of the virtual path is stored in the event path request. (17) The network control method according to (15) or (16) above, in which a parameter indicating quality of data transfer is stored in the event path request. (18) The network control method according to any one of (15) to (17) above, in which an event type, and observation data identification information for identifying observation target data to be notified as the event data, are stored in the event path request. (19) The network control method according to any one of (1) to (18) above, in which the network connection device sends a plurality of types of the event data to a plurality of the data processing devices, and selects and executes immediate transfer and delayed transfer of the plurality of types of the event data in accordance with priority of the event data.
(20) A data processing system including: a network connection device configured to send, to a data processing device prior to original data, event data indicating a change amount at a predetermined time point of sensor data that is the original data generated by a sensor device, and send the original data to the data processing device on the basis of a request from the data processing device for the original data.

REFERENCE SIGNS LIST

1 Data processing system
11 Sensor
12 Event producer
13 Event path
14 Event consumer
15 Event path manager
16 Topic Manager
17 Topic DB
21 Cloud
111 Sensor
112 Event publisher
113 Event path
114 Broker
115 Event path
116 Event subscriber
117 Event path manager
211 Sensor/edge
221 Cloud
222 Edge cloud
223 Center cloud
800 Network system
805 Computer
810 Network
815 Remote computer
820 Web server
825 Cloud storage server
830 Computer server
835 Processor
840 Memory
845 Non-volatile storage
848 Program
860 External device
11863370

DETAILED DESCRIPTION

In the following description, reference is made to the accompanying drawings that form a part hereof, and in which is shown by way of illustration specific embodiments which may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the application, and it is to be understood that other embodiments may be utilized and that structural, logical and electrical changes may be made without departing from the scope of the present application. The following description of example embodiments is, therefore, not to be taken in a limited sense, and the scope of the present application is defined by the appended claims. The functions or algorithms described herein may be implemented in software in one embodiment. The software may consist of computer executable instructions stored on computer readable media or a computer readable storage device such as one or more non-transitory memories or other types of hardware based storage devices, either local or networked. Further, such functions correspond to modules, which may be software, hardware, firmware or any combination thereof. Multiple functions may be performed in one or more modules as desired, and the embodiments described are merely examples. The software may be executed on a digital signal processor, ASIC, microprocessor, or other type of processor operating on a computer system, such as a personal computer, server or other computer system, turning such computer system into a specifically programmed machine. Existing TCP High Availability (HA) is built on two control boards in a router. If both boards fail at the same time, TCP fails in the router. There is a new requirement for TCP HA from service providers to protect against two failures, such as two board failures at the same time, ensuring availability of routing functions.
In various embodiments of the subject matter of the present application, at least three control boards are used (called Primary Board (PB), Secondary Board (SB), Third Board (TB), and/or Auxiliary Board (AB) in one set of examples; Primary Node (PN), Secondary Node (SN), Third Node (TN), and/or Auxiliary Node (AN) in a further set of examples; and Active and Standby network elements (NEs) in still further examples). The multiple network elements in a router or device may be used with modified data handling to provide high availability. A network element may be implemented on a board, a virtual machine on a server, or via a multiprocessor unit (MPU) of a router or switch in various embodiments. In embodiments using more than three nodes or boards, numbers 1-n in one or more figures may be used to identify the nodes or boards, with "1" being the primary node or board. Reliability of a three-board system over prior two-board solutions may be improved from 99.999% (referred to as 5 nines) by an order of magnitude to 99.9999% (referred to as 6 nines) in some embodiments. TCP is a communications protocol that provides reliable, ordered, and error-checked delivery of a stream of bytes between Apps running on hosts. The bytes are also referred to as octets, containing 8 bits per byte. TCP is commonly used as a protocol for Internet communications, and may be complemented with Internet Protocol (IP) and referred to as TCP/IP. In further embodiments, the number of bits per byte may be higher, such as 16, 32, 64, etc. Border Gateway Protocol (BGP) is a standardized exterior gateway protocol designed to exchange routing and reachability information among autonomous systems (AS) on the Internet. The protocol is often classified as a path vector protocol but is sometimes also classed as a distance-vector routing protocol.
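The order-of-magnitude gap between 5 nines and 6 nines mentioned above can be made concrete with a short calculation (a 365-day year is assumed; the function name is illustrative):

```python
def annual_downtime_minutes(availability):
    """Minutes of downtime per year implied by an availability fraction."""
    return (1.0 - availability) * 365 * 24 * 60

# 99.999% (5 nines) permits roughly 5.3 minutes of downtime per year,
# while 99.9999% (6 nines) permits roughly 0.53 minutes.
print(annual_downtime_minutes(0.99999))
print(annual_downtime_minutes(0.999999))
```
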
Embodiments are described with multiple, at least three, boards operating in synchrony, sending and receiving data among themselves and line cards in an efficient manner. Boards may be added as desired such that more than three boards are used, providing even higher availability. If one or more boards are detected as having failed, the remaining board or boards may continue to operate in a reconfigured manner to provide high availability. Further embodiments may utilize a sequence number corresponding to the number of bytes in a data transfer, with 1 being the first byte and m being the last byte. The sequence number may correspond to a different selected amount of data other than a byte in further embodiments, such as a number of bytes or bits, segments, pages, or other logical representation of an amount of data. The sequence number may be used in acknowledgments of data transfers between nodes to indicate the amount of the data transfer that was received. If less than all the data has been received, the sequence number in the acknowledgment will be lower than expected, notifying a node that not all the transferred data was received. The sequence number may also be used to synchronize a new primary node following failure of a previous primary node.

Parallel High Availability Embodiments

A system includes a primary board having circuitry for executing a primary App and a TCP module. A secondary board has circuitry for executing a secondary copy of the primary App and a secondary TCP module. A third board has circuitry for executing a third copy of the primary App and a third TCP module.
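The sequence-number check described above, where a lower-than-expected acknowledgment reveals partial receipt, amounts to a simple subtraction (the function name is illustrative):

```python
def missing_bytes(last_byte_sent, acked_seq):
    """A lower-than-expected acknowledgment sequence number tells the
    sender exactly how much of the transfer did not arrive."""
    return last_byte_sent - acked_seq

# Bytes 1..1000 were sent, but the ack covers only bytes 1..800,
# so 200 bytes must be retransmitted.
print(missing_bytes(1000, 800))  # → 200
```
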
A line card is coupled to all the boards, wherein the primary board, secondary board, and third board are coupled in parallel to transfer data and acknowledgments, via their respective TCP modules, between each of the boards and the line card and/or between boards, and wherein the boards are reconfigurable to communicate with the line card regardless of the failure of one or two of the boards. A first method includes receiving incoming data from a peer device via a line card in a router, sending the received incoming data to TCP modules in at least three router boards, providing the data to an App duplicated on the at least three router boards, each router board acknowledging receipt of the data via the TCP modules, and acknowledging receipt of the data to the peer device via the line card responsive to all the boards acknowledging receipt of the data. A second method includes receiving data from an App running on a primary board in a router, the data being received by a TCP module on the primary board, the TCP module on the primary board providing the received data in parallel to at least two other boards each having a TCP module and a copy of the App, the TCP module on the primary board providing the received data to a line card coupled in parallel to all the boards, and providing an acknowledgment to each board in parallel from a peer device responsive to successful delivery of the data to the peer device. FIGS. 1A and 1B are block flow diagrams illustrating an example architecture 100 for TCP high availability in multiple boards, showing message data flow among a line card (LC) 140 and three router boards, PB 110, SB 120, and TB 130. Incoming TCP data (1. Data) from a line card (LC) 140 is indicated in FIG. 1A, with outgoing TCP data (data generated by an App 160 running on the PB 110) illustrated in FIG. 1B. In one embodiment, a TCP module 150 runs on each of the three or more router boards.
The router boards may include circuitry to perform operations, such as a processor and memory. The memory may include one or more buffers or windows for buffering data being transferred until a transfer is complete. An App 160 (or a number of Apps) uses the TCP module 150 on each of the boards. Apps may include any software that sends and/or receives data, such as browsers, word processing programs, and a multitude of other Apps. Every board is connected to a network connection such as the Line Card (LC) 140, or a number of LCs in further embodiments. Every line card 140 is connected to a peer 170 or neighbor router 170 or device to send and receive data. The peer 170 may be a device referred to as a remote peer that may be remotely located and coupled via a network for transferring data, such as via data packets using TCP. The peer 170 may be a device of any size or type executing a similar application or a different application exchanging data with the App 160. As shown in FIG. 1A, incoming TCP data (1. Data) from a peer 170 is sent (2. Data) to the TCP module 150 on every board by the line card 140 configured to receive the data from the peer 170. Each TCP module 150 stores the data in its buffer/window, delivers the data (3. Data), using TCP, to its App(s) (or, the App 160 reads the data) as needed, and sends an acknowledgment (Ack) (3. AckPB, 3. AckSB, 3. AckTB) to the line card 140 indicating the data has been received. The line card sends the peer 170 an acknowledgment (4. Ack) for the data after receiving the Ack for the data from every TCP module 150. The line card 140 is thus aware of every board and ensures that an Ack from every board is received for each byte of data received before acknowledging receipt of the data. As shown in FIG. 1B, where the numbering of components is the same as that in FIG. 1A, outgoing TCP data (1. Data) originating from an App 160 (such as Border Gateway Protocol (BGP)) using TCP on the PB 110 is concurrently sent (2. Data) to the TCP module 150 on each of the other boards, such as the SB 120 and TB 130.
The TCP module150delivers the TCP data (3. Data) to its corresponding Apps160in order as needed, and sends an acknowledgment (Ack) (4. AckSB, 4. AckTB) to the TCP module150on PB110, which sends the TCP data to the line card140(5. Data). The line card140sends the TCP data (6. Data) to peer170, and receives an acknowledgment (7. Ack). The line card140, responsive to the 7. Ack, sends an acknowledgment (8. Ack) to each board: PB110, SB120, and TB130. In various embodiments, the boards110,120and130and the line card140may communicate with each other using one or more of several different transport protocols, including TCP. Other protocols include but are not limited to Transparent Inter Process Communication Protocol (TIPC) or User Datagram Protocol (UDP). The use of three boards provides protection against two failures such as two board failures at the same time in an efficient way. The parallel connection of the boards combined with tracking of successful delivery and receipt of data prior to acknowledging or sending data results in a high availability system, with each App160and TCP module150having a synchronized state. A synchronized state includes each board110,120and130having the same data so that either the SB120or TB130is capable of becoming a primary board without loss of data, or with loss of minimal data. While three boards are shown, the system may be expanded to accommodate further boards in a parallel connected manner to ensure all boards are synchronized. For incoming data, the data is then sent from the line card to each of the boards, which update their Apps and acknowledge receipt directly to the line card. For outgoing data, the PB110may simply send the data to more than two boards and coordinate reception of acknowledgments from each board prior to sending the data to the line card. The line card would then send an acknowledgment to each board following acknowledgment of receipt by a peer170. 
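The all-boards acknowledgment gating described above can be sketched as follows. This is a minimal illustrative sketch, not the patented implementation; the class and method names are hypothetical, and the "Ack" returned to the peer follows the min-of-all-boards rule described later forFIG.7.

```python
# Sketch of the incoming-data path: the line card fans each segment out
# to the TCP module on every board and acknowledges the peer only up to
# the lowest sequence number acknowledged by all boards.
class Board:
    def __init__(self, name):
        self.name = name
        self.buffer = []   # stands in for the TCP window/buffer
        self.acked = 0     # highest sequence number acknowledged

    def receive(self, seq, data):
        self.buffer.append((seq, data))  # store; the App reads as needed
        self.acked = seq                 # Ack returned to the line card
        return self.acked

class LineCard:
    def __init__(self, boards):
        self.boards = boards

    def handle_incoming(self, seq, data):
        # Fan the data out to PB, SB and TB and collect their Acks.
        acks = [b.receive(seq, data) for b in self.boards]
        # The Ack sent to the peer is min{AckPB, AckSB, AckTB}, so the
        # peer never sees data acknowledged that some board lacks.
        return min(acks)

lc = LineCard([Board("PB"), Board("SB"), Board("TB")])
peer_ack = lc.handle_incoming(1, b"update")
```

With this gating, a board failure before acknowledgment simply delays the peer-facing Ack, so the peer retransmits under ordinary TCP rules and no board is left behind.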
In some embodiments, boards may be added and synchronized such that their TCP modules are in a same state as the TCP modules on the other boards. In one embodiment, the TCP state of the PB110is smoothly synchronized or backed up to the newly inserted board such as the SB120or TB130or a fourth or further board. A configuration file may be used to determine the role each board takes, resulting in rerouting of data traffic among the boards. The configuration file may be modified by a human operator or control logic in a device that manages the board configuration. Modification of the configuration file may take into account measured board reliability, assigning the highest reliable board as the PB110. Succeeding boards are assigned as the SB120and TB130if there are a sufficient number of boards still operating. In further embodiments, the architecture operates to quickly and smoothly switch over the control on TCP modules and others to a live board such as TB130when PB110and SB120fail at the same time. FIG.2is a block flow diagram illustrating example TCP data synchronization generally at200for outgoing data. Note that like components are numbered consistently in the figures. In one example, it is assumed that the PB110is running and one or more new boards, auxiliary boards (AB)220, are inserted. The PB110backs up its TCP socket (a data structure describing end to end communication connections via port numbers and IP addresses) to its corresponding App160on AB220. Note that AB220is a representation of SB120with one or more additional auxiliary boards represented behind it. After a TCP socket is created on AB220, the state and data structures of the socket (basically a copy of the state of the TCP module150) is/are replicated to the TCP module150on AB(s)220. 
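The role-assignment policy described above (the board with the highest measured reliability becomes the PB110, succeeding boards become the SB120and TB130) can be sketched as follows. The function name and the reliability scores are hypothetical, used only for illustration.

```python
# Sketch of configuration-file role assignment: rank boards by measured
# reliability and assign the PB, SB and TB roles in that order.
def assign_roles(reliability):
    """reliability: board name -> measured reliability score."""
    ranked = sorted(reliability, key=reliability.get, reverse=True)
    roles = ["PB", "SB", "TB"]
    # Boards beyond the third remain unassigned spares.
    return dict(zip(roles, ranked))

roles = assign_roles({"board-1": 0.97, "board-2": 0.99, "board-3": 0.95})
```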
For incoming TCP data, synchronization during the backup period proceeds as follows: the LC140sends the incoming TCP data to the TCP module150on AB220, but the TCP module150on AB220does not deliver any incoming data to the App160on AB220, as illustrated inFIG.2. Alternative methods of synchronization are described below. For outgoing TCP data, such as data (1. Data inFIG.2) received from the App160, the outgoing data synchronization may be done in multiple different ways as illustrated inFIGS.2,3, and4. InFIG.2, outgoing TCP data synchronization may be performed without acknowledgments. App160on PB110sends data (1. Data) to the TCP module150on PB110. Outgoing TCP data originated from App160, such as BGP, on PB110is concurrently sent to peers170, such as routers via LC140(3. Data), the TCP module150on AB220(2. Data), and TCP modules150on each of the other boards. The TCP module150on AB220does not deliver the TCP data originated from the App160on PB110to the corresponding App160on AB220as represented by the “X” inFIG.2. LC140receives the data (2. Data) from the TCP module150on PB110and sends the data to a peer170(3. Data). LC140receives an Ack (4. Ack) for the data from the peer170and sends the Ack (5. Ack) to the TCP module150on every board. The TCP module150on AB220receives the Ack for the data from the peer170via LC140and removes the data from its window/buffer. The TCP module150on PB110receives the Ack for the data from the peer170via LC140and removes the data from its window/buffer, completing the data communication. Performing the TCP data synchronization without Acks provides high performance by sending TCP data originating from the App160on PB110to the peer170and other boards concurrently, without additional work being done for synchronization between PB110and AB220for the outgoing TCP data. FIG.3is a block flow diagram300illustrating example connections and message flow for outgoing TCP data synchronization utilizing acknowledgments.
App160on PB110sends TCP data (1. Data) to the TCP module150on PB110. Outgoing TCP data originated from the App160such as BGP on PB110is concurrently sent to the TCP module150on AB220(2. Data), and the TCP module150on each of the other boards. The TCP module150on AB220does not deliver the TCP data originated from the App160on PB110to the corresponding App160on AB220. The TCP module150on AB220sends an Ack message (3. Ack) to the TCP module150on PB110for the data received. If the TCP module150on PB110does not receive an Ack message from the TCP module150on AB220, the TCP module150on PB110retransmits the data to the TCP module150on AB220until receiving the Ack for the data from AB220. The TCP module150on PB110sends data (5. Data) to LC140after receiving Ack messages (3. Ack) from AB220and all the other boards. LC140receives the data from the TCP module150on PB110and sends the data (6. Data) to a peer170. LC140receives an Ack (7. Ack) for the data from the peer170and sends the Ack (8. Ack) to the TCP module150on every board. The TCP module150on AB220receives the Ack for the data from the peer170via LC140and removes the data from its window/buffer. The TCP module150on PB110receives the Ack for the data from the peer170via LC140and removes the data from its window/buffer. The TCP module150on each of the other boards receives the Ack for the data from the peer170via LC140and removes the data from its window/buffer, completing the data transfer successfully in accordance with TCP, with additional reliability provided by the Acks between TCP modules. FIG.4is a block flow diagram400illustrating example connections and message flow for outgoing TCP data synchronization utilizing implied acknowledgments. An App160on PB110sends data (1. Data) to the TCP module150on PB110, which stores the data into its output buffer in order. Data (2. Data) is sent by the TCP module150on PB110to the TCP module150on AB220, and to the TCP module150on each of the other boards in parallel. 
The TCP module150on AB220stores data in its output buffer in order, but does not send data to the corresponding App160on AB220as represented by the X inFIG.4. The TCP module150on AB220sends a request for data (Req msg) to the TCP module150on PB110when the TCP module150on AB220finds some data missing, or sends an empty request when the amount of data sent (assumed) to the App160from the last request is greater than a given size such as ½ of its buffer size (the TCP module150on AB220sends the TCP module150on PB110an empty request with the sequence number corresponding to the last byte sent to the App160if it does not have any request over a given interval). The TCP module150on PB110sends the data (2. Data) to the TCP module150on AB220after receiving the request. The TCP module150on PB110sends a peer170, via LC140, the data (5. Data) that is older than that requested (i.e. Ack'ed from the TCP module150on AB220and each of the other boards). LC140receives the data (5. Data) from the TCP module150on PB110and sends the data (6. Data) to the peer170. LC140receives an Ack (7. Ack) for the data from the peer170and sends the Ack (8. Ack) to the TCP module150on every board. The TCP module150on AB220receives the Ack for the data from the peer170via LC140and removes the data from its buffer. The TCP module150on PB110receives the Ack for the data from the peer170via LC140and removes the data from its buffer. The TCP module150on each of the other boards receives the Ack for the data from the peer170via LC140and removes the data from its buffer, completing the data transfer successfully in accordance with TCP, with additional reliability provided by the implied Acks between TCP modules. No extra timers are run for the TCP module150on PB110. Most of the load for TCP HA is moved to the TCP module150on AB220.
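The request/empty-request rule on AB220described above can be sketched as follows. This is an illustrative sketch only: the function and parameter names are hypothetical, and the half-buffer threshold follows the example given in the text.

```python
# Sketch of the implied-Ack policy on the auxiliary board: request
# missing data explicitly (acting as a negative Ack), otherwise send an
# empty request once more than half a buffer of data has gone to the
# App since the last request.
def next_request(received, last_acked, delivered_since_request, buffer_size):
    """received: sorted sequence numbers buffered on the AB.
    Returns ('data', first_missing_seq), ('empty', last_in_order), or None."""
    expected = last_acked + 1
    for seq in received:
        if seq != expected:
            return ("data", expected)      # data is missing: ask PB for it
        expected += 1
    if delivered_since_request > buffer_size // 2:
        return ("empty", expected - 1)     # implied Ack up to the last byte
    return None                            # nothing to report yet
```

An empty request both acknowledges everything up to its sequence number and resets the delivered-since-request counter, which is how the primary board learns what it may release toward the line card.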
In the case the TCP module150on AB220misses data from PB110, the TCP module150on PB110will send the data to the TCP module150on AB220after receiving the request for the data from the TCP module150on AB220. The use of such requests may be faster than simply retransmitting data after a retransmission timer expires. After batch backup and before real time backup, TCP data streams may be synchronized by the use of byte sequence numbers corresponding to the data streams as illustrated inFIGS.5and6. For an incoming TCP data stream, the TCP module150on PB110holds off the data delivery to the App160on PB110. The TCP module150on PB110sends, to the TCP module150on AB220, the sequence number m, corresponding to the last byte of the data delivered to the App160just before the holding off. The App160on PB110copies the data from the TCP module150in its input buffer to the corresponding App160on AB220. The beginning of the data in the buffer should be the boundary of a packet. The data in the input buffer of the App160, the data in the input buffer of the TCP module150on AB220, and the incoming data from the peer170form a continuous incoming TCP data stream on AB220for the socket backed up from PB110. FIG.5is a block flow diagram500illustrating example message flow for synchronizing an incoming TCP data stream boundary, and illustrates the use of buffers in the Apps and TCP modules. After batch backup and before real time backup, the TCP module150on PB110holds off the data delivery to the App160on PB110. The App160on PB110copies the data from its input buffer to the input buffer of the corresponding App160on AB220. The beginning of the data in the buffer should be the boundary of a data stream. The TCP module150on PB110sends the TCP module150on AB220the sequence number m corresponding to the last byte of the data delivered to the App160just before the holding off.
The data in the input buffer of the App160, the data in the input buffer of the TCP module150on AB220, and the incoming data from the peer170coupled to the LC140form the continuous incoming TCP data stream on AB220for the socket backed up from PB110. After Batch Backup Completes and Real Time Backup Starts, the TCP module150on AB220sends the incoming TCP data from the peer170starting at sequence number m+1 to the corresponding App160on AB220. After Switchover to AB220, TCP on AB220continues sending data from the peer170to App160. The TCP module150on AB220sends an Ack to the peer170for the received data. FIG.6is a block flow diagram600illustrating example message flow for synchronizing an outgoing TCP data stream boundary. For an outgoing TCP data stream, the App160on PB110holds off the data delivery to the TCP module150. The TCP module150on PB110sends the TCP module150on AB220the sequence number n corresponding to the last byte of the data delivered to the TCP module150by the App160on PB110just before the holding off. The last byte should be the boundary of the data packet. As illustrated inFIG.6, after batch backup and before real time backup, the App160on PB110holds off the data delivery to the TCP module150. The TCP module150on PB110sends, to the TCP module150on AB220, the sequence number n (1. Seq #n) corresponding to the last byte of the data delivered to the TCP module150by the App160just before the holding off. The last byte should be the boundary of a packet. After batch backup completes and real time backup starts, an App160sends data (2. Data) to the TCP module150on PB110, which stores it into its output buffer in order. Data (3. Data/n+k) is sent to the TCP module150on AB220and each of the other boards in parallel. The TCP module150on AB220stores data in its output buffer in order and sends data (4. Data) to the corresponding App160on AB220as needed. In the example embodiment, the TCP module150on AB220sends the TCP module150on PB110a request (5.
Req msg) for data as an implied Ack and the TCP module150on PB110sends the data to AB220after receiving the request. The TCP module150on PB110sends the data (6. Data) to LC140, which sends the data (7. Data) to a peer170. The peer170acknowledges (8. Ack) and the LC140sends an Ack (i.e. older than that requested) to AB220and each of the other boards. In the case that PB110does not receive any implied Ack (i.e. request) for a given time, it also sends the TCP module150on AB220the data and generates an alarm indicating that AB220may not work. After Switchover to AB220, the TCP module150on AB220sends all the data in its output buffer to the peer170. After batch backup completes and real time backup starts, for an incoming TCP data stream, the TCP module150on AB220sends the incoming TCP data from the peer170from sequence number m+1 to the corresponding App160on AB220. The App160on AB220starts to read its incoming TCP data from the peer170. For an outgoing TCP data stream, the TCP module150on AB220sends the outgoing TCP data originated from the App160on PB110from sequence number n+1 to the corresponding App160on AB220. The App160on AB220starts to read the outgoing TCP data originated from the App160on PB110. In some embodiments, extended TCP socket options may be used. For outgoing data delivery to an App160, an option can be set by an App160on AB220. When the option is set or enabled, the TCP module150on AB220sends, to the App160, the outgoing TCP data originated from the corresponding App160on PB110. For incoming data delivery to an App160, an option can be set by an App160on AB220. When set or enabled, the TCP module150on AB220sends, to the App160, the incoming TCP data received from the peer170. Outgoing data synchronization method options may be used by an App160by selecting one of the outgoing TCP data synchronization methods (such as no-Acks, explicit Acks, and implied Acks).
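The stream-boundary handoff described above can be sketched as follows. The primary board reports the sequence number m of the last byte its App consumed before delivery was held off, and the auxiliary board resumes delivery from m+1 (the same rule applies to the outgoing boundary n). The function name and the sequence-number-to-byte mapping are illustrative assumptions, not the patented implementation.

```python
# Sketch of resuming delivery after switchover: deliver only bytes with
# sequence numbers greater than the reported boundary m, in order.
def deliverable_after_switchover(stream, m):
    """stream: sequence number -> byte value buffered on the AB;
    m: last byte the primary's App consumed before the hold-off."""
    return bytes(stream[s] for s in sorted(stream) if s > m)

buffered = {1: ord("a"), 2: ord("b"), 3: ord("c"), 4: ord("d")}
resumed = deliverable_after_switchover(buffered, 2)
```

Because the boundary falls on a packet edge, the resumed stream concatenates cleanly with what the App already consumed, with no duplicated or skipped bytes.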
FIG.7is a block flow diagram700, similar toFIG.1A, illustrating example connections and message flows for incoming data (1. Data) to the TCP module150on all boards from a peer170. The description ofFIG.7provides additional details regarding incoming data. LC140receives TCP data (1. Data) from a peer170and sends the data (2. Data) to the TCP module150on every board in parallel. The TCP module150on every board (e.g., PB110, SB120and TB130) receives the data from the peer170via LC140and sends the data to its App160in parallel. Each board sends its acknowledgment (3. AckPB, AckSB, AckTB) to the LC140, which sends an acknowledgment (4. Ack) to the peer170. LC140receives the acknowledgments from the TCP module150on every board and sends the peer170Ack, where Ack is min {AckPB, AckSB, AckTB}, indicating the minimum data (sequence #) received/ack'ed by all boards. LC140may resend data to the TCP module150on a board that does not send an acknowledgment or whose acknowledgment is behind others, using an input data buffer holding data previously received from the peer170. With the buffer, LC140may send an Ack to the peer170after the data in its buffer is acknowledged by the TCP module150on every board. FIG.8is a block flow diagram800, similar toFIG.1B, illustrating example connections and message flows for outgoing data (1. Data) to the TCP module150on all boards from the App160on PB110. The description ofFIG.8provides additional details regarding outgoing data. The TCP module150on PB110receives outgoing TCP data originated from an App160such as BGP. The TCP module150on PB110concurrently sends the data (2. Data) to the TCP module150on each of the other boards such as SB120(Secondary Board) and TB130(Third Board). The TCP modules150on SB120and TB130deliver the TCP data (3. Data) originated from the App160on PB110to the corresponding Apps160on SB120and TB130in order as needed, and send respective Acks (4. AckSB, AckTB) to the TCP module150on PB110.
The TCP module150on PB110receives Acks from the TCP modules150on each of the other boards and sends the data (5. Data) to the peer170via LC140(after receiving Acks from the TCP module150on each of the other boards). LC140receives the data from the TCP module150on PB110and sends the data (6. Data) to the peer170. LC140receives an Ack (7. Ack) for the data from the peer170and sends the Ack (8. Ack) to the TCP modules150on all of the boards. The TCP module150on PB110removes the data from its output buffer after receiving the Ack for the data from the peer170via LC140. The TCP module150on each of SB120and TB130removes the data from its output buffer after receiving the Ack for the data from the peer170via LC140, completing the data transfer. FIG.9is a block flow diagram900illustrating example connections and message flows for outgoing data from the TCP module150on all boards in a further embodiment. Outgoing TCP data originated from an App160such as BGP in PB110are concurrently sent (1. Data) to the TCP module150on each of the other boards such as SB120(Secondary Board) and TB130(Third Board). The TCP modules150on SB120and TB130deliver the TCP data (2. Data) originating from the App160in PB110to the corresponding Apps160in SB120and TB130in order as needed, and the SB120and TB130send respective Acks (3. AckSB, AckTB) to the TCP module150on PB110. The TCP module150on PB110sends the data (4. Data) to the peer170via LC140(after receiving Acks from the TCP module150on each of the other boards). Ack (5. Ack) from the peer170is sent, by the LC140, to the TCP module150on every board. The TCP module150on PB110removes the data from its window/buffer after receiving Ack for the data from the peer170router. The TCP modules150on SB120and TB130remove the data from their window/buffers after receiving Acks for the data from the peer170router, completing the data transfer. Switchover and recovery are performed when a board is detected as having failed.
Many different methods of failure detection may be used, such as the use of a heartbeat transmitted by each board that is monitored by other boards, the LC140, or some other controller coupled serially or in parallel. Absence of the heartbeat may indicate failure of a board, or a communication path to the board. Responsive to detection of failure of a board, an old AB220may become a new PB110when the old PB110dies or by a configuration change, which, as described above, may be made to make the highest performing board the PB110. Apps using the TCP module150on new PB110send data to peers through the TCP module150. Apps on new PB110update their consumers such as RM (Routing table manager). Apps on the new PB110receive and process interface events. For any peer170session, the session is re-created if the session is not in a final state (e.g., established for a BGP peer170). The TCP module150on the new PB110starts to send Acks to peers through LC140. The TCP module150on new PB110accepts data from its Apps, sends data to the TCP modules150on each of the other boards and then sends data to one or more peers through LC140after receiving Acks for the data from the TCP module150on every other board. Multiple options for incoming data synchronization are now described with respect toFIGS.10,11,12, and13. A first option, utilizing explicit Acks, is shown inFIG.10at1000where incoming TCP data packets are concurrently sent to the TCP module150on PB110, the TCP module150on AB220, and the TCP module150on each of the other boards. The TCP module150on AB220discards the data as indicated by the “X” onFIG.10. The TCP module150on each of the other boards is assumed to deliver the data to the corresponding Apps. The TCP module150on AB220sends an Ack message to the TCP module150on PB110after receiving the data from the peer170device. The TCP module150on PB110delivers data (3. Data) to the App160in order after receiving data from the peer170and after receiving Ack message (2.
Ack msg) for the data from the TCP module150on AB220and from the TCP module150on each of the other boards. The TCP module150on PB110sends the peer170an Ack message for the data it receives from the peer170after receiving Ack messages (4. Ack) for the data from the TCP module150on AB220and the TCP module150on each of the other boards. The first option is very efficient. The incoming data are concurrently sent to and processed by PB110, AB220and all the other boards. The synchronization between PB110and AB220is done through short acknowledgment messages. The first option is also very reliable. The impact on PB110of problems on AB220is minimized. The TCP module150on PB110can continue receiving TCP data from the peer170and delivering them to the App160when there are problems on AB220, which can be quickly and easily detected by PB110through a number of methods. For example, when the TCP module150on PB110receives a certain amount of data from peers but does not receive any Acks for the data from AB220, the TCP module150on PB110can send the data to the App160by ignoring Acks and generate an alarm indicating that there may be problems on AB220. FIG.11is a block flow diagram1100illustrating a second option that includes operations of the first option and also utilizes a request for data to speed up data transfer. In addition, the TCP module150on PB110sends data to AB220after receiving data for a given time without receiving an Ack message from the TCP module150on AB220. The TCP module150on PB110sends a request message to the TCP module150on AB220after not receiving an Ack message from the TCP module150on AB220for a given interval and without receiving the data from the peer170for the given interval. The TCP module150on AB220sends the data to the TCP module150on PB110after receiving the Request message from the TCP module150on PB110. In addition to the advantages of option1, the second option should have higher performance than option1in general.
In the case that the TCP module150on AB220misses data from a peer170and the TCP module150on PB110receives the data, the TCP module150on PB110should send the data to the TCP module150on AB220faster than the peer170. In the case that the TCP module150on PB110misses data from a peer170and the TCP module150on AB220receives the data, the TCP module150on AB220should send the data to the TCP module150on PB110faster than the peer170through receiving the request from the TCP module150on PB110. To avoid the impact of unnecessary requests and data on the performance in some special situations, timers for sending data/request messages may be adjusted accordingly. The timers should have a time that is less than the TCP retransmission timer. Timers may be turned off when it is detected that most of the data/requests sent are not necessary (i.e. when sending data/request messages does not significantly speed up the incoming packet synchronization, which can be determined by recording the percentage of time saved for the data/requests sent between PB110and AB220). FIG.12is a block flow diagram1200illustrating a third option that utilizes implied Acks. AB220sends PB110request messages for missing incoming data. Incoming data packets (1. Data) from peers are concurrently sent to the TCP module150on PB110, the TCP module150on AB220, and the TCP module150on each of the other boards. The TCP module150on AB220does not deliver the data to the corresponding App160in order. Instead, the TCP module150discards the data as indicated by the “x” inFIG.12. The TCP module150on AB220sends the TCP module150on PB110a request (2. Req msg) for the data when it finds some data missing, or an empty request when the amount of data sent (assumed) to an App160from the last request is greater than a given size such as one-half of its window/buffer size or it does not send the TCP module150on PB110any request for a given time.
An empty request contains the sequence number corresponding to the last byte sent to the App160on AB220. The TCP module150on PB110sends the App160all the data (3. Data) that is older than requested in its buffer in order (i.e. is acknowledged by the TCP module150on AB220and the TCP module150on each of the other boards). The TCP module150on PB110sends acknowledgments (4. Ack) to the peer170via LC140for all the data in its window/buffer that is older than that requested (i.e. is acknowledged by the TCP module150on AB220and the TCP module150on each of the other boards). The use of implied Acks in the third option may provide additional advantages to the first option. The third option should have higher performance than the first option in general. A request for the data missing on AB220acts as a negative acknowledgment. An empty request from AB220to PB110implies that the TCP module150on AB220received all the data before the sequence number in the request. From these requests, the TCP module150on PB110can determine what data is missing on AB220and what data is missing on PB110. If the TCP module150on AB220sends the TCP module150on PB110an implied acknowledgment (i.e. an empty request) after receiving a data packet, this option is almost equivalent to the first option. The frequency of sending implied acknowledgments can be adjusted by the number of packets/bytes received or by using a timer. For large incoming traffic flows, a lower implied acknowledgment frequency will further reduce the IPC bandwidth consumption between PB110and AB220. FIG.13is a block flow diagram1300illustrating a fourth option utilizing implied Acks and requests and data to further speed up data transfer. The fourth option builds on the third option. The TCP module150on PB110sends the data to the TCP module150on AB220after receiving a non-empty request message from the TCP module150on AB220. 
The TCP module150on PB110sends a request message for the data to the TCP module150on AB220after determining that the data is missing from the TCP module150on PB110and that the TCP module150on AB220received the data. TCP on AB220sends the data to the TCP module150on PB110after receiving the request message from PB110. The fourth option, in addition to the advantages of the first option, may have higher performance than the first and third options in general. In the case that the TCP module150on AB220misses data from a peer170and the TCP module150on PB110receives the data, TCP on PB110sends the data to the TCP module150on AB220faster than the peer170. In the case that the TCP module150on PB110misses data from a peer170and TCP on AB220receives the data, the TCP module150on AB220sends the data to the TCP module150on PB110faster than the peer170through receiving the request from the TCP module150on PB110. To avoid the impact of unnecessary requests and data on the performance in some special situations, timers for sending data/request messages may be adjusted accordingly. The time should be less than the TCP retransmission timer. The timers may be turned off when it is detected that most of the data/requests sent are not necessary (i.e. sending data/request messages does not significantly speed up the incoming packet synchronization, which can be determined by recording the percentage of time saved for the data/requests sent between PB110and AB220). FIG.14is a block flow diagram1400illustrating an example multiple board high availability system1400for communications utilizing TCP/IP modules that include a TCP stack and an IP stack. The system may not need extra support from the IP stack. Incoming data from neighbor routers are concurrently sent to Apps on PB110and each of the other boards. Apps on PB110and each of the other boards receive and process the incoming data concurrently. The App160on PB110sends outgoing data directly to neighbor routers through LC140.
The App160on each of the other boards sends an Ack message to the PB110for the data it receives and processes. App160on PB110sends an Ack message to the neighbor router after it receives the data and the Ack messages for the packet from all the other boards. The TCP/IP based boards operate by an App160on PB110sending a Request message to AB220after not receiving an Ack message for the data for a given interval and without receiving the data from the TCP/IP for the interval. The App160on PB110sends the data to AB220after receiving the data from the TCP/IP without receiving an Ack message from AB220for a given time. App160on AB220sends the data to the PB110after receiving the Request message from PB110.

Sequential/Serial High Availability Embodiments

A system includes a primary board having circuitry for executing a primary App and a TCP module. A secondary board has circuitry for executing a secondary copy of the primary App and a secondary TCP module. A third board has circuitry for executing a third copy of the primary App and a third TCP module. A line card is coupled to the third board, wherein the primary board, secondary board and third board are coupled sequentially to transfer data and acknowledgments between each sequential pair of boards via their respective TCP modules, and wherein the boards are reconfigurable via a switching fabric such as a crossbar switch or an Ethernet switch such that each board can communicate with the line card regardless of the failure of one or two of the boards. The system may use various methods to provide sequential high availability.
A first method includes receiving incoming data from a peer device via a line card in a router, sending the received incoming data from the line card via a serial connection to the third board, from the third board to the secondary board, and from the secondary board to the primary board in sequence, wherein TCP modules in the boards receive the incoming data, providing the data to an App duplicated on the at least three boards via the TCP modules on each board, each board acknowledging receipt of the data via the TCP modules and serial connection in sequence from the primary board, through the secondary board and the third board to the line card, and acknowledging receipt of the data to the peer device via the line card responsive to all the boards acknowledging receipt of the data. A further method includes receiving data from an App running on a primary board in a router, the data being received by a TCP module on the primary board, the TCP module on the primary board providing the received data via a serial connection through at least two other boards each having a TCP module and a copy of the App, wherein one of the other boards is a last board, the TCP module on the last board providing the received data to a line card coupled via the serial connection to the last board, and providing an acknowledgment to each board in succession via the serial connection responsive to successful provision of the data by the line card to a peer device. 
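The sequential arrangement described by these methods can be sketched as follows. Outgoing data is stored and forwarded board-to-board down the chain to the line card, and the peer's acknowledgment travels back up the chain, with each board releasing its buffered copy in turn. The data structures and function names are illustrative assumptions, not the patented implementation.

```python
# Sketch of the serial chain: store-and-forward down to the line card,
# then release buffered copies as the Ack propagates back up.
def send_down_chain(chain, data):
    """chain: per-board buffers ordered PB -> SB -> TB (last board)."""
    for buffer in chain:
        buffer.append(data)    # each board keeps a copy before forwarding
    return data                # the last board hands the data to the LC

def ack_up_chain(chain, data):
    """The peer's Ack propagates from the last board back to the PB."""
    for buffer in reversed(chain):
        buffer.remove(data)    # each board frees its buffered copy on Ack
    return True

pb, sb, tb = [], [], []
send_down_chain([pb, sb, tb], "seg-1")
done = ack_up_chain([pb, sb, tb], "seg-1")
```

Because every board in the chain holds a copy until the Ack arrives, a surviving board can be re-sequenced into the primary role without losing in-flight data.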
Yet a further method includes coupling a primary board having circuitry for executing a primary App and a TCP module, a secondary board having circuitry for executing a secondary copy of the primary App and a secondary TCP module, a third board having circuitry for executing a third copy of the primary App and a third TCP module, and a line card in series to transfer data and acknowledgments between each sequential pair of devices via their respective TCP modules, and wherein the boards are reconfigurable to communicate with the line card regardless of the failure of one or two of the boards, and changing a sequence of the boards such that roles of the boards change corresponding to a new sequence, wherein the serial connection is reconfigured to match the new sequence. FIGS. 15A and 15B are block flow diagrams illustrating an example architecture for sequential TCP HA utilizing multiple control boards. Message flow is indicated for incoming data in FIG. 15A at 1500. Message flow for outgoing data is shown in FIG. 15B at 1550. The three control boards may be referred to as a Primary Board (PB 110), Secondary Board (SB 120) and Third Board (TB 130) in a router or device. These boards are connected sequentially/serially for transfer of data between the boards. PB 110 connects to SB 120, which connects to TB 130. The last board, TB 130, connects to the Line Card (LC 140) or a number of LCs. Every LC 140 is connected to a peer 170 or neighbor router or device. A TCP module 150 runs on each of the boards. An App 160 (or a number of Apps) uses the TCP module 150 on each of the boards. In one embodiment, a TCP module 150 runs on each of the three or more router boards. The router boards may include circuitry to perform operations, such as a processor and memory. The memory may include one or more buffers or windows for buffering data being transferred until a transfer is complete. An App 160 (or a number of Apps) uses TCP on each of the boards.
Apps may include any software that sends and/or receives data, such as browsers, word processing programs, and a multitude of other Apps. As shown in FIG. 15B at 1550, outgoing TCP data (1. Data) originated from an App 160, such as Border Gateway Protocol (BGP), using the TCP module 150 on PB 110 (2. Data) is sent to the TCP module 150 on the next board such as SB 120, which sends the data (3. Data) to the TCP module 150 on the next sequential board, or to the LC 140 (4. Data) if the board is the last board. LC 140 sends the data (5. Data) to a peer 170 device and an Ack (6. Ack) to the TCP module 150 on the last board after receiving the Ack from the peer 170. Each TCP module 150 except for the TCP module 150 on PB 110 sends the TCP data (3. Data, 4. Data) to its corresponding Apps in order as needed, and sends an Ack (8. Ack and 9. Ack) to the TCP module 150 on the previous sequential board after receiving an Ack from the next sequential board or from the LC 140 (7. Ack). As shown in FIG. 15A at 1500, incoming TCP data (1. Data) from a peer 170 is sent to the TCP module 150 on the last board (2. Data), such as TB 130, by a LC 140 connecting to the peer 170, which stores the data in its buffer, sends the data to its App 160(s) (3. Data) and to SB 120 (3. Data) using the TCP module 150 as needed. SB 120 sends the data (4. Data) to its App 160 and to PB 110, which sends the data (5. Data) to its App 160. The TCP modules, starting with PB 110, send acknowledgments in sequence (6. Ack, 7. Ack, and 8. Ack) to the previous boards and to the LC 140. The LC 140 sends the peer 170 an Ack (9. Ack) for the data after receiving the Ack (8. Ack) for the data from the TCP module 150 on the last board. FIG. 16 is a block flow diagram 1600 illustrating further detail regarding the handling of incoming data (1. Data) received by the LC 140 from a peer device 170. TCP data from the peer 170 is sent to the TCP module 150 on the last board such as TB 130 via the LC 140.
Incoming TCP data from a peer 170 is sent to an App 160, such as BGP, as needed and sent to the TCP module 150 on the previous board (e.g., SB 120). Incoming TCP data is sent to an App 160 such as BGP as needed and sent to the TCP module 150 on the previous board (e.g., PB 110). An Ack for the incoming TCP data sent to an App 160 on PB 110, such as BGP, is sent to the TCP module 150 on the next board (e.g., SB 120). On SB 120, an Ack is sent to TCP on the next board (e.g., TB 130) after receiving the Ack from the previous board (e.g., PB 110). On TB 130, an Ack is sent to LC 140 after receiving the Ack from the previous board (e.g., SB 120). On LC 140, the Ack is sent to the peer 170 after receiving the Ack from the last board (e.g., TB 130). FIG. 17 is a block flow diagram 1700 illustrating further detail regarding the handling of outgoing data generated from an App 160 on the PB 110. Outgoing TCP data from an App 160, such as BGP, is sent to the TCP module 150 on PB 110. Outgoing TCP data is then sent to the TCP module 150 on the next board (SB 120). On SB 120, outgoing TCP data is sent to an App 160 such as BGP as needed and is sent to the TCP module 150 on the next board (e.g., TB 130). On TB 130, outgoing TCP data is sent to an App 160 such as BGP as needed and is sent to the LC 140. On LC 140, the outgoing TCP data is sent to the peer 170. LC 140 receives the Ack from the peer 170 and sends an Ack to the TCP module 150 on the last board (e.g., TB 130). TB 130, after receiving the Ack from LC 140, removes the acknowledged data from its buffer and sends an Ack to the TCP module 150 on its previous board (e.g., SB 120). SB 120, after receiving the Ack from TB 130, removes the acknowledged data from its buffer and sends an Ack to the TCP module 150 on its previous board (e.g., PB 110). PB 110, after receiving the Ack from SB 120, removes the acknowledged data from its buffer. While three boards are shown, the system may be expanded to accommodate further boards in a serially connected manner to ensure all boards are synchronized.
Incoming data is sent serially from the line card to the last board and board by board to the first or primary board once an added board or boards are synchronized. The boards each update their Apps and acknowledge receipt serially from the primary board back to the line card. For outgoing data, the PB 110 may simply send the data to more than two boards and coordinate reception of acknowledgments from each board prior to sending the data to the line card. The line card would then send an acknowledgment to each board following acknowledgment of receipt by a peer 170. In some embodiments, boards may be added and synchronized such that their TCP modules are in a same state as the TCP modules on the other boards. In one embodiment, the TCP state of the PB 110 is smoothly synchronized or backed up to the newly inserted board, such as the SB 120 or TB 130 or a fourth or further board. A configuration file may be used to determine the role each board takes, resulting in rerouting of data traffic between the boards. The configuration file may be modified by a human operator or control logic that may take into account measured board reliability, assigning the most reliable board as the PB 110, with succeeding boards assigned as the SB 120 and TB 130 if there are a sufficient number of boards still operating. In further embodiments, the architecture operates to quickly and smoothly switch over control of TCP and other modules to a live board such as TB 130 when PB 110 and SB 120 fail at the same time. FIGS. 18 and 19 are block flow diagrams 1800 and 1900 illustrating example operation when the primary board fails or is removed. The PB 110 is shown with an "X" indicating such failure or removal. FIG. 18 at 1800 shows the message flow prior to the failure, with the PB 110, SB 120, and TB 130 passing data and acknowledgments in a sequential manner, as described above with reference to FIGS. 15A, 15B, 16 and 17.
Incoming data is indicated on the left side of FIGS. 18 and 19, and outgoing data is indicated on the right side of FIGS. 18 and 19. When the primary board no longer operates, the former SB 120 becomes a new PB 110 and the former TB 130 becomes the new SB 120, as shown in FIG. 19 at 1900. Message flow is identical to that in FIG. 18, except that communications no longer make their way to the former PB 110. FIGS. 20 and 21 are block flow diagrams 2000 and 2100 illustrating example operation when the secondary board fails or is removed. The SB 120 is shown with an "X" indicating such failure or removal. FIG. 20 at 2000 shows the message flow prior to the failure, with the PB 110, SB 120, and TB 130 passing data and acknowledgments in a sequential manner, as described above with reference to FIGS. 15A, 15B, 16 and 17. Incoming data is indicated on the left side of FIGS. 20 and 21, and outgoing data is indicated on the right side of FIGS. 20 and 21. When the SB 120 no longer operates, the former TB 130 becomes a new SB 120, as shown in FIG. 21 at 2100. Message flow is changed such that communications occur between the new SB 120 and the PB 110, bypassing the former SB 120. FIGS. 22 and 23 are block flow diagrams 2200 and 2300 illustrating example operation when the third board fails or is removed. The TB 130 is shown with an "X" indicating such failure or removal. FIG. 22 at 2200 shows the message flow prior to the failure, with the PB 110, SB 120, and TB 130 passing data and acknowledgments in a sequential manner, as described above with reference to FIGS. 15A, 15B, 16 and 17. Incoming data is indicated on the left side of FIGS. 22 and 23, and outgoing data is indicated on the right side of FIGS. 22 and 23. When the TB 130 no longer operates, as represented in FIG. 23 at 2300, message flow is changed such that communications with the LC 140 occur between the LC 140 and SB 120, and then sequentially to the PB 110, bypassing the TB 130.
FIGS. 24 and 25 are block flow diagrams 2400 and 2500 of an example illustrating the failure of two boards, the PB 110 and SB 120, as indicated by the "X"s. FIG. 24 at 2400 illustrates prior sequential message flow, while FIG. 25 at 2500 indicates the new message flow following failure. In this example, the former TB 130 in FIG. 24 becomes the new PB 110 in FIG. 25. FIGS. 26 and 27 are block flow diagrams 2600 and 2700 of an example illustrating the failure of two boards, the SB 120 and TB 130, as indicated by the "X"s. FIG. 26 at 2600 illustrates prior sequential message flow, while FIG. 27 at 2700 indicates the new message flow following failure. The former PB 110 remains the PB 110 and communicates directly with LC 140, bypassing the failed boards 120 and 130. FIGS. 28 and 29 are block flow diagrams 2800 and 2900 of an example illustrating the failure of two boards, the PB 110 and TB 130, as indicated by the "X"s. FIG. 28 at 2800 illustrates prior sequential message flow, while FIG. 29 at 2900 indicates the new message flow following failure. The former SB 120 becomes the new PB 110 in FIG. 29. Messages flow directly between the new PB 110 and the LC 140, bypassing the former PB 110 and TB 130. When a new board is inserted into the system, it is integrated in three stages: batch backup of an App 160 using TCP, after batch backup and before real time backup, and after batch backup completes and real time backup starts. Given a sequence of boards in slots of a router, the PB 110 (Primary Board) may be the first board, and the LB (Last Board) is the last board. In the examples shown, TB 130 is the LB. All boards work normally, passing data as previously described. When the new board is inserted, the batch backup of an App 160 using TCP starts. LC 140 holds off sending TCP data to the LB and receiving TCP data from the LB, and the LB holds off sending TCP data to LC 140 and receiving TCP data from LC 140. The new board (AB 220) is appended to the sequence after the LB. A connection between the LB and AB 220, and one between AB 220 and LC 140, are created.
The connection between the LB and LC 140 is removed. The App 160 on the LB backs up its TCP sockets, other states and data structures to its corresponding App 160 on AB 220. After a TCP socket is created on AB 220, the state and data structures of the socket are synchronized in the TCP layer between the LB and AB 220. In the stage "after batch backup and before real time backup", the incoming and outgoing data streams may be handled differently. For the incoming TCP data stream, the TCP module 150 on the LB holds off the data delivery to the App 160 on the LB. The TCP module 150 on the LB sends the TCP module 150 on AB 220 the sequence number m corresponding to the last byte of the data delivered to the App 160 just before the holding off. The App 160 on the LB copies the data from the TCP module 150 in its input buffer to the corresponding App 160 on AB 220. The beginning of the data in the buffer should be the boundary of a data stream. The data in the input buffer of the App 160, the data in the input buffer of the TCP module 150 on AB 220, and the incoming data from a peer 170 form the continuous incoming TCP data stream in AB 220 for the socket backed up from the LB. For an outgoing TCP data stream, the App 160 on PB 110/LB holds off the data delivery to the TCP module 150. The TCP module 150 on the LB sends the TCP module 150 on AB 220 the sequence number n corresponding to the last byte of the data delivered to the TCP module 150 on the LB by the App 160 just before the holding off. The last byte should be the boundary of the data stream. The PB 110 backs up its TCP socket to its corresponding App 160 on AB 220. After the TCP socket is created on AB 220, the state and data structures of the socket (basically a copy of the state of the TCP module 150) are replicated to the TCP module 150 on AB 220. For incoming TCP data, the incoming data is synchronized during the backup period by the LC 140 sending the incoming TCP data to the TCP module 150 on AB 220.
The TCP module 150 on AB 220, however, does not deliver any incoming data to the App 160 on AB 220. This is illustrated in FIG. 30. FIG. 30 is a block flow diagram 3000 illustrating further detail regarding the handling of the incoming TCP data stream boundary when the inserted board (AB) reaches the stage "after batch backup and before real time backup". The TCP module 150 on PB 110/LB holds off the data delivery to the App 160 on PB 110. The App 160 on PB 110/LB copies the data from its input buffer to the corresponding App 160 on AB 220. The beginning of the data in the buffer should be the boundary of a data stream. The TCP module 150 on PB 110/LB sends the TCP module 150 on AB 220 the sequence number m corresponding to the last byte of the data delivered to the App 160 just before the holding off. The data in the input buffer of the App 160, the data in the input buffer of the TCP module 150 on AB 220, and the incoming data from a peer 170 form the continuous incoming TCP data stream in AB 220 for the socket backed up from PB 110/LB. After batch backup completes and real time backup starts, the TCP module 150 on AB 220 sends the incoming TCP data from the peer 170 starting at sequence number m+1 to the corresponding App 160 on AB 220. The TCP module 150 on AB 220 sends the incoming TCP data from the peer 170 to TCP on the LB/PB 110. After switchover to AB 220, TCP on AB 220 continues sending data from the peer 170 to the App 160. The TCP module 150 on AB 220 sends Acks to the peer 170 for the data received via LC 140. Alternative methods of synchronization are described below. FIG. 31 is a block flow diagram 3100 illustrating further detail regarding the handling of the outgoing TCP data stream boundary when the inserted board (AB) reaches the stage "after batch backup and before real time backup". The App 160 on PB 110/LB holds off the data delivery to the TCP module 150 on PB 110/LB.
The TCP module 150 on PB 110/LB sends the TCP module 150 on AB 220 the sequence number n corresponding to the last byte of the data delivered to the TCP module 150 by the App 160 just before the holding off. The last byte should be the boundary of the data stream. After batch backup completes and real time backup starts, an App 160 sends data to the TCP module 150 on PB 110/LB and the TCP module 150 on PB 110/LB stores it into its buffer in order. Data is sent to the TCP module 150 on AB 220. The TCP module 150 on AB 220 stores the data in its buffer in order and sends the data to the corresponding App 160 on AB 220 as needed. The TCP module 150 on AB 220 also sends the data to the peer 170. The TCP module 150 on AB 220 receives an Ack from the peer 170 via LC 140 and removes the acknowledged data from its buffer. The TCP module 150 on AB 220 sends an Ack message to PB 110/LB. When the inserted board reaches the stage "after batch backup completes and real time backup starts", the old LB sends TCP data to AB 220 (the new LB) and receives TCP data from AB 220. AB 220 (the new LB) sends TCP data to the old LB, receives TCP data from the old LB, sends TCP data to LC 140 and receives TCP data from LC 140. LC 140 sends TCP data to AB 220 and receives TCP data from AB 220 as the new LB. For the incoming TCP data stream, the TCP module 150 on AB 220 sends the incoming TCP data from the peer 170 starting at sequence number m+1 to the corresponding App 160 on AB 220. The App 160 on AB 220 starts to receive its incoming TCP data from the peer 170. For the outgoing TCP data stream, the TCP module 150 on AB 220 sends the outgoing TCP data originated from the App 160 on PB 110 starting at sequence number n+1 to the corresponding App 160 on AB 220. The App 160 on AB 220 starts to monitor the outgoing TCP data originated from the App 160 on PB 110. Some extended TCP socket options include outgoing data delivery to the App 160. This option can be set by an App 160 on AB 220. When enabled, the TCP module 150 on AB 220 sends the App 160 the outgoing TCP data originated from the corresponding App 160 on PB 110.
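The resumption of delivery at sequence number m+1 described above can be sketched as follows. The function name and the buffer representation (a mapping of start sequence number to bytes) are illustrative assumptions, not part of the specification.

```python
# Illustrative sketch: after batch backup, AB's TCP module resumes delivery
# to its App copy at sequence number m+1, draining contiguous chunks from
# its buffer until the first gap in the stream.
def resume_delivery(m, tcp_buffer):
    out = bytearray()
    seq = m + 1
    while seq in tcp_buffer:        # stop at the first gap in the stream
        chunk = tcp_buffer[seq]
        out += chunk
        seq += len(chunk)
    return bytes(out), seq - 1      # delivered bytes, new last-delivered seq

# Last byte delivered before holdoff was sequence 2; chunks buffered on AB
# start at sequences 3 and 5, so delivery resumes contiguously at 3.
data, last = resume_delivery(2, {3: b"de", 5: b"fg"})
```

The same logic applies for the outgoing stream starting at sequence number n+1.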
A further option includes incoming data delivery to the App 160. The option can be set by an App 160 on AB 220 and, when enabled, the TCP module 150 on AB 220 sends the App 160 the incoming TCP data from the peer 170. In one embodiment, the sequence of the boards may be changed, resulting in the boards switching roles. A sequence of boards may be ordered as PB 110 (i.e., B1), SB 120 (i.e., B2), TB 130 (i.e., B3), B4, . . . where B1 is board 1, B2 is board 2, etc. Changes in the order of boards mean that roles of some boards in the sequence are changed, and thus their positions in the sequence are changed. There are at least two ways to change positions of boards (also called changing the order of boards). Cold hard changes refers to changing the order of boards through removing (or bringing down) boards and inserting/appending (or bringing up) boards. E.g., for PB 110 (B1), SB 120 (B2) and TB 130 (B3) running in a router, if the old TB 130 (B3) is promoted to the new PB 110 and the old PB 110 (B1) is changed to the new TB 130, then this can be achieved by removing SB 120 (B2) and appending it (resulting in: PB 110 (B1), SB 120 (B3), TB 130 (B2)); and removing PB 110 (B1) and appending it (resulting in: PB 110 (B3), SB 120 (B2), TB 130 (B1)). In one embodiment, an array is created for possible changes in the order of boards, and actions to make the changes. For example, FIG. 32 illustrates an array 3200 for three boards PB 110 (B1), SB 120 (B2) and TB 130 (B3). The array includes a current order, followed by actions taken to change the order. FIGS. 33, 34, and 35 are block flow diagrams 3300, 3400, and 3500 illustrating the example changes, including resulting message flows. In FIG. 33 at 3300, a first change is made to change the order PB 110 (B1), SB 120 (B2), TB 130 (B3) to PB 110 (B3), SB 120 (B2), TB 130 (B1). SB 120 (B2) is removed and appended after TB 130 (B3), resulting in PB 110 (B1), SB 120 (B3), TB 130 (B2). TB 130 (B3) becomes SB 120 (B3) and SB 120 (B2) becomes TB 130 (B2). In FIG. 34 at 3400, PB 110 (B1) is removed and appended after TB 130 (B2), resulting in PB 110 (B3), SB 120 (B2), TB 130 (B1).
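The cold hard reordering above uses a single primitive, removing a board and appending it at the tail, with roles implied by position. A minimal sketch of the two-step example (board names are the B1/B2/B3 labels from the text; the function is illustrative):

```python
# Illustrative sketch of "cold hard" reordering: the only primitive is
# removing a board and appending it at the tail; roles (PB, SB, TB) are
# implied by position in the list.
def remove_and_append(order, board):
    order = [b for b in order if b != board]
    order.append(board)
    return order

order = ["B1", "B2", "B3"]                 # PB=B1, SB=B2, TB=B3
order = remove_and_append(order, "B2")     # -> B1, B3, B2
order = remove_and_append(order, "B1")     # -> B3, B2, B1 (B3 promoted to PB)
```

Two remove-and-append actions thus promote B3 to PB and demote B1 to TB, matching the example above.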
SB 120 (B3) becomes PB 110 (B3), TB 130 (B2) becomes SB 120 (B2), and PB 110 (B1) becomes TB 130 (B1). FIG. 35 illustrates the new arrangement and sequential message flow resulting from the changes at 3500. A further way to change the order of the boards is to change the connections among the boards, such as by use of software, referred to as hot soft changes. To perform such a change, all boards are frozen to hold off all data delivery and the sending of data by Apps. The boards are then moved to their expected positions in the sequence by changing the routing of data between the boards. The boards are then unfrozen, such that data transfer can occur. FIG. 36 illustrates an example array 3600 that is created for possible changes in the order of boards, and actions to make the changes by way of hot soft changes for three boards PB 110 (B1), SB 120 (B2) and TB 130 (B3). Note that a Freeze action performed for all boards before each Move action, and an Unfreeze action for all boards after the Move, are not shown in the array. Every board affected by a Move resends data if the data has not been acknowledged. Following the Freeze action, actions are performed to change the order of boards according to the array. FIGS. 37A, 37B, 38A, and 38B are block flow diagrams illustrating the hot soft change, including resulting message flows. In FIG. 37A at 3700, representing incoming data, a first change represented by arrow 3710 is made to change the order PB 110 (B1), SB 120 (B2), TB 130 (B3) to PB 110 (B2), SB 120 (B1), TB 130 (B3). The actions include a Freeze of boards B1, B2 and B3 (states) and Movement of B2 to the position before B1, resulting in PB 110 (B2), SB 120 (B1), TB 130 (B3). A further action is then used to Unfreeze boards B1, B2 and B3. The same procedure is followed for the same change in FIG. 37B at 3750, represented by arrow 3760, for outgoing data. The resulting new configurations are shown in FIGS. 38A and 38B for incoming and outgoing data respectively.
In FIG. 38A at 3800, for incoming TCP data, TB 130 (B3) resends data to SB 120 (B1) if the data has not been acknowledged, and SB 120 (B1) resends data to PB 110 (B2) if the data has not been acknowledged. For outgoing TCP data, PB 110 (B2) resends data to SB 120 (B1) if the data has not been acknowledged and SB 120 (B1) resends data to TB 130 (B3) if the data has not been acknowledged.

Acknowledgment and Synchronization Methods

FIG. 39 is a block diagram representation of a router 3900 for transmitting and receiving data packets. Router 3900 includes multiple network elements (NEs) at 3910, 3912, 3914, 3916, and 3920, labeled NE1, NE2, NE3, NE4 . . . NEn. Such multiple NEs may be referred to as a cluster of NEs. There may be n NEs in various embodiments, with n greater than 2. The NEs may be boards, nodes, or virtual machine (VM) containers in different embodiments that may operate in an active state or mode and have m backups in the same cluster operating in a standby state or mode, where the number of backups is between 1 and n−1. The states of each NE may be synchronized as described below and can support high availability against up to n−1 failures. The NEs each transmit and receive data via a router or switch 3930 that serves as a network connection to communicate with an external network 3940 that may include multiple routers indicated at 3942, 3944, and 3946, which may be referred to as peers. Note that the switch 3930 may be wired or wireless, or may be a network connection integrated individually into each NE in further embodiments, such as when NEs are serially connected to each other. Each network element may include suitable processing and memory resources for executing various applications and communication protocols. An App 3950, such as BGP, may communicate via a transmit plugin 3955, such as a TCP Tx-Plugin, and socket 3960 to a TCP module 3965. TCP module 3965 may be coupled via a receive plugin 3970, such as a TCP Rx-Plugin, to an internet protocol connection, IP 3975, for coupling to switch 3930.
FIG. 40 is a diagram illustrating the format of an example packet header 4000 for packets that are transmitted and received. In one embodiment, the packet header 4000 includes multiple 32 bit sections shown as rows and including multiple fields. A 32 bit section shown as the first row includes an 8 bit version number 4010 of a multicast protocol, an 8 bit type 4020 indicating a data packet or an acknowledgment packet, and a 16 bit packet length 4030 identifying the size of a packet including the header. A second 32 bit section, illustrated as a second row, includes a 16 bit checksum 4040 and a 16 bit AuType 4050. The checksum is a checksum of an entire packet, and the AuType is an authentication type such as clear text password and MD5 (Message Digest type 5). The last 32 bit section is an authentication field 4060 that includes an authentication encryption field to ensure that the packet is authentic. The authentication encryption value in one embodiment identifies the transmitter and receiver of the communication, and is encrypted to ensure the packet cannot be easily tampered with. FIG. 41 is a diagram illustrating the format of a data packet 4100. Data packet 4100 includes a similar packet header as packet header 4000 and also includes a data sequence number 4110 and a payload 4120, shown as data field 4120. The sequence number 4110 specifies where in the stream the packet is. FIG. 42 is a diagram illustrating the format of an acknowledgment packet (also referred to as a message) Ack 4200. Ack 4200 includes a similar packet header as packet header 4000 and also includes an Ack sequence 4210 that corresponds to sequence number 4110, acknowledging a number of bytes successfully received. FIG. 43 is a block flow diagram illustrating an example data flow among multiple network elements indicated generally at 4300. An active NE 4310 and two standby NEs 4315 and 4320 are shown in communication with a peer 4325. Packet flow is shown by numbered arrows in sequence of 1-5.
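The header layout of FIGS. 40-42 can be sketched with fixed-width packing. This is a sketch under stated assumptions: the type codes, a 32-bit authentication field, a 32-bit sequence number, and all field values below are illustrative, not values from the specification.

```python
import struct

# Illustrative packing of the described header: one 32-bit word holding
# version (8 bits), type (8 bits) and packet length (16 bits); one holding
# checksum (16 bits) and AuType (16 bits); then a 32-bit authentication
# field. A data packet appends a sequence number and the payload.
HEADER_FMT = "!BBHHHI"          # network byte order
TYPE_DATA, TYPE_ACK = 1, 2      # assumed type codes

def pack_data_packet(version, checksum, autype, auth, seq, payload):
    # Packet length counts the header, the sequence number and the payload.
    length = struct.calcsize(HEADER_FMT) + 4 + len(payload)
    header = struct.pack(HEADER_FMT, version, TYPE_DATA, length,
                         checksum, autype, auth)
    return header + struct.pack("!I", seq) + payload

pkt = pack_data_packet(1, 0, 0, 0xDEADBEEF, seq=100, payload=b"hello")
```

Unpacking with the same format string recovers the fields; the 16 bit length field at byte offset 2 matches the total packet size.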
A TCP packet 4330 is shown as being received by NE 4310, unicast from remote peer 4325. The active NE 4310 TCP Rx-Plugin 3970 stores the packet into a receive aggregation buffer, Rx Aggr Buff 4335. The data may then be forwarded to an APP on active NE 4310. A new unicast transmit Ack, u-tx-Ack 4340, may then be transmitted back to remote peer 4325. The u-tx-Ack 4340 may be sent if it differs from a previous such Ack and a timeout or window size limit is met, or upon receipt of a transmit data packet. Then, the active NE 4310 will multicast the data in the Rx Aggr. buffer 4335, as indicated by data block 4345, to both standby network elements 4315 and 4320. Note that the above qualifications of when to send the u-tx-Ack 4340 allow the aggregation of data packets into a single Ack, allowing faster communications via an Ack-sync reliable multicast. The standby network elements 4315 and 4320 receive the packet and send multicast acknowledgment (m-Ack) packets at 4350 and 4355 respectively back to active network element 4310. The standby network elements 4315 and 4320 may then forward the packets to their respective TCP 3965 and APP 3950. FIG. 44 graphically illustrates the use of multiple packet aggregation in two timing diagrams illustrated generally at 4400. Diagram 4405 illustrates aggregation of five data packets using Ack-sync reliable multicast, while diagram 4410 shows the same data packets being received without the use of aggregation. Following a previous u-Ack 4415, five unicast data packets 4420, 4421, 4422, 4423, and 4424 are received from a peer and then multicast as indicated at m-pkt 4425. m-Acks indicated generally at 4430 are then received from two or more standby NEs. A u-Ack 4435 may then be sent to the peer. While five data packets are shown as being aggregated, fewer or more data packets may be aggregated in further embodiments. Diagram 4410 shows the packets being multicast to the standby NEs following receipt by the active NE.
As each packet, u-pkt-in4420,4421,4422,4423, and4424is received from a peer, the packet is multicast to standby NEs as indicated at m-pkt4450,4451,4452,4453, and4454. However, for each packet, an m-Ack indicated at4460,4461,4462,4463, and4464respectively is received before the next u-pck-in is received. Responsive to the last m-pkt and m-Ack being received, a u-Ack4460may be sent. This sequential progression of multicasting a packet and receiving an Ack for each multicasted packet may take more time. The non aggregation shown in diagram4410and also utilizes more bandwidth, as it requires more packets to be sent. FIG.45is a block diagram illustrating an example reliable multicast in the transmit direction by multiple network elements indicated generally at4500. The network elements are numbered consistently with the reference numbers used inFIG.43. APP3950in active NE4310generates data to be transmitted to remote peer4325. The data is sent4505by APP3950to TCP Tx-Plugin3955, which sends4507the data to a Tx Aggr. buffer4510. A packet is unicast as indicated at4520from TCP module3965via IP3975to the remote peer4325. A new u-rx-Ack4530is received from the remote peer4325that is different than a previous u-rx-Ack. Responsive to such receipt, the Tx Aggr. buffer4510is triggered4535to multicast one or more transmit packets4540that are aggregated therein prior to receipt of the new u-rx-Ack4530. The packets are transmitted via the TCP tx-plugins3955. The standby nodes receive the aggregated data and send the Acks4545,4546back to the active NE4310. The standby nodes also, via the TCP Tx-Plugins forward the packets to the TCP sockets3960which also make the packets available to the APPs3950on the standby NEs such that they are synchronized with APP3950on active NE4310. FIG.46is a block diagram illustrating transmit and receive merge of communications by multiple network elements indicated generally at4600. 
The network elements are numbered consistently with the reference numbers used in FIGS. 43 and 45. In transmit and receive merge, the previous u-tx-Ack can be piggybacked 4605 with packets in the Rx Aggr. buffer 4335 to form data that includes both transmit and receive packets 4610, which is multicast from the active NE 4310 to the standby NEs 4315 and 4320. The merger may be done by including a header in a data packet that contains the latest byte sequence number of the data payload. Each standby NE provides an Ack 4545, 4546. Upon receipt of the Acks, the socket 3960 triggers the transmission of data to the remote peer as indicated at 4520. FIG. 47 is a block diagram illustrating transmit and receive merge with timeout for communications by multiple network elements indicated generally at 4700. The network elements are numbered consistently with the reference numbers used in FIGS. 43, 45 and 46. In this embodiment, data from both the Rx Aggr. buffer 4335 and from the Tx Aggr. buffer 4510 is merged as indicated at 4710 and multicast to the standby NEs as indicated at 4720. A multicast timeout is used to trigger the merge and start the multicast. The timeout may occur responsive to a set time following receipt of a new u-tx-Ack. Timing diagram 4405 also illustrates a method of controlling a TCP Ack number (Ack #) to improve multicast performance. A multicast transmission from the active NE 4310 is sent responsive to the TCP module 3965 being ready to send out an Ack. Reducing or delaying the TCP module Ack may improve the performance of Ack-sync reliable multicast. Increasing the Ack timeout and the receive window size will reduce the number of TCP Acks, thereby reducing the time for transmission. In one embodiment, the window size is a number of bytes to be sent or received from or to a buffer storing data. In TCP, the maximum window size value was originally limited to 65,535 bytes. This is changed with the TCP window scale option, which is an option to increase the receive window size allowed in TCP.
By using the window scale option, the receive window size may be increased up to a maximum value of 1,073,725,440 bytes. Some non-limiting examples of window size for Ack-sync reliable multicast include 65,535 or 17,520. The receive window (size) is the number of bytes a sender can transmit without receiving an acknowledgment. TCP sets a timeout when it sends data, and if the data is not acknowledged before the timeout expires it retransmits the data. The value (or length) of the timer is determined based on the round-trip time (RTT) of a TCP connection. A minimum TCP retransmission timeout (RTO Minimum) is 10 milliseconds. Examples of the length of a retransmission timer include, but are not limited to, 3 to 5 seconds. FIG. 48 illustrates a method at 4800 for reducing multicast Acks to improve multicast performance. The active NE does not need to wait for all multicast Acks from all the standby NEs. This is illustrated by the difference in two diagrams. Diagram 4805 illustrates waiting for Acks 4810 from all four standby NEs, labeled nd1-Ack, nd2-Ack, nd3-Ack, and nd4-Ack, between sending multicast packets 4815. Diagram 4820 illustrates waiting for only some of the Acks prior to sending the next multicast packet 4815. In this case, receipt of two Acks from any two of the standby NEs will trigger sending of the next multicast packet 4815. Acks 4810 are shown as being received from nd1 and nd3, corresponding to the first and third NEs, responsive to the first multicast packet 4815. Any two NEs providing Acks 4810 for the next multicast packet may trigger a further multicast packet transmission. When an active NE failure occurs, an election of which NE should become the new active NE may occur among the standby NEs that have acknowledged all the multicast packets at the time of the failure. The standby NEs may send out their latest Ack number, and the standby NEs having the largest Ack number may be eligible for election as the new active NE.
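The reduced-Ack scheme of FIG. 48 can be sketched as a quorum wait. The quorum size of two matches the example above; the NE names and arrival order are illustrative assumptions.

```python
# Illustrative sketch: send the next multicast packet once a quorum of
# standby NEs (here any 2, as in the example above) has acknowledged,
# rather than waiting for Acks from all standby NEs.
def quorum_reached(acks_received, quorum=2):
    return len(acks_received) >= quorum

acks = set()
sent_next = False
for ne in ["nd1", "nd3", "nd2", "nd4"]:   # Acks happen to arrive in this order
    acks.add(ne)
    if quorum_reached(acks):
        sent_next = True                   # trigger the next multicast packet
        break
```

Here the Acks from nd1 and nd3 arrive first, so the next multicast packet is triggered without waiting for nd2 and nd4.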
The election may be performed as indicated above, based on performance or another desired methodology. The use of the various forms of aggregation and the reduction of Acks as described above can relieve a performance bottleneck referred to as "Ack storming" and improve overall multicast performance by transmitting more data with fewer overall Acks and not having to wait for Acks before sending further packets. While the synchronized state in the standby nodes may be delayed due to aggregation of packets, the state update in the active node is not delayed by the aggregation, and the state of the standby nodes is recoverable. FIGS. 49, 50, 51, and 52 are example block flow diagrams, numbered consistently with FIG. 39, illustrating data and Ack flow between the router 3900 NEs 3910, 3912, 3914, 3916, and 3920 and remote peers 3942, 3944, and 3946. FIG. 49 at 4900 illustrates parallel operation of router 3900 for incoming data (1. Data) in a first example embodiment. Incoming TCP data (1. Data) from a remote APP or BGP peer such as 3942 is received at the router switch 3930 and forwarded (2. Data) to the TCP module 3965 on every NE in parallel. The TCP module 3965 on each NE sends the data to its corresponding APP 3950 (3. Data) and sends the switch 3930 an Ack (3. AckNE1 . . . AckNEn) responsive to a read request from the APP 3950. The switch then sends an Ack (4. Ack) back to the peer device 3942. FIG. 50 at 5000 illustrates serial or sequential operation of router 3900 for incoming data (1. Data) in a first example embodiment. Incoming TCP data (1. Data) from a remote APP or BGP peer such as 3942 is received at the router 3900 by the first or active node 3910, NE1, via a network connection, which may be part of each NE as a wired or wireless connection to the network. The data is then sent to the TCP module 3965 via the TCP Rx Plugin 3970 as indicated by (2. Data). The TCP module on each of the standby NEs, NE2-NEn, sends an Ack (3. AckNE1 . . . AckNEn), and sends the data to the APPs responsive to a read request from the APPs 3950.
The active NE, NE1, then sends an Ack (4. Ack) back to the peer device 3942, using the minimum sequence number among the Acks received from the TCP modules on the NEs. FIG. 51 at 5100 illustrates parallel operation of router 3900 for outgoing data (1. Data) generated by APP 3950 in the active NE1 3910. The TCP module 3965 sends the data (2. Data) to the other NEs, NE2-NEn. The APP on each NE reads the data (3. Data). The TCP module 3965 on each of the NEs then sends an Ack (3. AckNE2 . . . AckNEn) to the active NE, NE1. The data is then sent (5. Data) from the active NE, NE1, via the router switch 3930 to the remote peer 3942. An Ack (7. Ack) is sent by the remote peer 3942 and received by the switch 3930, followed by an Ack (8. Ack) sent from the switch 3930 in parallel to each of the NEs coupled to the switch. FIG. 52 at 5200 illustrates another embodiment of parallel operation of router 3900 for outgoing data (1. Data) generated by APP 3950 in the active NE1 3910. The TCP module 3965 sends the data (2. Data) to the other NEs, NE2-NEn, concurrently. The APP on each NE reads the data (3. Data). The TCP module 3965 on each of the NEs then sends an Ack (3. AckNE2 . . . AckNEn) to the active NE, NE1. The data is then sent (5. Data) from the active NE, NE1, by the router 3900 to the remote peer 3942 after receiving the Acks (AckNE2 . . . AckNEn) from the NEs. An Ack (6. Ack) is sent by the remote peer 3942 and received by the active NE1, followed by an Ack (7. Ack) sent from the active NE1 to the other NEs 2-n in parallel. FIG. 53 is a flowchart illustrating an example computer-implemented method 5300 of synchronizing states in NEs for outgoing TCP communications. States for outgoing TCP are resynchronized when a new active NE, NE1, replaces an old NE1 responsive to a failure of the old NE1, as indicated at detection operation 5310. Each of the standby NEs, NE2-NEn, sends its sequence number at operation 5320, as obtained from the received packets that have been acknowledged.
The new active NE1 then operates at 5330 to identify the minimum sequence number received. This minimum sequence number is sent to each of the other NEs at operation 5340. Each NE determines at 5350 whether or not the minimum sequence number is less than its own sequence number and then sends the data at 5360 between the two sequence numbers to the new active NE1. After receiving the data, the new NE1 sends the data from the minimum sequence number to the maximum sequence number at operation 5370 to every NE that does not have the maximum sequence number, which data is then used by each NE at operation 5380 to update its state so that all the NEs have the same state. Note that the data also contains the corresponding sequence numbers, so each NE will now have the latest data that any of the NEs had, and normal operation begins at 5390 with the new active NE. Even with aggregation of data and the use of reduced Acks, all NEs now have the same state, which is the highest state of any of the NEs that did not fail. Note that the above process works and ensures reliability of the NEs even if m devices fail at the same time or over a period of time. FIG. 54 is a flowchart illustrating an example computer-implemented method 5400 performable by a standby NE in one example embodiment. The standby NE in one embodiment executes, at operation 5410, a secondary copy of a primary application executing on an active network element, as well as a TCP module for communicating with a peer and other network elements. At operation 5420, data packets may be received that originated from a peer coupled via a network connection. Acknowledgments for the received data packets may be provided at operation 5430 by the standby NE. In the event of a network failure, the standby NE may become a new active network element at 5440. The new active NE may then communicate via the network connection to a peer and one or more further standby network elements regardless of the failure of an additional network element at 5450.
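The resynchronization of method 5300 can be sketched as follows. This is a minimal, illustrative model under stated assumptions, not the actual implementation: each NE's acknowledged data is modeled as a dictionary of sequence number to payload byte, and the function name is hypothetical.

```python
# Minimal sketch of method 5300 (FIG. 53): after failover, the new active NE
# gathers each NE's acknowledged sequence number, collects the data between
# the minimum and maximum, and redistributes it so every NE converges on the
# highest state held by any NE that did not fail.

def resynchronize(nes: dict[str, dict[int, bytes]]) -> None:
    """Bring every NE's data up to the maximum acknowledged sequence number."""
    # Operations 5320/5330: each NE reports its latest sequence number.
    latest = {name: max(data) for name, data in nes.items()}
    lo, hi = min(latest.values()), max(latest.values())
    # Operations 5350-5370: collect all data between the two sequence numbers.
    merged: dict[int, bytes] = {}
    for data in nes.values():
        merged.update({seq: b for seq, b in data.items() if lo < seq <= hi})
    # Operation 5380: send the merged data to every NE that is behind.
    for name, data in nes.items():
        if latest[name] < hi:
            data.update(merged)


nes = {
    "NE1": {1: b"a", 2: b"b", 3: b"c"},  # most advanced state
    "NE2": {1: b"a", 2: b"b"},           # behind by one byte
    "NE3": {1: b"a"},                    # behind by two bytes
}
resynchronize(nes)
assert all(max(d) == 3 for d in nes.values())  # all NEs share the same state
```

Because the redistributed data carries its sequence numbers, each NE can apply exactly the bytes it is missing, which matches the text's observation that every NE ends at the highest state of any surviving NE.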
Upon becoming the new active NE, the method 5300 of synchronizing the standby NEs may be performed. FIG. 55 is a block diagram illustrating circuitry for implementing one or more boards and line cards and for performing methods according to example embodiments. Not all components need be used in various embodiments. One example computing device in the form of a computer 5500 may include a processing unit 5502, memory 5503, removable storage 5510, and non-removable storage 5512. Although the example computing device is illustrated and described as computer 5500, the computing device may take different forms in different embodiments. For example, the computing device may instead be a smartphone, a tablet, a smartwatch, or another computing device including the same or similar elements as illustrated and described with regard to FIG. 55. Devices such as smartphones, tablets, and smartwatches are generally collectively referred to as mobile devices or user equipment. Further, although the various data storage elements are illustrated as part of the computer 5500, the storage may also or alternatively include cloud-based storage accessible via a network, such as the Internet, or server-based storage. Memory 5503 may include volatile memory 5514 and non-volatile memory 5508. Computer 5500 may include, or have access to a computing environment that includes, a variety of computer-readable media, such as volatile memory 5514 and non-volatile memory 5508, removable storage 5510, and non-removable storage 5512. Computer storage includes random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM) and electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), Digital Versatile Disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium capable of storing computer-readable instructions.
Computer 5500 may include or have access to a computing environment that includes input 5506, output 5504, and a communication connection 5516. Output 5504 may include a display device, such as a touchscreen, that also may serve as an input device. The input 5506 may include one or more of a touchscreen, touchpad, mouse, keyboard, camera, one or more device-specific buttons, one or more sensors integrated within or coupled via wired or wireless data connections to the computer 5500, and other input devices. The computer may operate in a networked environment using a communication connection to connect to one or more remote computers, such as database servers. The remote computer may include a personal computer (PC), server, router, network PC, a peer device or other common network node, or the like. The communication connection may include a Local Area Network (LAN), a Wide Area Network (WAN), cellular, WiFi, Bluetooth, or other networks. Computer-readable instructions stored on a computer-readable medium are executable by the processing unit 5502 of the computer 5500. A hard drive, CD-ROM, and RAM are some examples of articles including a non-transitory computer-readable medium such as a storage device. The terms computer-readable medium and storage device do not include carrier waves to the extent carrier waves are deemed too transitory. Storage can also include networked storage, such as a storage area network (SAN), indicated at 5520. EXAMPLES Parallel high availability examples include: 1.
A system includes a primary board having circuitry for executing a primary application and a TCP module, a secondary board having circuitry for executing a secondary copy of the primary application and a secondary TCP module, a third board having circuitry for executing a third copy of the primary application and a third TCP module, and a line card coupled to all the boards, wherein the primary board, secondary board, and third board are coupled in parallel to transfer data and acknowledgments among the primary board, secondary board, and third board via the respective TCP modules of the primary board, secondary board, and third board, and wherein the boards are reconfigurable to communicate with the line card regardless of the failure of one or two of the boards. 2. The system of example 1 wherein each TCP module includes an input buffer. 3. The system of any of examples 1-2 wherein the circuitry comprises a processor. 4. The system of example 3 wherein each TCP module includes an input buffer and an output buffer. 5. A method includes receiving incoming data from a peer device via a line card in a router, sending the received incoming data to TCP modules in at least three router boards, providing the data to an application duplicated on the at least three router boards, each router board acknowledging receipt of the data via the TCP modules, and acknowledging receipt of the data to the peer device via the line card responsive to all the boards acknowledging receipt of the data. 6. The method of example 5 wherein each router board acknowledges receipt of the data via the TCP modules via the parallel connection to the line card. 7. The method of any of examples 5-6 wherein each router board explicitly acknowledges receipt of the data via the TCP modules via a parallel connection to a primary board, and wherein the primary board acknowledges receipt to the line card. 8.
The method of any of examples 5-6 wherein each router board explicitly acknowledges receipt of the data via the TCP modules via a parallel connection to a primary board, requests missing data from the primary board, and wherein the primary board acknowledges receipt to the line card. 9. The method of any of examples 5-6 wherein each router board implicitly acknowledges receipt of the data via the TCP modules via a parallel connection to a primary board by requesting missing data from the primary board, and wherein the primary board acknowledges receipt to the line card when no requests are received after a timer expires. 10. The method of example 5 wherein each router board implicitly acknowledges receipt of the data via the TCP modules via a parallel connection to a primary board by requesting missing data from the primary board, sends a request message for missing data, and wherein the primary board acknowledges receipt to the line card when no requests are received after a timer expires. 11. The method of example 10 wherein a time for the timer is less than the time for a TCP retransmission timer. 12. The method of any of examples 5-11 and further including holding off on delivering data to the application on the primary board until the data is synchronized with a newly added board, sending a sequence number m corresponding to a last byte of the data delivered to the application before holding off, copying the data from the primary board TCP module to the newly added board TCP module, and delivering data to the applications on all boards responsive to completion of the copying of the data. 13.
A method includes receiving data from an application running on a primary board in a router, the data being received by a TCP module on the primary board, the TCP module on the primary board providing the received data in parallel to at least two other boards, each having a TCP module and a copy of the application, the TCP module on the primary board providing the received data to a line card coupled in parallel to all the boards, and providing an acknowledgment to each board in parallel from a peer device responsive to successful delivery of the data to the peer device. 14. The method of example 13 wherein the data is provided from the TCP module on the primary board to the other boards without receipt of an explicit acknowledgment. 15. The method of example 13 wherein the other boards provide an explicit acknowledgment for data provided from the primary board and wherein the primary board sends the data to the line card upon receipt of such acknowledgments from each board. 16. The method of example 13 wherein the other boards provide a request message responsive to missing bytes of the data, serving as an implied acknowledgment for data provided from the primary board, and wherein the primary board sends the data to the line card upon not receiving a request from any of the boards after a timer expires. 17. The method of any of examples 13-16 and further including the primary board backing up TCP sockets to a new board that has been inserted into a router slot, replicating the TCP socket on the new board, and synchronizing the received data on the TCP module on the new board. 18. The method of example 17 wherein the received data is synchronized on the new board without an explicit acknowledgment. 19. The method of example 17 wherein the received data is synchronized on the new board with an explicit acknowledgment from the new board. 20. The method of example 17 wherein the received data is synchronized on the new board with an implicit acknowledgment. 21.
The method of any of examples 13-20 and further including holding off on delivering data to the application on the newly added board until the data is synchronized with the primary board, sending a sequence number n corresponding to a last byte of the data delivered to the application before holding off, copying the data from the primary board TCP module to the newly added board TCP module, and delivering data to the applications on all boards responsive to completion of the copying of the data. Sequential high availability examples include: 1. A system includes a primary board having circuitry for executing a primary application and a TCP module, a secondary board having circuitry for executing a secondary copy of the primary application and a secondary TCP module, a third board having circuitry for executing a third copy of the primary application and a third TCP module, and a line card coupled to the third board, wherein the primary board, secondary board, and third board are coupled sequentially to transfer data and acknowledgments between them sequentially via their respective TCP modules, and wherein the boards are reconfigurable to communicate with the line card regardless of the failure of one or two of the boards. 2. The system of example 1 wherein the circuitry comprises a processor. 3. The system of example 2 wherein each TCP module includes an input buffer and an output buffer. 4. The system of example 1 wherein the system comprises a switch. 5.
A method includes receiving incoming data from a peer device via a line card in a router, sending the received incoming data from the line card via a serial connection to the third board, from the third board to the secondary board, and from the secondary board to the primary board in sequence, wherein TCP modules in the boards receive the incoming data, providing the data to an application duplicated on the at least three boards via the TCP modules on each board, each board acknowledging receipt of the data via the TCP modules and serial connection in sequence from the primary board, through the secondary board and the third board, to the line card, and acknowledging receipt of the data to the peer device via the line card responsive to all the boards acknowledging receipt of the data. 6. The method of example 5 wherein the primary board sends an acknowledgment after providing the data to the application on the primary board and each succeeding board sends an acknowledgment on the serial connection after receipt of an acknowledgment from a preceding board. 7. The method of any of examples 5-6 and further comprising modifying the serial connection between the remaining boards responsive to a board failing. 8. The method of example 7 and further comprising changing roles of boards that have not failed such that one board operates as the primary board and is the furthest board along the serial connection from the line card. 9. The method of example 8 wherein the serial connection and roles of the boards are changed responsive to two boards failing. 10. The method of example 9 wherein the line card resends data to the board having the role of primary board responsive to data not being acknowledged. 11.
The method of any of examples 5-10 and further comprising, responsive to a new board being inserted, holding off on the line card sending data to a last board, holding off on the last board sending data to the line card, synchronizing states and data structures of an application using TCP on the last board to a corresponding application on the new board, backing up TCP sockets on the new board, synchronizing a state and data structures of the TCP sockets between the last board and the new board, creating a connection between the last board and the new board and between the new board and the line card, and removing the connection between the last board and the line card. 12. The method of example 11 and further including sending a sequence number m corresponding to a last byte of data delivered to the application from the last board to the new board, copying data in the application's TCP input buffer on the last board to a corresponding application on the new board such that a beginning of the data is a boundary of a data stream, and continuing to transfer data via the serial connection to the boards, including the new board. 13. The method of example 12 wherein continuing to transfer data includes sending the incoming TCP data from the peer device from sequence number m+1 from the TCP module on AB to the corresponding application on AB, and wherein the application on AB starts to snoop its incoming TCP data from the peer device. 14.
A method includes receiving data from an application running on a primary board in a router, the data being received by a TCP module on the primary board, the TCP module on the primary board providing the received data via a serial connection through at least two other boards each having a TCP module and a copy of the application, wherein one of the other boards is a last board, the TCP module on the last board providing the received data to a line card coupled via the serial connection to the last board, and providing an acknowledgment to each board in succession via the serial connection responsive to successful provision of the data by the line card to a peer device. 15. The method of example 14 wherein each TCP module receiving the data from a preceding TCP module updates its application with the received data. 16. The method of example 15 wherein the last board sends an acknowledgment to a preceding board on the serial connection responsive to receiving an acknowledgment from the line card and each succeeding board out to the primary board sends an acknowledgment on the serial connection after receipt of an acknowledgment from a preceding board and removes the data from a TCP buffer. 17. The method of any of examples 14-16 and further comprising modifying the serial connection between the remaining boards responsive to a board failing. 18. The method of example 17 and further comprising changing roles of boards that have not failed such that one board operates as the primary board and is the furthest board along the serial connection from the line card. 19. The method of example 18 wherein the serial connection and roles of the boards are changed responsive to two boards failing. 20.
The method of any of examples 14-19 and further including, responsive to a new board being inserted, holding off on the line card sending data to a last board, holding off on the last board sending data to the line card, backing up TCP sockets on the new board, synchronizing a state and data structures of the TCP sockets between the last board and the new board, synchronizing states and data structures of an application using TCP on the last board to a corresponding application on the new board, creating a connection between the last board and the new board and between the new board and the line card, and removing the connection between the last board and the line card. 21. The method of example 20 and further comprising sending a sequence number n corresponding to a last byte of data delivered to the application from the last board to the new board. 22. A method includes coupling a primary board having circuitry for executing a primary application and a TCP module, a secondary board having circuitry for executing a secondary copy of the primary application and a secondary TCP module, a third board having circuitry for executing a third copy of the primary application and a third TCP module, and a line card in series to transfer data and acknowledgments between them sequentially via their respective TCP modules, wherein the boards are reconfigurable to communicate with the line card regardless of the failure of one or two of the boards, and changing a sequence of the boards such that roles of the boards change corresponding to a new sequence, wherein the serial connection is reconfigured to match the new sequence. 23. The method of example 22 wherein the sequence of boards is changed by bringing down and bringing up boards. 24. The method of example 23 wherein the sequence of boards is changed by use of software to change the order of serial connections. 25.
The method of example 24 and further comprising freezing the boards prior to changing the order of serial connections and unfreezing the boards after changing the order of serial connections. Although a few embodiments have been described in detail above, other modifications are possible. For example, the logic flows depicted in the figures do not require the particular order shown, or sequential order, to achieve desirable results. Other steps may be provided, or steps may be eliminated, from the described flows, and other components may be added to, or removed from, the described systems. Other embodiments may be within the scope of the following claims.
11863371 DESCRIPTION OF EMBODIMENTS Hereinafter, an embodiment of the present invention will be described with reference to the drawings. However, constituent portions having corresponding functions will be assigned the same reference characters in all drawings of the present specification, and descriptions thereof will be omitted when appropriate. <Configuration of Embodiment> FIG. 1 is a diagram showing a configuration of an IoT device management system provided with an IoT device management apparatus according to an embodiment of the present invention. An IoT device management system (also referred to as a system) 10A shown in FIG. 1 differs from the conventional system 10 (FIG. 12) in that an IoT device management apparatus (also referred to as a management apparatus) 40 is provided. The management apparatus 40 is connected to a plurality of cameras 11 to 15, a plurality of applications 11A to 15A, and a plurality of computes 31 and 32. As will be described later, the management apparatus 40 performs prioritization processing of the cameras 11 to 15, which are IoT devices, as well as distributed arrangement processing and recovery (also referred to as healing) processing of the applications 11A to 15A upon a failure of the cameras 11 to 15 or the computes 31 and 32. As shown in FIG. 2, the management apparatus 40 is provided with an application control unit 41 and a failure detecting unit 42. The application control unit 41 is configured so as to include a locality calculating unit 44, a priority calculating unit 45, a first DB (Data Base) 46, a second DB 47, and an application generation recovery control unit 48. It should be noted that the application generation recovery control unit 48 constitutes the control unit described in the claims. The priority calculating unit 45 prioritizes the respective cameras 11 to 15 in an order in which various types of photographic data (acquired information) are obtained.
When the pieces of photographic data of the cameras 11 to 15 are input to the applications 11A to 15A via the computes 31 and 32, the applications 11A to 15A perform processing of analyzing the people and types of items in each piece of photographic data. The priority calculating unit 45 prioritizes the respective cameras 11 to 15 in accordance with the number of types of data following the processing. A method of the prioritization will be described with reference to FIG. 3. The cameras 11 to 13 are arranged in a first area 21. Photographic data 11D of the camera 11 among the cameras includes "Woman 1: pink clothes, 30s, canned food, milk, cart" as information representing a woman 26 in her 30s and wearing pink clothes and various items 26a in a shopping cart. Furthermore, the photographic data 11D includes "Man 1: green clothes, 30s, beer, Woman 2: . . . " as information representing a man 23 in his 30s wearing green clothes and carrying a shopping bag containing beer (not illustrated) already paid for, two women 24 and 25, and the like. Photographic data 12D of the camera 12 includes "Woman 1: pink clothes, 30s, canned food, milk, eggs, cart" as information representing the woman 26 in her 30s and wearing pink clothes and various items 26b in a shopping cart. Photographic data 13D of the camera 13 includes "Woman 1: pink clothes, 30s, canned food, cart" as information representing the woman 26 in her 30s and wearing pink clothes and various items 26c in a shopping cart. Among the cameras 14 and 15 arranged in the second area 22, photographic data 14D of the camera 14 includes "Woman 1: apron, 30s, bread" as information representing a woman 27 in her 30s and wearing an apron and various items 27a in a shopping bag being carried by the woman 27. In a similar manner to the camera 14, photographic data 15D of the camera 15 includes "Woman 1: apron, 30s, bread" as information representing the woman 27 in her 30s and wearing an apron and the various items 27a in a shopping bag being carried by the woman 27.
Based on the pieces of photographic data 11D to 15D described above, the priority calculating unit 45 prioritizes the cameras 11 to 15. The highest priority is to be given to the camera that obtains the largest amount of information. As described above, since the photographic data 11D of the camera 11 contains the largest amount of information, the camera 11 is given the highest priority (P1). Next, the second highest priority is to be given to a camera that obtains the largest amount of information excluding data overlapping with the highest priority (P1). In this case, the photographic data 12D of the camera 12 is only "eggs" when excluding data overlapping with the photographic data 11D of the highest-priority camera 11, and the photographic data 13D of the camera 13 is "0". Therefore, since the amount of information in the photographic data 14D or 15D of the camera 14 or 15, which is separated from the camera 11, is the second largest, for example, the camera 14 is given the second highest priority (P2). However, since the amounts of information in the pieces of photographic data 14D and 15D of the cameras 14 and 15 are the same, one of the cameras 14 and 15 is randomly given the second highest priority. The third highest priority is given to the camera with the largest amount of information that remains after excluding data overlapping with the highest and second highest priorities. Specifically, the camera 12 is given the third highest priority (P3). Through similar processing, the camera 13 is given the fourth highest priority (P4) and the camera 15 is given the fifth highest priority (P5). In this manner, the priority calculating unit 45 determines the priorities of the cameras 11 to 15 and stores information on the priorities in the second DB 47 (FIG. 2). When prioritization is performed in this manner, priorities are determined solely based on the pieces of photographic data 11D to 15D obtained from the cameras 11 to 15, regardless of the performances of the cameras 11 to 15.
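The prioritization above is a greedy selection: at each step, pick the camera whose data adds the most information not already covered by higher-priority cameras. It can be sketched as follows, with camera names and item sets taken loosely from the store example in the text; the function name is illustrative.

```python
# Hypothetical sketch of the prioritization of FIG. 3: repeatedly pick the
# camera whose photographic data adds the most information not already
# covered by higher-priority cameras (ties broken arbitrarily, as the text
# notes the second priority is assigned randomly between equal cameras).

def prioritize(data: dict[str, set[str]]) -> list[str]:
    """Return camera names ordered from highest to lowest priority."""
    covered: set[str] = set()
    order: list[str] = []
    remaining = dict(data)
    while remaining:
        # Pick the camera with the largest amount of non-overlapping data.
        best = max(remaining, key=lambda cam: len(remaining[cam] - covered))
        order.append(best)
        covered |= remaining.pop(best)
    return order


data = {
    "camera11": {"woman1", "canned food", "milk", "cart", "man1", "beer"},
    "camera12": {"woman1", "canned food", "milk", "eggs", "cart"},
    "camera13": {"woman1", "canned food", "cart"},
    "camera14": {"woman2", "apron", "bread"},
    "camera15": {"woman2", "apron", "bread"},
}
order = prioritize(data)
assert order[0] == "camera11"                 # largest amount of information
assert order[1] in ("camera14", "camera15")   # most new data after camera11
```

This matches the text's outcome: camera 11 first, then a camera from the second area (its data does not overlap camera 11 at all), with camera 12 contributing only "eggs" beyond what is already covered.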
In addition, the priorities are updated by the priority calculating unit 45 by regularly receiving the pieces of photographic data 11D to 15D of the cameras 11 to 15. Furthermore, a performance level (an application performance level) or an amount of resource allocation of the applications 11A to 15A (FIG. 1) may be changed in accordance with the priorities. For example, the amount of resource allocation is increased and the application performance level is raised for the corresponding applications 11A to 15A in descending order of the priorities of the cameras 11 to 15. In the example shown in FIG. 3, the amount of resource allocation is increased and the application performance level is raised in the order of the application 11A of the camera 11 (P1), the application 14A of the camera 14 (P2), the application 12A of the camera 12 (P3), the application 13A of the camera 13 (P4), and the application 15A of the camera 15 (P5). Alternatively, as shown in FIG. 4, a higher priority may be given to an application capable of covering a larger amount of information. In the present example, the applications 11A and 14A indicated by solid lines have the largest amount of resource allocation and enable detailed analysis, the application 12A indicated by a dashed line has the second largest amount of resource allocation and enables analysis at a standard level, and the applications 15A and 13A indicated by dashed-dotted lines have the third largest amount of resource allocation and enable simple analysis. The applications 11A and 14A are associated with the compute 31, the applications 12A and 15A are associated with the compute 32, and the application 13A is associated with a compute 33. On the other hand, with respect to cameras (for example, the cameras 13 and 15) whose priorities are lower than a predetermined rank, the control unit 48 may prevent the pieces of photographic data 13D and 15D from being analyzed by the applications 13A and 15A. According to this processing, the processing load on the system 10A can be reduced.
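The priority-based allocation can be sketched as a simple rank-to-tier mapping. The tier names and the cutoff rank are assumptions for illustration, not values from the text; the text only states that higher-priority cameras get more resources and that cameras below a predetermined rank are not analyzed.

```python
# Illustrative sketch of priority-based resource allocation: applications
# tied to higher-priority cameras get more resources (detailed analysis),
# and cameras at or below a cutoff rank are skipped entirely, reducing the
# processing load. Tier names and the cutoff are hypothetical.

TIERS = ["detailed", "standard", "simple"]


def allocate(priorities: list[str], cutoff: int) -> dict[str, str]:
    """Map each camera to an analysis tier by rank; skip those past cutoff."""
    plan: dict[str, str] = {}
    for rank, camera in enumerate(priorities):
        if rank >= cutoff:
            plan[camera] = "skipped"  # photographic data is not analyzed
        else:
            plan[camera] = TIERS[min(rank, len(TIERS) - 1)]
    return plan


plan = allocate(["camera11", "camera14", "camera12", "camera13", "camera15"],
                cutoff=3)
assert plan["camera11"] == "detailed"
assert plan["camera13"] == "skipped" and plan["camera15"] == "skipped"
```

With a cutoff of three, the two lowest-priority cameras (13 and 15 in the FIG. 3 ordering) are excluded from analysis, mirroring the load-reduction behavior described above.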
Next, the locality calculating unit 44 shown in FIG. 2 calculates localities of the cameras 11 to 15. The locality is an index indicating whether or not a plurality of (for example, two) cameras are arranged close to each other, and a higher locality indicates that the two cameras are arranged closer to each other. When calculating the locality, the locality calculating unit 44 obtains a degree of coincidence between pieces of photographic data of a plurality of cameras, such as the pieces of photographic data 14D and 15D of the two cameras 14 and 15, and determines that the higher the degree of coincidence, the higher the locality. For example, since the degree of coincidence between the pieces of photographic data 14D and 15D of the two cameras 14 and 15 is a complete coincidence, the locality calculating unit 44 determines that the locality between the cameras 14 and 15 is highest and calculates locality information reflecting a result of the determination. The locality information is stored in the first DB 46. The control unit 48 obtains a pair of cameras with a high locality from the locality information stored in the first DB 46 and controls the pair of cameras so that, according to an anti-affinity rule, each camera is associated with computes 31 and 32 that are as different from each other as possible. For example, among the cameras 11, 12, and 13 arranged in the first area 21, the camera 12 and the camera 13 have a high degree of coincidence and a high locality. Therefore, as shown in FIG. 4, the control unit 48 performs control so as to associate one camera 12 with the compute 32 and associate the other camera 13 with another compute 33. Next, assigning a score indicating the levels of priority and locality of the cameras 11 to 15 described above will be explained. First, scoring of priorities will be explained using an example of a service for checking purchase information of a shopper.
In this case, test trials are regularly performed, and a relationship between a priority score of each camera and a locality between respective cameras is obtained from data produced by the test over a specific period in the past. The relationship of locality is obtained as follows from a degree of coincidence of information on a purchase of items by a person with a high identity per unit time between a certain pair of cameras. For example, the number of pieces of information obtained from the photographic data 14D of the camera 14 shown in FIG. 5 is five pieces of information: “bread, carrots, daikon radish, cabbage, and tomatoes”. The number of pieces of information obtained from the photographic data 15D of the camera 15 is assumed to be three pieces of information: “bread, carrots, and leeks”. In this case, the number of pieces of information that coincide between the cameras 14 and 15 is the two pieces of information “bread and carrots”. The control unit 48 obtains a correlation coefficient of photographic information (locality information) of both cameras 14 and 15 from equation (1) below.

(Number of coinciding pieces of information × number of cameras)/(number of pieces of photographed information of one camera + number of pieces of photographed information of the other camera) = correlation coefficient  (1)

By substituting the numerical values of the example shown in FIG. 5 described above into equation (1), since (2×2)/(5+3) = 0.5, the correlation coefficient is obtained as 0.5. The correlation coefficient is used as follows. When the pieces of photographed information of the cameras 14 and 15 completely coincide with each other, since the numerator and denominator of equation (1) are the same, the correlation coefficient is “1”. In the case of the correlation coefficient “1”, by associating the camera 14 and the camera 15 with different computes 31 and 32 according to control by the control unit 48, even if one camera 14 fails, the other camera 15 can make up for the failed camera 14.
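Equation (1) can be checked numerically with the FIG. 5 values. The following Python sketch is illustrative; the function name and list-based input format are assumptions.

```python
# Correlation coefficient of equation (1):
# (coinciding items x number of cameras) /
# (items of one camera + items of the other camera)

def correlation_coefficient(info_a, info_b, num_cameras=2):
    coinciding = len(set(info_a) & set(info_b))
    return (coinciding * num_cameras) / (len(info_a) + len(info_b))

cam14 = ["bread", "carrots", "daikon radish", "cabbage", "tomatoes"]
cam15 = ["bread", "carrots", "leeks"]
print(correlation_coefficient(cam14, cam15))  # (2 * 2) / (5 + 3) = 0.5

# Complete coincidence yields 1, complete non-coincidence yields 0.
print(correlation_coefficient(cam14, cam14))      # 1.0
print(correlation_coefficient(cam14, ["leeks"]))  # 0.0
```

The two boundary cases mirror the text: a coefficient of 1 triggers placement on different computes, while a coefficient of 0 leaves the placement unconstrained.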
For example, when the correlation coefficient exceeds 0.5, the control unit 48 associates the camera 14 and the camera 15 with computes 31 and 32 that differ from each other. In this case, the failure detecting unit 42 detects a failure of the cameras 11 to 15 or the computes 31 and 32 and notifies the control unit 48 of the failure. For example, when the camera 14 with the second highest priority (P2) fails as indicated by x in FIG. 6, the application 14A related to the camera 14 also becomes inoperable as indicated by x. The control unit 48, having been notified of the failure from the failure detecting unit 42 (FIG. 2), raises the priority of the camera 15 that covers the same photographic information as the failed camera 14 from fifth highest (P5) to second highest (P2), the same rank as the failed camera 14. In doing so, the application 15A associated with the compute 32 is set to the application 15A indicated by a solid line, which has the largest amount of resource allocation and which enables detailed analysis, as indicated by an arrow Y1. Due to this processing, the same photographic information as the failed camera 14 can be obtained by the camera 15. It should be noted that this processing is a temporary measure and, subsequently, regular prioritization is performed once again to set correct priorities. On the other hand, when the pieces of photographed information of the cameras 14 and 15 shown in FIG. 5 are completely non-coincident with each other, since the numerator of equation (1) is “0”, the correlation coefficient is “0”. In the case of the correlation coefficient “0”, since there is no correlative relationship between the cameras 14 and 15, both cameras 14 and 15 may be associated with the same compute 32 or associated with different computes 31 and 32. For example, when the correlation coefficient is at or below 0.5, the control unit 48 may associate the camera 14 and the camera 15 with the same compute 32 or with computes 31 and 32 that differ from each other.
Next, the control unit 48 shown in FIG. 2 obtains a priority score of each of the cameras 11 to 15. A priority score indicates the number of pieces of purchase information of a shopper that is obtained per unit time from video information of cameras. For example, as described above, let us assume that the number of pieces of photographic information obtained from the camera 14 shown in FIG. 5 is “5”, the number of pieces of photographic information obtained from the camera 15 is “3”, and among these pieces of photographic information, the number of pieces of photographic information coinciding between the cameras 14 and 15, namely “bread and carrots”, is “2”. In this case, in accordance with the prioritization method described earlier, the control unit 48 obtains “5” for “bread, carrots, daikon radish, cabbage, and tomatoes” as the priority score of the camera 14 and “1” for “leeks” as the priority score of the camera 15. The priority score is an index indicating that, the higher the score, the higher the priority. In other words, the priority score is an index such that, the higher the priority of the cameras 11 to 15 of an application, the larger an amount of application allocation. In addition, the priority score is also an index of healing prioritization indicating which camera is to be recovered first when the compute 31 or 32 fails. For example, as indicated by x in FIG. 7, in the event that a failure occurs in the compute 31, when healing the applications 11A and 14A associated with the compute 31, the control unit 48 refers to the priority information in the second DB 47 and sequentially performs healing of the applications 11A and 14A in a descending order of priority of the cameras 11 and 14. First, the application 11A of the camera 11 with the highest priority is associated with the compute 32 as indicated by an arrow Y2 and, next, the application 14A of the camera 14 with the second highest priority is associated with the compute 33 as indicated by an arrow Y3.
The association is reflected onto the priority information in the second DB 47. In addition, as shown in FIG. 1, when a camera (for example, the camera 11) with a high priority among the cameras 11 to 13 associated with the same compute 31 fails, the control unit 48 sequentially raises the priorities of the normal cameras 12 and 13 whose priorities are lower than that of the failed camera. The raising is reflected onto the priority information in the second DB 47. Due to the raising of the priorities, the photographic data that should have been obtained from the failed camera 11 can be covered. In addition, when the failed camera 11 is recovered after raising the priorities, the control unit 48 once again prioritizes all cameras 11 to 15 including the recovered camera 11.

<Hardware Configuration>

The IoT device management apparatus 40 according to the embodiment described above is realized by, for example, a computer 100 configured as shown in FIG. 8. Hereinafter, the management apparatus 40 will be described as an example. FIG. 8 is a hardware configuration diagram showing an example of the computer 100 that realizes the functions of the management apparatus 40. The computer 100 has a CPU (Central Processing Unit) 101, a ROM (Read Only Memory) 102, a RAM (Random Access Memory) 103, an HDD (Hard Disk Drive) 104, an input/output I/F (Interface) 105, a communication I/F 106, and a media I/F 107. The CPU 101 operates based on a program stored in the ROM 102 or the HDD 104 and controls the respective functional units. The ROM 102 stores a boot program that is executed by the CPU 101 when the computer 100 is started up, programs related to the hardware of the computer 100, and the like. The CPU 101 controls an output apparatus 111 such as a printer or a display and an input apparatus 110 such as a mouse or a keyboard via the input/output I/F 105. The CPU 101 acquires data from the input apparatus 110 or outputs generated data to the output apparatus 111 via the input/output I/F 105.
The HDD 104 stores a program to be executed by the CPU 101 and data and the like to be used by the program. The communication I/F 106 receives data from another apparatus (not illustrated) via a communication network 112 and outputs the received data to the CPU 101, and transmits data generated by the CPU 101 to the other apparatus via the communication network 112. The media I/F 107 reads a program or data stored in a recording medium 113 and outputs the program or data to the CPU 101 via the RAM 103. The CPU 101 loads a program related to desired processing onto the RAM 103 from the recording medium 113 via the media I/F 107 and executes the loaded program. The recording medium 113 is an optical recording medium such as a DVD (Digital Versatile Disc) or a PD (Phase change rewritable Disk), a magneto-optical recording medium such as an MO (Magneto Optical disk), a magnetic recording medium, a conductor memory tape medium, a semiconductor memory, or the like. For example, when the computer 100 functions as the management apparatus 40 according to the embodiment, the CPU 101 realizes the functions of the management apparatus 40 by executing a program loaded onto the RAM 103. In addition, data in the RAM 103 is stored in the HDD 104. The CPU 101 reads a program related to desired processing from the recording medium 113 and executes the loaded program. Alternatively, the CPU 101 may read a program related to desired processing from another apparatus via the communication network 112.

<Operations According to Embodiment>

Next, operations of the IoT device management system 10A according to the present embodiment will be described with reference to the flow charts shown in FIGS. 9 to 11.

<Initialization Operation>

First, an operation of initialization of the system 10A shown in FIG. 1 will be described with reference to the flow chart shown in FIG. 9. In the system 10A in an initial state, which of the computes 31 and 32 the applications 11A to 15A of the cameras 11 to 15 are to be associated with is not determined.
In consideration thereof, the applications 11A to 15A corresponding to the cameras 11 to 15 are temporarily associated with the computes 31 and 32. In addition, in step S1 shown in FIG. 9, the pieces of photographic data 11D to 15D of the cameras 11 to 13 arranged in the first area 21 and the cameras 14 and 15 arranged in the second area 22 in a shopping center are analyzed by the applications 11A to 15A. As a result of the analysis, various types of information such as people and purchased items in the pieces of photographic data 11D to 15D shown in FIG. 3 are obtained and output to the locality calculating unit 44 and the priority calculating unit 45 in the application control unit 41 of the management apparatus 40. Next, in step S2, the locality calculating unit 44 derives localities of the cameras 11 to 15 from the various types of information in the input pieces of photographic data 11D to 15D as follows. Specifically, the locality calculating unit 44 obtains a degree of coincidence of the respective pieces of photographic data 11D to 15D using, for example, a correlation coefficient, and determines a locality such that the higher the degree of coincidence, the higher the locality. For example, since the degree of coincidence between the pieces of photographic data 14D and 15D (FIG. 3) of the two cameras 14 and 15 is a complete coincidence, the locality calculating unit 44 determines that the locality between the cameras 14 and 15 is highest. Next, the locality calculating unit 44 determines that the locality between the cameras 12 and 13 is next highest and, further, determines that the locality of the camera 11 with respect to the cameras 12 and 13 is next highest. The locality calculating unit 44 stores locality information reflecting a result of the determination in the first DB 46. Next, in step S3, the priority calculating unit 45 determines priorities of the cameras 11 to 15 as follows in accordance with the number of types of the input pieces of photographic data 11D to 15D.
In other words, as shown in FIG. 3, since the photographic data 11D of the camera 11 contains the largest amount of information, the priority calculating unit 45 gives the camera 11 the highest priority (P1). Next, the priority calculating unit 45 gives a second highest priority (P2) to the camera 14 that obtains the largest amount of information excluding data overlapping with the highest priority. Next, the priority calculating unit 45 gives a third highest priority (P3) to the camera 12 with the largest amount of information that remains after excluding data overlapping with the highest priority and the second highest priority, and gives a fourth highest priority (P4) to the camera 13 and a fifth highest priority (P5) to the camera 15 through similar processing. Next, in step S4, the control unit 48 shown in FIG. 2 determines the computes 31 and 32 which the applications 11A to 15A that handle the pieces of photographic data 11D to 15D of the cameras 11 to 15 are to be associated with, and the amounts of resource allocation, as follows. Specifically, the control unit 48 obtains a pair of cameras with a high locality from the locality information stored in the first DB 46 and makes a determination so that, according to an anti-affinity rule, the pair of cameras is associated with computes 31, 32, and 33 (FIG. 4) that are as different from each other as possible. For example, among the cameras 11, 12, and 13 arranged in the first area 21, since the camera 12 and the camera 13 have a high degree of coincidence and a high locality, as shown in FIG. 4, the control unit 48 makes a determination to associate one camera 12 with the compute 32 and associate the other camera 13 with another compute 33. In addition, based on the priority information of the cameras 11 to 15 stored in the second DB 47, the control unit 48 determines the amounts of resource allocation of the applications 11A to 15A as follows.
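The prioritization of step S3 (largest amount of information first, then the largest residual after excluding overlapping data) is essentially a greedy coverage ranking. The Python sketch below is illustrative; the camera item sets are hypothetical data, not from the patent, constructed only so that the resulting order matches FIG. 3 (P1 = camera 11 through P5 = camera 15).

```python
# Greedy prioritization sketch: rank cameras by the number of information
# types not already covered by higher-priority cameras.

def prioritize(camera_info):
    """camera_info: dict camera id -> list of information types.
    Returns a dict camera id -> priority rank (1 = highest)."""
    covered = set()
    remaining = dict(camera_info)
    order = []
    while remaining:
        best = max(remaining, key=lambda c: len(set(remaining[c]) - covered))
        order.append(best)
        covered |= set(remaining.pop(best))
    return {cam: rank for rank, cam in enumerate(order, start=1)}

# Hypothetical item sets (not from the patent) chosen to reproduce FIG. 3.
info = {
    "11": ["person_A", "person_B", "item1", "item2", "item3", "item4"],
    "14": ["person_C", "item5", "item6", "item7", "item8"],
    "12": ["person_A", "item9", "item10", "item11", "item15"],
    "13": ["item9", "item12", "item13"],
    "15": ["item5", "item14"],
}
print(prioritize(info))  # camera 11 -> P1, 14 -> P2, 12 -> P3, 13 -> P4, 15 -> P5
```

Note that camera 12 holds five items but only four are new once camera 11's data is covered, which is why camera 14 outranks it, exactly as in the text.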
For example, the control unit 48 increases the amount of resource allocation and raises the application performance level with respect to the corresponding applications 11A to 15A in a descending order of priorities of the cameras 11 to 15. Specifically, the applications 11A and 14A indicated by solid lines in FIG. 4 are given the largest amount of resource allocation, the application 12A indicated by a dashed line is given the second largest amount of resource allocation, and the applications 15A and 13A indicated by dashed-dotted lines are given the third largest amount of resource allocation. Next, in step S5, the control unit 48 generates the respective applications 11A to 15A with the amounts of resource allocation determined in step S4 described above and performs control for associating the generated applications 11A to 15A with the respective computes 31, 32, and 33. Specifically, as shown in FIG. 4, the applications 11A and 14A with the largest amount of resource allocation are associated with the compute 31, the application 12A with the second largest amount of resource allocation and the application 15A with the third largest amount of resource allocation are associated with the compute 32, and the application 13A with the third largest amount of resource allocation is associated with the compute 33.

<Compute Failure Operation>

Next, an operation upon a compute failure of the system 10A will be described with reference to the flow chart shown in FIG. 10. In step S11, it is assumed that the control unit 48 receives a failure notification of the compute 31 shown in FIG. 7 from the failure detecting unit 42. In step S12, the control unit 48 determines healing destinations of the applications 11A and 14A on the failed compute 31. The determination is made so that the applications 11A and 14A are arranged in a distributed manner at the normal computes 32 and 33.
In step S13, based on the priorities of the cameras 11 to 15 determined in step S3 described earlier, the control unit 48 determines an order of healing of the applications 11A to 15A. Specifically, the control unit 48 makes a determination to associate, with the compute 32, the application 11A of the camera 11 with the highest priority among the applications 11A and 14A associated with the failed compute 31, as indicated by the arrow Y2. Next, the control unit 48 makes a determination to associate, with the compute 33, the application 14A of the camera 14 with the second highest priority, as indicated by the arrow Y3. In step S14, the control unit 48 performs control to heal the applications 11A and 14A determined in step S12 described above in the order of healing determined in step S13 described above. Specifically, control is performed so that, first, the application 11A of the camera 11 with the highest priority is associated with the compute 32 and, next, the application 14A of the camera 14 with the second highest priority is associated with the compute 33.

<Camera Failure Operation>

Next, an operation upon a camera failure of the system 10A will be described with reference to the flow chart shown in FIG. 11. In this case, the application 14A related to the failed camera 14 has also become inoperable. In step S21, it is assumed that the control unit 48 receives a failure notification of the camera 14 shown in FIG. 6 from the failure detecting unit 42. In step S22, the control unit 48 determines priorities of the cameras 11 to 13 and 15 excluding the failed camera 14. In this case, as shown in FIG. 6, the priority of the camera 15 that covers the same photographic information as the failed camera 14 is raised from P5 to P2, the same rank as the failed camera 14. In step S23, based on the priorities of the cameras 11 to 13 and 15 determined in step S22 described above, the control unit 48 determines the computes 31, 32, and 33 which the corresponding applications 11A to 13A and 15A are to be associated with, and the amounts of resource allocation.
Specifically, the control unit 48 makes a determination to associate the application 11A of the camera 11 with the highest priority with the compute 31, associate the application 15A of the camera 15 with the second highest priority and the application 12A of the camera 12 with the third highest priority with the compute 32, and associate the application 13A of the camera 13 with the fourth highest priority with the compute 33. In doing so, the control unit 48 gives the largest amount of resource allocation to the applications 11A and 15A indicated by solid lines in FIG. 6, gives the second largest amount of resource allocation to the application 12A indicated by a dashed line, and gives the third largest amount of resource allocation to the application 13A indicated by a dashed-dotted line. In this determination, the application 15A given the second highest priority (P2) in step S22 described above becomes the application 15A indicated by a solid line with the largest amount of resource allocation, as indicated by an arrow Y1. Next, in step S24, the control unit 48 generates, on the computes 31, 32, and 33 determined in step S23 described above, the respective applications 11A to 13A and 15A with the amounts of resource allocation determined in step S23 described above. Specifically, the application 11A with the largest amount of resource allocation is generated in association with the compute 31 and, at the same time, the application 15A with the largest amount of resource allocation is generated in association with the compute 32. The application 12A with the second largest amount of resource allocation is generated in association with the compute 32, and the application 13A with the third largest amount of resource allocation is generated in association with the compute 33.
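The compute-failure healing flow (steps S11 to S14) can be sketched as a priority-ordered redistribution across the remaining normal computes. This Python sketch is illustrative; the function name and the round-robin distribution policy are assumptions.

```python
# Healing sketch: applications on the failed compute are re-associated with
# the remaining normal computes in descending priority of their cameras,
# spread across the normal computes in a distributed manner.

def plan_healing(failed_compute, placement, priorities, computes):
    """placement: dict camera id -> compute id; priorities: dict camera id ->
    rank (1 = highest). Returns dict camera id -> healing destination."""
    victims = [cam for cam, comp in placement.items() if comp == failed_compute]
    victims.sort(key=lambda cam: priorities[cam])  # highest priority first
    normal = [c for c in computes if c != failed_compute]
    return {cam: normal[i % len(normal)] for i, cam in enumerate(victims)}

# FIG. 7 scenario: compute 31 fails while hosting the applications of
# cameras 11 (P1) and 14 (P2).
plan = plan_healing(
    "31",
    {"11": "31", "14": "31", "12": "32", "15": "32", "13": "33"},
    {"11": 1, "14": 2, "12": 3, "13": 4, "15": 5},
    ["31", "32", "33"],
)
print(plan)  # camera 11's app -> compute 32 (arrow Y2), camera 14's -> 33 (Y3)
```

Distributing the victims round-robin over the normal computes reproduces the ordering of arrows Y2 and Y3: the highest-priority application is healed first, and no single normal compute absorbs all the load.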
<Effects>

(1) The IoT device management apparatus according to the present invention is configured so as to include: a locality calculating unit which receives a result of predetermined processing having been performed with respect to acquired information from an IoT device such as a camera by an application associated with a compute so as to constitute a virtual machine and which, by determining a locality indicating a closeness of arrangement positions of a plurality of IoT devices to be high when a degree of coincidence among pieces of acquired information from the IoT devices representing received results is high, calculates locality information that is a result of the determination; and a control unit which extracts a pair of IoT devices with a high locality from the locality information and which performs control for associating applications related to the extracted pair of IoT devices with different computes.

According to this configuration, even if one compute fails, an application corresponding to an application associated with the failed compute (also described as an application on the compute) is associated with another compute that is normal. Therefore, acquired information equivalent to that of a camera related to the application on the failed compute can be acquired from a camera related to an application on another compute. Therefore, in the event of a failure of a compute, a decline in the availability of continuously performing predetermined processing on information acquired from a camera can be prevented, without all of the applications on the failed compute becoming inoperable.
(2) The IoT device management apparatus according to (1) described above is configured so as to include: a priority calculating unit that calculates priority information representing a prioritization of the plurality of IoT devices by performing processing of ranking the IoT devices in a descending order such that, among pieces of acquired information from the plurality of IoT devices, an IoT device that obtains a largest number of types of acquired information is given a highest priority, an IoT device that obtains a largest number of types of information excluding data overlapping with the highest priority is given a second highest priority, and an IoT device that obtains a largest number of types of information excluding data overlapping with the highest priority and the second highest priority is given a third highest priority, and performing the processing in a similar manner with respect to the remaining IoT devices, wherein the control unit refers to the priority information and performs control to increase an amount of resource allocation with respect to corresponding applications in a descending order of priorities of the IoT devices.

According to this configuration, since the higher the priority of a camera, the larger the amount of resource allocation with respect to a corresponding application, the application performance level increases. Therefore, even when a large number of types of acquired information is obtained from a camera, predetermined processing such as analysis can be appropriately performed.

(3) The IoT device management apparatus according to (1) or (2) described above is configured so as to include: a failure detecting unit that detects a failure of the compute, wherein the control unit performs control of recovery, when the failure detecting unit detects a failure, to cause a plurality of applications associated with the failed compute to migrate to normal computes in a distributed manner.
According to this configuration, even when a compute fails, a plurality of applications which have become inoperable due to the failure can be recovered by causing the applications to migrate to normal computes in a distributed manner. Therefore, even when a compute fails, acquired information from a camera that corresponds to a camera related to the failed compute can be processed by an application on another compute that corresponds to the application on the failed compute.

(4) The IoT device management apparatus according to (2) described above is configured so as to include: a failure detecting unit that detects a failure of the IoT device, wherein the control unit performs control of recovery, when the failure detecting unit detects a failure, to cause a plurality of applications related to the failed IoT device to migrate to normal computes in a distributed manner in a descending order of priorities of the IoT devices.

According to this configuration, even when a camera fails, a plurality of applications which have become inoperable due to the failure can be recovered by causing the applications to migrate to normal computes in a distributed manner in a descending order of priorities of the cameras. Therefore, even when a camera fails, since analysis processing of photographic information of a camera with a priority corresponding to the failed camera can be performed by the applications of the failed camera which have been caused to migrate to normal computes, the photographic information that should have been obtained from the failed camera can be covered. Furthermore, the specific configurations can be appropriately modified without departing from the scope and spirit of the present invention.
REFERENCE SIGNS LIST

10A IoT device management system
11 to 15 Camera (IoT device)
11A to 15A Application
21 First area
22 Second area
31 to 33 Compute
40 IoT device management apparatus
41 Application control unit
42 Failure detecting unit
44 Locality calculating unit
45 Priority calculating unit
46 First DB
47 Second DB
48 Application generation recovery control unit (control unit)
Throughout the drawings, like reference numerals will be understood to refer to like parts, components, and structures.

DETAILED DESCRIPTION

The following description with reference to the accompanying drawings is provided to assist in a comprehensive understanding of various embodiments of the disclosure as defined by the claims and their equivalents. It includes various specific details to assist in that understanding, but these are to be regarded as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the various embodiments described herein can be made without departing from the scope and spirit of the disclosure. In addition, descriptions of well-known functions and constructions may be omitted for clarity and conciseness. The terms and words used in the following description and claims are not limited to the bibliographical meanings, but are merely used by the inventor to enable a clear and consistent understanding of the disclosure. Accordingly, it should be apparent to those skilled in the art that the following description of various embodiments of the disclosure is provided for illustration purposes only and not for the purpose of limiting the disclosure as defined by the appended claims and their equivalents. It is to be understood that the singular forms “a,” “an,” and “the” include plural referents unless the context clearly dictates otherwise. Thus, for example, reference to “a component surface” includes reference to one or more of such surfaces. In the following description, a term to identify an access node, a term to denote network entities, a term to denote messages, a term to denote an interface between network entities, and a term to denote a variety of types of identity information have been illustrated for convenience of description.
Accordingly, the disclosure is not limited to the following terms, and other terms to denote targets having equivalent technical meanings may be used. For convenience of description, in the disclosure, terms and names defined in long term evolution (LTE) of the 3rd generation partnership project (hereinafter referred to as “3GPP”) and new radio (NR) standards are used. However, the disclosure is not restricted by the terms and names, and may be identically applied to systems complying with other standards. First, terms used in this specification are defined. In this specification, a radio bearer may include a data radio bearer (DRB) and a signaling radio bearer (SRB). For example, a data radio bearer (DRB) provided in a radio interface between a terminal and a base station is a path through which the data of a user plane is forwarded. A signaling radio bearer (SRB) may be a path through which the data of a control plane, such as radio resource control (RRC) layer and non-access-stratum (NAS) control messages, is forwarded. In this specification, a wireless communication system supported in a network over which a plurality of communication systems interwork may support interworking between heterogeneous technologies and frequency bands (multi-RAT interworking). For example, the radio access technology may be a new radio access network (new RAN) supporting all of a 4G radio access technology (E-UTRA), a radio access technology evolved from 4G (evolved E-UTRA), and a 5G radio access technology (new radio (NR)). In this specification, an inter-system supporting the same or different communication networks may be basically divided into a terminal, a radio access network, and a plurality of core networks (CNs). In this specification, a terminal may be an integrated terminal supporting all of a 4G radio access technology (E-UTRA), a radio access technology evolved from 4G (evolved E-UTRA), and a 5G radio access technology (new radio (NR)).
In this specification, a radio access network, a base station, and a network node may be used with the same meaning. A base station may include a 5G base station (or new radio base station or gNB) using the 5G radio access technology (new radio (NR)), a 4G base station (LTE-eNB) using the 4G radio access technology (E-UTRA), and a base station (eLTE eNB) using the radio access technology evolved from 4G (evolved E-UTRA). Furthermore, the base station (eLTE eNB) may support the 4G radio access technology and the 5G radio access technology at the same time. According to this specification, a wireless communication system, in which a terminal can perform communication with at least one cell associated with a first base station and at least one cell associated with a second base station, may support dual connectivity between the first base station and the second base station supporting heterogeneous or homogeneous radio access technology. For example, the dual connectivity disclosed in this specification may include a case where both the first and second base stations relate to dual connectivity which concerns a 4G system, or a case where the first base station relates to a 4G system and the second base station supports an NR system (E-UTRA-NR dual connectivity, EN-DC). Furthermore, even though the wireless communication system disclosed in this specification relates to an EN-DC system, the system need not be limited thereto and can also embrace a multi-radio dual connectivity (MR-DC) system. In the EN-DC system disclosed in this specification, a main base station may be used with the same meaning as a master base station, a master node (MN), or a master eNB (MeNB). A sub-base station may be used with the same meaning as a secondary base station, a secondary node (SN), or a secondary gNB (SgNB).
In the EN-DC system disclosed in this specification, a terminal may be connected to one eNB that operates as a master base station and one en-gNB that operates as a secondary base station. The eNB may be connected to an EPC through an S1 interface and may be connected to an en-gNB through an X2 interface, and the en-gNB may be connected to the EPC through the S1. The en-gNB may be connected to the EPC through an X2-U interface or an S1-U interface. In a homogeneous or heterogeneous network supporting small cell evolution, there are various requirements related to mobility robustness, signaling load being increased due to frequent handovers, improvement of throughput per user, system capacity, and the like. The dual connectivity (DC) may imply control and data disconnection. For example, control signaling for mobility is provided through a macro cell at the same time as the time when a high-speed data connection is provided through a small cell. Further, a disconnection between a downlink and an uplink and a connection between the downlink and the uplink are provided through other cells. In the dual connectivity, the UE may be connected to one master node (MN) and one secondary node (SN). In addition, a DC in which a carrier aggregation (CA) is configured means an operation mode of the UE in an RRC connected state, and it is composed of a master cell group and a secondary cell group. Here, “cell group” indicates a group of serving cells related to a master base station or a secondary base station in the dual connectivity. A “master cell group (MCG)” is a group of serving cells related to the master base station, and it includes a primary cell (PCell) and selectively one or more secondary cells (SCells) in the dual connectivity. A “secondary cell group (SCG)” indicates a group of serving cells related to the secondary base station including a primary SCell (PSCell) and selectively one or more SCells.
Here, the "cell" as described hereinafter should be distinguished from a "cell" as a general area covered by a base station. That is, the cell indicates a combination of downlink and, optionally, uplink resources. The linking between the carrier frequency (e.g., center frequency of a cell) of a downlink resource and the carrier frequency of an uplink resource is indicated in system information that is transmitted on the downlink resources. An MCG bearer is a radio protocol located in the master base station only, to use only resources provided by the master base station in the dual connectivity, and an SCG bearer is a radio protocol located in the secondary base station only, to use resources provided by the secondary base station in the dual connectivity. Further, a split bearer is a radio protocol located in both the master base station and the secondary base station, to use all resources provided by the master base station and the secondary base station in the dual connectivity. FIG. 1 is a diagram showing the architecture of an LTE system according to an embodiment of the disclosure. Referring to FIG. 1, the radio access network of the LTE system is configured with evolved Node Bs (hereinafter referred to as "ENBs", "Node Bs" or "base stations") 105, 110, 115, and 120, a mobility management entity (MME) 125, and a serving gateway (S-GW) 130. A user equipment (hereinafter referred to as a "UE" or "terminal") 135 accesses an external network through the ENBs 105˜120 and the S-GW 130. Referring to FIG. 1, the ENBs 105˜120 correspond to the existing Node Bs of a universal mobile telecommunication system (UMTS). Each ENB is connected to the UE 135 through a radio channel and performs a more complex function than the existing Node B. In the LTE system, all types of user traffic, including real-time services such as voice over IP (VoIP) through the Internet protocol, are served through a shared channel. 
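The distinction among the three bearer types above is simply which node's radio resources each bearer may use. A minimal Python sketch (the names `BearerType` and `usable_nodes` are illustrative, not part of any 3GPP specification):

```python
from enum import Enum

class BearerType(Enum):
    MCG = "mcg"      # radio protocol located in the master base station only
    SCG = "scg"      # radio protocol located in the secondary base station only
    SPLIT = "split"  # radio protocol located in both base stations

def usable_nodes(bearer: BearerType) -> set:
    """Return which node's resources a bearer of this type may use,
    following the MCG/SCG/split-bearer definitions above."""
    if bearer is BearerType.MCG:
        return {"master"}
    if bearer is BearerType.SCG:
        return {"secondary"}
    return {"master", "secondary"}  # a split bearer uses resources of both
```

For example, `usable_nodes(BearerType.SPLIT)` yields both nodes, reflecting that a split bearer can be served by master and secondary resources simultaneously.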
Accordingly, a device that performs scheduling by collecting state information, such as the buffer state, available transmission power state, and channel state of UEs, is used. The ENBs 105˜120 are in charge of such a device. In general, one ENB controls multiple cells. For example, in order to implement a transfer rate of 100 Mbps, the LTE system uses orthogonal frequency division multiplexing (hereinafter referred to as "OFDM") as a radio access technology in, for example, a 20 MHz bandwidth. Furthermore, the LTE system adopts an adaptive modulation & coding (hereinafter referred to as "AMC") scheme for determining a modulation scheme and a channel coding rate based on the channel state of a UE. The S-GW 130 provides a data bearer, and generates or removes a data bearer under the control of the MME 125. The MME 125 is in charge of various control functions in addition to a mobility management function for a UE, and is connected to multiple ENBs. FIG. 2 is a diagram showing a radio protocol structure in an LTE system according to an embodiment of the disclosure. Referring to FIG. 2, the radio protocol of the LTE system includes packet data convergence protocols (PDCPs) 205 and 240, radio link control (RLC) 210 and 235, and medium access control (MAC) 215 and 230 in a UE and an ENB, respectively. The PDCPs 205 and 240 are in charge of operations such as IP header compression/restoration. 
Major functions of the PDCP are summarized as follows:
- Header compression and decompression (ROHC only)
- Transfer of user data
- In-sequence delivery of upper layer PDUs in the PDCP re-establishment procedure for RLC AM
- Reordering function for split bearers in DC (only supported for RLC AM): PDCP PDU routing for transmission and PDCP PDU reordering for reception
- Duplicate detection of lower layer SDUs in the PDCP re-establishment procedure for RLC AM
- Retransmission of PDCP SDUs at handover and, for split bearers in DC, of PDCP PDUs in the PDCP data-recovery procedure, for RLC AM
- Ciphering and deciphering
- Timer-based SDU discard in uplink
The RLC 210 and 235 reconfigures a PDCP packet data unit (PDU) into a proper size and performs an ARQ operation. Major functions of the RLC are summarized as follows:
- Transfer of upper layer PDUs
- Error correction through ARQ (only for AM data transfer)
- Concatenation, segmentation and reassembly of RLC SDUs (only for UM and AM data transfer)
- Re-segmentation of RLC data PDUs (only for AM data transfer)
- Reordering of RLC data PDUs (only for UM and AM data transfer)
- Duplicate detection (only for UM and AM data transfer)
- Protocol error detection (only for AM data transfer)
- RLC SDU discard (only for UM and AM data transfer)
- RLC re-establishment
The MAC 215 and 230 is connected to multiple RLC layer devices configured in one UE, and performs an operation of multiplexing RLC PDUs into a MAC PDU and demultiplexing RLC PDUs from a MAC PDU. 
Major functions of the MAC are summarized as follows:
- Mapping between logical channels and transport channels
- Multiplexing/demultiplexing of MAC SDUs belonging to one or different logical channels into/from transport blocks (TBs) delivered to/from the physical layer on transport channels
- Scheduling information reporting
- Error correction through HARQ
- Priority handling between logical channels of one UE
- Priority handling between UEs by means of dynamic scheduling
- MBMS service identification
- Transport format selection
- Padding
A physical layer 220 and 225 performs an operation of channel-coding and modulating higher layer data, generating an OFDM symbol from the higher layer data, and transmitting the OFDM symbol through a radio channel, or of demodulating an OFDM symbol received through a radio channel, channel-decoding it, and transferring the result to a higher layer. FIG. 3 is a diagram showing the architecture of a next-generation mobile communication system according to an embodiment of the disclosure. Referring to FIG. 3, the radio access network of the next-generation mobile communication system (hereinafter referred to as "NR" or "5G") is configured with a new radio Node B (hereinafter referred to as an "NR gNB" or an "NR base station") 310 and a new radio core network (NR CN) 305. A new radio user equipment (hereinafter referred to as an "NR UE" or a "terminal") 315 accesses an external network through the NR gNB 310 and the NR CN 305. Referring to FIG. 3, the NR gNB 310 corresponds to the existing evolved Node B (eNB) of an LTE system. The NR gNB is connected to the NR UE 315 through a radio channel 320, and may provide an excellent service compared to the existing Node B. In the next-generation mobile communication system, a device for performing scheduling by collecting state information, such as the buffer state, available transmission power state, and channel state of UEs, is used because all types of user traffic are served through a shared channel. 
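The MAC multiplexing/demultiplexing function listed above can be illustrated with a toy sketch. The framing below (one LCID byte plus a two-byte length per SDU) is a deliberately simplified, hypothetical format, not the 3GPP MAC PDU wire format; it only shows the round-trip idea of packing SDUs from several logical channels into one PDU:

```python
def mac_multiplex(sdus):
    """Multiplex (lcid, payload) MAC SDUs into one MAC PDU.
    Toy framing: 1 byte LCID + 2 bytes big-endian length per SDU."""
    pdu = bytearray()
    for lcid, payload in sdus:
        pdu.append(lcid)
        pdu += len(payload).to_bytes(2, "big")
        pdu += payload
    return bytes(pdu)

def mac_demultiplex(pdu):
    """Recover the list of (lcid, payload) MAC SDUs from a MAC PDU."""
    sdus, i = [], 0
    while i < len(pdu):
        lcid = pdu[i]
        length = int.from_bytes(pdu[i + 1:i + 3], "big")
        sdus.append((lcid, pdu[i + 3:i + 3 + length]))
        i += 3 + length
    return sdus
```

Demultiplexing a multiplexed PDU returns the original SDU list, which is the invariant the real MAC relies on.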
The NR gNB 310 is in charge of this device. In general, one NR gNB controls multiple cells. In order to implement ultra-high speed data transfer compared to the existing LTE, the next-generation mobile communication system may have the existing maximum bandwidth or more, and may additionally graft the beamforming technology onto OFDM as a radio access technology. Furthermore, the next-generation mobile communication system adopts the AMC scheme that determines a modulation scheme and a channel coding rate based on the channel state of a UE. The NR CN 305 performs functions such as mobility support, a bearer configuration, and a QoS configuration. The NR CN is in charge of various control functions in addition to a mobility management function for a UE, and is connected to multiple gNBs. Furthermore, the next-generation mobile communication system may also operate in conjunction with the existing LTE system. The NR CN is connected to an MME 325 through a network interface. The MME is connected to an eNB 330, that is, the existing base station. FIG. 4 is a diagram showing the radio protocol structure of a next-generation mobile communication system according to an embodiment of the disclosure. Referring to FIG. 4, the radio protocol of the NR is configured with NR PDCPs 405 and 440, NR RLC 410 and 435, and NR MAC 415 and 430, respectively, in an NR UE and an NR base station. Major functions of the NR PDCP 405 and 440 may include some of the following functions:
- Header compression and decompression (ROHC only)
- Transfer of user data
- In-sequence delivery of upper layer PDUs
- PDCP PDU reordering for reception
- Duplicate detection of lower layer SDUs
- Retransmission of PDCP SDUs
- Ciphering and deciphering
- Timer-based SDU discard in uplink
The reordering function of the NR PDCP entity refers to a function for sequentially reordering PDCP PDUs received from a lower layer based on a PDCP sequence number (SN). 
The reordering function may include a function for transmitting data to a higher layer in the reordered sequence. Furthermore, the reordering function of the NR PDCP entity may include a function for reordering sequences and recording lost PDCP PDUs, a function for making a status report on lost PDCP PDUs to the transmission side, and a function for requesting the retransmission of lost PDCP PDUs. Major functions of the NR RLC 410 and 435 may include some of the following functions:
- Transfer of upper layer PDUs
- In-sequence delivery of upper layer PDUs
- Out-of-sequence delivery of upper layer PDUs
- Error correction through ARQ
- Concatenation, segmentation and reassembly of RLC SDUs
- Re-segmentation of RLC data PDUs
- Reordering of RLC data PDUs
- Duplicate detection
- Protocol error detection
- RLC SDU discard
- RLC re-establishment
The in-sequence delivery function of the NR RLC entity refers to a function for transmitting RLC SDUs, received from a lower layer, to a higher layer in sequence, and may include a function for reassembling and transmitting multiple RLC SDUs if one RLC SDU has been originally segmented into the multiple RLC SDUs and received. Furthermore, the in-sequence delivery function of the NR RLC entity may include a function for reordering received RLC PDUs based on an RLC sequence number (SN) or a PDCP sequence number (SN), and a function for reordering sequences and recording lost RLC PDUs. Furthermore, the in-sequence delivery function of the NR RLC entity may include a function for transmitting a status report on lost RLC PDUs to the transmission side, a function for requesting the retransmission of lost RLC PDUs, and a function for transmitting only RLC SDUs prior to a lost RLC SDU to a higher layer in sequence if the lost RLC SDU is present. 
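The SN-based reordering and loss-recording behavior described above can be sketched with a small reorder buffer. This is a simplified model (no SN wrap-around, no t-Reordering timer; the class and method names are illustrative), showing only how consecutive PDUs are delivered in sequence while gaps are recorded for a status report:

```python
class ReorderBuffer:
    """In-sequence delivery sketch: buffer PDUs by sequence number and
    deliver consecutive ones to the higher layer."""

    def __init__(self):
        self.next_sn = 0     # lowest SN not yet delivered
        self.buffer = {}     # out-of-order PDUs waiting for the gap to fill
        self.delivered = []  # what the higher layer has received, in order

    def receive(self, sn, pdu):
        """Accept a PDU from the lower layer; deliver any run of
        consecutive SNs starting at next_sn."""
        self.buffer[sn] = pdu
        while self.next_sn in self.buffer:
            self.delivered.append(self.buffer.pop(self.next_sn))
            self.next_sn += 1

    def missing(self, up_to_sn):
        """Record lost PDUs below a given SN, as for a status report
        to the transmission side."""
        return [sn for sn in range(self.next_sn, up_to_sn)
                if sn not in self.buffer]
```

Receiving SN 2 before SN 1 holds SN 2 back; once SN 1 arrives, both are delivered in sequence, mirroring the in-sequence delivery function above.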
Furthermore, the in-sequence delivery function of the NR RLC entity may include a function for transmitting all RLC SDUs, received before a given timer expires, to a higher layer in sequence when the timer expires although there is a lost RLC SDU, or a function for transmitting all RLC SDUs, received so far, to a higher layer when a given timer expires although there is a lost RLC SDU. Furthermore, in the above, RLC PDUs may be processed in the order in which they are received (in order of arrival, regardless of sequence number) and transmitted to the PDCP entity regardless of their sequence (i.e., out-of-sequence delivery). In the case of a segment, segments stored in a buffer or segments to be received subsequently may be received and reconfigured into one complete RLC PDU. The one complete RLC PDU may be processed and transmitted to the PDCP entity. The NR RLC layer may not include a concatenation function. The concatenation function may be performed in the NR MAC layer or may be substituted with the multiplexing function of the NR MAC layer. The out-of-sequence delivery function of the NR RLC entity refers to a function for directly transmitting RLC SDUs received from a lower layer to a higher layer regardless of their sequence. The out-of-sequence delivery function may include a function for reassembling multiple RLC SDUs if one RLC SDU has been originally segmented into the multiple RLC SDUs and received, and a function for storing the RLC SN or PDCP SN of received RLC PDUs, reordering their sequence, and recording lost RLC PDUs. The NR MAC 415 and 430 may be connected to multiple NR RLC layer devices configured in one UE. 
Major functions of the NR MAC may include some of the following functions:
- Mapping between logical channels and transport channels
- Multiplexing/demultiplexing of MAC SDUs
- Scheduling information reporting
- Error correction through HARQ
- Priority handling between logical channels of one UE
- Priority handling between UEs by means of dynamic scheduling
- MBMS service identification
- Transport format selection
- Padding
An NR PHY layer 420 and 425 may perform an operation of channel-coding and modulating higher layer data, generating an OFDM symbol from the higher layer data, and transmitting the OFDM symbol through a radio channel, or demodulating an OFDM symbol received through a radio channel, channel-decoding it, and transferring the result to a higher layer. FIG. 5A is a diagram explaining intra-eNB carrier aggregation (CA) according to an embodiment of the disclosure. Referring to FIG. 5A, an eNB transmits and receives signals through multiple carriers across a plurality of frequency bands. For example, the eNB 505a can be configured to use the carrier 515a with center frequency f1 and the carrier 510a with center frequency f3. If carrier aggregation is not supported, the UE 530a has to transmit/receive data using one of the carriers 510a and 515a. However, the UE 530a having the carrier aggregation capability can transmit/receive data using both the carriers 510a and 515a. The eNB can increase the amount of resources allocated to the UE having the carrier aggregation capability, in adaptation to the channel condition of the UE, so as to improve the data rate of the UE 530a. This approach of aggregating the downlink carriers transmitted by, or the uplink carriers received by, one eNB is referred to as intra-eNB carrier aggregation. However, there may be a situation requiring an approach of aggregating the downlink carriers transmitted by different eNBs or the uplink carriers received by different eNBs, unlike the situation of FIG. 5A. 
FIG. 5B is a diagram illustrating inter-eNB carrier aggregation according to an embodiment of the disclosure. Referring to FIG. 5B, assume that eNB1 (master node) 505b operates a carrier with the center frequency at f1 and eNB2 (secondary node) 515b operates a carrier with the center frequency at f2. If the UE 530b aggregates the carrier with the downlink center frequency at f1 and the carrier with the downlink center frequency at f2, i.e., one UE 530b aggregates the carriers of two different eNBs, this is referred to as inter-eNB Carrier Aggregation (CA) in the disclosure. In the following description, the term 'Dual Connectivity (DC)' is used interchangeably with the term 'inter-eNB CA'. For example, if DC is configured, this means that the inter-eNB CA is configured. The following definitions are provided to facilitate understanding of certain terms used frequently herein. Assuming that a cell is configured with one downlink carrier and one uplink carrier of an eNB in the concept of the related art, the carrier aggregation can be understood as if the UE communicates data via multiple cells. At this time, the peak data rate and the number of aggregated carriers have a positive correlation. In the following description, if a UE receives data through a certain downlink carrier or transmits data through a certain uplink carrier, this means that the UE transmits/receives data through a control channel and a data channel provided by the cell corresponding to the center frequency and frequency band characterizing the carrier. In the following description, the carrier aggregation can be expressed as 'a plurality of serving cells are configured,' along with the use of the terms 'Primary Serving Cell (PCell),' 'Secondary Serving Cell (SCell),' 'activated serving cell,' etc. These terms are used with the same meaning as those used in the LTE/NR mobile communication system. 
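The positive correlation between the peak data rate and the number of aggregated carriers can be made concrete with a first-order estimate. The function below is a sketch under an assumed flat spectral efficiency (`bps_per_hz` is a hypothetical parameter, not a 3GPP figure): with CA, the peak rate scales with the sum of the aggregated carrier bandwidths.

```python
def aggregated_peak_rate(carrier_bw_mhz, bps_per_hz=5.0):
    """First-order peak-rate estimate: the rate a CA-capable UE can
    reach scales with the total bandwidth of its aggregated carriers.
    carrier_bw_mhz: list of per-carrier bandwidths in MHz."""
    return sum(bw * 1e6 * bps_per_hz for bw in carrier_bw_mhz)
```

Aggregating two 20 MHz carriers doubles the estimate relative to a single 20 MHz carrier, which is the correlation stated above.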
In the following description, the terms 'carrier,' 'component carrier,' and 'serving cell' are used interchangeably with the same meaning. In the following description, a set of the serving cells controlled by one eNB is referred to as a Cell Group or Carrier Group (CG). A cell group is classified as either a Master Cell Group (MCG) or a Secondary Cell Group (SCG). The MCG denotes the set of serving cells controlled by the eNB controlling the PCell (hereinafter, master node), and the SCG denotes a set of serving cells controlled by an eNB which does not control the PCell, i.e., an eNB which controls only SCells (hereinafter, secondary node). The eNB notifies the UE whether a serving cell belongs to the MCG or the SCG in the procedure of configuring the corresponding serving cell. A UE may be configured with one MCG and one or more SCGs. Although the description is directed to the case where one SCG is configured for convenience, the subject matter of the disclosure can be applied, without modification, to the case where more than one SCG is configured. The PCell and SCell are terms expressing the types of serving cell configured for the UE. The PCell and SCell are different in that the PCell always remains in the activated state, while the SCell transitions between the activated state and the deactivated state repeatedly according to the command of the eNB. UE mobility is controlled mainly in association with the PCell, and the SCell may be understood as an extra serving cell for data communication. In the following description, the terms 'PCell' and 'SCell' are used with the same meaning as those defined in the LTE/NR standards. The disclosure is directed to a network in which macro and pico cells coexist. The macro cell is a cell controlled by a macro eNB and has a relatively large service coverage area. In contrast, the pico cell is a cell controlled by the SeNB and has a small service coverage area compared to the macro cell. 
Although there is no strict criterion for distinguishing between the macro and pico cells, it is assumed that the macro cell has a radius of about 500 m while the pico cell has a radius of about a few meters. In the following description, the terms 'pico cell' and 'small cell' are used interchangeably. Referring to FIG. 5B, if the eNB1 505b is the MeNB and the eNB2 515b is the SeNB, the serving cell 510b having the center frequency at f1 is the serving cell belonging to the MCG, and the serving cell 520b having the center frequency at f2 is the serving cell belonging to the SCG. In the following description, other terms may be used interchangeably with MCG and SCG to aid understanding. For example, the terms 'primary set' and 'secondary set', or 'primary carrier group' and 'secondary carrier group', may be used interchangeably. Although these terms differ in spelling, they are the same in meaning. The main purpose of these terms is to clarify which cell is under the control of the eNB controlling the PCell of a specific UE, and the UE and the corresponding cell may operate differently depending on whether the corresponding cell is controlled by the eNB controlling the PCell of that UE or not. The UE may be configured with one or more SCGs. The SCG may include a plurality of SCells, of which one has a special attribute. In intra-eNB CA, the UE transmits the HARQ feedback and CSI for the SCell(s), as well as the HARQ feedback and CSI for the PCell, through the PCell PUCCH. This is to apply the CA to a UE having no simultaneous uplink transmission capability. In inter-eNB CA, it may be impossible to transmit the HARQ feedback and CSI of the SCG SCells on the PCell PUCCH. This is because, although the HARQ feedback has to be delivered within the HARQ Round Trip Time (RTT) (typically 8 ms), the transmission delay between the MeNB and the SeNB may be longer than the HARQ RTT. 
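The timing constraint above reduces to a simple comparison: SCG HARQ feedback can ride the PCell PUCCH only if the inter-node backhaul delay fits inside the HARQ RTT. A sketch of that check (function name and the strict-inequality cutoff are illustrative assumptions, not from a specification):

```python
def pucch_cell_for_scg_feedback(backhaul_delay_ms, harq_rtt_ms=8):
    """Decide where SCG HARQ feedback can be carried: the PCell PUCCH
    works only if the MeNB-SeNB transmission delay fits within the
    HARQ round-trip time (typically 8 ms); otherwise a PUCCH must be
    configured on a cell of the SCG itself."""
    return "PCell" if backhaul_delay_ms < harq_rtt_ms else "PSCell"
```

With an ideal backhaul of a couple of milliseconds the PCell PUCCH suffices; with a 15 ms non-ideal backhaul the feedback must be carried in the SCG, which is why the special SCell with PUCCH resources exists.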
In order to solve this problem, a PUCCH transmission resource is configured on one of the SCG SCells to transmit the HARQ feedback and CSI for the SCG SCells. This special SCell is referred to as the primary SCell (PSCell). FIG. 6A is a diagram showing the structure of network elements included in a wireless communication system supporting EN-DC according to an embodiment of the disclosure. Terms described in TS 38.401, that is, a gNB central unit (gNB-CU), a gNB-CU-control plane (gNB-CU-CP), a gNB-CU-user plane (gNB-CU-UP), and a gNB distributed unit (gNB-DU), may correspond to a central unit (CU) included in a base station supporting the 5G system, a central unit-control plane (CU-CP) included in the base station, a central unit-user plane (CU-UP) included in the base station, and a distributed unit (DU) included in the base station. In this specification, the gNB-CU-control plane (gNB-CU-CP), the gNB-CU-user plane (gNB-CU-UP), and the gNB distributed unit (gNB-DU) may be indicated as the CU-CP, the CU-UP, and the DU, respectively.
- gNB Central Unit (gNB-CU): a logical node hosting the RRC, SDAP and PDCP protocols of the gNB, or the RRC and PDCP protocols of the en-gNB, that controls the operation of one or more gNB-DUs. The gNB-CU terminates the F1 interface connected with the gNB-DU.
- gNB Distributed Unit (gNB-DU): a logical node hosting the RLC, MAC and PHY layers of the gNB or en-gNB, whose operation is partly controlled by the gNB-CU. One gNB-DU supports one or multiple cells. One cell is supported by only one gNB-DU. The gNB-DU terminates the F1 interface connected with the gNB-CU.
- gNB-CU-Control Plane (gNB-CU-CP): a logical node hosting the RRC and the control plane part of the PDCP protocol of the gNB-CU for an en-gNB or a gNB. The gNB-CU-CP terminates the E1 interface connected with the gNB-CU-UP and the F1-C interface connected with the gNB-DU. 
- gNB-CU-User Plane (gNB-CU-UP): a logical node hosting the user plane part of the PDCP protocol of the gNB-CU for an en-gNB, and the user plane part of the PDCP protocol and the SDAP protocol of the gNB-CU for a gNB. The gNB-CU-UP terminates the E1 interface connected with the gNB-CU-CP and the F1-U interface connected with the gNB-DU.
In this specification, CU-CP and CU-UP may be indicated as a network device (entity) implementing CU-CP and a network device (entity) implementing CU-UP, respectively. The present invention features a central unit controlling an operation of a distributed unit through signaling. The realization of the central unit and distributed unit is not confined to the above example. For example, the realization of the "CU-CP, CU-UP, and DU" illustrated in FIG. 6B relates to one embodiment of the invention and is not confined to the feature illustrated in FIG. 6B. A central unit and a distributed unit can be realized in one base station, or the distributed unit can be realized as a separate device from the base station. In addition, in the invention, the central unit and distributed unit may be in a virtualized structure where the CU and DU are separated, or they may be in a non-virtualized structure where they are not separated. In a non-virtualized structure, an operation can be achieved through control signaling within a base station. However, in a virtualized structure, it is necessary to define F1-C interface/F1-U interface operations and messages for the purpose of signaling between the central unit and distributed unit, as in FIG. 6B. In addition, the central unit may be in a virtualized structure where a device achieving the CP and a device achieving the UP are separated. 
Or it can be in a non-virtualized structure where there is no distinction between the CP and UP and they are one device. In a non-virtualized structure, an operation can be performed through control signaling within a base station. However, in a virtualized structure, as can be seen in FIG. 6B, it is necessary to define E1 interface operations and messages for signaling between a device realizing the CP and a device realizing the UP. In this specification, a gNB central unit (gNB-CU), a gNB-CU-control plane (gNB-CU-CP), a gNB-CU-user plane (gNB-CU-UP), and a gNB distributed unit (gNB-DU) may correspond to a central unit (CU) included in a secondary node (SN) (or secondary gNB (SgNB)), a central unit-control plane (CU-CP) included in a secondary node (SN) (or secondary gNB (SgNB)), a central unit-user plane (CU-UP) included in a secondary node, and a distributed unit (DU) included in a secondary node in the EN-DC system disclosed in this specification. Referring to FIG. 6A, a 4G eNB is configured with one network element (NE), and a 5G gNB is configured with a CU-CP, a CU-UP, and a DU, that is, three network elements. As shown in FIG. 6A, the CU-CP that is the control plane, the CU-UP that is the user plane, and the DU including the MAC/RLC/PHY layers may be connected to the E1 and the F1 control plane interface (F1-C)/F1 user plane interface (F1-U), that is, external interfaces, respectively. FIG. 7 is a diagram illustrating an operation of a SN when a radio link failure (RLF) occurs in an existing wireless communication system supporting dual connectivity according to an embodiment of the disclosure. FIG. 7 illustrates a UE 700, a gNB-DU 701, an eNB 702, a gNB-CU-UP 703, and a gNB-CU-CP 704. Referring to FIG. 7, at operation 705, the gNB-CU-UP configures a suspension operation to be in an On state. At operation 710, a terminal (UE) is connected to a CU at the RRC and PDCP level in an EN-DC structure, and the UE is in a state where it is connected to a gNB-DU and an eNB. 
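The pairing of gNB logical nodes and the interfaces terminating between them, as described above (E1 between CU-CP and CU-UP, F1-C between CU-CP and DU, F1-U between CU-UP and DU), can be captured in a small lookup table. The table and function names are illustrative:

```python
# Interface terminating between each pair of gNB logical nodes,
# per the gNB-CU-CP / gNB-CU-UP / gNB-DU definitions above.
INTERFACES = {
    ("gNB-CU-CP", "gNB-CU-UP"): "E1",
    ("gNB-CU-CP", "gNB-DU"): "F1-C",
    ("gNB-CU-UP", "gNB-DU"): "F1-U",
}

def interface_between(a, b):
    """Look up the interface between two gNB logical nodes,
    order-insensitively; returns None for an unknown pair."""
    return INTERFACES.get((a, b)) or INTERFACES.get((b, a))
```

For instance, `interface_between("gNB-DU", "gNB-CU-CP")` resolves to F1-C, the control-plane interface used in the signaling flows that follow.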
At operation 715, the UE may detect a radio link failure (RLF). The UE may detect whether a specific event related to the MAC occurs, and the MAC related event may include the radio link failure (RLF). That is, the UE detects the occurrence of the RLF. For example, if a T310 timer or a T310s timer expires, it may be determined that the RLF is detected. For example, if a T310 timer or a T304 timer expires, a random access problem occurs, the RLC indicates that the maximum number of retransmissions has been reached, an RRC message decoding error occurs, or an RRC message integrity failure occurs, it may be determined that the RLF is detected. Meanwhile, the UE checks whether the RLF occurs in the MCG or the SCG. Occurrence of the RLF in the MCG (MCG-RLF) means that a state where the channel quality of a specific cell (e.g., PCell) among serving cells belonging to the MCG is equal to or lower than a specific reference level continues over a specific time period. That is, the MCG-RLF occurs due to a problem of an MCG serving cell, and in particular, the MCG-RLF occurs if the state where the channel state of the PCell is equal to or lower than the specific reference level continues over the specific reference time period. Occurrence of the RLF in the SCG (SCG-RLF) means that a state where the channel quality of a specific cell (e.g., PSCell) among serving cells belonging to the SCG is equal to or lower than a specific reference level continues over a specific time period. That is, the SCG-RLF occurs due to a problem of a SCG serving cell, and in particular, the SCG-RLF occurs if the state where the channel state of the PSCell is equal to or lower than the specific reference level continues over the specific reference time period. Here, the channel quality may mean the reception quality of a PDCCH channel. Meanwhile, the occurrence of the RLF in the MCG means that the UE is unable to keep the current RRC connection any more, and thus the UE initiates an RRC connection reestablishment procedure. 
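The two RLF criteria described above (any one of several trigger events, and a channel quality that stays at or below a reference level for a reference period) can be sketched as follows. This is a simplified model, not the 3GPP evaluation procedure; all names and the sample-count persistence check are assumptions for illustration:

```python
def rlf_detected(t310_expired=False, random_access_problem=False,
                 rlc_max_retx_reached=False, rrc_integrity_failure=False):
    """RLF declaration sketch: any one of the trigger conditions
    listed above is sufficient to declare a radio link failure."""
    return any([t310_expired, random_access_problem,
                rlc_max_retx_reached, rrc_integrity_failure])

def link_failed(quality_samples, threshold, min_consecutive):
    """Persistence check sketch: declare failure when the channel
    quality (e.g., PDCCH reception quality) stays at or below the
    reference level for at least min_consecutive consecutive samples."""
    run = 0
    for q in quality_samples:
        run = run + 1 if q <= threshold else 0
        if run >= min_consecutive:
            return True
    return False
```

A single expired timer is enough for `rlf_detected`, while `link_failed` requires the poor-quality condition to persist, mirroring the "continues over a specific time period" wording.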
Further, the occurrence of the RLF in the SCG means that data transmission and reception are not possible in the SCG. However, even in this case, normal communication is possible through the MCG. In this specification, the occurrence of the RLF in a SN may be used with the same meaning as the occurrence of the RLF in the SCG, the SCG-RLF, or the SCG failure. At operation 720, if the RLF occurrence is determined, the UE may suspend the SCG transmission to the gNB-DU. If the SCG-RLF occurrence is determined at operation 715, the UE suspends uplink transmission of the SCell and PSCell included in the SCG. In this case, the UE keeps the SCG serving cell and PSCell without releasing them. This is so that the current configuration can be referred to when the SCG serving cell is reconfigured thereafter. That is, if the RLF is determined in relation to the SCG, the UE may stop the SCG transmission to the gNB-DU. At operation 725, if the SCG-RLF occurrence is determined, the UE may generate and transmit information related to the RLF to the master base station (MeNB) through the MCG serving cell. For example, the information related to the RLF may include information reporting the occurrence of the SCG-RLF (e.g., SCG failure information) or measurement information of neighboring cells of the SCG serving cell frequency. Further, the SCG failure information may include failure cause (e.g., out of sync) information. At operation 730, the master base station (MeNB) may transfer the information related to the RLF (e.g., SCG failure information) that is received from the UE to the CU-CP (SgNB-CU-CP) included in the secondary base station. For example, the master base station (MeNB) may transfer a SGNB MODIFICATION REQUEST including the SCG failure information to the CU-CP (SgNB-CU-CP) included in the secondary base station. Further, the SCG failure information may include failure cause (e.g., out of sync) information. 
At operation 735, if the SGNB MODIFICATION REQUEST is received, the SgNB-CU-CP may transmit a SGNB MODIFICATION REQUEST ACKNOWLEDGE to the MeNB as a response. At operation 740, if the SCG failure occurs, the MeNB or the SgNB-CU-CP can perform the PSCell change. However, as illustrated, if the SCG failure occurs at operation 740, the SgNB-CU-CP is unable to perform a control operation of a data radio bearer (DRB) and a signaling radio bearer (SRB) for the existing serving cell. Further, if the SCG failure occurs in the existing wireless communication system supporting dual connectivity, there is no signaling whereby the gNB-CU-CP transmits the SCG failure information to the gNB-CU-UP or the gNB-DU, and thus the gNB-CU-UP or the gNB-DU is unable to perform any process related to the SCG failure occurrence. Accordingly, if the SCG failure occurs, the gNB-CU-UP or the gNB-DU is unable to perform a SCG keep operation, that is, an operation of keeping a UE context, a bearer context, or a cell context. That is, when the SCG failure occurs, the gNB-CU-UP or the gNB-DU is unable to keep the SCG serving cell and PSCell without releasing them. Accordingly, as indicated at operation 745, the gNB-CU-UP is unable to receive the SCG failure information from the gNB-CU-CP, and thus it is unable to perform a separate handling operation when the SCG failure occurs. In the same manner, at operation 750, the gNB-DU is unable to receive the SCG failure information from the gNB-CU-CP, and thus it is unable to perform a separate handling operation when the SCG failure occurs. As described above, if the SCG failure occurs in the existing wireless communication system supporting dual connectivity, the gNB-CU-UP and the gNB-DU are unable to perform a separate process related to the SCG-RLF. 
FIG. 8 is a diagram illustrating an operation of a SN in the case where a radio link failure (RLF) occurs in the SN (SCG-RLF) in a wireless communication system supporting dual connectivity according to an embodiment of the disclosure. FIG. 8 illustrates a UE 800, a gNB-DU 801, an eNB 802, a gNB-CU-UP 803, and a gNB-CU-CP 804. Referring to FIG. 8, at operation 805, the gNB-CU-UP configures a suspension operation to be in an On state. At operation 810, a UE is connected to a CU at the RRC and PDCP level in an EN-DC structure, and the UE is in a state where it is connected to a gNB-DU and an eNB. Referring to FIG. 8, Case A-1 is a case where the UE detects an RLF, and the operation in the corresponding case is as follows. At operation 815, the UE may detect the occurrence of the SCG-RLF (e.g., out of sync). At operation 820, if the SCG-RLF is determined, the UE may suspend the SCG transmission to the gNB-DU. At operation 825, if the SCG-RLF is determined, the UE may transmit information related to the RLF (SCG failure information) to the MeNB. For example, the SCG failure information may include failure cause (e.g., out of sync) information. At operation 830, the MeNB transfers a SGNB MODIFICATION REQUEST including the information related to the RLF received from the UE (SCG failure information) to the SgNB-CU-CP. For example, the SCG failure information may include failure cause (e.g., out of sync) information. At operation 835, if the SGNB MODIFICATION REQUEST is received, the SgNB-CU-CP may transmit a SGNB MODIFICATION REQUEST ACKNOWLEDGE to the MeNB as a response. As illustrated in FIG. 8, Case A-2 is a case where the gNB-DU detects the RLF, and the operation in the corresponding case is as follows. At operation 840, the gNB-DU detects the occurrence of a radio link failure (hereinafter, RLF) (e.g., out of sync). At operation 845, the gNB-DU may transfer a UE CONTEXT MODIFICATION REQUIRED including the information related to the RLF to the SgNB-CU-CP. 
For example, the RLF related information may include failure cause (e.g., radio link fail) information. At operation 850, if the UE CONTEXT MODIFICATION REQUIRED is received, the gNB-CU-CP may transmit the SGNB MODIFICATION REQUEST ACKNOWLEDGE to the MeNB as a response. As illustrated in FIG. 8, only one or both of Case A-1 and Case A-2 as described above may occur at the same time, and a subsequent operation after the case occurrence is as follows. At operation 855, the gNB-CU-CP may detect the occurrence of the SCG failure based on the information related to the received RLF, and it may determine a stop operation of the downlink transmission with respect to the existing serving cell. At operation 860, the gNB-CU-CP may transmit a BEARER CONTEXT MODIFICATION REQUEST including information indicating the downlink transmission stop of the gNB-CU-UP (NR DL stop indicator) to the gNB-CU-UP. At operation 865, if the information indicating the downlink transmission stop (NR DL stop indicator) is received, the gNB-CU-UP may stop the downlink transmission to the gNB-DU with respect to the corresponding SCG bearer. Further, if necessary, the gNB-CU-UP may perform a path switching and retransmission operation to the MeNB. At operation 870, the gNB-CU-UP may transmit a BEARER CONTEXT MODIFICATION RESPONSE to the gNB-CU-CP in response to the BEARER CONTEXT MODIFICATION REQUEST. At operation 875, the gNB-CU-CP may transmit a UE CONTEXT MODIFICATION REQUEST including information indicating downlink transmission stop of the gNB-DU (transmission stop indicator) to the gNB-DU. At operation 880, if the information indicating the downlink transmission stop (transmission stop indicator) is received, the gNB-DU may stop the downlink transmission to the UE with respect to the corresponding SCG bearer. At operation 885, the gNB-DU may transmit a UE CONTEXT MODIFICATION RESPONSE to the gNB-CU-CP in response to the UE CONTEXT MODIFICATION REQUEST.
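Purely as an illustration of the stop sequence described above (operations 855 through 885), the following Python sketch models the CU-CP issuing the two stop indications. The class, message container, and field names are hypothetical stand-ins for exposition, not an actual E1AP/F1AP implementation.

```python
from dataclasses import dataclass

# Hypothetical message container; not an actual 3GPP ASN.1 structure.
@dataclass
class Message:
    name: str
    fields: dict

class GnbCuCp:
    """Illustrative CU-CP side of the SCG-failure stop procedure (FIG. 8)."""

    def __init__(self):
        self.outbox = []  # messages "sent" to the CU-UP and the DU

    def on_scg_failure(self, cause: str):
        # Operation 855: SCG failure detected; decide to stop DL transmission.
        # Operation 860: stop indication to the CU-UP (NR DL stop indicator).
        self.outbox.append(Message("BEARER CONTEXT MODIFICATION REQUEST",
                                   {"nr_dl_stop_indicator": "stop", "cause": cause}))
        # Operation 875: stop indication to the DU (transmission stop indicator).
        # Per the disclosure, this may equally be sent before, or in parallel
        # with, the message to the CU-UP.
        self.outbox.append(Message("UE CONTEXT MODIFICATION REQUEST",
                                   {"transmission_stop_indicator": "stop"}))
        return [m.name for m in self.outbox]

cu_cp = GnbCuCp()
print(cu_cp.on_scg_failure("out of sync"))
```

The sketch only captures which peer receives which indication; the acknowledgement messages (operations 870 and 885) are omitted for brevity.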
Referring to FIG. 8, the operation 860 in which the gNB-CU-CP transmits the information indicating the downlink transmission stop of the gNB-CU-UP and the operation 875 in which the gNB-CU-CP transmits the information indicating the transmission stop of the gNB-DU (transmission stop indicator) to the gNB-DU are not limited to the operation order as described above, and the operation 875 may be performed prior to the operation 860, or the operations 860 and 875 may be simultaneously performed in parallel. FIG. 9 is a diagram illustrating an operation of a SN in the case where a radio link (RL) recovery occurs in the SN (SCG-RL recovery) in a wireless communication system supporting EN-DC according to an embodiment of the disclosure. Referring to FIG. 9, Case B-1 is a case where a UE detects normal recovery of a radio link, and an operation in the corresponding case is as follows. FIG. 9 illustrates a UE 900, a gNB-DU 901, an eNB 902, a gNB-CU-UP 903, and a gNB-CU-CP 904. At operation 905, the UE may perform a measurement with respect to at least one cell (NR cell) related to a secondary base station. At operation 910, if a report event is triggered during the measurement, the UE may transmit a measurement report (MR) to a master base station (eNB). For example, if an event in which a radio link of the PSCell of the SN is recovered to a normal signal value occurs, the MR may be triggered. At operation 915, the master base station (eNB) may transmit the measurement report (NR MR) received from the UE to the secondary base station (gNB). As illustrated in FIG. 9, Case B-2 is a case where a DU detects normal recovery of a radio link, and an operation in the corresponding case is as follows. At operation 920, a gNB-DU may detect the normal recovery of the radio link. At operation 925, the gNB-DU may transmit a UE CONTEXT MODIFICATION REQUIRED including information related to the radio link recovery (RL recovery information) to a gNB-CU-CP.
At operation 930, the gNB-CU-CP may transmit a UE CONTEXT MODIFICATION RESPONSE to the gNB-DU in response to the UE CONTEXT MODIFICATION REQUIRED. As illustrated in FIG. 9, only one or both of Case B-1 and Case B-2 as described above may occur at the same time, and a subsequent operation after the occurrence of the case is as follows. At operation 935, if the MR is received from the UE, or if the RL recovery information is received from the DU, the gNB-CU-CP may determine that the radio link is recovered in the SN, and it may determine an operation of resuming the downlink transmission with respect to the existing serving cell or a new serving cell. For example, the resuming operation with respect to an existing serving cell or another neighboring cell in the same DU may include an operation of performing an intra-DU handover, and the resuming operation with respect to a neighboring cell in another DU may include an operation of performing an inter-DU handover. FIG. 10 is a diagram illustrating an operation of a SN to resume downlink transmission with respect to a serving cell within the same DU in the SN in a wireless communication system supporting dual connectivity according to an embodiment of the disclosure. Referring to FIG. 10, Case C-1 is a case where handover to one of the cells in a DU including an existing serving cell is performed, and an operation in the corresponding case is as follows. FIG. 10 illustrates a UE 1000, a gNB-DU 1001, an eNB 1002, a gNB-CU-UP 1003, and a gNB-CU-CP 1004. At operation 1005, a gNB-CU-CP may transmit, to a gNB-DU, information indicating transmission resume of the gNB-DU (transmission restart indication) and cell setup information for the existing serving cell or another cell in the DU. At operation 1010, the gNB-DU may transmit a UE CONTEXT MODIFICATION RESPONSE including existing DL transport network layer (TNL) information to the gNB-CU-CP.
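The serving-cell decision at operation 935 (intra-DU handover for a cell under the same DU, inter-DU handover otherwise) can be sketched as a small helper. Identifying cells by their parent DU identifier is an assumption made here purely for illustration.

```python
def decide_resume_handover(recovered_cell_du: str, serving_cell_du: str) -> str:
    """Illustrative CU-CP choice between intra-DU and inter-DU handover
    when resuming downlink transmission after radio link recovery."""
    if recovered_cell_du == serving_cell_du:
        return "intra-DU handover"   # Case C-1 (FIG. 10)
    return "inter-DU handover"       # Case C-2 (FIG. 11)

print(decide_resume_handover("DU-1", "DU-1"))  # intra-DU handover
print(decide_resume_handover("DU-2", "DU-1"))  # inter-DU handover
```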
At operation 1015, the gNB-CU-CP may transmit, to the gNB-CU-UP, a BEARER CONTEXT MODIFICATION REQUEST including information indicating downlink transmission resume of the gNB-CU-UP (NR DL resume indicator) and the DL TNL information received from the DU. At operation 1020, the gNB-CU-UP may transmit a BEARER CONTEXT MODIFICATION RESPONSE to the gNB-CU-CP in response to the BEARER CONTEXT MODIFICATION REQUEST. At operation 1025, the gNB-CU-CP may transmit a SGNB MODIFICATION REQUIRED including cell (NR cell) information related to a secondary base station and bearer information to a master base station (eNB). At operation 1030, the eNB may transmit an RRC connection reconfiguration including the NR cell information and the bearer information to the UE. At operation 1035, the UE may transmit an RRC connection reconfiguration complete to the eNB in response to the RRC connection reconfiguration. At operation 1040, the eNB may transmit a SGNB MODIFICATION CONFIRM to the gNB-CU-CP in response to the SGNB MODIFICATION REQUIRED. At operation 1045, the gNB-CU-CP may transmit a sequence number (SN) status transfer to the gNB-CU-UP. At operation 1050, the gNB-CU-UP may resume SCG transmission to the existing gNB-DU. For example, the SCG transmission means the downlink transmission of the gNB-CU-UP with respect to the SCG bearer, and at operation 1050, the gNB-CU-UP may resume the downlink transmission to the existing gNB-DU with respect to the corresponding SCG bearer. At operation 1055, if the RRC connection reconfiguration including the NR cell information and the bearer information is received, the UE may perform a random access procedure with the corresponding NR cell. At operation 1060, the existing gNB-DU may resume the downlink transmission to the UE. Further, although not illustrated in the drawing, the existing gNB-DU may resume the transmission to the UE with respect to the corresponding SCG bearer.
Referring to FIG. 10, the operation 1005 in which the gNB-CU-CP transmits the information indicating the downlink transmission resume of the gNB-DU (transmission restart indicator) to the gNB-DU and the operation 1015 in which the gNB-CU-CP transmits the information indicating the downlink transmission resume of the gNB-CU-UP (NR DL resume indicator) to the gNB-CU-UP are not limited to the operation order as described above, and the operation 1015 may be performed prior to the operation 1005, or the operations 1005 and 1015 may be simultaneously performed in parallel. FIG. 11 is a diagram illustrating an operation of a SN to resume downlink transmission with respect to a serving cell within another DU in the SN in a wireless communication system supporting dual connectivity according to an embodiment of the disclosure. Referring to FIG. 11, Case C-2 is a case where handover to one of the cells of another DU is performed, and an operation in the corresponding case is as follows. FIG. 11 illustrates a UE 1100, a gNB-DU 1101, an eNB 1102, a gNB-CU-UP 1103, and a gNB-CU-CP 1104. At operation 1110, a gNB-CU-CP 1105 may transmit a UE CONTEXT SETUP REQUEST to a gNB-DU for UE setup. At operation 1115, a target gNB-DU may transmit a UE CONTEXT MODIFICATION RESPONSE including new DL TNL information to the gNB-CU-CP. At operation 1120, the gNB-CU-CP may transmit a BEARER CONTEXT MODIFICATION REQUEST including information indicating downlink transmission resume of the gNB-CU-UP (NR DL resume indicator) and the DL TNL information received from the target gNB-DU to the gNB-CU-UP. At operation 1125, the gNB-CU-UP may transmit a BEARER CONTEXT MODIFICATION RESPONSE to the gNB-CU-CP in response to the BEARER CONTEXT MODIFICATION REQUEST. At operation 1130, the gNB-CU-CP may transmit a UE CONTEXT RELEASE COMMAND for releasing the UE configuration information to the existing gNB-DU. At operation 1135, the existing gNB-DU may release the corresponding UE configuration, and it may transmit a UE CONTEXT RELEASE COMPLETE.
At operation 1140, the gNB-CU-CP may transmit a SGNB MODIFICATION REQUIRED including the NR cell information and the bearer information to the master base station (eNB). At operation 1145, the master base station (eNB) may transmit an RRC connection reconfiguration including the NR cell information and the bearer information to the UE. At operation 1150, the UE may transmit an RRC connection reconfiguration complete to the eNB in response to the RRC connection reconfiguration. At operation 1155, the eNB may transmit a SGNB MODIFICATION CONFIRM to the gNB-CU-CP in response to the SGNB MODIFICATION REQUIRED. At operation 1160, the gNB-CU-CP may transmit the SN status transfer message to the gNB-CU-UP. At operation 1165, the gNB-CU-UP may resume the SCG transmission to the target gNB-DU. At operation 1170, if the RRC connection reconfiguration including the NR cell and bearer information is received, the UE may perform a random access procedure with the corresponding NR cell. At operation 1175, the target gNB-DU may resume the downlink transmission to the UE 1100. Only one of Case C-1 and Case C-2 as illustrated in FIGS. 10 and 11 may occur. Although the above-described flowcharts are flowcharts between the MeNB and the SgNB in the EN-DC structure, the transmission stop/resume configuration that the CU-CP performs with respect to the CU-UP, and the transmission stop/resume configuration to the UE that the CU-CP performs with respect to the DU, as disclosed in this specification, may be applied in the same manner even in MR-DC, NR-DC, or SA NR. FIG. 12 is a diagram illustrating an operation in which a CU-UP or a DU suspends downlink transmission in the case where a radio link failure (RLF) occurs in a SN in a wireless communication system supporting dual connectivity according to an embodiment of the disclosure.
FIG. 12 illustrates an embodiment in which a CU-CP transmits information indicating downlink transmission stop to a CU-UP or a DU if a UE detects SCG-RLF. FIG. 12 illustrates a UE 1200, an MeNB 1201, an SgNB-DU 1202, an SgNB-CU-UP 1203, and an SgNB-CU-CP 1204. Referring to FIG. 12, at operation 1205, a UE may detect the occurrence of SCG-RLF (e.g., out of sync). Although not illustrated in the drawing, if the SCG-RLF is determined, the UE may stop SCG transmission to a gNB-DU. At operation 1210, if the SCG-RLF is determined, the UE may transmit information related to the RLF (SCG failure information) to an MeNB. For example, the SCG failure information may include failure cause (e.g., out of sync) information. At operation 1215, the MeNB may transfer a SGNB MODIFICATION REQUEST including the information related to the RLF received from the UE (SCG failure information) to an SgNB-CU-CP. For example, the SCG failure information may include failure cause (e.g., out of sync) information. At operation 1220, if the SGNB MODIFICATION REQUEST is received, the SgNB-CU-CP may transmit a SGNB MODIFICATION REQUEST ACKNOWLEDGE to the MeNB as a response. At operation 1225, if the SCG failure occurrence is detected, the gNB-CU-CP may keep the SCG serving cell and PSCell without releasing them based on the information related to the received RLF, and it may determine the stop of the downlink transmission of the CU-UP and the DU with respect to the existing serving cell. At operation 1230, the gNB-CU-CP may transmit a BEARER CONTEXT MODIFICATION REQUEST including information indicating the downlink transmission stop of the gNB-CU-UP (DL stop indicator) to the gNB-CU-UP. At operation 1235, if the information indicating the downlink transmission stop (DL stop indicator) is received, the gNB-CU-UP may stop the downlink transmission to the gNB-DU with respect to the corresponding SCG bearer. Further, if necessary, the gNB-CU-UP may perform a path switching and retransmission operation to the MeNB.
At operation 1240, the gNB-CU-UP may transmit a BEARER CONTEXT MODIFICATION RESPONSE to the gNB-CU-CP in response to the BEARER CONTEXT MODIFICATION REQUEST. At operation 1245, the gNB-CU-CP may transmit a UE CONTEXT MODIFICATION REQUEST including information indicating downlink transmission stop of the gNB-DU (transmission stop indicator) to the gNB-DU. At operation 1250, if the information indicating the downlink transmission stop (transmission stop indicator) is received, the gNB-DU may stop the transmission to the UE with respect to the corresponding SCG bearer. At operation 1255, the gNB-DU may transmit a UE CONTEXT MODIFICATION RESPONSE to the gNB-CU-CP in response to the UE CONTEXT MODIFICATION REQUEST. Referring to FIG. 12, the operation 1230 in which the gNB-CU-CP transmits the information indicating the downlink transmission stop of the gNB-CU-UP to the gNB-CU-UP and the operation 1245 in which the gNB-CU-CP transmits the information indicating the transmission stop of the gNB-DU (transmission stop indicator) to the gNB-DU are not limited to the operation order as described above, and the operation 1245 may be performed prior to the operation 1230, or the operations 1230 and 1245 may be simultaneously performed in parallel. The DL Tx Stop IE was introduced in the Cell Group Information IE. The intention is to allow the CU-CP to stop the CU-UP DL transmission to a specific cell group. One example of using this DL Tx Stop IE is when the CU-CP wants to remove the SCG. Besides removing the SCG, the CU-CP may stop the DL Tx of the CU-UP to a specific cell group in some other cases, e.g., Case 1: SCG failure. According to TS 37.340, when the MN receives the SCG failure information from the UE, the MN "may decide to keep, change, or release the SN/SCG". In the case that the CU-CP decides to keep the SCG, the CU-CP of the SN can decide to stop the DL Tx of the CU-UP to the SCG. Case 2: DU detects RLF. When the DU of the SN detects the RLF, the DU can send the UE Context Modification Required message to indicate the failure.
In this case, the CU-CP of the SN can decide to stop the DL Tx of the CU-UP to the SCG. Case 3: SN-initiated PSCell change procedure. When the PSCell change is initiated by the SN, the SgNB can decide to stop the DL Tx of the CU-UP since the SgNB decides to stop the DL Tx of the DU. However, different from the case of removing the SCG, in the above cases, the CU-CP may want to resume the DL Tx of the CU-UP later. For example, for Case 1: If the SCG failure occurs, the MN may decide to keep the SCG. After some while, the PSCell in the SCG may become better than before. In this case, the MN may decide to recover the SCG by re-adding the PSCell. Then, the SgNB-CU-CP can decide to resume the DL Tx of the SgNB-CU-UP and the SgNB-DU. The flowchart is given in FIG. 13. For Case 2, similar to Case 1, when the UE reports the recovery of the PSCell through the MeNB, the SgNB-CU-CP can decide to recover the SCG by re-adding the PSCell. Then, the SgNB-CU-CP can inform the SgNB-CU-UP and the SgNB-DU to resume the DL transmission. For Case 3, it is similar to the case used for introducing the "restart" codepoint for the Transmission Stop Indicator IE in the F1AP message, i.e., UE CONTEXT MODIFICATION REQUEST. Specifically, in the SN-initiated modification procedure, after the SN sends the SgNB modification required message, the SgNB-CU-CP should send a UE Context Modification Request with the Transmission Stop Indicator to the SgNB-DU, and a Bearer Context Modification Request with the DL Tx Stop IE to the SgNB-CU-UP. However, at the same time, if the MN also initiates a SgNB modification request procedure which collides with this SN-initiated procedure, such an SN-initiated procedure should be regarded as failed while the MN-initiated SN modification procedure continues. In this case, the SgNB-CU-CP can resume the transmission at both the SgNB-CU-UP and the SgNB-DU. With the above consideration, we propose that after the DL Tx is stopped at the CU-UP, the CU-CP is allowed to resume the DL Tx of the CU-UP by setting the DL TX Stop IE as "Resume".
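A minimal sketch of this proposal, assuming a two-codepoint encoding of the DL TX Stop IE ("stop" and the proposed "resume"); the enum values and the state strings below are illustrative labels, not the normative ASN.1 definition.

```python
from enum import Enum

# Illustrative codepoints for the DL TX Stop IE; labels are assumptions.
class DlTxStop(Enum):
    STOP = "stop"
    RESUME = "resume"

def apply_dl_tx_stop(current_state: str, ie: DlTxStop) -> str:
    """Return the CU-UP downlink state after applying the DL TX Stop IE."""
    if ie is DlTxStop.STOP:
        return "suspended"      # CU-UP stops DL transmission to the cell group
    if ie is DlTxStop.RESUME:
        return "transmitting"   # proposed "Resume" codepoint restores DL
    return current_state

state = apply_dl_tx_stop("transmitting", DlTxStop.STOP)
state = apply_dl_tx_stop(state, DlTxStop.RESUME)
print(state)  # transmitting
```

The key point of the proposal is that "stop" is no longer a one-way transition: the same IE can later carry "resume" to restore DL transmission without tearing down the bearer context.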
FIG. 13 is a diagram illustrating an operation in which a CU-UP or a DU resumes downlink transmission in the case where a radio link (RL) recovery occurs in a wireless communication system supporting dual connectivity according to an embodiment of the disclosure. FIG. 13 illustrates an embodiment in which, if a UE detects RL recovery, a CU-CP transmits information indicating downlink transmission resume to a CU-UP or a DU. Although not illustrated in the drawing, the UE may perform a measurement with respect to at least one cell (NR cell) related to a secondary base station. Referring to FIG. 13, at operation 1305, if a report event is triggered during the measurement, the UE may transmit a measurement report (MR) to a master base station (MeNB). FIG. 13 illustrates a UE 1300, an MeNB 1301, an SgNB-DU 1302, an SgNB-CU-UP 1303, and an SgNB-CU-CP 1304. For example, if an event in which a radio link of the PSCell is recovered to a normal signal value occurs, the MR may be triggered. At operation 1310, the master base station (MeNB) may transmit the measurement report (NR MR) received from the UE to a secondary base station (SgNB). For example, the master base station (MeNB) may transmit a SGNB MODIFICATION REQUEST in order to re-add the PSCell. At operation 1315, if the SGNB MODIFICATION REQUEST to add the PSCell is received, the gNB-CU-CP may determine that the radio link related to a specific cell (e.g., PSCell) among the SCG serving cells is recovered, and it may determine an operation of resuming the downlink transmission of the CU-UP and the DU. At operation 1320, the gNB-CU-CP may transmit a BEARER CONTEXT MODIFICATION REQUEST including information indicating the downlink transmission resume of the gNB-CU-UP (DL resume indicator) to the gNB-CU-UP. At operation 1325, if the information indicating the downlink transmission resume (DL resume indicator) is received, the gNB-CU-UP may resume the downlink transmission to the gNB-DU with respect to the corresponding SCG bearer.
At operation 1330, the gNB-CU-UP may transmit a BEARER CONTEXT MODIFICATION RESPONSE to the gNB-CU-CP in response to the BEARER CONTEXT MODIFICATION REQUEST. At operation 1335, the gNB-CU-CP may transmit a UE CONTEXT MODIFICATION REQUEST including information indicating downlink transmission resume of the gNB-DU (transmission restart indicator) to the gNB-DU. At operation 1340, if the information indicating the downlink transmission resume (transmission restart indicator) is received, the gNB-DU may resume the downlink transmission to the UE with respect to the corresponding SCG bearer. At operation 1345, the gNB-DU may transmit a UE CONTEXT MODIFICATION RESPONSE to the gNB-CU-CP in response to the UE CONTEXT MODIFICATION REQUEST. At operation 1350, the gNB-CU-CP may transmit an RRC SGNB MODIFICATION RESPONSE to the master base station (MeNB) in response to the SGNB MODIFICATION REQUEST. FIG. 14 is a flowchart illustrating an operation of a CU-CP in the case where a radio link failure (RLF) or a radio link (RL) recovery occurs in a SN in a wireless communication system supporting dual connectivity according to an embodiment of the disclosure. Referring to FIG. 14, at operation 1400, a gNB-CU-CP may identify information related to a radio link of at least one cell related to a base station. The information related to the radio link of the at least one cell related to the base station according to various embodiments of the disclosure may include information related to SCG-RLF. For example, if a UE detects the RLF, the gNB-CU-CP may receive information related to the SCG-RLF (SCG failure information) from a master base station (MeNB) (UE-detected RLF). Further, if a DU detects the RLF, the gNB-CU-CP may receive information related to the RLF (radio link fail information) from a gNB-DU (DU-detected RLF).
According to various embodiments of the disclosure, the information related to a radio link of at least one cell related to the base station may include information related to a radio link recovery of a specific cell among serving cells belonging to a SN. For example, if the UE detects the radio link recovery, the gNB-CU-CP may receive a NR measurement report (MR) related to the RL recovery from the master base station (MeNB) (UE-detected RL recovery). Further, if the DU detects the radio link recovery, the gNB-CU-CP may receive information related to the SCG-RL recovery (e.g., radio link recovery cause information) from the gNB-DU (DU-detected RL recovery). At operation 1410, the gNB-CU-CP may transmit a message (MSG) or an information element (IE) including information indicating downlink transmission stop or resume of the gNB-CU-UP (NR DL stop/resume indicator) to the gNB-CU-UP. For example, if the information related to the SCG-RLF (SCG failure information) is received from the master base station (MeNB) (UE-detected RLF) or the information related to the RLF (radio link fail information) is received from the gNB-DU (DU-detected RLF) at operation 1400, the gNB-CU-CP may determine the SCG RLF, and it may transmit a message (MSG) or an information element (IE) including the information indicating the downlink transmission stop of the gNB-CU-UP (NR DL stop indicator) to the gNB-CU-UP. For example, if it is determined that the PSCell is in a normal state through reception of a NR measurement report (MR) from the UE (UE-detected RL recovery) or of information related to the radio link recovery from the gNB-DU (DU-detected RL recovery) at operation 1400, the gNB-CU-CP may determine the radio link recovery, and it may transmit the MSG or the IE including the information indicating the downlink transmission resume (NR DL resume indicator) of the gNB-CU-UP to the gNB-CU-UP.
At operation 1420, the gNB-CU-CP may transmit a message (MSG) or an information element (IE) including information indicating downlink transmission stop or resume of the gNB-DU (transmission stop/restart indicator) to the gNB-DU. For example, if the information related to the SCG-RLF (SCG failure information) is received from the master base station (MeNB) (UE-detected RLF) or the information related to the RLF (radio link fail information) is received from the gNB-DU (DU-detected RLF) at operation 1400, the gNB-CU-CP may determine the SCG RLF, and it may transmit a message (MSG) or an information element (IE) including the information indicating the downlink transmission stop of the gNB-DU (transmission stop indicator) to the gNB-DU. For example, if it is determined that the PSCell is in a normal state through reception of the NR measurement report (MR) from the UE (UE-detected RL recovery) or of the information related to the radio link recovery from the gNB-DU (DU-detected RL recovery) at operation 1400, the gNB-CU-CP may determine the radio link recovery, and it may transmit the MSG or the IE including the information indicating the downlink transmission resume (transmission restart indicator) of the gNB-DU to the gNB-DU. Referring to FIG. 14, the operation 1410 in which the gNB-CU-CP transmits the information indicating the stop/resume of the downlink transmission of the gNB-CU-UP (NR DL stop/resume indicator) to the gNB-CU-UP and the operation 1420 in which the gNB-CU-CP transmits the information indicating the stop/resume of the transmission of the gNB-DU (transmission stop/restart indicator) to the gNB-DU are not limited to the operation order as described above, and the operation 1420 may be performed prior to the operation 1410, or the operations 1410 and 1420 may be simultaneously performed in parallel.
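The mapping performed at operations 1410 and 1420 can be summarized as a dispatch from the received radio-link event to the pair of indicators the CU-CP sends out. The event and indicator strings below are labels taken from the description and are used here only as an illustrative sketch.

```python
def cu_cp_dispatch(event: str) -> dict:
    """Illustrative mapping of a radio-link event to the indicators the
    CU-CP sends to the CU-UP (operation 1410) and the DU (operation 1420)."""
    failure_events = {"SCG failure information",      # UE-detected RLF via MeNB
                      "radio link fail information"}  # DU-detected RLF
    recovery_events = {"NR measurement report",       # UE-detected RL recovery
                       "RL recovery information"}     # DU-detected RL recovery
    if event in failure_events:
        return {"cu_up": "NR DL stop indicator",
                "du": "transmission stop indicator"}
    if event in recovery_events:
        return {"cu_up": "NR DL resume indicator",
                "du": "transmission restart indicator"}
    raise ValueError(f"unknown radio-link event: {event}")

print(cu_cp_dispatch("SCG failure information"))
```

As the description notes, the two transmissions may occur in either order or in parallel; the dictionary form makes no ordering claim.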
If the information related to the SCG-RLF (SCG failure information) is received from the master base station (MeNB) (UE-detected RLF) or the information related to the RLF (radio link fail information) is received from the gNB-DU (DU-detected RLF), the gNB-CU-CP according to various embodiments of the disclosure may transmit the message (MSG) or the information element (IE) including the information indicating the downlink transmission stop (NR DL stop indicator) to the gNB-CU-UP, and the gNB-CU-CP may also transmit the MSG or the IE including the information indicating the downlink transmission stop (transmission stop indicator) to the gNB-DU. If it is determined that the PSCell is in a normal state through reception of a NR measurement report (MR) from the UE (UE-detected RL recovery) or of information related to the radio link recovery from the gNB-DU (DU-detected RL recovery) after transmitting the NR DL stop indicator to the gNB-CU-UP and transmitting the transmission stop indicator to the gNB-DU, the gNB-CU-CP according to various embodiments of the disclosure may transmit the MSG or the IE including the information indicating the downlink transmission resume (NR DL resume indicator) to the gNB-CU-UP, and the gNB-CU-CP may transmit the MSG or the IE including the information indicating the downlink transmission resume (transmission restart (resume) indicator) to the gNB-DU. FIG. 15 is a flowchart illustrating an operation of a CU-UP in the case where a radio link failure (RLF) or a radio link (RL) recovery occurs in a SN in a wireless communication system supporting dual connectivity according to an embodiment of the disclosure. Referring to FIG. 15, at operation 1500, a gNB-CU-UP may receive a message (MSG) or an information element (IE) including information indicating downlink transmission stop or resume (NR DL stop/resume indicator) from a gNB-CU-CP.
According to various embodiments of the disclosure, if a UE or a DU determines a SCG RLF, the gNB-CU-UP may receive a message (MSG) or an information element (IE) including information indicating the downlink transmission stop (NR DL stop indicator) from the gNB-CU-CP. According to various embodiments of the disclosure, if the UE or the DU determines a radio link recovery (e.g., if it is determined that the PSCell is in a normal state), the gNB-CU-UP may receive a MSG or an IE including information indicating the downlink transmission resume (NR DL resume indicator) from the gNB-CU-CP. At operation 1510, the gNB-CU-UP may stop or resume the downlink transmission to the DU based on the information indicating the downlink transmission stop or resume (NR DL stop/resume indicator). According to various embodiments of the disclosure, if the information indicating the downlink transmission stop (NR DL stop indicator) is received, the gNB-CU-UP may stop the downlink transmission to the gNB-DU with respect to the corresponding SCG bearer. For example, after receiving the information indicating the downlink transmission stop (NR DL stop indicator), if necessary, the gNB-CU-UP may suspend the NR path with respect to a SCG/split bearer in the corresponding UE. Further, if necessary, the gNB-CU-UP may perform a path switching and retransmission operation to LTE. The gNB-CU-UP according to various embodiments of the disclosure may perform a resume operation with respect to the SCG/split bearer after receiving the information indicating the downlink transmission resume (NR DL resume indicator). For example, if the information indicating the downlink transmission resume (NR DL resume indicator) is received, the gNB-CU-UP may resume the transmission to the gNB-DU with respect to the corresponding SCG bearer.
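The CU-UP behavior of FIG. 15 might be sketched as the following state holder; the boolean/path fields and the LTE path-switch step are simplifications of the description above, and the class is a hypothetical illustration rather than an actual CU-UP implementation.

```python
class GnbCuUp:
    """Illustrative CU-UP handling of the NR DL stop/resume indicator (FIG. 15)."""

    def __init__(self):
        self.dl_active = True  # DL transmission toward the gNB-DU for the SCG bearer
        self.path = "NR"       # current path for the SCG/split bearer

    def on_bearer_context_modification(self, indicator: str) -> str:
        if indicator == "stop":
            self.dl_active = False  # suspend the NR path for the SCG/split bearer
            self.path = "LTE"       # optional path switch / retransmission via MeNB
        elif indicator == "resume":
            self.dl_active = True   # resume DL to the gNB-DU for the SCG bearer
            self.path = "NR"
        # The bearer/UE context itself is kept throughout, per the disclosure.
        return "BEARER CONTEXT MODIFICATION RESPONSE"

cu_up = GnbCuUp()
cu_up.on_bearer_context_modification("stop")
print(cu_up.dl_active, cu_up.path)  # False LTE
```

Note that the context is never released here: stop and resume only toggle the transmission state, which is the "SCG keep" behavior the disclosure enables.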
FIG. 16 is a diagram illustrating a configuration of a message or an information element that a CU-CP included in a SN transmits to a CU-UP in a wireless communication system supporting dual connectivity according to an embodiment of the disclosure. Referring to FIG. 16, the information element (IE) 1600 indicated as information related to a cell group (cell group information) may include a "DL TX stop" 1610 including information 1611 indicating downlink transmission stop of a CU-UP or information 1613 indicating a resume. As illustrated in FIG. 16, the information indicating the downlink transmission stop or resume of the CU-UP may be included in the information element of the "DL TX stop". For example, the information indicating the downlink transmission stop or resume of the CU-UP ("DL TX stop") may be transmitted through a BEARER CONTEXT MODIFICATION REQUEST. FIG. 17 is a flowchart illustrating an operation of a DU in the case where a radio link failure (RLF) or a radio link (RL) recovery occurs in a SN in a wireless communication system supporting dual connectivity according to an embodiment of the disclosure. Referring to FIG. 17, at operation 1700, a gNB-DU may receive a message (MSG) or an information element (IE) including information indicating downlink transmission stop or resume (transmission stop/restart indicator) from a gNB-CU-CP. According to various embodiments of the disclosure, if a UE or a DU determines SCG RLF, the gNB-DU may receive the message (MSG) or the information element (IE) including information indicating downlink transmission stop (transmission stop indicator) from the gNB-CU-CP. According to various embodiments of the disclosure, if the UE or the DU determines a radio link recovery (e.g., if it is determined that the PSCell is in a normal state), the gNB-DU may receive the MSG or the IE including the information indicating the downlink transmission resume (transmission restart indicator) from the gNB-CU-CP.
At operation 1710, the gNB-DU may stop or resume the downlink transmission to the UE based on the information indicating the downlink transmission stop or resume (transmission stop/restart indicator). If the information indicating the downlink transmission stop (transmission stop indicator) is received, the gNB-DU according to various embodiments of the disclosure may stop the downlink transmission to the UE with respect to the corresponding SCG bearer. For example, the gNB-DU may perform a suspend operation with respect to a SCG/split bearer for the corresponding UE after receiving the information indicating the downlink transmission stop (transmission stop indicator). The gNB-DU according to various embodiments of the disclosure may perform the resume operation with respect to the SCG/split bearer for the corresponding UE after receiving the information indicating the downlink transmission resume (transmission restart (resume) indicator). For example, if the information indicating the transmission resume (transmission restart indicator) is received, the gNB-DU may resume the transmission to the UE with respect to the corresponding SCG bearer. FIG. 18 is a diagram illustrating a configuration of a message or an information element that a CU-CP included in a SN transmits to a DU in a wireless communication system supporting dual connectivity according to an embodiment of the disclosure. Referring to FIG. 18, the information element indicating downlink transmission stop or resume of the DU (transmission stop indicator 1810) may include information 1811 indicating the downlink transmission stop of the DU or information 1813 indicating the resume. For example, the information indicating the downlink transmission stop or resume of the DU may be transmitted through a UE CONTEXT MODIFICATION REQUEST.
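Analogously, the DU behavior of FIG. 17 (suspend/resume of the SCG/split bearer toward the UE) might be sketched as follows; the class and the state labels are hypothetical illustrations, not an actual F1AP implementation.

```python
class GnbDu:
    """Illustrative DU handling of the transmission stop/restart indicator (FIG. 17)."""

    def __init__(self):
        self.bearer_state = "active"  # SCG/split bearer toward the UE

    def on_ue_context_modification(self, indicator: str) -> str:
        if indicator == "stop":
            self.bearer_state = "suspended"  # suspend DL transmission to the UE
        elif indicator == "restart":
            self.bearer_state = "active"     # resume DL transmission to the UE
        # UE context is kept in both cases; only the transmission state changes.
        return "UE CONTEXT MODIFICATION RESPONSE"

du = GnbDu()
du.on_ue_context_modification("stop")
resp = du.on_ue_context_modification("restart")
print(du.bearer_state, resp)
```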
FIG. 19 is a diagram illustrating a configuration of a message, or an information element, that a DU included in a SN transmits to a CU-CP when radio link (RL) recovery occurs in the SN of a wireless communication system supporting dual connectivity according to an embodiment of the disclosure. Referring to FIG. 19, if the DU detects the radio link recovery, a “radio network layer cause” 1910 including information related to the recovery (RL recovery 1911) may be transmitted to the CU-CP. For example, if the DU detects the radio link recovery, the information related to the recovery may be transmitted through a UE CONTEXT MODIFICATION REQUIRED.

FIG. 20 is a block diagram of a control plane CU-CP 2000 included in a SN in a wireless communication system supporting dual connectivity according to an embodiment of the disclosure. Referring to FIG. 20, the CU-CP 2000 according to various embodiments may include a transceiver 2010, a controller 2020, and a memory 2030. Hereinafter, the above constituent elements will be described in turn. The transceiver according to various embodiments may transmit and receive signals, information, or data to and from a CU-UP and a DU, which are external network elements included in a master base station or a secondary base station according to various embodiments of the disclosure. The controller according to various embodiments may include at least one processor. The processor according to various embodiments may control an overall operation of the CU-CP. The processor may control an overall operation of the CU-CP according to the above-described various embodiments of the disclosure. The at least one processor according to various embodiments may control the transceiver to receive, from the master base station, a secondary base station addition/modification/release request for requesting a secondary base station to allocate/modify/release a radio resource for a bearer.
The transceiver according to various embodiments of the disclosure may receive information related to a radio link of at least one cell related to the secondary base station. For example, the information related to the radio link may include information related to radio link failure or information related to radio link recovery. Further, the information related to the radio link may be received from a distributed unit (DU) in the master base station or the secondary base station. The processor according to various embodiments of the disclosure may identify the information related to the radio link of the at least one cell related to the base station. The processor according to various embodiments of the disclosure may control the transceiver to transmit, to the CU-UP, information indicating downlink transmission stop or resume of a central unit-user plane (CU-UP) included in the base station based on the information related to the radio link. For example, the information indicating the downlink transmission stop or resume of the CU-UP may be transmitted through a BEARER CONTEXT MODIFICATION REQUEST. The processor according to various embodiments of the disclosure may control the transceiver to transmit, to the distributed unit (DU), information indicating downlink transmission stop or resume of the DU based on the information related to the radio link. For example, the information indicating the downlink transmission stop or resume of the DU may be transmitted through a UE CONTEXT MODIFICATION REQUEST. If the information related to the radio link is information related to the radio link failure, the processor according to various embodiments of the disclosure may control the transceiver to transmit the information indicating the downlink transmission stop of the CU-UP to the CU-UP, and to transmit the information indicating the downlink transmission stop of the DU to the DU.
If the information related to the radio link is information related to the radio link recovery, the processor according to various embodiments of the disclosure may control the transceiver to transmit the information indicating the downlink transmission resume of the CU-UP to the CU-UP, and to transmit the information indicating the downlink transmission resume of the DU to the DU.

FIG. 21 is a block diagram of a user plane CU-UP 2100 included in a SN in a wireless communication system supporting dual connectivity according to an embodiment of the disclosure. Referring to FIG. 21, the CU-UP 2100 according to various embodiments may include a transceiver 2110, a controller 2120, and a memory 2130. Hereinafter, the above constituent elements will be described in turn. The transceiver according to various embodiments may transmit and receive signals, information, or data to and from a CU-CP and a DU, which are external network elements included in a master base station or a secondary base station according to various embodiments of the disclosure. The controller according to various embodiments may include at least one processor. The processor according to various embodiments may control an overall operation of the CU-UP. The processor may control an overall operation of the CU-UP according to the above-described various embodiments of the disclosure. The processor according to various embodiments of the disclosure may control the transceiver to receive information indicating downlink transmission stop or resume from a central unit-control plane (CU-CP) included in the base station. For example, the information indicating the downlink transmission stop or resume may be received through a BEARER CONTEXT MODIFICATION REQUEST. The processor according to various embodiments of the disclosure may control to stop or resume downlink transmission to a distributed unit (DU) based on the information indicating the downlink transmission stop or resume.
For example, if the information indicating the downlink transmission stop is received, the processor may control to stop the downlink transmission to the distributed unit (DU) and to perform a path switching and retransmission operation to another base station. For example, if the information indicating the downlink transmission resume is received, the processor may control to resume the downlink transmission to the distributed unit (DU).

FIG. 22 is a block diagram of a distributed unit (DU) included in a SN in a wireless communication system supporting dual connectivity according to an embodiment of the disclosure. Referring to FIG. 22, a DU 2200 according to various embodiments may include a transceiver 2210, a controller 2220, and a memory 2230. Hereinafter, the above constituent elements will be described in turn. The transceiver according to various embodiments may transmit and receive signals, information, or data to and from a CU-CP and a CU-UP, which are external network elements included in a master base station or a secondary base station according to various embodiments of the disclosure. The controller according to various embodiments may include at least one processor. The processor according to various embodiments may control an overall operation of the DU. The processor may control the overall operation of the DU according to the above-described various embodiments of the disclosure. The processor according to various embodiments of the disclosure may detect radio link failure or radio link recovery with respect to at least one cell related to the base station. For example, if the radio link failure is detected, the processor may control the transceiver to transmit information related to the RLF to a SgNB-CU-CP. For example, the information related to the RLF may include failure cause (e.g., radio link fail, out of sync) information.
For example, if the radio link recovery is detected, the processor may control the transceiver to transmit information related to the radio link recovery (RL recovery information) to a gNB-CU-CP. The processor according to various embodiments of the disclosure may receive the information indicating the downlink transmission stop or resume through a UE CONTEXT MODIFICATION REQUEST received from the CU-CP. The processor according to various embodiments of the disclosure may control to stop or resume the downlink transmission to a UE based on the information indicating the downlink transmission stop or resume.

FIG. 23 is a block diagram of a UE 2300 in a wireless communication system supporting dual connectivity according to an embodiment of the disclosure. Referring to FIG. 23, the UE 2300 according to various embodiments may include a transceiver 2310, a controller 2320, and a memory 2330. Hereinafter, the above constituent elements will be described in turn. The transceiver according to various embodiments may transmit and receive signals, information, or data to and from a master base station or a secondary base station according to various embodiments of the disclosure. The controller according to various embodiments may include at least one processor. The processor according to various embodiments may control an overall operation of the UE. The processor may control the overall operation of the UE according to the above-described various embodiments of the disclosure. The UE according to various embodiments is a UE supporting dual connectivity, and it may use a radio resource having a high data rate of the secondary base station, through procedures for addition/release/modification of the secondary base station in accordance with the conditions, in a state where it is basically connected to the master base station.
The at least one processor according to various embodiments may control the transceiver to simultaneously transmit and receive packets to and from the master base station and the secondary base station. The processor according to various embodiments of the disclosure may detect radio link failure or radio link recovery with respect to at least one cell related to the base station. For example, if SCG-RLF is determined, the processor may stop SCG transmission to a gNB-DU. For example, if the SCG-RLF is determined, the processor may control the transceiver to transmit the information related to the RLF (SCG failure information) to the master base station (MeNB). For example, the SCG failure information may include failure cause (e.g., out of sync) information. In this case, the master base station (eNB) may transmit the SCG failure information received from the UE to the secondary base station (gNB). For example, if the radio link recovery is detected, the processor may perform a measurement operation with respect to at least one cell (NR cell) related to the secondary base station. Further, if a report event is triggered during the measurement, the processor may control the transceiver to transmit a measurement report (MR) to the master base station (eNB). For example, if an event in which a radio link of the PSCell of the secondary base station is recovered occurs, the MR may be triggered. For example, the master base station (eNB) may transmit the measurement report (NR MR) received from the UE to the secondary base station (gNB). Even if RLF or RL recovery occurs in a secondary node during dual connectivity operation between heterogeneous or homogeneous RATs, efficient failure/recovery management can be performed through the base-station-led stop/resume operations with respect to the existing cell and bearer, and through this, a reduced data interruption time as compared with the existing operation can be expected.
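Taken together, the CU-CP behavior described above amounts to a simple dispatch: on radio link failure, signal stop to both the CU-UP and the DU; on radio link recovery, signal resume to both. The following is a minimal sketch with a hypothetical function name; only the two message names (BEARER CONTEXT MODIFICATION REQUEST and UE CONTEXT MODIFICATION REQUEST) come from the description above.

```python
# Illustrative sketch of the SgNB-CU-CP branching on a radio link event.
# On RLF, "stop" indicators are sent to the CU-UP and the DU; on recovery,
# "resume" indicators are sent. The function name is hypothetical.

def cu_cp_on_radio_link_event(event):
    """Return the (recipient, message, indicator) tuples the CU-CP would send."""
    if event == "rlf":
        indicator = "stop"
    elif event == "recovery":
        indicator = "resume"
    else:
        raise ValueError(f"unexpected radio link event: {event}")
    return [
        ("CU-UP", "BEARER CONTEXT MODIFICATION REQUEST", indicator),
        ("DU", "UE CONTEXT MODIFICATION REQUEST", indicator),
    ]
```

The symmetry of the two branches is the point: the same pair of interfaces carries the stop indication on failure and the resume indication on recovery.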
In the above-described detailed embodiments of the disclosure, the elements included in the disclosure may be expressed in the singular or plural form depending on the proposed detailed embodiment. However, the singular or plural expression has been selected to suit the proposed situation for convenience of description, and the disclosure is not limited to the singular or plural elements. Although an element has been expressed in the plural form, it may be configured in the singular form. Although an element has been expressed in the singular form, it may be configured in the plural form. The embodiments described in this specification have been individually described, but two or more of the embodiments may be combined and practiced. Although the detailed embodiments have been described in the detailed description of the disclosure, the disclosure may be modified in various ways without departing from the scope of the disclosure. Accordingly, the scope of the disclosure should not be limited to the above-described embodiments, but should be defined not only by the claims, but also by their equivalents. The embodiments of the disclosure and the terms used in the embodiments are not intended to limit the technology described in this document to a specific embodiment, but should be construed as including various changes, equivalents, and/or alternatives of a corresponding embodiment. Regarding the description of the drawings, similar reference numerals may be used for similar elements. An expression in the singular number may include an expression in the plural number unless clearly defined otherwise in the context. In this document, an expression such as “A or B”, “at least one of A or/and B”, “A, B or C”, or “at least one of A, B and/or C” may include all possible combinations of the listed items together.
Expressions such as “a first,” “a second,” “the first,” and “the second” may modify corresponding elements regardless of sequence and/or importance, are used only to distinguish one element from another element, and do not limit the corresponding elements. When it is described that one (e.g., first) element is “(operatively or communicatively) connected to” or “coupled with” another (e.g., second) element, the one element may be directly connected to the other element or may be connected to the other element through yet another element (e.g., a third element). The term “module” used in the disclosure includes a unit configured with hardware, software, or firmware, and may be used interchangeably with a term such as logic, a logical block, a part, or a circuit. The module may be an integrated part, a minimum unit that performs one or more functions, or a part thereof. For example, the module may be configured with an application-specific integrated circuit (ASIC). The various embodiments of the disclosure may be implemented as software (e.g., a program) including instructions stored in machine (e.g., computer)-readable storage media (e.g., an internal memory or external memory). A device is an apparatus capable of fetching instructions stored in the storage media and operating according to the fetched instructions, and may include a base station or UE according to various embodiments. If an instruction is executed by the processor (e.g., the controller 2020 of FIG. 20, the controller 2120 of FIG. 21, the controller 2220 of FIG. 22, or the controller 2320 of FIG. 23), a function corresponding to the instruction may be performed directly by the processor or may be performed using other elements under the control of the processor. The instruction may include code generated or executed by a compiler or interpreter. The machine-readable storage media may be provided in the form of a non-transitory storage medium.
In this case, the term “non-transitory” means that the storage media do not include a signal and are tangible, and does not distinguish whether data is stored in the storage media semi-permanently or temporarily. The method according to various embodiments disclosed in the disclosure may be included in a computer program product and provided. The computer program product may be traded as a product between a seller and a purchaser. The computer program product may be distributed online in the form of device-readable storage media (e.g., compact disc read only memory (CD-ROM)) or through an app store (e.g., PlayStore™). In the case of online distribution, at least some of the computer program product may be at least temporarily stored or temporarily generated in storage media, such as the memory of the server of a manufacturer, the server of an app store, or a relay server. Each of the elements (e.g., a module or program) according to various embodiments may be configured with a single entity or a plurality of entities. Some of the above-described sub-elements may be omitted, or other sub-elements may be further included, in various embodiments. Alternatively or additionally, some elements (e.g., modules or programs) may be integrated into one entity, and may perform a function, performed by each corresponding element prior to the integration, identically or similarly. Operations performed by a module, a program, or other elements according to various embodiments may be executed sequentially, in parallel, repeatedly, or heuristically; at least some operations may be executed in a different order or be omitted, or other operations may be added. The methods of the embodiments illustrated in FIGS. 1 to 23 can include a combination of methods from more than one illustration.
For example, while FIGS. 1 to 23 illustrate operations related to a secondary node when a radio link failure or recovery occurs in the secondary node based on various embodiments, the methods can include a combination of methods from more than one illustration. While the disclosure has been shown and described with reference to various embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the disclosure as defined by the appended claims and their equivalents.
11863373

DESCRIPTION

The following contains specific information pertaining to example implementations in the present disclosure. The drawings and their accompanying detailed disclosure are directed to merely example implementations. However, the present disclosure is not limited to merely these example implementations. Other variations and implementations of the present disclosure will occur to those skilled in the art. Unless noted otherwise, like or corresponding elements among the figures may be indicated by like or corresponding reference numerals. Moreover, the drawings and illustrations in the present disclosure are generally not to scale and are not intended to correspond to actual relative dimensions. For consistency and ease of understanding, like features are identified (although, in some examples, not illustrated) by numerals in the example figures. However, the features in different implementations may differ in other respects, and thus shall not be narrowly confined to what is illustrated in the figures.
References to “one implementation,” “an implementation,” “example implementation,” “various implementations,” “some implementations,” “implementations of the present disclosure,” etc., may indicate that the implementation(s) of the present disclosure may include a particular feature, structure, or characteristic, but not every possible implementation of the present disclosure necessarily includes the particular feature, structure, or characteristic. Further, repeated use of the phrase “in one implementation,” “in an example implementation,” or “an implementation” does not necessarily refer to the same implementation, although it may. Moreover, any use of phrases like “implementations” in connection with “the present disclosure” is never meant to characterize that all implementations of the present disclosure must include the particular feature, structure, or characteristic, and should instead be understood to mean “at least some implementations of the present disclosure” include the stated particular feature, structure, or characteristic. The term “coupled” is defined as connected, whether directly or indirectly through intervening components, and is not necessarily limited to physical connections. The term “comprising,” when utilized, means “including, but not necessarily limited to”; it specifically indicates open-ended inclusion or membership in the so-disclosed combination, group, series, and the equivalent. The term “and/or” herein is only an association relationship describing associated objects, and represents that three relationships may exist; for example, “A and/or B” may represent that: A exists alone, A and B both exist, or B exists alone. “A and/or B and/or C” may represent that at least one of A, B, and C exists. In addition, the character “/” used herein generally represents that the former and latter associated objects are in an “or” relationship.
Additionally, for the purpose of non-limiting explanation, specific details, such as functional entities, techniques, protocols, standards, and the like, are set forth for providing an understanding of the disclosed technology. In other examples, detailed disclosure of well-known methods, technologies, systems, architectures, and the like is omitted so as not to obscure the present disclosure with unnecessary details. Persons skilled in the art will immediately recognize that any NW function(s) or algorithm(s) in the present disclosure may be implemented by hardware, software, or a combination of software and hardware. Disclosed functions may correspond to modules that may be software, hardware, firmware, or any combination thereof. The software implementation may comprise computer-executable instructions stored on computer-readable media such as memory or other types of storage devices. For example, one or more microprocessors or general-purpose computers with communication processing capability may be programmed with corresponding executable instructions and carry out the disclosed NW function(s) or algorithm(s). The microprocessors or general-purpose computers may be formed of Application-Specific Integrated Circuits (ASICs), programmable logic arrays, and/or one or more Digital Signal Processors (DSPs). Although some of the example implementations in the present disclosure are directed to software installed and executing on computer hardware, alternative example implementations implemented as firmware, or as hardware, or as a combination of hardware and software, are well within the scope of the present disclosure.
The computer-readable medium includes, but is not limited to, Random Access Memory (RAM), Read-Only Memory (ROM), Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory, Compact Disc Read-Only Memory (CD-ROM), magnetic cassettes, magnetic tape, magnetic disk storage, or any other equivalent medium capable of storing computer-readable instructions. A radio communication NW architecture (e.g., a Long Term Evolution (LTE) system, an LTE-Advanced (LTE-A) system, or an LTE-Advanced Pro system) typically includes at least one Base Station (BS), at least one user equipment (UE), and one or more optional NW elements that provide connection towards an NW. The UE communicates with the NW (e.g., a Core NW (CN), an Evolved Packet Core (EPC) NW, an Evolved Universal Terrestrial Radio Access NW (E-UTRAN), a Next-Generation Core (NGC), or the Internet) through a Radio Access NW (RAN) established by the BS. It should be noted that, in the present disclosure, a UE may include, but is not limited to, a mobile station, a mobile terminal or device, or a user communication radio terminal. For example, a UE may be portable radio equipment, which includes, but is not limited to, a mobile phone, a tablet, a wearable device, a sensor, or a Personal Digital Assistant (PDA) with wireless communication capability. The UE is configured to receive and transmit signals over an air interface to one or more cells in a RAN.
A BS may include, but is not limited to, a Node B (NB) as in the Universal Mobile Telecommunication System (UMTS), an evolved Node B (eNB) as in LTE-A, a Radio NW Controller (RNC) as in the UMTS, a Base Station Controller (BSC) as in the Global System for Mobile communications (GSM)/GSM EDGE Radio Access NW (GERAN), a Next Generation eNB (ng-eNB) as in an E-UTRA BS in connection with the 5GC, a next-generation Node B (gNB) as in the 5G Access NW (5G-AN), and any other apparatus capable of controlling radio communication and managing radio resources within a cell. The BS may connect to and serve one or more UEs through a radio interface to the NW. A BS may be configured to provide communication services according to at least one of the following Radio Access Technologies (RATs): Worldwide Interoperability for Microwave Access (WiMAX), GSM (often referred to as 2G), GERAN, General Packet Radio Service (GPRS), UMTS (often referred to as 3G) based on basic Wideband-Code Division Multiple Access (W-CDMA), High-Speed Packet Access (HSPA), LTE, LTE-A, enhanced LTE (eLTE), NR (often referred to as 5G), and LTE-A Pro. However, the scope of the present disclosure should not be limited to the protocols previously disclosed. The BS may be operable to provide radio coverage to a specific geographical area using a plurality of cells included in the RAN. The BS may support the operations of the cells. Each cell is operable to provide services to at least one UE within its radio coverage. More specifically, each cell (often referred to as a serving cell) may provide services to one or more UEs within its radio coverage (e.g., each cell schedules the Downlink (DL) and optionally Uplink (UL) resources to at least one UE within its radio coverage for DL and optionally UL packet transmissions). The BS may communicate with one or more UEs in the radio communication system through the plurality of cells.
A cell may allocate sidelink (SL) resources for supporting proximity service (ProSe). Each cell may have overlapped coverage areas with other cells. In Multi-RAT Dual Connectivity (MR-DC) cases, the primary cell of a Master Cell Group (MCG) or a Secondary Cell Group (SCG) may be called a Special Cell (SpCell). A Primary Cell (PCell) may refer to the SpCell of an MCG. A PSCell may refer to the SpCell of an SCG. MCG refers to a group of serving cells associated with the Master Node (MN), comprising the SpCell and optionally one or more secondary cells (SCells). SCG refers to a group of serving cells associated with the Secondary Node (SN), comprising the SpCell and optionally one or more SCells. As previously disclosed, the frame structure for NR is to support flexible configurations for accommodating various next generation (e.g., 5G) communication requirements, such as eMBB, mMTC, and URLLC, while fulfilling high reliability, high data rate, and low latency requirements. The orthogonal frequency-division multiplexing (OFDM) technology, as agreed in the 3rd Generation Partnership Project (3GPP), may serve as a baseline for an NR waveform. The scalable OFDM numerology, such as the adaptive sub-carrier spacing, the channel bandwidth, and the cyclic prefix (CP), may also be used. Additionally, two coding schemes are considered for NR: (1) low-density parity-check (LDPC) codes and (2) polar codes. The coding scheme adaptation may be configured based on the channel conditions and/or service applications. Moreover, it is also considered that, in a transmission time interval of a single NR frame, at least DL transmission data, a guard period, and UL transmission data should be included, where the respective portions of the DL transmission data, the guard period, and the UL transmission data should also be configurable, for example, based on the NW dynamics of NR. In addition, SL resources may also be provided in an NR frame to support ProSe services.
An objective of the 5G new radio access technology is to identify and develop technology components needed for new radio systems, which should be able to use any spectrum band ranging at least up to 100 GHz. Supporting carrier frequencies up to 100 GHz brings several challenges in the area of radio propagation. As the carrier frequency increases, the path loss also increases. In lower frequency bands (e.g., <6 GHz), the required cell coverage may be provided by forming a wide sector beam for transmitting downlink common channels. However, when utilizing a wide sector beam on higher frequencies (e.g., >6 GHz), the cell coverage is reduced for the same antenna gain. To provide the required cell coverage on higher frequency bands, higher antenna gain is needed to compensate for the increased path loss. Beamforming is a signal processing technique used in antenna arrays for directional signal transmission/reception. With beamforming, a beam may be formed by combining elements in a phased array of antennas in such a way that signals at particular angles experience constructive interference while others experience destructive interference. Different beams may be utilized simultaneously using multiple arrays of antennas. To increase the antenna gain over that of a wide sector beam, larger antenna arrays (with the number of antenna elements ranging from tens to hundreds) are used to form high gain beams. Nonetheless, the high gain beams are narrow compared to a wide sector beam, so multiple beams for transmitting downlink common channels are needed to cover the required cell area. The number of concurrent high gain beams that an access point may form may be limited by the cost and complexity of the utilized transceiver architecture. In practice, on higher frequencies, the number of concurrent high gain beams is much less than the total number of beams required to cover the cell area. In other words, the access point may cover only part of the cell area by using a subset of beams at any given time.
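The constructive/destructive interference underlying beamforming can be illustrated numerically. This example is illustrative and not taken from the disclosure: for a uniform linear array with half-wavelength element spacing, the element phasors add coherently at the steering angle, giving an array-factor magnitude equal to the number of elements, while off-axis contributions partially or fully cancel.

```python
import cmath
import math

# Illustrative array factor of an N-element uniform linear array with
# half-wavelength spacing, steered to angle theta0. At the steering angle
# the element contributions interfere constructively (|AF| = N); away from
# it they interfere destructively.

def array_factor(n_elements, theta, theta0):
    """|sum of element phasors| observed at angle theta (radians), steered to theta0."""
    # per-element phase step for spacing d = lambda/2: psi = pi*(sin(theta) - sin(theta0))
    psi = math.pi * (math.sin(theta) - math.sin(theta0))
    return abs(sum(cmath.exp(1j * psi * n) for n in range(n_elements)))

n = 16
boresight = array_factor(n, 0.0, 0.0)              # constructive: magnitude n
off_axis = array_factor(n, math.radians(30), 0.0)  # destructive: near-complete cancellation
```

Doubling the element count doubles the peak array factor but roughly halves the beamwidth, which is the coverage-versus-gain trade-off the passage above describes.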
As a consequence, the gNB may utilize multiple beams to cover the whole coverage area, and each UE may be associated with one of those beams. When the UE moves and/or the environment varies, the best beam for the UE could change. Here, the Layer 1 (L1)/Layer 2 (L2) beam management procedure is operated to switch the current beam to a new beam. That may be called L1/L2 inter-beam mobility. The beam may be used on the downlink control channel. The design of beams should consider both coverage distance and robustness to UE mobility. Considering the low data rate requirement but high reliability requirement on the control channel, the beam should be wide enough to allow reasonable UE mobility and potential blockage. Choosing narrow beams would generate unnecessarily frequent beam switching and potentially frequent connection loss on the control channel. On the other hand, misalignment of the beam could result in the loss of an ongoing link of the control channel (which may be called beam failure). The gNB might not be able to use the same beam management procedure to switch to a new beam. Thus, a beam failure recovery (BFR) mechanism may be utilized. The UE may recognize a beam failure event based on measuring some downlink RSs, control channels, and/or data channels. One example of beam failure recognition is that the UE detects very low reference signal received power (RSRP) of the current serving beam based on the measurement of the downlink RS used for beam management. If beam failure is recognized (or detected), the UE may notify the gNB of this event through some UL transmission. Then the gNB may act to recover the beam accordingly. However, there is a need in the art to make the BFR procedure more efficient and better applied to the secondary cell (SCell).
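The threshold-and-timer rule for declaring beam failure (a configured number of beam failure instance indications arriving before a detection timer expires) can be sketched as follows; the class and parameter names are hypothetical, loosely echoing the MAC-layer counter behavior, and times are plain floats.

```python
# Sketch of the detection rule: the MAC layer counts beam failure instance
# indications from the physical layer and declares beam failure when the
# count reaches a configured maximum before the detection timer expires.
# If the timer expires first, counting starts over. Names are illustrative.

class BeamFailureDetector:
    def __init__(self, max_count, timer_s):
        self.max_count = max_count   # configured threshold of indications
        self.timer_s = timer_s       # detection timer duration in seconds
        self.count = 0
        self.deadline = None

    def on_beam_failure_instance(self, now):
        """Process one beam-failure-instance indication; return True on beam failure."""
        if self.deadline is not None and now > self.deadline:
            self.count = 0                   # timer expired: count afresh
        self.count += 1
        self.deadline = now + self.timer_s   # (re)start the detection timer
        return self.count >= self.max_count
```

Sparse indications never accumulate to the threshold because the timer expiry resets the count, so only a burst of indications within the timer window declares beam failure.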
Based on the previously disclosed issues, a UE may be configured with a BFR procedure which is used for indicating to the serving gNB a new synchronization signal block (SSB) and/or channel state information-RS (CSI-RS) when beam failure is detected on the serving SSB(s)/CSI-RS(s). For beam failure detection, the gNB configures the UE with beam failure detection reference signals (SSB or CSI-RS), and the UE declares beam failure when the number of beam failure instance indications from the physical layer reaches a configured threshold before a configured timer expires. SSB-based Beam Failure Detection is based on the SSB associated with the initial DL Bandwidth Part (BWP) and may only be configured for the initial DL BWPs and for DL BWPs containing the SSB associated with the initial DL BWP. For other DL BWPs, Beam Failure Detection may only be performed based on CSI-RS. When beam failure (on SpCell) is detected, the UE may perform a random-access channel-based (RACH-based) BFR procedure with the following steps: triggering BFR by initiating a Random Access (RA) procedure on the SpCell; and/or selecting a suitable beam to perform BFR (if the gNB has provided dedicated RA resources for certain beams, those will be prioritized by the UE). Upon completion of the RA procedure, the BFR is considered complete. The following may be used to further elaborate the terms, examples, embodiments, actions, behaviors, alternatives, aspects, or claims mentioned in the present disclosure.
UE: The UE may be referred to as a PHY/MAC/RLC/PDCP/SDAP entity. The PHY/MAC/RLC/PDCP/SDAP entity may refer to the UE.
NW: The NW may be a network node, a TRP, a cell (e.g., SpCell, PCell, PSCell, and/or SCell), an eNB, a gNB, and/or a base station.
Serving Cell: A PCell, a PSCell, or an SCell. The serving cell may be an activated or a deactivated serving cell.
SpCell: For DC operation, the term Special Cell refers to the PCell of the MCG or the PSCell of the SCG, depending on whether the MAC entity is associated with the MCG or the SCG, respectively. Otherwise, the term Special Cell refers to the PCell. A Special Cell supports PUCCH transmission and contention-based RA and is always activated.
Component Carrier (CC): The CC may be a PCell, PSCell, and/or SCell.
UL resource: The UL resource may be a RACH resource, PUCCH resource, and/or PUSCH resource. The UL resource may be scheduled by a dynamic grant (e.g., via PDCCH) and/or configured by the RRC (e.g., type 1/type 2 configured UL grant or pre-configured in the RRC configuration).
BFR procedure: The BFR procedure may be the SCell BFR procedure and/or the RACH-based BFR procedure.
RACH-based BFR procedure: The RACH-based BFR procedure may be performed based on a contention-free RA procedure and/or a contention-based RA procedure. The RACH-based BFR procedure is initiated when the corresponding RA procedure is initiated, is ongoing when the corresponding RA procedure is ongoing, is stopped when the corresponding RA procedure is stopped, and is completed when the corresponding RA procedure is completed.
SCell BFR procedure: The SCell BFR procedure may be performed based on the BFR-SR and/or the BFR MAC CE.
Beam: The term “beam” may be replaced by a spatial filter. For example, when the UE reports a preferred gNB TX beam, the UE is essentially selecting a spatial filter used by the gNB. The term “beam information” is used to provide information about which beam/spatial filter is being used/selected. In one example, individual reference signals are transmitted by applying individual beams (spatial filters). Thus, the beam or the beam information may be represented by the reference signal resource index(es). The beam may be a DL and/or UL beam. The beam may be a Tx beam and/or Rx beam.
The beam may be a UE beam and/or NW beam. The beam may be referred to as a reference signal (e.g., SSB, CSI-RS, and/or SRS) and/or TCI state. The (new) beam may be indicated via a reference signal (e.g., SSB, CSI-RS, and/or SRS) and/or a TCI state.
Serving beam: The serving beam for the UE is a beam generated by the network (e.g., a TRP) which is used to communicate with the UE, e.g., for transmission and/or reception.
The BFR-SR of SCell BFR mentioned in the present disclosure may be replaced by a PRACH transmission. For example, the UE may perform the PRACH transmission (e.g., transmit a preamble) to request a UL resource for the BFR MAC CE. The BFR MAC CE of SCell BFR mentioned in the present disclosure may be replaced by transmitting Uplink Control Information (UCI). For example, the BFR-related information may be included in the UCI, such as: (failed) CC (or cell) information (e.g., cell index); (failed) set/group(s) of cells (e.g., the set/group may be pre-configured by the NW); (failed) transmission and reception point (TRP) information; the corresponding measurement result (e.g., RSRP, Signal to Interference plus Noise Ratio (SINR), etc.) of the (failed) CC, set/group of cells, or TRP; candidate beam information (or new beam information), e.g., one or more qualified beams based on measuring the NBI RS; and/or no-new-beam information (e.g., if there is no new beam with RSRP higher than a threshold for the (failed) CC, set/group of cells, or TRP). For the NW side, the NW may have multiple TRPs (either centralized or distributed). Each TRP may form multiple beams for transmission or reception. The number of beams and the number of simultaneous beams in the time/frequency domain may depend on the number of antenna array elements and the RF at the TRP. The TRP may apply beamforming to both data and control signaling transmission or reception. The number of beams generated concurrently by a TRP depends on the TRP capability, e.g.,
the maximum number of beams generated concurrently by different TRPs in the same cell may be the same, and those in different cells may be different. Beam sweeping may be necessary, e.g., for the control signaling to be provided in every direction. The UE may perform beamforming for transmission or reception. The UE may generate multiple UE beams concurrently and be served by multiple serving beams from one or multiple TRPs of the same cell. The same or different (DL or UL) data could be transmitted on the same radio resource via different beams for diversity or throughput gain. In one embodiment (e.g., the specification as defined for Rel-15), the RACH-based BFR mechanism is only applied for the special cell (SpCell), e.g., the primary cell (PCell) and the primary secondary cell (PSCell). If beam blockage happens on an SCell, the only option is to rely on the network (NW) to handle it, e.g., the SCell beam failure detection could be based either on the absence of an acknowledgement (ACK)/negative acknowledgement (NACK) feedback for the scheduled DL transmission in the SCell or on a Channel Quality Indicator (CQI) report in the SCell. If the beam failure occurs, the NW may release this SCell and re-schedule the data transmission. Under such circumstances, this implementation may degrade scheduling efficiency and increase higher-layer signaling latency. In order to quickly recover the beam (e.g., by changing the serving beam) from the beam failure on the SCell, in another embodiment (e.g., the specification as defined for Rel-16), the detailed signaling configuration and/or the BFR procedure is discussed and determined to support the SCell BFR. FIG. 1 illustrates a SCell BFR procedure 10 according to an example implementation of the present disclosure. As illustrated in FIG. 1, the SCell BFR procedure 10 includes the following steps. Step 102 of the SCell BFR procedure 10 performs a beam failure detection (BFD) by a UE 182.
Specifically, a BFD RS (e.g., SSB and/or CSI-RS) may be explicitly or implicitly configured for the UE to detect any beam failure (event). When the BFD RS is configured implicitly, the BFD RS may be transmitted in the active BWP of either a current CC or another CC. In one aspect of the embodiments, considering the physical layer of the UE, the UE may assess the radio link quality according to the BFD RS. The UE may provide a BFI indication to a higher layer (e.g., the MAC layer), with a periodicity, when the radio link quality is lower than a first threshold (e.g., an RSRP threshold for the BFD RS). The UE may increment the value of the BFI counter based on the previously disclosed Beam Failure Detection (BFD). In one aspect of the embodiments, considering the MAC layer of the UE, the UE may receive the BFI indication from a lower layer (e.g., the physical layer). The beam failure (event) is determined/detected when the value of the BFI counter is equal to or higher than a second threshold. For example, if the incremented value of the BFI counter reaches the second threshold (e.g., a configured maximum number, specifically the beamFailureInstanceMaxCount information element (IE)), the BFR procedure for the serving cell may be triggered. One beamFailureInstanceMaxCount IE may be configured for each serving cell. In other words, the BFI counter may be used for counting the number of BFI(s), specifically BFI_COUNTER, such that the BFI_COUNTER may be used for each serving cell. In one aspect of the embodiments, the UE may implement a BFD timer, specifically the beamFailureDetectionTimer, that resets the BFI counter upon expiration. The beamFailureDetectionTimer may be configured for each serving cell.
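The counter-and-timer interaction described above can be sketched as follows. This is a minimal illustrative model, not the normative MAC procedure: the class name and method API are assumptions, while the counter/timer semantics mirror the beamFailureInstanceMaxCount and beamFailureDetectionTimer behavior described in the text (each BFI restarts the timer and increments the counter; an expired timer resets the counter).

```python
class BeamFailureDetector:
    """Illustrative per-serving-cell beam failure detection at the MAC layer.

    Counter/timer names mirror the 3GPP IEs mentioned in the text
    (beamFailureInstanceMaxCount, beamFailureDetectionTimer); the class
    itself and its API are hypothetical.
    """

    def __init__(self, beam_failure_instance_max_count: int,
                 beam_failure_detection_timer_s: float):
        self.max_count = beam_failure_instance_max_count
        self.timer_duration = beam_failure_detection_timer_s
        self.bfi_counter = 0          # BFI_COUNTER, one per serving cell
        self.timer_deadline = None    # expiry time of beamFailureDetectionTimer

    def on_bfi_indication(self, now: float) -> bool:
        """Called when the PHY reports a beam failure instance (BFI).

        Returns True when beam failure is declared (BFR is triggered)."""
        # An expired detection timer resets the counter before the new
        # BFI is counted.
        if self.timer_deadline is not None and now >= self.timer_deadline:
            self.bfi_counter = 0
        # Each BFI (re)starts the detection timer and increments the counter.
        self.timer_deadline = now + self.timer_duration
        self.bfi_counter += 1
        return self.bfi_counter >= self.max_count
```

With max count 3 and a 1-second timer, three indications inside the window declare beam failure, while sparse indications keep resetting the counter.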
Accordingly, as previously disclosed, when the beam failure associated with at least one serving cell (e.g., SCell) is detected, the UE may trigger the BFR procedure for the serving cell (e.g., SCell) and/or trigger a dedicated scheduling request (SR)-like PUCCH resource for a BFR request (BFRQ), which may be introduced as a BFR-SR procedure in the following disclosure. Step 104 of the SCell BFR procedure 10 performs an NBI by the UE 182. In one aspect of the embodiments, the UE may select a new beam or a candidate beam for the serving cell(s) based on measuring an NBI RS. For example, the UE may determine whether an L1-RSRP measurement result is higher than a predefined threshold or not. Next, a downlink RS for the NBI may be transmitted in an active BWP of the CC which is configured to be monitored for the same BFR, or another CC within the same band of the serving cell (e.g., SCell). The UE may expect the gNB to configure at least one new beam RS if the BFR for the corresponding serving cell (e.g., SCell) is configured. If at least one new beam RS is not configured, all SSBs may be considered as new beam RS candidates. For the BFR, each BWP of a serving cell (e.g., SCell) may support a maximum number of 64 RSs for new beam identification. Step 106 of the SCell BFR procedure 10 performs a BFRQ by the UE 182. In the BFRQ, the UE may send a BFR-SR over a PCell, a PSCell, and/or an SCell, and the BFR-SR may be used to indicate a beam failure event of a CC(s) and/or to request a UL resource in order to transmit more information related to the beam failure. It is noted that whether this first step should be performed may be based on whether any UL resource(s) is available. Specifically, the BFR-SR may be skipped if a UL resource is available and/or could be used for a BFR report (e.g., BFR MAC CE) transmission. It is noted that when the UE determines not to perform (or to skip) the BFR-SR, the UE may (directly) send a BFR MAC CE. In the BFRQ, the UE may send a BFR MAC CE.
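The decision of whether to skip the BFR-SR step can be sketched as below. The function name and the returned strings are hypothetical; only the rule itself (skip the BFR-SR when an available UL grant can already carry the BFR MAC CE) comes from the text.

```python
def select_bfrq_action(ul_grant_available: bool,
                       grant_fits_bfr_mac_ce: bool) -> str:
    """Illustrative BFRQ decision: the BFR-SR step is skipped when a UL
    resource is already available and can carry the BFR MAC CE; otherwise
    the UE first transmits the BFR-SR to request a UL grant."""
    if ul_grant_available and grant_fits_bfr_mac_ce:
        # UL resource usable for the BFR report: send the MAC CE directly.
        return "send BFR MAC CE directly"
    # No usable UL resource: request one via the dedicated BFR-SR first.
    return "send BFR-SR, then BFR MAC CE on granted UL resource"
```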
In one aspect of the embodiments, the BFR MAC CE may include at least one of the following information:
a failed-CC(s) information (e.g., cell index);
a new-beam(s) information (e.g., the new beam may be selected based on measuring the NBI RS);
a no-new-beam information (e.g., no new beam with L1-RSRP higher than a threshold);
a cell identity of the serving cell which triggers the BFR procedure;
a beam-presence indicator of the serving cell which triggers the BFR procedure; and/or
a candidate beam indicator of the serving cell which triggers the BFR procedure.
The BFR MAC CE may be transmitted (only) via the UL grant which is requested by the BFR-SR. Alternatively, the BFR MAC CE may also be transmitted via any UL grant (e.g., a UL grant via RAR, a dynamic UL grant via a physical downlink control channel (PDCCH), and/or a configured grant), which does not limit the scope of the embodiments. Step 108 of the SCell BFR procedure 10 is the NW transmitting a BFR response (BFRR). In one aspect of the embodiments, after the UE transmits the BFRQ (e.g., the BFR-SR and/or the BFR MAC CE), the UE may monitor for a BFRR (e.g., via PDCCH monitoring) from the NW (i.e., the BFRR is received from the PDCCH of the serving cell). In one aspect of the embodiments, the BFRR may be transmitted, from the NW, on the PCell, the PSCell, and/or the SCell. The BFRR may be transmitted, from the NW, on a CC where the UE transmits the BFRQ. The BFRR may be transmitted, from the NW, on another CC, which is not the same as the CC on which the UE transmits the BFRQ, e.g., via cross-carrier scheduling. In one aspect of the embodiments, the BFRR may be a UL grant scrambled with/addressed to a cell-radio network temporary identifier (C-RNTI)/modulation coding scheme (MCS)-C-RNTI. In one aspect of the embodiments, the BFRR may schedule a new transmission for the same Hybrid Automatic Repeat Request (HARQ) process as a physical uplink shared channel (PUSCH) carrying the BFR MAC CE.
Accordingly, upon receiving the BFRR, the UE may consider that the BFR procedure is successfully completed. More detailed terminology and/or definitions may be disclosed hereinafter. In one embodiment, the BFD RS may be a set of reference signals (e.g., SSB and/or CSI-RS) which may be used for the BFD. Different sets of the BFD RSs may be associated with different serving cells/CCs, sets/groups of cells, or TRPs. In one embodiment, assume a first set of the BFD RSs is associated with a first serving cell/CC. If the UE detects that the qualities of the first set of the BFD RSs are all lower than a threshold for a period, the UE may detect that the first serving cell/CC has failed (or that beam failure has occurred). The BFD RS may be transmitted in (the active BWP of) either a current serving cell/CC or another serving cell/CC (e.g., within the same band). In one embodiment, the NBI RS may be a set of reference signals (e.g., SSB and/or CSI-RS) which may be used for the NBI. Different sets of the NBI RSs may be configured for different serving cells/CCs, sets/groups of cells, or TRPs. In one embodiment, assume a first set of the NBI RSs is configured for a first serving cell/CC. If beam failure occurs in the first serving cell/CC, the UE may select a new beam based on measuring the first set of the NBI RSs. The UE may select a new beam which has the highest RSRP or has an RSRP higher than a threshold within the first set of the NBI RSs. The UE may include the information of the NBI RS in the BFR report (e.g., BFR MAC CE). The NBI RS may be transmitted in (the active BWP of) the serving cell/CC which is configured to be monitored for BFR or another serving cell/CC within the same band. In one embodiment, the SR may be used for requesting an uplink shared channel (UL-SCH) resource (e.g., PUSCH resource) for a new transmission. The UE may be configured with zero, one, or more SR configurations.
An SR configuration may consist of a set of PUCCH resources for the SR across different BWPs and cells. For a logical channel, at most one PUCCH resource for the SR is configured per BWP. Each SR configuration may correspond to one or more logical channels. Each logical channel may be mapped to zero or one SR configuration. The SR configuration of the logical channel that triggered a buffer status report (BSR) (if such a configuration exists) is considered as the corresponding SR configuration for the triggered SR. When the SR is triggered, the SR shall be considered as pending until it is cancelled. In one embodiment, the BFR-SR may be a BFRQ. The BFR-SR may be a dedicated SR-like PUCCH resource for BFR. The BFR-SR may be used to indicate a beam failure event to the NW and/or used for requesting a UL-SCH resource (e.g., for a BFR MAC CE transmission). The UE may be configured with zero, one, or more BFR-SR configurations. The PUCCH resource for the BFR-SR may be configured per BWP, per TRP, per serving cell/CC, per set of CCs, per CG, and/or per UE. The PUCCH resource for the BFR-SR may be configured on the PCell, the PSCell, and/or the (PUCCH) SCell. The BFR-SR may be transmitted on the PCell, the PSCell, and/or the SCell accordingly. The BFR-SR may be a cross-cell transmission, e.g., the beam failure happens on an SCell, but the BFR-SR is transmitted on the PCell. The BFR-SR configuration may be a specific configuration which may not be one of the SR configurations, e.g., the identification (ID) of the BFR-SR configuration is not indicated by schedulingRequestId. Alternatively, the BFR-SR configuration may be one of the SR configurations, e.g., the ID of the BFR-SR configuration is indicated by schedulingRequestId. A radio resource control (RRC) parameter may be used to indicate which SR configuration corresponds to the BFR-SR. The ID of the BFR-SR configuration may be configured per BWP, e.g., as a part of the BFR configuration.
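The earlier constraint that each logical channel maps to zero or one SR configuration, while one SR configuration may serve several logical channels, can be sketched as a simple lookup. All identifiers and the example mapping values here are hypothetical, for illustration only.

```python
# Illustrative mapping: each logical channel (LCID) maps to at most one
# SR configuration, while one SR configuration can serve several logical
# channels. The LCIDs and configuration labels are hypothetical.
lcid_to_sr_config = {
    1: "sr-config-0",   # control-plane-like logical channel
    4: "sr-config-1",   # data radio bearer logical channel
    5: "sr-config-1",   # second logical channel sharing the same SR config
    6: None,            # logical channel with no SR configuration
}

def sr_config_for_triggered_bsr(lcid: int):
    """Return the SR configuration corresponding to the logical channel
    whose BSR triggered the SR, if such a configuration exists."""
    return lcid_to_sr_config.get(lcid)
```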
The BFR-SR may have the highest priority of all the SR procedures applying legacy SR configurations. The BFR-SR configuration may be configured per BWP, per TRP, per serving cell/CC, per set of CCs, per cell group (CG), and/or per UE. In one embodiment, the BFR MAC CE may be a BFRQ. The BFR MAC CE may be transmitted on any available UL grant which could accommodate the BFR MAC CE. Alternatively, the BFR MAC CE may (only) be transmitted on a specific UL grant which is requested by the BFR-SR. Preferably, whether the specific UL grant is requested by the BFR-SR or not may be indicated via an implicit or explicit method. In one embodiment, the BFR MAC CE may be transmitted on a physical uplink shared channel (PUSCH). Alternatively, in another embodiment, the BFR MAC CE may be transmitted on any UL grant (e.g., the UL grant provided by a random access response (RAR), a type 1/type 2 configured grant, a dynamic grant, etc.). In some of the embodiments, the BFR MAC CE may include one or more of the following information:
(failed) CC (or cell) information (e.g., the cell index of the serving cell);
(failed) set/group(s) of cells (e.g., the set/group may be pre-configured by the NW);
(failed) TRP information;
the corresponding measurement result (e.g., RSRP, SINR, etc.) of the (failed) CC, set/group of cells, or TRP;
candidate beam information/indicator (or new beam information) (e.g., one or more qualified beam(s) based on measuring the NBI RS);
beam-presence information/indicator; and/or
no-new-beam information (e.g., if there is no new beam with RSRP higher than a threshold for the (failed) CC, set/group of cells, or TRP).
In one embodiment, multiple serving cells (e.g., PCell and/or SCells) may fail simultaneously, and the BFR MAC CE may carry information for multiple failed serving cells (e.g., PCell and/or SCells). In another embodiment, if there is only one failed SCell, the information of that one failed SCell is included in the BFR MAC CE.
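As an illustration of carrying several failed serving cells in one report, a bitmap can mark each failed cell. The function name, bit ordering, and bitmap width below are illustrative assumptions, not the normative MAC CE layout.

```python
def encode_failed_cell_bitmap(failed_scell_ids, num_cells: int = 8) -> int:
    """Sketch of a failed-serving-cell bitmap: each bit position
    corresponds to one serving cell index, and a set bit marks that
    cell as failed. Illustrative only."""
    bitmap = 0
    for cell_id in failed_scell_ids:
        if not 0 <= cell_id < num_cells:
            raise ValueError(f"cell id {cell_id} outside bitmap")
        bitmap |= 1 << cell_id   # set the bit for this failed cell
    return bitmap
```

For example, failed SCells {1, 3} set bits 1 and 3, giving the bitmap 0b00001010.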
Two formats of the BFR MAC CE, including a single-entry MAC CE and/or a multi-entry MAC CE for carrying the information of the failed SCell, may be introduced. The UE may transmit the corresponding BFRQ information via the single-entry BFR MAC CE when beam failure happens (e.g., on only one serving cell) and/or transmit the corresponding BFRQ information via the multi-entry BFR MAC CE when beam failure occurs on multiple serving cells (e.g., PCell and/or SCells). FIG. 2 and FIG. 3 provide clarification. FIG. 2 illustrates a single-entry BFR MAC CE 20 according to an example implementation of the present disclosure, and FIG. 3 illustrates a multi-entry BFR MAC CE 30 according to an example implementation of the present disclosure. As illustrated in FIG. 2 and FIG. 3, the BFR MAC CE may include at least one of the following fields:
a Serving Cell ID field that indicates which serving cell(s) failed;
a ‘B’ field that indicates whether the new beam information corresponds to the identified failed serving cell; and/or
a new beam info field that indicates the CSI-RS or SSB with the L1-RSRP higher than the threshold configured for the BFR.
In one embodiment, for the multi-entry BFR MAC CE, multiple failed serving cell indexes may be indicated by a bitmap, where each bit corresponds to one serving cell.
BWP Switching During BFR Procedure
In some of the embodiments, for the BFR mechanism, the UE may be configured with a (set of) BFD RSs for the BFD and may be configured with a (set of) NBI RSs for the NBI. It is noted that the BFD RS and/or the NBI RS may be configured per (DL) BWP. In one embodiment, the configuration of the BFD RS and/or the NBI RS may be configured in a BWP configuration (e.g., BWP-DownlinkDedicated). In other words, each BWP of a cell may have a different (set of) BFD RSs and/or a different (set of) NBI RSs. For the BFD, the UE may assess the radio link quality associated with the BFD according to the BFD RS of an active BWP of a cell.
For example, the UE may count the number of BFIs when the quality of the BFD RS is worse than a threshold during a period of time. If the number of BFIs reaches a maximum number (e.g., a threshold), the UE may consider that beam failure has been detected on the cell for which the BFD RS is configured. Furthermore, when beam failure is detected on the cell, the UE may need to find a new beam (or a candidate beam) based on the (set of) NBI RSs configured for the active BWP of the cell. For the SCell BFR procedure 10 illustrated in FIG. 1, the UE may report the new beam information via the second step of the BFRQ, e.g., by carrying the NBI RS index in the BFR MAC CE. In some of the embodiments, the UE may switch the BWP of a cell during the BFR procedure for the cell. Under such circumstances, the UE may receive an indication (e.g., an RRC or a PDCCH signaling) for a BWP switching of the cell from the NW at any time point of the BFR procedure for the cell. In another embodiment, the UE may also switch the BWP of the cell due to expiration of a bwp-InactivityTimer of the cell during the BFR procedure for the cell. FIG. 4 illustrates a BWP switching during the BFR procedure 40 according to an example implementation of the present disclosure. As illustrated in FIG. 4, some issues may be introduced. For example, the NW may not be aware of whether the new beam information included in the BFR MAC CE is measured on the BWP before the BWP switching or after the BWP switching. If the BWP switching is performed after transmitting the BFR MAC CE or after generating the BFR MAC CE, the new beam information included in the BFR MAC CE may be invalid because the UE has changed to another BWP with different channel condition(s). Accordingly, one or more or any combination of the disclosed alternatives, aspects, examples, and/or embodiments may be taken into account to resolve such issues.
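The new-beam search described earlier (picking the NBI RS with the highest L1-RSRP, provided it exceeds the configured threshold, and otherwise reporting no new beam) can be sketched as follows. The helper name and the dict-based input are illustrative assumptions.

```python
def select_new_beam(nbi_rs_rsrp: dict, rsrp_threshold_dbm: float):
    """Pick a candidate beam from a set of NBI RS measurements: the RS
    with the highest L1-RSRP, provided it exceeds the configured
    threshold. Returns the RS index, or None when no new beam qualifies
    (the no-new-beam case). Input maps RS index -> L1-RSRP in dBm."""
    if not nbi_rs_rsrp:
        return None
    # Candidate with the highest measured RSRP within the configured set.
    best_rs = max(nbi_rs_rsrp, key=nbi_rs_rsrp.get)
    if nbi_rs_rsrp[best_rs] > rsrp_threshold_dbm:
        return best_rs
    return None   # no new beam above the threshold
```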
In one embodiment, the UE may switch the BWP of a cell (e.g., when the UE receives an indication for the BWP switching or when the bwp-InactivityTimer of the cell expires) during the BFR procedure for the cell. The UE may (only) perform the BWP switching if the UE has not transmitted the BFR MAC CE during the BFR procedure. The UE may not perform the BWP switching if the UE has transmitted the BFR MAC CE during the BFR procedure.
BWP Switching
In one embodiment, the UE may or may not perform BWP switching when the UE receives a signaling, and the signaling includes BWP information. For example, if the BWP information is different from the active (DL) BWP of the UE, the UE may perform BWP switching to the BWP indicated by the signaling. If the BWP information is the same as the active (DL) BWP of the UE, the UE may not perform BWP switching. The UE may determine to start or restart the bwp-InactivityTimer of a cell based on whether the UE performs BWP switching of the cell. If the UE performs BWP switching of a cell, the UE may start or restart the bwp-InactivityTimer of the cell. If the UE does not perform BWP switching of a cell, the UE may not start or restart the bwp-InactivityTimer of the cell. The duration of the bwp-InactivityTimer is in milliseconds (ms), after which the UE reverts to the default Bandwidth Part. A value of 0.5 ms is only applicable for carriers >6 GHz. When the network releases the timer configuration, the UE stops the bwp-InactivityTimer without switching to the default BWP. In one embodiment, the BWP switching may be controlled by one or more of the following procedures.
PDCCH Indicating a DL Assignment or an UL Grant
If the UE receives the PDCCH for the BWP switching of a serving cell, the UE may perform the BWP switching to a BWP indicated by the PDCCH (e.g., the PDCCH may include a BWP ID which is different from the current active BWP of the UE).
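The switch-only-if-different rule and the associated bwp-InactivityTimer handling described above can be sketched as a small state machine. The class and method names are hypothetical; the timer is modeled as a boolean flag rather than a real countdown.

```python
class BwpState:
    """Illustrative BWP-switching rule: switching happens only when the
    signalled BWP differs from the active one, and the bwp-InactivityTimer
    is (re)started only when a switch is actually performed."""

    def __init__(self, active_bwp_id: int):
        self.active_bwp_id = active_bwp_id
        self.inactivity_timer_running = False

    def on_bwp_signaling(self, signaled_bwp_id: int) -> bool:
        """Returns True when a BWP switch was performed."""
        if signaled_bwp_id == self.active_bwp_id:
            return False                      # no switch, timer untouched
        self.active_bwp_id = signaled_bwp_id  # perform BWP switching
        self.inactivity_timer_running = True  # start/restart the timer
        return True
```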
bwp-InactivityTimer
If the bwp-InactivityTimer associated with the DL BWP expires, the UE may perform the BWP switching to a default BWP (if defaultDownlinkBWP-Id is configured) or an initial BWP (which is indicated by initialDownlinkBWP).
RRC Signaling
Upon performing RRC (re-)configuration, a firstActiveDownlinkBWP IE contains the ID of the DL BWP to be activated. If the ID of the DL BWP (e.g., firstActiveDownlinkBWP-Id) is absent, the RRC (re-)configuration does not impose a BWP switch. Upon performing RRC (re-)configuration, the firstActiveUplinkBWP IE contains the ID of the UL BWP to be activated. If the ID of the UL BWP (e.g., firstActiveUplinkBWP-Id) is absent, the RRC (re-)configuration does not impose a BWP switch.
Upon Initiation of Random Access (RA) Procedure
Upon initiation of the RA procedure on a serving cell, the UE may:
1> if PRACH occasions are not configured for the active UL BWP:
2> switch the active UL BWP to the BWP indicated by initialUplinkBWP;
2> if the Serving Cell is a SpCell:
3> switch the active DL BWP to the BWP indicated by initialDownlinkBWP.
1> else:
2> if the Serving Cell is a SpCell:
3> if the active DL BWP does not have the same bwp-Id as the active UL BWP:
4> switch the active DL BWP to the DL BWP with the same bwp-Id as the active UL BWP.
Upon Reception of a Wake-Up Signal (WUS) Signaling (e.g., DCI Format 2_6) Indicating the BWP Switch
The UE may apply one or more or any combination of the following behaviors (e.g., if the UE switches the BWP of the cell when the UE is performing the BFR procedure for the cell). In some of the embodiments, the UE may cancel or stop the (ongoing) BFR procedure for the cell in a case that the UE switches the BWP of the cell when the UE is performing the BFR procedure for the cell. In one embodiment, the UE may initiate the BFR procedure for the cell(s) when the beam failure is detected (e.g., when the BFI counter for the cell reaches the BFI maximum count, i.e.,
the beamFailureInstanceMaxCount IE) on the cell. When the BFR procedure is ongoing, the UE may receive an indication (e.g., via PDCCH indicating a DL assignment, PDCCH indicating a UL grant, or via an RRC (re-)configuration), from the NW, to instruct the UE to switch the BWP of the cell. The UE may cancel or stop the BFR procedure for the cell if the UE switches the BWP of the cell based on the indication. In one embodiment, the UE may initiate the BFR procedure for the cell(s) when the beam failure is detected (e.g., when the BFI counter for the cell reaches the BFI maximum count, i.e. the beamFailureInstanceMaxCount IE) on the cell. When the BFR procedure is ongoing, the UE may switch the BWP (e.g., to initial/default BWP) of the cell if a BWP inactivity timer for the cell expires. The UE may cancel or stop the BFR procedure for the cell if the UE switches the BWP of the cell when the BWP inactivity timer expires. In one embodiment, the UE may initiate the BFR procedure for the cell(s) when the beam failure is detected (e.g., when the BFI counter for the cell reaches the BFI maximum count, i.e. the beamFailureInstanceMaxCount IE) on the cell when the UE is performing the RA procedure. When the BFR procedure is ongoing, the UE may switch the BWP (e.g., to initial/default BWP) of the cell during the RA procedure. In one embodiment, the UE may switch an UL BWP to another BWP indicated by the initialUplinkBWP if PRACH occasions are not configured for the UE's active UL BWP during the RA procedure. The UE may cancel or stop the BFR procedure for the cell if the UE switches the BWP of the cell during the RA procedure. In one embodiment, the UE may initiate the BFR procedure for the cell(s) when the beam failure is detected (e.g., when the BFI counter for the cell reaches the BFI maximum count, i.e. the beamFailureInstanceMaxCount IE) on the cell. When the BFR procedure is ongoing, the UE may receive the PDCCH for the BWP switching from the gNB. 
The PDCCH may instruct the UE to switch the BWP of the cell. The UE may cancel or stop the BFR procedure for the cell if the UE switches the BWP of the cell based on the instruction. Alternatively, the UE maintains the triggered BFR procedure if the UE ignores a BWP switching instruction. In one embodiment, if the BFR procedure is successfully completed upon reception of the PDCCH for the BWP switching (e.g., the UE receives the BFRR indicating the BWP switching) of the serving cell, the UE may perform the BWP switching indicated by the PDCCH. In some of the embodiments, the UE may cancel the (triggered) BFR MAC CE (or the corresponding reporting/generation procedure) for the cell if the UE switches the BWP of the cell when the UE is performing the BFR procedure for the cell. In one embodiment, the UE may trigger the BFR MAC CE (or the corresponding reporting/generation procedure) for the cell(s) when the beam failure is detected (e.g., when the BFI counter for the cell reaches the BFI maximum count, i.e., the beamFailureInstanceMaxCount IE) on the cell. When the BFR MAC CE has not been canceled, the UE may receive an indication, from the NW, to instruct the UE to switch the BWP of the cell. The UE may cancel the triggered BFR MAC CE if the UE switches the BWP of the cell based on the indication (e.g., the UE may not generate the BFR MAC CE). In one embodiment, the UE may trigger the BFR MAC CE (and/or the corresponding procedure) for a cell(s) when the beam failure is detected (e.g., when the BFI counter for the cell reaches the BFI maximum count, i.e., the beamFailureInstanceMaxCount IE) on the cell. When the BFR MAC CE has not been canceled, the UE may switch the BWP (e.g., to the initial/default BWP) of the cell if the BWP inactivity timer for the cell expires. The UE may cancel the triggered BFR MAC CE if the UE switches the BWP of the cell when the BWP inactivity timer expires (e.g., the UE may not generate the BFR MAC CE).
In some of the embodiments, the UE may cancel the pending BFR-SR (for the cell) if the UE switches the BWP of the cell when the UE is performing the BFR procedure for the cell. In one embodiment, the UE may trigger the BFR-SR when the beam failure is detected (e.g., when the BFI counter for the cell reaches the BFI maximum count, i.e. the beamFailureInstanceMaxCount IE) on the cell. When the BFR-SR is pending, the UE may receive the indication, from the NW, to instruct the UE to switch the BWP of the cell. The UE may cancel the pending BFR-SR if the UE switches the BWP of the cell based on the indication. In one embodiment, the UE may trigger the BFR-SR when the beam failure is detected (e.g., when the BFI counter for the cell reaches the BFI maximum count, i.e. the beamFailureInstanceMaxCount IE) on the cell. When the BFR-SR is pending, the UE may switch BWP (e.g., to initial/default BWP) of the cell if the BWP inactivity timer for the cell expires. The UE may cancel the pending BFR-SR if the UE switches the BWP of the cell when the BWP inactivity timer expires. In some of the embodiments, the UE may reset a counter for BFI indication (e.g., BFI counter) if the UE switches the BWP of the cell (e.g., activates an inactive BWP of the cell and deactivates an active BWP of the cell). Resetting the counter sets the value of the counter to zero. In one embodiment, the UE may receive the indication, from the NW, to instruct the UE to switch the BWP of the cell. The UE may reset the counter for the BFI indication for the cell if the UE switches the BWP of the cell based on the indication. In one embodiment, the UE may switch the BWP (e.g., to initial/default BWP) of the SCell if the BWP inactivity timer for the cell expires. The UE may reset the counter for the BFI indication for the cell if the UE switches the BWP of the cell when the BWP inactivity timer for the cell expires. 
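The per-cell clean-up described above, where a BWP switch during an ongoing BFR procedure may cancel the pending BFR-SR and the triggered BFR MAC CE and reset the BFI counter, can be sketched as below. The class, attribute, and method names are illustrative assumptions.

```python
class CellBfrContext:
    """Illustrative per-cell BFR state, showing the clean-up performed
    when the UE switches the BWP of the cell (NW indication or
    bwp-InactivityTimer expiry) during an ongoing BFR procedure."""

    def __init__(self):
        self.bfr_sr_pending = False
        self.bfr_mac_ce_triggered = False
        self.bfi_counter = 0

    def on_beam_failure(self):
        # Beam failure detected: trigger the BFR-SR and the BFR MAC CE.
        self.bfr_sr_pending = True
        self.bfr_mac_ce_triggered = True

    def on_bwp_switch(self):
        # BWP switch: cancel pending BFR signalling, reset the counter.
        self.bfr_sr_pending = False
        self.bfr_mac_ce_triggered = False
        self.bfi_counter = 0
```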
In some of the embodiments, the UE may reset the counter for the BFI indication in one or more of the following scenarios. In one embodiment, the UE may reset the counter for the beam failure instance indication (e.g., BFI counter) for the cell if the UE initiates the BFR procedure for the cell. In one embodiment, the UE may reset the counter for the BFI indication (e.g., BFI counter) for the cell if the UE triggers the BFR-SR. In one embodiment, the UE may reset the counter for the BFI indication (e.g., BFI counter) for the cell if the UE triggers the BFR MAC CE (reporting procedure). In one embodiment, the UE may reset the counter for the BFI indication (e.g., BFI counter) for the cell if the UE instructs the Multiplexing and Assembly procedure to generate the BFR MAC CE. In one embodiment, the UE may reset the counter for the BFI indication (e.g., BFI counter) for a cell if the UE receives the BFR response (BFRR) for the cell from the NW. In one embodiment, the UE may reset the counter for the BFI indication (e.g., BFI counter) for the cell if the UE considers the BFR procedure for the cell is successfully completed. In one embodiment, the UE may reset the counter for the BFI indication (e.g., BFI counter) for the cell if the UE considers the BFR procedure for the cell has failed. In one embodiment, the UE may reset the counter for the BFI indication (e.g., BFI counter) for the cell when the cell is deactivated, e.g., if the UE receives an associated Cell Activation/Deactivation MAC CE to deactivate the cell and/or when a Scell deactivation timer for the cell expires. In one embodiment, the UE may reset the counter for the BFI indication (e.g., BFI counter) for the cell if the UE receives a DL RRC message which (re)configures the (SCell) BFR corresponding configuration (e.g., the beamFailureDetectionTimer, the beamFailureInstanceMaxCount, or any of the reference signals used for beam failure detection). 
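The reset scenarios enumerated above amount to a set of events, any of which returns the per-cell BFI counter to zero. The following sketch summarizes them; the event-name strings are shorthand invented for this sketch, not identifiers from any specification.

```python
# Illustrative summary of the BFI-counter reset scenarios listed above.
# Any event in RESET_EVENTS sets the per-cell BFI counter back to zero.

RESET_EVENTS = {
    "bfr_procedure_initiated",
    "bfr_sr_triggered",
    "bfr_mac_ce_triggered",
    "bfr_mac_ce_generated",
    "bfrr_received",
    "bfr_completed",
    "bfr_failed",
    "cell_deactivated",
    "bfr_config_reconfigured",
}

def next_bfi_counter(current, event):
    """Return the BFI counter value after handling an event."""
    return 0 if event in RESET_EVENTS else current

print(next_bfi_counter(5, "bfrr_received"))   # 0: reception of the BFRR resets the counter
print(next_bfi_counter(5, "unrelated_event")) # 5: other events leave the counter unchanged
```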
In some of the embodiments, the UE may include BWP information (e.g., a BWP index) in the BFR MAC CE, for example, when generating the BFR MAC CE. The BWP information may be the active BWP of the UE when the UE generates the BFR MAC CE. The new beam information included in the BFR MAC CE may be associated with the BWP information included in the BFR MAC CE. For example, the NBI RS(s) is configured under the corresponding BWP configuration. In one embodiment, the BFR MAC CE may have a field (e.g., 2 bits) for a BWP ID indication. The field may indicate the DL/UL BWP of the new beam information included in the BFR MAC CE. In some of the embodiments, the UE may trigger a measurement of the NBI RS on the new BWP if the UE switches the BWP of the cell to the new BWP of the cell. In some of the embodiments, the UE may consider that the BFR procedure for the cell is successfully completed if the UE switches the BWP of the cell when the UE is performing the BFR procedure for the cell. In some of the embodiments, the UE may consider that the BFR procedure for the cell is not successful if the UE switches the BWP of the cell when the UE is performing the BFR procedure for the cell. In some of the embodiments, when receiving the BWP switching indication for a cell or when the BWP inactivity timer for the cell expires, the UE will send the BFR MAC CE via the newly switched BWP of the cell. For example, if the BFR occurs on an active BWP of a cell, the UE may trigger a BFR procedure for the cell. If, during the BFR procedure, the UE receives a BWP switching indication for the cell, the UE will perform the BWP switching first and then transmit the BFR MAC CE on the newly switched BWP of the cell. In some of the embodiments, after the BFR of the serving cell is triggered or initiated, if the BFR procedure is successfully completed upon (or based on) reception of the PDCCH for the BWP switching of the serving cell, the UE performs the BWP switching indicated by the PDCCH.
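A minimal sketch of how a 2-bit BWP ID field could share an octet with new beam information, as the field described above suggests. The exact bit layout chosen here is an assumption made for illustration only, not a standardized format.

```python
# Hedged sketch: one way a single octet could carry a 2-bit BWP ID indication
# alongside a new-beam (NBI RS) index. The layout is illustrative only.

def pack_bfr_octet(bwp_id, nbi_rs_index):
    assert 0 <= bwp_id < 4          # 2-bit BWP ID indication
    assert 0 <= nbi_rs_index < 64   # 6 bits left for the new beam index
    return (bwp_id << 6) | nbi_rs_index

def unpack_bfr_octet(octet):
    # Recover (BWP ID, NBI RS index) from the packed octet.
    return octet >> 6, octet & 0x3F

octet = pack_bfr_octet(bwp_id=2, nbi_rs_index=17)
print(unpack_bfr_octet(octet))  # (2, 17)
```

Associating the NBI RS index with the BWP ID in this way lets the NW interpret the reported new beam under the correct BWP configuration.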
In some of the embodiments, after the BFR of the serving cell is triggered or initiated, if the UE receives the PDCCH indicating PUSCH transmission on another UL BWP (which is not the current active UL BWP), and the indicated PUSCH transmission corresponds to the BFR-SR transmission, the UE may perform the UL BWP switching indicated by the PDCCH. That is, the PUSCH resource was granted by the gNB for the response of the BFR-SR reception. In some of the embodiments, the UE may not switch the BWP of the cell (e.g., when receiving an indication from the NW to instruct the UE to switch the BWP of the cell, or when the bwp-InactivityTimer of the cell expires) during the BFR procedure for the cell. In some of the embodiments, the UE may ignore an indication, received from the NW, for the BWP switching of the cell if the indication is received during the BFR procedure for the cell. In one embodiment, the UE may initiate the BFR procedure for the cell(s) when the beam failure is detected on the cell. When the BFR procedure is ongoing, the UE may receive an indication, from the NW, to instruct the UE to switch the BWP of the cell. The UE may ignore the indication when the BFR procedure is ongoing. In some of the embodiments, if the BFRR for the cell is received during the BFR procedure, where the BFRR is to indicate the success of the BFR MAC CE transmission or the success of the BFR procedure, and the BFRR indicates the BWP switching, the UE may (consider the BFR procedure is successfully completed and) switch the BWP based on the BFRR. In one embodiment, when the UE is performing the BFR procedure for the cell, the UE may receive a first PDCCH to indicate the BWP switching for the cell, where the first PDCCH is not the BFRR for the cell. The UE may not switch the BWP based on the first PDCCH. 
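The PDCCH handling above reduces to a simple rule: outside a BFR procedure the UE follows a BWP switching indication, while during the procedure it switches only if the PDCCH is the BFRR. A hedged sketch, with names invented for illustration:

```python
# Sketch of the PDCCH handling described above: during an ongoing BFR
# procedure, a BWP-switch indication is ignored unless the PDCCH is the
# BFR response (BFRR).

def handle_bwp_switch_pdcch(bfr_ongoing, pdcch_is_bfrr):
    """Return True if the UE performs the indicated BWP switch."""
    if not bfr_ongoing:
        return True          # normal operation: follow the indication
    return pdcch_is_bfrr     # during BFR: only the BFRR may switch the BWP

print(handle_bwp_switch_pdcch(bfr_ongoing=True, pdcch_is_bfrr=False))   # False
print(handle_bwp_switch_pdcch(bfr_ongoing=True, pdcch_is_bfrr=True))    # True
print(handle_bwp_switch_pdcch(bfr_ongoing=False, pdcch_is_bfrr=False))  # True
```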
In another embodiment, when the UE is performing the BFR procedure for the cell, the UE may receive a second PDCCH, which is different from the first PDCCH, to indicate the BWP switching for the cell, where the second PDCCH is the BFRR for the cell. The UE may switch the BWP based on the second PDCCH. In one embodiment, after the UE transmits the BFR MAC CE for the cell, the UE may receive the first PDCCH to indicate the BWP switching for the cell, where the first PDCCH is not the BFRR for the cell. The UE may not switch the BWP based on the first PDCCH. In another embodiment, after the UE transmits the BFR MAC CE for the cell, the UE may receive the second PDCCH to indicate the BWP switching for the cell, where the second PDCCH is the BFRR for the cell. The UE may switch the BWP based on the second PDCCH. In some of the embodiments, the UE may stop the bwp-InactivityTimer of the cell when the beam failure is detected on the cell. In some of the embodiments, the UE may stop the bwp-InactivityTimer of the cell when the BFR procedure for the cell is initiated. In some of the embodiments, the UE may stop the bwp-InactivityTimer of the cell when the BFR-SR is triggered or transmitted. In some of the embodiments, the UE may stop the bwp-InactivityTimer of the cell when the BFR MAC CE is triggered or transmitted. In some of the embodiments, the UE may (re-)start the bwp-InactivityTimer of the cell when the BFR procedure for the cell is completed. In some of the embodiments, the UE may (re-)start the bwp-InactivityTimer of the cell when the BFRR for the cell is received. In one embodiment, the UE may measure the BFD RS(s) which is associated with the cell. The UE may detect the beam failure on the cell based on the BFD RS(s) measurement. The UE may initiate the BFR procedure for the cell when the beam failure is detected on the cell. The UE may trigger the BFR-SR when the beam failure is detected on the cell.
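The bwp-InactivityTimer behavior described above (stopped when beam failure is detected, and restarted when the BFRR is received) can be sketched as follows. A simple running flag stands in for the actual MAC timer machinery, and all names are illustrative.

```python
# Sketch of the bwp-InactivityTimer interaction described above: the timer
# is stopped while the BFR procedure is ongoing and (re)started once the
# BFRR for the cell is received.

class BwpInactivityTimer:
    def __init__(self):
        self.running = False

    def start(self):
        self.running = True

    def stop(self):
        self.running = False

class Cell:
    def __init__(self):
        self.timer = BwpInactivityTimer()
        self.timer.start()
        self.bfr_ongoing = False

    def on_beam_failure_detected(self):
        self.bfr_ongoing = True
        self.timer.stop()          # timer halted while BFR is ongoing

    def on_bfrr_received(self):
        self.bfr_ongoing = False
        self.timer.start()         # (re)started when the BFRR is received

c = Cell()
c.on_beam_failure_detected()
print(c.timer.running)  # False
c.on_bfrr_received()
print(c.timer.running)  # True
```

Stopping the timer this way prevents an inactivity-driven BWP switch from interfering with an ongoing BFR procedure.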
The UE may trigger the BFR MAC CE when the beam failure is detected on the cell. The UE may consider the BFR procedure for the cell is successfully completed when receiving the BFRR. The UE may stop the bwp-InactivityTimer for the cell based on whether the beam failure is detected on the cell, based on whether the BFR-SR is triggered, based on whether the BFR MAC CE is triggered, based on whether the BFR-SR is transmitted, and/or based on whether the BFR MAC CE is transmitted. The UE may start or restart the bwp-InactivityTimer for the cell based on whether the BFR procedure for the cell is completed, and/or based on whether the BFRR is received. More specifically, the UE may trigger the BFR MAC CE to instruct the Multiplexing and Assembly procedure, in order to generate the BFR MAC CE (e.g., if UL-SCH resources are available for a new transmission and the UL-SCH resources accommodate the BFR MAC CE as well as its sub-header). In some of the embodiments, the UE may initiate the BFR procedure for the cell(s) when the beam failure is detected (e.g., when the BFI counter for the cell reaches the BFI maximum count, i.e., the beamFailureInstanceMaxCount IE) on the cell. While the BFR procedure is ongoing and the BWP switching criteria are met (e.g., the UE receives the PDCCH indicating the BWP switch, the bwp-InactivityTimer expires, etc.) for the cell, the UE may switch the BWP for the cell if one or more or any combination of the following conditions are satisfied. Otherwise, it may be up to the UE implementation whether to switch the BWP for the cell. In some of the embodiments, the BWP switching criteria may be one or more than one combination(s) of the following. In one embodiment, the PDCCH indicating the UL BWP switch for the cell (e.g., the BFRR indicating the UL BWP switch for the cell). In one embodiment, the reception of the RRC (re-)configuration for the DL BWP switching for the cell.
In one embodiment, if the UE does not find any suitable and/or qualified NBI on the BWP where the BFR is detected. In one embodiment, if there is no NBI configured for the BWP where the BFR is detected. In one embodiment, if the UE is configured with the same set of the NBI(s) on the BWP before the BWP switching and on the BWP after the BWP switching. In one embodiment, if a BWP switching command is indicated by the BFRR. In one embodiment, if the BWP switching command is indicated by the PDCCH which indicates the success of the (cell) BFR procedure.

BFR MAC CE Reporting Criterion

FIG. 5 illustrates an example of the BFD RS and NBI RS configurations for cell(s) according to an example implementation of the present disclosure. In some of the embodiments, the (set of) BFD RS(s) and/or the (set of) NBI RS(s) may be configured per (DL) BWP. Specifically, the configuration of the (set of) BFD RS(s) and/or the (set of) NBI RS(s) may be configured in the BWP configuration, e.g., BWP-DownlinkDedicated. Each BWP of one cell may have a different (set of) BFD RSs and/or a different (set of) NBI RSs. It is noted that beam failure on multiple cells may occur simultaneously. As illustrated in FIG. 5, a configured BFD RS may be associated with both Cell 1 and Cell 2. If the UE detects the beam failure based on the BFD RS, the UE may consider that both Cell 1 and Cell 2 encounter beam failure. If more than one cell encounters beam failure simultaneously, it is important how the UE reports the beam failure related information (e.g., a serving cell index, an NBI RS index, no new beam information, etc.) for the cell(s) via the BFR MAC CE. Assume that the suitable and/or qualified new beam of Cell 1 is NBI RS 1, and the suitable and/or qualified new beam of Cell 2 is NBI RS 2. In practice, how the beam failure related information is reported may depend on which BFR MAC CE(s) is applied.
As previously disclosed, the BFR MAC CE may be the single-entry BFR MAC CE, as illustrated in FIG. 2, and/or the multi-entry BFR MAC CE, as illustrated in FIG. 3.

If (Only) the Multi-Entry BFR MAC CE May be Applied

In some of the embodiments, if only the multi-entry BFR MAC CE is used, one alternative is that the UE may indicate all the cell indexes of the failed cells (e.g., via a bitmap) if more than one cell encounters the beam failure simultaneously. In one embodiment, if the UE detects that Cell 1 and Cell 2 encounter the beam failure (simultaneously) based on the BFD RS when the UE (instructs the Multiplexing and Assembly procedure to) generates the multi-entry BFR MAC CE, the UE may indicate all the indexes of Cell 1 and Cell 2 via the multi-entry BFR MAC CE. In some of the embodiments, the UE may indicate a subset (e.g., one or more than one beam-failed cells) of all the beam-failed cells in the BFR MAC CE. For example, if the BFD RS is associated with multiple cells when the UE detects that the beam failure occurs based on the BFD RS, the UE may consider that the multiple cells encounter the beam failure simultaneously. In such circumstances, the UE may only indicate one of the multiple cells via the multi-entry BFR MAC CE. The NW may know that the multiple cells encounter the beam failure since the multiple cells are related to the same BFD RS. In one embodiment, if the UE detects that Cell 1 and Cell 2 encounter the beam failure (simultaneously) based on the BFD RS when the UE (instructs the Multiplexing and Assembly procedure to) generates the multi-entry BFR MAC CE, the UE may determine to indicate the index of one of Cell 1 or Cell 2 via the multi-entry BFR MAC CE.
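The multi-entry alternative above, in which the UE indicates all failed cell indexes via a bitmap, can be illustrated with a simple encoder/decoder. The bitmap width and layout here are assumptions made for the sketch, not a standardized format.

```python
# Illustrative encoding of failed-cell indexes as a bitmap, as in the
# multi-entry BFR MAC CE alternative above (layout invented for this sketch).

def cells_to_bitmap(failed_cells, num_cells=8):
    bitmap = 0
    for idx in failed_cells:
        assert 0 <= idx < num_cells
        bitmap |= 1 << idx
    return bitmap

def bitmap_to_cells(bitmap, num_cells=8):
    return [i for i in range(num_cells) if bitmap & (1 << i)]

# Cell 1 and Cell 2 encounter beam failure simultaneously (shared BFD RS):
bm = cells_to_bitmap([1, 2])
print(bin(bm))              # 0b110
print(bitmap_to_cells(bm))  # [1, 2]
```

Under the subset alternative, the UE would instead set only one of the bits, and the NW would infer the other failed cells from the shared BFD RS association.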
BFD RS and NBI RS Configurations

In some of the embodiments, since the BFD RS and/or the NBI RS are configured by the NW, some alternatives may be applied for the NW to configure a suitable association between the BFD RS, the NBI RS, and/or the cell(s), in order to provide guidance or restrictions for the configuration of the BFD RS and/or the NBI RS and to avoid ambiguity in generating the BFR MAC CE. In some of the embodiments, if the NW configures the BFD RS associated with multiple cells, the NW may have to configure a (set of) NBI RS which is also associated with the multiple cells. In other words, the UE may expect the NW to configure a (set of) NBI RS and a (set of) BFD RS which are associated with the same (set of) cells. In some of the embodiments, the BFD RS may be used by the UE to detect the beam failure of a set of cells, and the NBI RS may be used by the UE to find the (common) new beam of the set of cells. In one embodiment, assuming a set of BFD RS is configured to be associated with Cell 1 and Cell 2, a set of NBI RS may also be configured to be associated with Cell 1 and Cell 2. For example, when the UE detects the beam failure based on the set of BFD RS, the UE may consider that Cell 1 and Cell 2 encounter the beam failure simultaneously. The UE may (only) measure the set of NBI RS to find the (common) new beam for Cell 1 and Cell 2. The UE may determine to indicate a new beam index from the set of NBI RS via the BFR MAC CE. If the UE does not find any qualified new beam from the set of NBI RS, the UE may indicate "no new beam information" via the BFR MAC CE. In some of the embodiments, the NW may not configure a (set of) BFD RS which is associated with more than one cell. Preferably, the NW only configures a (set of) BFD RS which is associated with one cell. Preferably, the UE expects that the configured (set of) BFD RS is only associated with one cell.
If the UE detects the beam failure based on the configured (set of) BFD RS, the UE may consider that the beam failure occurs on the cell which is associated with the BFD RS. In such circumstances, the UE may measure a (set of) NBI RS which is associated with this cell to find a new beam for this cell. In another embodiment, the association between the (set of) BFD RS(s), the (set of) NBI RS(s), and/or the corresponding (set of) cell(s) is configured by the NW, e.g., via the RRC configuration.

FIG. 6 illustrates a BFR procedure 60 performed by a UE according to an example implementation of the present disclosure. As illustrated in FIG. 6, the BFR procedure 60 for a serving cell includes the following steps:

Step 600: Start.
Step 602: Receive, from the BS, the BFR configuration for the serving cell of the BS, where the BFR configuration includes the threshold for the BFI counter associated with the serving cell and the threshold is associated with the beamFailureInstanceMaxCount IE.
Step 604: Increment the value of the BFI counter based on the BFD.
Step 606: Trigger the BFR procedure for the serving cell when the value of the BFI counter is equal to or higher than the threshold.
Step 608: Perform the BWP switching for the serving cell when receiving the reconfiguration indication from the BS, where the reconfiguration indication includes the BWP index.
Step 610: Set the value of the BFI counter to zero when performing the BWP switching.
Step 612: End.

Preferably, step 602 to step 610 of the BFR procedure 60 may be applied to a serving cell (e.g., PCell, PSCell, SCell). In other words, the BFR configurations (e.g., parameter, counter, timer, etc.) are applied on a per serving cell basis. Preferably, the UE may be configured to increment the value of the BFI counter for a serving cell based on the BFD for the RS (e.g., Failure Detection Resources) which is associated with the serving cell. When the value of the BFI counter for the serving cell is equal to or higher than the threshold (e.g.,
the beamFailureInstanceMaxCount IE), the UE may trigger the BFR procedure for the serving cell (i.e., the corresponding cell). Accordingly, when the UE has received the reconfiguration indication for the serving cell from the BS, the UE may be configured to perform the BWP switching for the serving cell. Upon receiving the reconfiguration indication and/or upon performing the BWP switching, the UE may be configured to set the value of the BFI counter to zero (i.e., to reset the value of the BFI counter). Since detailed operations of step 602 to step 610 have been comprehensively discussed in the previous disclosure, the detailed operations are not repeated here for brevity.

FIG. 7 illustrates a block diagram of a node 700 for wireless communication according to an example implementation of the present disclosure. As illustrated in FIG. 7, the node 700 may include a transceiver 706, a processor 708, a memory 702, one or more presentation components 704, and at least one antenna 710. The node 700 may also include a Radio Frequency (RF) spectrum band module, a BS communications module, an NW communications module, a system communications management module, input/output (I/O) ports, I/O components, and a power supply (not explicitly illustrated in FIG. 7). Each of these components may be in communication with each other, directly or indirectly, over one or more buses 724. In one implementation, the node 700 may be a UE or a BS that performs various functions disclosed herein, for example, with reference to FIGS. 1 through 6. The transceiver 706, having a transmitter 716 (e.g., transmitting/transmission circuitry) and a receiver 718 (e.g., receiving/reception circuitry), may be configured to transmit and/or receive time and/or frequency resource partitioning information. In one implementation, the transceiver 706 may be configured to transmit in different types of subframes and slots, including, but not limited to, usable, non-usable and flexibly usable subframes and slot formats.
The transceiver 706 may be configured to receive data and control channels. The node 700 may include a variety of computer-readable media. Computer-readable media may be any available media that may be accessed by the node 700 and include both volatile (and non-volatile) media and removable (and non-removable) media. By way of example, and not limitation, computer-readable media may include computer storage media and communication media. Computer storage media may include both volatile (and non-volatile) and removable (and non-removable) media implemented according to any method or technology for storage of information such as computer-readable instructions, data structures, program modules, or other data. Computer storage media includes RAM, ROM, EEPROM, flash memory (or other memory technology), CD-ROM, Digital Versatile Disks (DVD) (or other optical disk storage), magnetic cassettes, magnetic tape, magnetic disk storage (or other magnetic storage devices), etc. Computer storage media does not include a propagated data signal. Communication media may typically embody computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism and include any information delivery media. The term "modulated data signal" may mean a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media may include wired media such as a wired NW or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media. Combinations of any of the above should also be included within the scope of computer-readable media. The memory 702 may include computer-storage media in the form of volatile and/or non-volatile memory. The memory 702 may be removable, non-removable, or a combination thereof. For example, the memory 702 may include solid-state memory, hard drives, optical-disc drives, etc.
As illustrated in FIG. 7, the memory 702 may store computer-readable and/or -executable instructions 714 (e.g., software codes) that are configured to, when executed, cause the processor 708 to perform various functions disclosed herein, for example, with reference to FIGS. 1 through 6. Alternatively, the instructions 714 may not be directly executable by the processor 708 but may be configured to cause the node 700 (e.g., when compiled and executed) to perform various functions disclosed herein. The processor 708 (e.g., having processing circuitry) may include an intelligent hardware device, a Central Processing Unit (CPU), a microcontroller, an ASIC, etc. The processor 708 may include memory. The processor 708 may process the data 712 and the instructions 714 received from the memory 702, and information received through the transceiver 706, the baseband communications module, and/or the NW communications module. The processor 708 may also process information to be sent to the transceiver 706 for transmission through the antenna 710, or to the NW communications module for transmission to a CN. One or more presentation components 704 may present data indications to a person or other device. Examples of presentation components 704 may include a display device, speaker, printing component, vibrating component, etc. From the previous disclosure, it is manifest that various techniques may be used for implementing the concepts described in the present disclosure without departing from the scope of those concepts. Moreover, while the concepts have been disclosed with specific reference to certain implementations, a person of ordinary skill in the art would recognize that changes may be made in form and detail without departing from the scope of those concepts. As such, the disclosed implementations are to be considered in all respects as illustrative and not restrictive. It should also be understood that the present disclosure is not limited to the particular disclosed implementations.
Still, many rearrangements, modifications, and substitutions are possible without departing from the scope of the present disclosure.
11863374

DESCRIPTION OF EMBODIMENTS

Embodiments of a distributed control system, an automatic analysis device, and an automatic analysis system of the invention will be described with reference to FIGS. 1 to 11. First, an overall configuration of the automatic analysis device or the automatic analysis system provided with the distributed control system of the present embodiment will be described with reference to FIG. 1. FIG. 1 is a diagram schematically showing the overall configuration of the automatic analysis device or the automatic analysis system including the distributed control system according to the present embodiment. An automatic analysis system 1000 in FIG. 1 is a device for performing qualitative and quantitative analysis of a biological sample such as blood or urine (hereinafter referred to as a specimen), and mainly includes a transport unit 20, an analysis unit 30, and a control device 1. The transport unit 20 is a unit for putting or collecting a specimen rack 25 equipped with one or more specimen containers containing the specimen into or from the automatic analysis system 1000, and at the same time, transporting the specimen rack 25 to the analysis unit 30. The transport unit 20 includes a rack buffer 23, a rack supply tray 22, a rack storage tray 27, a transport line 26, a transport control unit 28, and the like. In the transport unit 20, the specimen rack 25 disposed on the rack supply tray 22 is transported to the rack buffer 23 by the transport line 26. There is a specimen presence or absence determination sensor (not shown) in the middle of the transport line 26, and the presence or absence of the specimen container on the specimen rack 25 is recognized. Here, if it is determined that there is a specimen container, a specimen barcode (not shown) affixed on the specimen container is read by a specimen barcode reader (not shown) to recognize identification information of the specimen. In a real system, the identification information identifies a patient.
The rack buffer 23 has a rotor structure that performs circular motion, and has slots for radially retaining a plurality of specimen racks 25 on a concentric circle on which a plurality of specimen containers are placed on an outer circumference. By rotating the slots with a motor, the slots are configured to carry any specimen rack 25 in and out to a requested destination. According to such a structure, it is not always necessary to process the specimen racks 25 placed first in order. In other words, if a specimen rack has a high priority, the specimen rack can be processed first. The transport line 26 is connected to a certain point on the radial circumference of the rack buffer 23, and the specimen rack 25 is carried in and out. If the point is at a position of 0 degrees on the circumference, a specimen dispensing line 38 for drawing the specimen rack 25 into the analysis unit 30 described later is connected to a position of 90 degrees on the circumference from the position where the transport line 26 is connected, and the specimen rack 25 is carried in and out. The specimen rack 25 that has been dispensed in the analysis unit 30 waits for output of a measurement result, and if necessary, processing such as automatic retesting can be performed in the rack buffer 23. Further, when the processing is completed, the specimen rack 25 is transported to the rack storage tray 27 via the transport line 26. The transport control unit 28 is a unit that executes control of an operation of transporting an appropriate specimen rack 25 from the rack buffer 23 to the specimen dispensing line 38 and an operation of returning the specimen rack 25 from the specimen dispensing line 38 to the rack buffer 23. The transport control unit 28 controls a transport operation for transporting the specimen to the analysis unit 30. Therefore, the transport control unit 28 is connected to a motor 23a for rotationally driving the rack buffer 23 and a motor 26a for driving the transport line 26 by a distributed control system 500 (see FIG. 2).
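The rack buffer's ability to carry out any retained rack, so that a high-priority specimen rack need not wait behind racks placed earlier, can be modeled with a priority queue. This is an illustrative sketch only; the real unit is a motor-driven rotor, and all names here are invented.

```python
# Illustrative model of the rack buffer described above: any held specimen
# rack can be carried out on request, so a high-priority rack bypasses
# racks that arrived earlier. A heap stands in for the physical rotor.

import heapq
import itertools

class RackBuffer:
    def __init__(self):
        self._heap = []
        self._order = itertools.count()  # FIFO tiebreaker for equal priorities

    def carry_in(self, rack_id, priority=1):
        # Lower number = higher priority (e.g., 0 for an urgent specimen).
        heapq.heappush(self._heap, (priority, next(self._order), rack_id))

    def carry_out(self):
        # Returns the highest-priority rack, FIFO among equal priorities.
        return heapq.heappop(self._heap)[2]

buf = RackBuffer()
buf.carry_in("rack-A")
buf.carry_in("rack-B")
buf.carry_in("rack-urgent", priority=0)
print(buf.carry_out())  # rack-urgent: processed first despite arriving last
print(buf.carry_out())  # rack-A
```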
The control device 1 includes user interfaces such as a display apparatus 5 that displays an operation screen for ordering a measurement item to be measured for a specimen to be measured and an operation screen for confirming a measurement result, and an input device that inputs various instructions. The control device 1 is a unit that plays a role of managing information of the units of the entire automatic analysis system 1000. The control device 1 is connected to the analysis unit 30 and the transport unit 20 via a wired or wireless network line 103. The analysis unit 30 is a unit that performs a measurement operation for the measurement item requested for the specimen and outputs the measurement result, and is connected to the transport unit 20. The analysis unit 30 includes a reaction disk 37, a reagent disk 32, a reagent probe 34, a sample probe 35, the specimen dispensing line 38, a biochemical measurement unit 36, and a control unit 39. Reaction containers (not shown) are arranged on a circumference of the reaction disk 37. The specimen dispensing line 38 for carrying in the specimen rack 25 on which the specimen container is placed is disposed near the reaction disk 37. A motor 37a for rotating the reaction disk 37 is connected to the reaction disk 37. The specimen dispensing line 38 is a line for transporting the specimen rack 25 transported from the rack buffer 23 to a dispensing position and returning the specimen rack 25 after dispensing to the rack buffer 23, and is driven by a motor 38a. The sample probe 35, which can rotate and move up and down, is disposed between the reaction disk 37 and the specimen dispensing line 38. The sample probe 35 moves while drawing an arc around a rotation axis to dispense the specimen from the specimen rack 25 to the reaction container. A motor 35a and a syringe (not shown) for rotating and moving the sample probe 35 up and down are connected to the sample probe 35.
The reagent disk 32 is a storage in which a plurality of reagent bottles (not shown) containing a reagent can be placed on the circumference. The reagent disk 32 is kept cold. A motor 32a for rotating the reagent disk 32 is connected to the reagent disk 32. The reagent probe 34, which can rotate and move up and down, is disposed between the reaction disk 37 and the reagent disk 32. The reagent probe 34 moves while drawing an arc around a rotation axis, accesses the inside of the reagent disk 32 from a reagent probe aspiration port, and dispenses the reagent from the reagent bottles to the reaction containers. A motor 34a and a syringe (not shown) for rotating and moving the reagent probe 34 up and down are connected to the reagent probe 34. Further, washing tanks (not shown) are disposed within the operation ranges of the reagent probe 34 and the sample probe 35, respectively. The biochemical measurement unit 36 is further arranged around the reaction disk 37. The biochemical measurement unit 36 is an analysis unit that analyzes biochemical components in the specimen by measuring an absorbance of a reaction solution produced by mixing and reacting the specimen and the reagent in the reaction containers on the reaction disk 37. The biochemical measurement unit 36 includes a light source, a spectrophotometer 36a, and the like. The control unit 39 arranged in the analysis unit 30 is connected to each mechanism in the analysis unit 30 described above by the distributed control system 500 (see FIG. 2), and controls an operation of the mechanism. FIGS. 1 and 2 show a case where the motors 32a, 34a, 35a, 37a, and 38a, as well as the spectrophotometer 36a in the analysis unit 30, are connected to the control unit 39. In FIG. 2, for convenience of illustration, a line connected to the motor 38a is omitted. The above is the overall outline configuration of the automatic analysis system 1000 according to the present embodiment.
Although FIG. 1 describes a system including the transport unit 20, the analysis unit 30, and the control device 1, the automatic analysis system 1000 shown in FIG. 1 is only an example. For example, an analysis unit that executes measurement of a different measurement item (for example, an immunological item) may be connected to the automatic analysis system 1000 shown in FIG. 1, an analysis unit that has the same configuration as that of the analysis unit 30 may be further connected to the automatic analysis system 1000, and an analysis unit for measuring a different analysis item (for example, an electrolyte item) may be further arranged in the analysis unit 30. Further, the distributed control system 500 of the invention can also be applied to an automatic analysis device formed by only the analysis unit 30 with the transport unit 20 being omitted. Furthermore, the distributed control system 500 can also be applied to each device in an automatic analysis system formed by an automatic analysis device and a specimen pretreatment device that performs various pretreatments such as centrifugation and subdivision dispensing of a specimen before measurement. Next, an outline of a mechanical operation of the automatic analysis system 1000 shown in FIG. 1 will be described. The transport unit 20 sends the specimen racks 25 disposed on the rack supply tray 22 of the automatic analysis system 1000 one by one onto the transport line 26, and carries the specimen racks 25 into the rack buffer 23. The specimen racks 25 transported to the rack buffer 23 are transported to the specimen dispensing line 38 of the analysis unit 30. When the specimen rack 25 arrives at the specimen dispensing line 38 of the analysis unit 30, a dispensing operation is performed on each specimen mounted on the specimen rack 25 by the sample probe 35 according to the measurement item requested by the control device 1.
The sample probe 35 discharges the aspirated specimen into the reaction container on the reaction disk 37, the reagent aspirated from the reagent disk 32 by the reagent probe 34 is further added to the reaction container, and the mixture is stirred. Thereafter, the absorbance is measured by the biochemical measurement unit 36, and a measurement result is transmitted to the control device 1. The control device 1 acquires a concentration of a specific component in the specimen by arithmetic processing on the basis of the transmitted measurement result, displays the concentration on the display apparatus 5, and stores the concentration in a storage unit (not shown). Next, a specific configuration of the distributed control system according to the present embodiment will be described with reference to FIGS. 2 to 4. FIG. 2 is a diagram showing a configuration example of the distributed control system 500 according to the present embodiment. FIG. 3 is a diagram showing a specific configuration example on the terminal communication device 12 side in FIG. 2. FIG. 4 is a diagram showing an example of a screen for displaying an abnormal part displayed on the display apparatus 5. As shown in FIG. 2, the distributed control system 500 includes the display apparatus 5, a central computation device 10, a central communication device 11, a plurality of terminal communication devices 12, a network communication path 13, and a communication path 14. The central computation device 10 is connected to the central communication device 11 via a data transmission unit 102, and is also connected to the control device 1 including the display apparatus 5 via a network line 103. As shown in FIG. 2, the central computation device 10 includes a storage unit 100 retaining correct connection information, and a comparison unit 101 comparing the correct connection information retained in the storage unit 100 with connection information of an actually connected device (control object device or terminal communication device 12).
When determination is made, as a result of the comparison by the comparison unit 101 of the correct connection information retained in the storage unit 100 with the connection information of the actually connected control object device or terminal communication device 12, that an abnormality has occurred, the central computation device 10 outputs an identification display signal for identifying an abnormal part to the display device. The connection information used in the central computation device 10 includes port number identification information for identifying communication ports 110, 120, and 121, which will be described later, and individual identification information set by an ID setting unit 124, which will be described later. These details will be described later with reference to FIG. 7 and the following figures. The data transmission unit 102 that connects the central computation device 10 and the central communication device 11 and the network line 103 that connects the central computation device 10 and the control device 1 consist of bus forms such as peripheral component interconnect (PCI: registered trademark) and versa module eurocard (VME: registered trademark), and data transmission paths that use serial communication such as universal serial bus (USB) and serial peripheral interface (SPI). The central communication device 11 is connected to the plurality of terminal communication devices 12 by the network communication path 13, and executes integrated management of communication control as a master station of communication in the distributed control system 500. As shown in FIGS. 2 and 3, the central communication device 11 includes a central communication control unit 111 that controls the communication of the distributed control system 500, and the communication port 110. The central communication device 11 controls the terminal communication devices 12 via the communication port 110. As shown in FIG. 3, a plurality of communication ports 110 (communication ports 110a, 110b, . . . ) are provided, but for convenience of description, the communication ports 110 will be described as one. As shown in FIGS. 2 and 3, the terminal communication device 12 is connected to the central communication device 11 or another terminal communication device 12 by the network communication path 13, and communicates with the central communication device 11 or the other terminal communication device 12. In particular, the terminal communication device 12 generates its own connection information. Further, the terminal communication device 12 transmits the connection information on a downstream side to the central computation device 10 on an upstream side. Further, as shown in FIG. 2, each terminal communication device 12 is connected via the communication path 14 to the corresponding motors 32a, 34a, 35a, and 37a, as well as the spectrophotometer 36a, which serve as the control object devices in the analysis unit 30. As shown in FIGS. 2 and 3, the terminal communication device 12 includes an upstream communication port 120, two downstream communication ports 121a and 121b, a terminal communication control unit 122 that executes communication control, light emitting diodes (LEDs) 123 that indicate a communication state with the central communication device 11 or the terminal communication device 12 connected upstream or downstream, the individually identifiable ID setting unit 124, and a communication port 125 for connecting to the control object devices. The upstream communication port 120 is connected to the communication port 110 of the central communication device 11 or the downstream communication ports 121a, 121b of another terminal communication device 12. The downstream communication ports 121a, 121b are connected to the upstream communication port 120 of another terminal communication device 12.
The LEDs 123 indicating the communication state are mounted on the upstream communication port 120 and the downstream communication ports 121a, 121b of the terminal communication device 12, respectively, and are provided in the same number as the upstream communication port 120 and the downstream communication ports 121a, 121b. The LED 123 functions as a display device indicating the communication state with the central communication device 11 or the terminal communication device 12 connected upstream or downstream. The ID setting unit 124 is a unit for setting individual identification information of each terminal communication device 12, and the IDs are set within one distributed control system 500 without overlapping each other. As a device for setting the ID, for example, a read-only memory (ROM), a switch, or the like is assumed. The display device in the present embodiment includes the LED 123 provided in the terminal communication device 12 and indicating the communication state with the central communication device 11 or the terminal communication device 12, and the display apparatus 5 connected to the central computation device 10. The display device displays an abnormal part on the basis of the identification display signal of the abnormal part generated by the comparison unit 101 of the central computation device 10. For example, when there is an abnormal part, it is possible to identify the content of the abnormality by a lighting mode of the LED 123. Further, as shown in FIG. 4, an abnormal part 5A can be identified by highlighting the abnormal part 5A on the display apparatus 5.
Next, an acquisition method of connection information in the distributed control system 500 according to the present embodiment will be described with reference to FIGS. 5 to 11. FIG. 5 is a flowchart of a procedure for acquiring an actual connection state of the terminal communication device 12 in the distributed control system 500. FIG. 6 is a diagram showing an example of a method of adding data in the terminal communication device 12. FIG. 7 is a diagram showing an example of a determination method of a port number by a port name. FIGS. 8 to 10 are diagrams showing an example of added and generated connection information data. FIG. 11 is a diagram showing an LED display example of the terminal communication device. The timing of acquiring the connection information in the distributed control system 500 described below is, for example, when checking wirings at the time of manufacturing the automatic analysis system 1000, the transport unit 20, and the analysis unit 30, and when confirming startup after replacing a board during maintenance of the automatic analysis system 1000, the transport unit 20, and the analysis unit 30. As a general overview, in the distributed control system 500 of the present embodiment, the connection information retained by the terminal communication device 12, which includes the port number identification information for identifying the path through which the terminal communication device 12 is connected and the information of an individually identifiable ID, is acquired via the network communication path 13 between the terminal communication device 12 and the central communication device 11 or between one terminal communication device 12 and another terminal communication device 12. The flowchart in FIG. 5 shows the procedure for acquiring the connection information retained by the terminal communication device 12 from the central computation device 10 via the data transmission unit 102, the central communication device 11, and the network communication path 13 in the distributed control system 500.
The correct connection information to be compared with in the central computation device 10 is preset and stored in the storage unit 100. First, the central computation device 10 outputs a command for acquiring connection information to the central communication device 11 (step S201). Next, on the basis of the command from the central computation device 10, the central communication device 11 transfers data to the terminal communication device 12 by accessing an operation register (step S202). At this time, all register regions related to the connection information are cleared to 0. Next, the terminal communication device 12 that receives the command from the central communication device 11 generates its own connection information and transfers the data to the central communication device 11 (step S203). Further, the transfer is executed while adding the data of each terminal communication device 12 relayed in a path connected from one terminal communication device 12 to another terminal communication device 12. Next, the central communication device 11 stores the connection information in a register after receiving the transfer from the terminal communication device 12 (step S204). Then, the central computation device 10 reads and acquires the information received by the central communication device 11 and stored in the register in step S204 (step S205). Next, in the central computation device 10, the correct connection information retained by the storage unit 100 and the actually acquired connection information are compared in the comparison unit 101 of the central computation device 10 (step S206). If the two pieces of connection information match in step S206, the processing is completed. On the other hand, if the two pieces of connection information do not match, the processing proceeds to step S207, all parts where the connection is incorrect are output as an alarm to the display apparatus 5, the LEDs 123 corresponding to the abnormal parts are turned on, and the processing is completed.
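The sequence of steps S201 to S207 can be sketched as a short routine. This is a hypothetical Python illustration; the callables standing in for the register read and the alarm output, and all names, are assumptions made for the sketch, not part of the disclosed embodiment:

```python
def acquire_and_check(correct_info, read_actual_info, report_alarm):
    """Sketch of steps S201-S207: the command/transfer chain (S201-S205)
    is collapsed into a single read of the stored register contents,
    followed by the comparison (S206) and, on a mismatch, the alarm
    output (S207)."""
    actual = read_actual_info()  # S201-S205: acquire the actual connection info
    if actual == correct_info:   # S206: the two pieces of information match
        return True
    # S207: output every slot where the connection is incorrect as an alarm
    length = max(len(correct_info), len(actual))
    mismatches = [i for i in range(length)
                  if i >= len(correct_info) or i >= len(actual)
                  or correct_info[i] != actual[i]]
    report_alarm(mismatches)
    return False
```

In a real system the mismatch list would drive both the highlight on the display apparatus 5 and the LEDs 123 of the affected ports.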
Next, a method of generating and adding the connection information of the relayed terminal communication devices 12 in step S203 of the flowchart shown in FIG. 5 will be described by taking the configuration shown in FIG. 6 as an example. As shown in FIG. 6, in the method of generating and adding data according to the present embodiment, an example is shown in which a terminal communication device 12B whose individual identification ID is set to "3" is connected to the central communication device 11 via another terminal communication device 12A. First, the terminal communication device 12B on the most downstream side stores its own ID "3" in an ID unit of the connection information data to be transferred. Further, the terminal communication device 12B stores a port number "1" in a port number unit 0 of the connection information data. The port number unit holds a number identifying the port that outputs the connection information when the device is in the position of generating the connection information, and the port that receives the connection information when the device is in the position of mediating the connection information. For example, the numbers shown in FIG. 7 are assigned as follows: when the port name is "terminal communication device 12 itself", the number added in the port number unit is "1"; when the port name is "downstream communication port 1 (110a, 121a)", the number added in the port number unit is "2"; and when the port name is "downstream communication port 2 (110b, 121b)", the number added in the port number unit is "3". The port number corresponds to the port number identification information. In this case, since the terminal communication device 12B generates the connection information and outputs the connection information from the upstream communication port 120 to the upstream side, the port number thereof is stored as "1". Further, the terminal communication device 12B stores "0" in a Tail unit indicating a storage slot of the port number in the connection information data.
As a result, the generated connection information data has the form shown in FIG. 8. Next, the terminal communication device 12A on the upstream side, which has received the connection information data as shown in FIG. 8 generated by the terminal communication device 12B on the most downstream side, calculates the next storage slot (reference value+1) by referring to the Tail unit of the connection information data, and updates the Tail unit. Further, since the terminal communication device 12A itself receives the connection information data from the terminal communication device 12B at the downstream communication port 121a, the port number "2" of the downstream communication port 121a used in the terminal communication device 12A is stored into the port number unit 1, which is the storage slot of the port number unit. The connection information data added in the terminal communication device 12A has the form shown in FIG. 9. The terminal communication device 12A outputs the connection information data to the central communication device 11. Finally, the connection information data added in the terminal communication device 12A as shown in FIG. 9 is also added in the central communication device 11. The central communication device 11 also calculates the next storage slot (reference value+1) by referring to the Tail unit, and updates the Tail unit. Further, the port number "2" that identifies the communication port 110a through which the data passes is stored in the port number unit 2, which is the storage slot of the port number unit. That is, at the stage when the connection information data generated by the terminal communication device 12B whose individual identification ID is set to "3" is input to the central computation device 10, as shown in FIG. 10, the Tail unit is "0x2", the port number unit is "0x221", and the ID unit is "0x3". Therefore, it is possible to identify the mounting position of the terminal communication device 12B whose individual identification ID is set to "3".
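The generation and relay of the connection information data worked through above can be sketched as follows. This is a hypothetical Python illustration; the dictionary layout and the packing of the port number units into one hexadecimal word are assumptions made to mirror FIGS. 8 to 10, not the actual register format:

```python
# Port numbers per FIG. 7 (illustrative constants).
PORT_SELF = 1          # "terminal communication device 12 itself"
PORT_DOWNSTREAM_1 = 2  # downstream communication port 1 (110a, 121a)
PORT_DOWNSTREAM_2 = 3  # downstream communication port 2 (110b, 121b)

def generate(own_id):
    """Most-downstream device: store its own ID in the ID unit, port
    number '1' in port number unit 0, and '0' in the Tail unit (FIG. 8)."""
    return {"tail": 0, "ports": [PORT_SELF], "id": own_id}

def relay(data, receiving_port):
    """Each relaying device: compute the next storage slot (reference
    value + 1), update the Tail unit, and store the number of the port
    on which the data was received (FIGS. 9 and 10)."""
    data["tail"] += 1
    data["ports"].append(receiving_port)
    return data

def packed_port_word(data):
    """Pack the port number units into one word, least-significant
    nibble first, so the slot sequence [1, 2, 2] reads as 0x221."""
    return sum(p << (4 * i) for i, p in enumerate(data["ports"]))
```

Replaying the example of FIGS. 6 to 10: device 12B generates with ID 3, device 12A relays via its downstream communication port 121a (number 2), and the central communication device 11 relays via its communication port 110a (number 2), yielding a Tail unit of 0x2, a port number word of 0x221, and an ID unit of 0x3.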
From the above, it is possible to acquire connection information indicating which place of which path the terminal communication device 12 is in. The central communication device 11 outputs the added connection information data as shown in FIG. 10 to the central computation device 10. The central computation device 10 compares the received connection information with the correct connection information stored in the storage unit 100 in step S206 of the flowchart shown in FIG. 5, and determines whether an error such as a communication failure or a disconnection has occurred in the network communication path 13 between the central communication device 11 and the terminal communication device 12A on the most downstream side. If the connection information has different parts, the central computation device 10 identifies, by referring to each storage slot of the connection information data, the terminal communication device 12 or the cable of the network communication path 13 which causes the communication failure or the disconnection in the network communication path 13. For example, when the numbers in the port number units are different, it means that an erroneous connection has occurred in which a connected port or the connection order is erroneous. Further, it is possible to identify the position where the erroneous connection occurs based on the position of the differing port numbers. Further, when the number of port number units is different or there is no response, it means that breakage or disconnection has occurred. In this case, it is also possible to identify the part where the breakage or the disconnection has occurred on the basis of the number of port number units or the absent port number unit. Further, the identified part is visualized: an error signal is output from the central computation device 10 to the display apparatus 5, and the error and the abnormal part as shown in FIG. 4 are displayed.
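The two mismatch cases described above (differing port numbers versus a missing port number unit or no response) can be sketched as a small classifier. This is a hypothetical illustration; the function name and return values are assumptions, not part of the disclosure:

```python
def diagnose(correct_ports, actual_ports):
    """Classify a comparison result of the port number units:
    a differing number indicates an erroneous connection at that slot;
    fewer units than expected (or no response at all) indicates breakage
    or disconnection, locatable from the first absent slot."""
    if actual_ports is None:  # no response within the timeout
        return ("disconnection", None)
    if len(actual_ports) < len(correct_ports):
        # breakage or disconnection at the first absent port number unit
        return ("breakage_or_disconnection", len(actual_ports))
    for slot, (c, a) in enumerate(zip(correct_ports, actual_ports)):
        if c != a:
            return ("erroneous_connection", slot)
    return ("normal", None)
```

The returned slot index corresponds to the storage slot of the connection information data, which in turn identifies the position along the path where the fault lies.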
It is desirable that the error notification is executed when a response of data from the terminal communication device 12 to the central communication device 11 cannot be confirmed within a timeout time determined by the central computation device 10. In this case, the central communication device 11 can only detect the path where an error has occurred. The path here is preferably determined by the communication port 110a, 110b to which the terminal communication device 12 is connected among the plurality of communication ports 110a, 110b of the central communication device 11. Furthermore, it is desirable that it is possible to identify, by confirming the LED 123 mounted on the terminal communication device 12, the terminal communication device 12 or the cable of the network communication path 13 which causes a communication failure part or a disconnection part in the network communication path 13. As described above, the central computation device 10 outputs, on the basis of the comparison result of the connection information, a lighting signal according to the corresponding connection state to the LED 123 corresponding to each communication port 110, 121a, and 121b of the central communication device 11 and the terminal communication device 12. FIG. 11 is a diagram showing an example of an LED lighting mode for identifying a communication failure part or a disconnection part in the network communication path 13, and also the content of the error, by confirming the LED 123 mounted on the terminal communication device 12. For example, as shown in FIG. 11, the lighting mode of the LED 123 is "ON" when the connection state is "normal", "blink" when the connection state is "communication error" due to breakage or an erroneous connection, and "OFF" when the connection state is "disconnection".
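The lighting modes of FIG. 11 amount to a simple mapping from connection state to LED behavior. The sketch below is an assumed lookup table; the state names are illustrative, not terms defined by the embodiment:

```python
# Lighting modes per FIG. 11 (state names are illustrative).
LED_MODE = {
    "normal": "ON",
    "communication_error": "blink",  # breakage or erroneous connection
    "disconnection": "OFF",
}

def led_mode(connection_state):
    """Return the lighting mode for the LED 123 of a given port."""
    return LED_MODE[connection_state]
```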
Specifically, when there is a disconnection part, the LED 123 of the communication port 110 on the downstream side of the central communication device 11 or of the downstream communication ports 121a, 121b of the terminal communication device 12 which are on the upstream side of the cable is turned on, and the LED 123 of the upstream communication port 120 of the terminal communication device 12 and others on the downstream side of the cable is turned off. Accordingly, the occurrence of a "disconnection" and the disconnection part can be clearly found at a glance. Further, when a communication error occurs due to an erroneous connection or breakage in communication ports of the central communication device 11 and the terminal communication device 12, only the LED 123 corresponding to the communication port in which the error has occurred blinks as a communication error. Accordingly, it is also possible to identify whether the error is due to the cable of the network communication path 13 or due to the central communication device 11 or the terminal communication device 12 itself. Next, the effect of the present embodiment will be described. The distributed control system 500 of the present embodiment described above includes the central computation device 10, the central communication device 11 managing communication control, the plurality of terminal communication devices 12 to which at least one control object device is connected, the network communication path 13 connecting the central communication device 11 and the terminal communication devices 12, and the display device. The central computation device 10 includes the storage unit 100 retaining the correct connection information, and the comparison unit 101 comparing the correct connection information with the connection information of an actually connected control object device or terminal communication device 12.
The central communication device 11 includes the central communication control unit 111 controlling communication of the distributed control system 500, and the plurality of communication ports 110. The terminal communication device 12 includes the terminal communication control unit 122 executing communication control, at least one upstream communication port 120, at least one downstream communication port 121, and the individually identifiable ID setting unit 124. When determination is made, as a result of the comparison by the comparison unit 101 of the correct connection information retained in the storage unit 100 with the connection information of the actually connected control object device or terminal communication device 12, that an error has occurred, the central computation device 10 outputs a display signal of an abnormal part to the display device. The display device displays the abnormal part on the basis of the display signal. According to the distributed control system 500 of the present embodiment, it is possible to immediately find an abnormality between the central communication device 11 and a terminal communication device 12, and between one terminal communication device 12 and another terminal communication device 12. Therefore, even if a plurality of control boards for controlling apparatuses to be controlled are distributedly arranged in the same device, it is possible to detect an abnormal part in a system connecting the control boards more easily and reliably than in the related art, and it is possible to quickly correct the connection. Further, the central communication device 11 controls the terminal communication devices 12 via the communication port 110, or a terminal communication device 12 transmits the connection information on the downstream side to the central computation device 10 on the upstream side via the network communication path 13, so that one central computation device 10 or one central communication device 11 is disposed in the distributed control system 500.
Therefore, the inside of the system can be controlled efficiently, and the efficiency of distributed control can be improved. Further, the central computation device 10 outputs a command for acquiring connection information to the central communication device 11; the central communication device 11 transfers the command to the terminal communication device 12 once it receives the command from the central computation device 10; the terminal communication device 12 generates connection information on the basis of the command, and transfers the generated connection information to the central communication device 11; the central communication device 11 transfers the connection information to the central computation device 10 once it receives the connection information from the terminal communication device 12; and the central computation device 10 compares the connection information received from the central communication device 11 with the correct connection information, and outputs the display signal to the display device in a case of a mismatch. Therefore, the abnormal part can be identified more accurately and easily. Further, the display device is at least one of the LED 123 provided in the terminal communication device 12 and indicating the communication state with the central communication device 11 or the terminal communication device 12, and the display apparatus 5 connected to the central computation device 10. Therefore, by confirming the display apparatus 5 and the LED 123, systematic troubleshooting that can identify an abnormal part and the content of the abnormality is enabled, and connection work can be performed more efficiently. Further, when there is an abnormal part, it is possible to identify the content of the abnormality by the lighting mode of the LED 123. Therefore, the content of the abnormality can be grasped together with the part where the abnormality has occurred, and more appropriate measures can be taken.
Further, the same number of LEDs 123 as the upstream communication port 120 and the downstream communication ports 121 are provided. Therefore, it is possible to more easily grasp the part where the erroneous connection has occurred. As described above, in an automatic analysis device or an automatic analysis system where control boards are also desirably modularized and distributedly arranged, the control unit 39 which controls an operation of the apparatuses in the analysis unit 30 that analyzes a sample and those apparatuses are connected by the distributed control system 500 according to the present embodiment, and each apparatus in the transport unit 20 which transports the sample to the analysis unit 30 and the transport control unit 28 are connected by the distributed control system 500 according to the present embodiment. Therefore, it is possible to quickly find an erroneous connection or a defect and improve the efficiency of the connection work. Accordingly, in the automatic analysis device or the automatic analysis system, the efficiency of the connection work can also be improved, the system configuration can be flexibly changed according to the operation of the user, and space can be saved.

OTHER EMBODIMENTS

The invention is not limited to the above embodiment, and various modifications and applications can be made thereto. For example, the above-described embodiment has been described in detail in order to make the invention easy to understand, and the invention is not necessarily limited to those which have all the configurations described.
For example, in the above-described embodiment, the automatic analysis device or automatic analysis system is described as an example of a device or system equipped with the distributed control system, but the device or system to which the distributed control system is applicable is not limited to this, and the distributed control system of the invention can be applied to various devices or systems that require a plurality of control boards to be provided in the device or system.

REFERENCE SIGN LIST

1: control device
5: display apparatus
5A: abnormal part
10: central computation device
11: central communication device
12, 12A, 12B: terminal communication device
13: network communication path
14: communication path
20: transport unit
23a, 26a: motor
28: transport control unit
30: analysis unit (automatic analysis device)
32a, 34a, 35a, 37a, 38a: motor
36: biochemical measurement unit
36a: spectrophotometer
39: control unit
100: storage unit
101: comparison unit
102: data transmission unit
103: network line
110, 110a, 110b: communication port
111: central communication control unit
120: upstream communication port
121, 121a, 121b: downstream communication port
122: terminal communication control unit
123: LED
124: ID setting unit
125: communication port
500: distributed control system
1000: automatic analysis system
11863375

The present disclosure will now be described with reference to the accompanying drawings.

DETAILED DESCRIPTION OF THE DISCLOSURE

Overview

Systems, methods, and apparatuses disclosed herein can detect whether an impairment is present within a service provider network. In some embodiments, the impairment can cause a service provided by the service provider network to not perform as expected. For example, the impairment can cause a black screen, pixelization of a movie or a television program, lack of sound for the movie or the television program, intermittent connectivity, slow speed, no internet connectivity, no dial-tone, and/or an inability to receive electronic mail (email) messages, among others, to provide some examples. As to be described in further detail below, these systems, methods, and apparatuses can develop multiple network records to record the performance of the service provider network at various instances in time. In some embodiments, these systems, methods, and apparatuses can compare these network records among each other to detect for the presence of the impairment within the service provider network. In these embodiments, these systems, methods, and apparatuses can thereafter diagnose and/or remedy the impairment when present within the service provider network.

Exemplary Service Provider Network

FIG. 1 graphically illustrates an exemplary service provider network according to some exemplary embodiments of the present disclosure. In the exemplary embodiment illustrated in FIG. 1, a service provider network 100 can detect whether an impairment is present within the service provider network 100.
As to be described in further detail below, the impairment can be present anywhere within the service provider network 100, for example, one or more subscriber premises, such as one or more customer home networks within the one or more subscriber premises, a service provider system, a service personnel workstation, and/or a communication network of the service provider network 100. In some embodiments, the impairment can cause a service provided by the service provider network 100 to not perform as expected. For example, the impairment can cause a black screen, pixelization of a movie or a television program, lack of sound for the movie or the television program, intermittent connectivity, slow speed, no internet connectivity, no dial-tone, and/or an inability to receive electronic mail (email) messages, among others, to provide some examples. As to be described in further detail below, the service provider network 100 can develop multiple network records to record the performance of the service provider network 100 at various instances in time. In some embodiments, the service provider network 100 can compare these network records among each other to detect for the presence of the impairment within the service provider network 100. In these embodiments, the service provider network 100 can thereafter diagnose and/or remedy the impairment when present within the service provider network 100. In the exemplary embodiment illustrated in FIG. 1, the service provider network 100 can include subscriber premises 102, a service provider system 104, and a service personnel workstation 106 that are communicatively coupled to one another via a communication network 108. The subscriber premises 102 represent one or more building and/or non-building structures that receive the service from the service provider network 100.
Generally, the one or more building structures refer to any suitable structure or structures that are designed for human occupancy and can include one or more residential, industrial, and/or commercial building structures to provide some examples. Generally, the one or more non-building structures refer to any suitable structure or structures that are not designed for human occupancy and can include one or more residential, industrial, and/or commercial non-building structures to provide some examples. In some embodiments, the subscriber premises 102 can include electronic devices that receive the service from the service provider network 100 and/or access points that facilitate the services between the service provider system 104 and the electronic devices via the communication network 108. Generally, the one or more electronic devices represent any suitable mechanical, electrical, and/or electromechanical devices that can communicate electronic information to and/or from the service provider system 104 via the communication network 108 and/or the one or more access points. In some embodiments, the one or more electronic devices can include mobile telephony devices, such as mobile phones, mobile computing devices, mobile internet devices, such as tablet computers and/or laptop computers, video game consoles, portable media players, peripheral devices, such as wireless speakers, mice, keyboards, monitors, printers, and/or scanners, internet capable appliances, smart televisions, video streaming devices, video set-top boxes (STBs), and/or other suitable communication devices that are capable of wireless communication that will be recognized by those skilled in the relevant art(s) without departing from the spirit and scope of the present disclosure.
Generally, the one or more access points represent any suitable mechanical, electrical, and/or electromechanical devices that can communicate electronic information to and/or from the subscriber premises102via the communication network108. In some embodiments, the one or more access points can include wireless routers, cable modems, set-top boxes (STBs), digital subscriber line (DSL) modems, WiFi signal extenders, and/or other suitable communication devices that can communicate electronic information to and/or from the subscriber premises102via the communication network108that will be recognized by those skilled in the relevant art(s) without departing from the spirit and scope of the present disclosure. The service provider system104represents one or more computer systems, an exemplary embodiment of which is to be described in further detail below, which facilitate delivery of the service to the subscriber premises102. In some embodiments, the service can include, for example, delivery of media content, such as movies, television programs, advertising, and/or electronic programming guides (EPGs) to provide some examples, internet access, and/or telephone service. As illustrated inFIG.1, the service provider system104can detect whether the impairment is present within the service provider network100. As described above, the impairment can be present anywhere within the service provider network100, which includes the subscriber premises102, such as one or more customer home networks within the subscriber premises102, as described above, and the service provider system104, as will be described in further detail below. As will be described in further detail below, the service provider system104can develop multiple network records to record the performance of the service provider network100at various instances in time. 
In some embodiments, the service provider system104can compare these network records among each other to detect for the presence of the impairment within the service provider network100. In these embodiments, the service provider system104can thereafter diagnose and/or remedy the impairment when present within the service provider network100. In the exemplary embodiment illustrated inFIG.1, the service provider system104can include a service provider server110, an administrative server112, a network record repository114, and/or an administrative workstation116. The service provider server110provides the service to the subscriber premises102to deliver electronic information, such as video, audio, and/or data to provide some examples, to the subscriber premises102in a downstream direction. As used herein, the term “downstream direction” refers to the transfer of the electronic information from the service provider system104to the subscriber premises102. As part of the service, the service provider server110can receive electronic information, such as video, audio, and/or data to provide some examples, from the subscriber premises102in an upstream direction. As used herein, the term “upstream direction” refers to the transfer of the electronic information from the subscriber premises102to the service provider system104. The administrative server112represents one or more computer systems, an exemplary embodiment of which is to be described in further detail below, which manages the service. In the exemplary embodiment illustrated inFIG.1, the administrative server112can develop network records118to record the performance of the service provider network100at various instances in time. Alternatively, or in addition to, the administrative server112can store one or more of the network records118in the network record repository114which is to be described in further detail below. 
In some embodiments, these network records can represent multiple “snapshots” of various characteristics, parameters, and/or attributes of the service provider network100that characterize the performance of the service provider network100at the various instances in time. In these embodiments, these network records can also represent multiple “snapshots” of various characteristics, parameters, and/or attributes of one or more of the subscriber premises102that characterize the performance of the one or more of the subscriber premises102at the various instances in time. In some embodiments, the various instances in time can be from among periodic instances of time, such as every twenty-four (24) hours, every few days, every week, or every month to provide some examples, from among aperiodic instances of time, and/or in response to an event, such as a provisioning of electronic devices and/or access points within the subscriber premises102, a service call being received from a subscriber associated with the subscriber premises102, and/or a service technician arriving at and/or departing from the subscriber premises102to provide some examples. In the exemplary embodiment illustrated inFIG.1, the administrative server112can determine one or more characteristics, parameters, and/or attributes of the service provider network100during the various instances in time to develop the network records118. In some embodiments, the one or more characteristics, parameters, and/or attributes can include various parameters, characteristics, and/or attributes describing configuration information of one or more of the subscriber premises102, such as the configuration of the electronic devices and/or the access points within one or more of the subscriber premises102to provide an example. 
In these embodiments, the configuration information can include the make, model, type, or brand of the electronic devices and/or the access points; one or more identifiers for the network that is associated with the electronic devices and/or the access points, such as a network identifier (ID) or a network name to provide some examples; one or more locations of the electronic devices and/or the access points; one or more device identifiers of the electronic devices and/or the access points, such as a serial number, a media access control (MAC) address, and/or an Internet Protocol (IP) address to provide some examples; and/or one or more statuses of the electronic devices and/or the access points, for example, power status information, channel tuning information, device re-boot information, and/or software version installed on the electronic devices and/or the access points. In some embodiments, the one or more characteristics, parameters, and/or attributes can include various parameters, characteristics, and/or attributes describing operation information of one or more of the subscriber premises102, such as the operation of the electronic devices and/or the access points within one or more of the subscriber premises102to provide an example. In these embodiments, the operation information can include provisioning information of the electronic devices and/or the access points, such as a name of the subscriber, an address of the customer premises, an electronic mail address of the subscriber, a telephone number associated with the subscriber, and/or a payment history of the subscriber. 
The operational information can include signal strengths of the electronic devices and/or the access points, receiving signal strength of the electronic devices and/or the access points, transmitting signal strength of the electronic devices and/or the access points, speeds of the downstream direction, speeds of the upstream direction, modulation of the digital data being carried, and/or format of the digital data being carried to provide some examples. In some embodiments, the operational information can further include health information, for example, a health score, of the electronic devices and/or the access points. In some embodiments, the one or more characteristics, parameters, and/or attributes can include historical information that is associated with one or more of the subscriber premises102, such as historical information of the electronic devices and/or the access points within one or more of the subscriber premises102to provide an example. In these embodiments, the historical information can identify one or more impairments that were previously present in the service provider network100, one or more potential sources of these impairments, and/or one or more previous actions that were performed on these one or more potential sources to remedy these impairments. 
In the exemplary embodiment illustrated inFIG.1, the configuration information, the operation information, and/or the historical information can include, or be related to, one or more radio frequency (RF) communication channels, such as one or more DOCSIS communication channels, that carry the service between the subscriber premises102and the service provider system104, one or more fiber optic communication channels that carry the service between the subscriber premises102and the service provider system104, the electronic devices and/or the access points within one or more of the subscriber premises102, and/or outcomes from one or more assessments of the service provider network100, such as a speed test or a Wi-Fi Service Set Identifier (SSID) check to provide some examples. Upon developing the network records118, the administrative server112can compare the network records118among each other to detect for the presence of the impairment. In some embodiments, the administrative server112can autonomously compare the network records118among each other to proactively monitor the service provider network100to detect for the presence of the impairment. Alternatively, or in addition to, the administrative server112can compare the network records118among each other in response to an event, such as a provisioning of electronic devices and/or access points within the subscriber premises102, a service call being received from a subscriber associated with the subscriber premises102, and/or a service technician arriving at and/or departing from the subscriber premises102to provide some examples, to detect for the presence of the impairment. 
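The configuration information, operation information, and historical information described above can be grouped into a single record per snapshot. The following is a minimal sketch of such a network record in Python; the class and field names are illustrative assumptions and do not appear in the disclosure:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Any, Dict, List

@dataclass
class NetworkRecord:
    """One 'snapshot' of the service provider network at an instant in time.
    Field names are illustrative, not taken from the disclosure."""
    taken_at: datetime
    # Configuration information: device make/model, network ID, MAC/IP
    # addresses, power status, installed software version, and so on.
    configuration: Dict[str, Any] = field(default_factory=dict)
    # Operation information: signal strengths, upstream/downstream speeds,
    # modulation/format of the carried data, health score, and so on.
    operation: Dict[str, Any] = field(default_factory=dict)
    # Historical information: impairments previously present, their potential
    # sources, and the actions previously taken to remedy them.
    history: List[str] = field(default_factory=list)
```

A record developed at a given instance in time simply fills in whichever of these groups were measured, estimated, or selected at that instant.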
In the exemplary embodiment illustrated inFIG.1, the service provider system104can compare one or more characteristics, parameters, and/or attributes from a first network record that was developed at a first instance in time from among the network records118and one or more corresponding characteristics, parameters, and/or attributes from a second network record that was developed at a second instance in time from among the network records118to detect for the presence of the impairment within the service provider network100. In some embodiments, the administrative server112can develop the first network record at the first instance in time and can retrieve the second network record from the network record repository114for comparison. In some embodiments, the administrative server112can determine that the impairment is present within the service provider network100when the one or more characteristics, parameters, and/or attributes from the first network record differ from the one or more corresponding characteristics, parameters, and/or attributes from the second network record. Additionally, the administrative server112can signal the service provider network100of the presence of the impairment. In some embodiments, this signaling can include dispatching a service technician to diagnose and/or remedy the impairment when present within the service provider network100and/or alerting a subscriber of the service of the presence of the impairment. In these embodiments, the alerting can include sending an electronic mail (email) message and/or a short message service (SMS) text message to a subscriber whose service is affected by the impairment to provide some examples. The administrative server112can diagnose and/or remedy the impairment upon detecting the presence of the impairment. 
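The record comparison described above reduces to a field-by-field diff of two snapshots, where any differing characteristic, parameter, or attribute flags a possible impairment. A minimal sketch, with snapshots modeled as plain dictionaries (an assumption made for illustration):

```python
from typing import Any, Dict, Tuple

def compare_records(first: Dict[str, Any],
                    second: Dict[str, Any]) -> Dict[str, Tuple[Any, Any]]:
    """Return every characteristic whose value differs between the second
    (earlier) snapshot and the first (later) snapshot, as {name: (old, new)}."""
    names = set(first) | set(second)
    return {n: (second.get(n), first.get(n))
            for n in names if first.get(n) != second.get(n)}

def impairment_present(first: Dict[str, Any], second: Dict[str, Any]) -> bool:
    """An impairment is inferred whenever any attribute changed."""
    return bool(compare_records(first, second))
```

The diff itself is also useful downstream: the differing attributes point at the devices most likely responsible for the impairment.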
In some embodiments, the administrative server112can diagnose one or more mechanical, electrical, and/or electromechanical devices within the service provider network100causing different characteristics, parameters, and/or attributes between the first network record and the second network record as the cause of the impairment. In this example, the administrative server112can cause these mechanical, electrical, and/or electromechanical devices to be repaired, for example, by dispatching a service technician, and/or replaced, for example, by causing delivery of a new mechanical, electrical, and/or electromechanical device, to remedy the impairment. As illustrated inFIG.1, the administrative server112can store one or more of the network records118in the network record repository114. In some embodiments, the network record repository114can include one or more non-transitory machine-readable mediums such as read only memory (ROM), random access memory (RAM), magnetic disk storage media, optical storage media, and/or flash memory devices to provide some examples. In some embodiments, the administrative server112can store the network records118as an organized collection of data, often referred to as a database, within the network record repository114. In these embodiments, a database may include one or more data tables having various data values, such as alphanumeric strings, integers, decimals, floating points, dates, times, binary values, Boolean values, and/or enumerations to provide some examples. In some embodiments, the database can be a columnar database, a relational database, a key-store database, a graph database, and/or a document store to provide some examples. The administrative workstation116represents one or more computer systems, an exemplary embodiment of which is to be described in further detail below, which oversees the operation of the service provider network100. 
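As a sketch of the network record repository114organized as a relational database, the example below stores each record as a timestamped row in SQLite and retrieves the most recent record developed before a given instant; the schema and function names are illustrative assumptions, not part of the disclosure:

```python
import json
import sqlite3
from typing import Any, Dict, Optional

# In-memory stand-in for the network record repository; a real deployment
# would use a persistent database (relational, columnar, key-value, ...).
_con = sqlite3.connect(":memory:")
_con.execute("CREATE TABLE network_records (taken_at TEXT PRIMARY KEY, record TEXT)")

def store_record(taken_at: str, record: Dict[str, Any]) -> None:
    """Persist one snapshot keyed by its ISO-8601 timestamp."""
    _con.execute("INSERT INTO network_records VALUES (?, ?)",
                 (taken_at, json.dumps(record)))

def latest_before(taken_at: str) -> Optional[Dict[str, Any]]:
    """Retrieve the most recent record developed before the given instant,
    i.e. the 'second network record' used for comparison."""
    row = _con.execute(
        "SELECT record FROM network_records WHERE taken_at < ? "
        "ORDER BY taken_at DESC LIMIT 1", (taken_at,)).fetchone()
    return json.loads(row[0]) if row else None
```

ISO-8601 timestamps are used as keys here because they sort lexicographically in chronological order, which makes "the prior record" a single ordered query.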
In the exemplary embodiment illustrated inFIG.1, a customer service representative of the service provider network100operating the administrative workstation116can receive an inquiry, such as a telephone call, a short message service (SMS) text message, or an electronic mail (email) message to provide some examples, from a subscriber whose service is affected by the impairment. For example, the inquiry can indicate that the subscriber is experiencing a black screen, pixelization of a movie or a television program, lack of sound for the movie or television program, intermittent connectivity, slow speed, no internet connectivity, no dial-tone, and/or an inability to receive electronic mail (email) messages to provide some examples. In some embodiments, the administrative server112and/or the administrative workstation116can compare the network records118among each other to detect for the presence of the impairment in a substantially similar manner as described above. In these embodiments, the administrative workstation116can cause the administrative server112to develop one of the network records118in response to the inquiry received from the subscriber. As an example, the administrative server112can provide a first network record that was developed at a first instance in time in response to the inquiry from among the network records118and a second network record that was developed at a second instance in time from among the network records118to the administrative workstation116. In this example, the administrative workstation116can compare one or more characteristics, parameters, and/or attributes from the first network record and one or more corresponding characteristics, parameters, and/or attributes from the second network record to detect for the presence of the impairment within the service provider network100in a substantially similar manner as described above. 
In some embodiments, the administrative workstation116can further diagnose and/or remedy the impairment upon detecting the presence of the impairment in a substantially similar manner as described above. As an example, the administrative workstation116can dispatch a service technician to the subscriber premises102and/or cause one or more mechanical, electrical, and/or electromechanical devices to be replaced within the service provider network100to remedy the impairment. The service personnel workstation106represents one or more mobile computer systems which oversee the operation of the service provider network100. In some embodiments, the service personnel workstation106can be implemented as one or more mobile telephony devices, such as one or more mobile phones, and/or one or more mobile computing devices and/or mobile internet devices, such as one or more tablet computers and/or one or more laptop computers, to provide some examples. In the exemplary embodiment illustrated inFIG.1, the service personnel workstation106can compare the network records118among each other to detect for the presence of the impairment in a substantially similar manner as described above. As described above, the impairment can be present anywhere within the service provider network100, which includes the service personnel workstation106and the communication network108that are to be described in further detail below. In some embodiments, the service personnel workstation106can cause the administrative server112to develop one of the network records118in response to a service technician associated with the service personnel workstation106arriving at and/or departing from one or more of the subscriber premises102. In these embodiments, this allows other service technicians that might be dispatched to the one or more of the subscriber premises102in the future to easily identify the state of the one or more of the subscriber premises102. 
As an example, the administrative server112can provide a first network record that was developed at a first instance in time in response to the service technician arriving at and/or departing from the one or more of the subscriber premises102from among the network records118and a second network record that was developed at a second instance in time from among the network records118to the service personnel workstation106. In this example, the service personnel workstation106can compare one or more characteristics, parameters, and/or attributes from the first network record and one or more corresponding characteristics, parameters, and/or attributes from the second network record to detect for the presence of the impairment within the service provider network100in a substantially similar manner as described above. In some embodiments, the service personnel workstation106can further diagnose and/or remedy the impairment upon detecting the presence of the impairment in a substantially similar manner as described above. As an example, the service technician can replace and/or repair one or more mechanical, electrical, and/or electromechanical devices within the service provider network100to remedy the impairment. The communication network108communicatively couples the subscriber premises102and the service provider system104. The communication network108can be implemented as a wireless communication network, a wireline communication network, and/or any combination thereof that will be apparent to those skilled in the relevant art(s) without departing from the spirit and scope of the present disclosure. 
In some embodiments, the communication network108can include a hybrid fiber-coaxial (HFC) network that combines optical fiber and coaxial cable to deliver the electronic information, such as the video, the audio, and/or the data to provide some examples, from the service provider system104to the subscriber premises102in the downstream direction and/or to deliver the electronic information from the subscriber premises102to the service provider system104in the upstream direction. In some embodiments, the communication network108can include a fiber to the home (FTTH) network that utilizes optical fiber for at least a portion of the communication network108to deliver the electronic information, such as the video, the audio, and/or the data to provide some examples, from the service provider system104to the subscriber premises102in the downstream direction and/or to deliver the electronic information from the subscriber premises102to the service provider system104in the upstream direction. 

Exemplary Operations of the Exemplary Service Provider Network 

FIG.2illustrates a first flowchart of a first exemplary operation for diagnosing and/or remedying an impairment within the exemplary service provider network according to some exemplary embodiments of the present disclosure. The disclosure is not limited to this operational description. Rather, it will be apparent to ordinary persons skilled in the relevant art(s) that other operational control flows are within the scope and spirit of the present disclosure. The following discussion describes an exemplary operational control flow200for diagnosing and/or remedying the impairment within a service provider network, such as the service provider network100. The operational control flow200can be executed by one or more computer systems, such as the service provider system104and/or the service personnel workstation106as described above inFIG.1to provide some examples. 
At operation202, the operational control flow200develops a first network record at a first instance in time. In some embodiments, the first instance in time can be from among periodic instances of time, such as every twenty-four (24) hours, every few days, every week, or every month to provide some examples, from among aperiodic instances of time, and/or in response to an event, such as a provisioning of electronic devices and/or access points within the subscriber premises102, a service call being received from a subscriber associated with the subscriber premises102, and/or a service technician arriving at and/or departing from the subscriber premises102to provide some examples. The first network record can represent an exemplary embodiment of one of the network records118that are described above inFIG.1. The operational control flow200can develop the first network record to record the performance of the service provider network at the first instance in time. In some embodiments, the first network record can include a “snapshot” of various characteristics, parameters, and/or attributes of the service provider network that characterize the performance of the service provider network at the first instance in time. In these embodiments, the operational control flow200can measure, estimate, and/or select one or more characteristics, parameters, and/or attributes of the service provider network during the first instance in time to develop the first network record. At operation204, the operational control flow200compares the first network record from operation202with a second network record that was developed at a second instance in time. In some embodiments, the second instance in time can occur prior to the first instance in time. 
In these embodiments, the second instance in time can be a prior instance of time from among the periodic instances of time, a prior instance of time from among the aperiodic instances of time, and/or in response to a prior event, such as a prior provisioning of electronic devices and/or access points within the subscriber premises102, a prior service call being received from a subscriber associated with the subscriber premises102, and/or a prior service technician arriving at and/or departing from the subscriber premises102to provide some examples. In these embodiments, the operational control flow200can retrieve the second network record from a network record repository, such as the network record repository114to provide an example, for comparison. At operation204, the operational control flow200can compare one or more characteristics, parameters, and/or attributes from the first network record from operation202and one or more corresponding characteristics, parameters, and/or attributes from the second network record. At operation206, the operational control flow200determines whether there is a difference between the first network record from operation202and the second network record from operation204to detect for the presence of the impairment within the service provider network. As described above, the impairment can be present anywhere within the service provider network, for example, one or more subscriber premises, such as one or more customer home networks within the one or more subscriber premises, a service provider system, a service personnel workstation, and/or a communication network of the service provider network. 
In some embodiments, the operational control flow200can determine that the impairment is present within the service provider network when the one or more characteristics, parameters, and/or attributes from the first network record from operation202differ from the one or more corresponding characteristics, parameters, and/or attributes from the second network record from operation204. The operational control flow200reverts to operation202to develop another first network record when the one or more characteristics, parameters, and/or attributes from the first network record from operation202are the same as the one or more corresponding characteristics, parameters, and/or attributes from the second network record from operation204indicating that the impairment is not present within the service provider network. Otherwise, the operational control flow200proceeds to operation208when the one or more characteristics, parameters, and/or attributes from the first network record from operation202are different from the one or more corresponding characteristics, parameters, and/or attributes from the second network record from operation204indicating that the impairment is present within the service provider network. At operation208, the operational control flow200can diagnose and/or remedy the impairment detected at operation206within the service provider network. In some embodiments, the operational control flow200can diagnose one or more mechanical, electrical, and/or electromechanical devices within the service provider network causing different characteristics, parameters, and/or attributes between the first network record and the second network record as being the cause of the impairment. 
In this example, the operational control flow200can cause these mechanical, electrical, and/or electromechanical devices to be repaired, for example, by dispatching a service technician, and/or replaced, for example, by causing delivery of a new mechanical, electrical, and/or electromechanical device, to remedy the impairment. Alternatively, or in addition to, the operational control flow200can signal the service provider network of the presence of the impairment detected at operation206. In some embodiments, this signaling can include dispatching a service technician to diagnose and/or remedy the impairment detected at operation206and/or alerting a subscriber of the service of the presence of the impairment detected at operation206. In these embodiments, the alerting can include sending an electronic mail (email) message and/or a short message service (SMS) text message to a subscriber whose service is affected by the impairment detected at operation206to provide some examples. FIG.3illustrates a second flowchart of a second exemplary operation for diagnosing and/or remedying the impairment within the exemplary service provider network according to some exemplary embodiments of the present disclosure. The disclosure is not limited to this operational description. Rather, it will be apparent to ordinary persons skilled in the relevant art(s) that other operational control flows are within the scope and spirit of the present disclosure. The following discussion describes an exemplary operational control flow300for diagnosing and/or remedying the impairment within a service provider network, such as the service provider network100. The operational control flow300can be executed by one or more computer systems, such as the administrative workstation116as described above inFIG.1to provide some examples. 
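The operational control flow200of FIG.2can be sketched as a loop over operations202through208, where the develop, retrieve-prior, and diagnose-and-remedy steps are hypothetical callables supplied by the surrounding system:

```python
from typing import Any, Callable, Dict

def control_flow_200(develop: Callable[[], Dict[str, Any]],
                     retrieve_prior: Callable[[], Dict[str, Any]],
                     diagnose_and_remedy: Callable[[Dict[str, Any]], None],
                     cycles: int = 3) -> None:
    """Sketch of FIG.2: develop a record (202), compare it with a prior
    record (204/206), and diagnose/remedy only when they differ (208);
    otherwise loop back and develop another record."""
    for _ in range(cycles):
        first = develop()                     # operation 202
        second = retrieve_prior()             # operation 204
        diffs = {k: first.get(k)              # operation 206: detect differences
                 for k in set(first) | set(second)
                 if first.get(k) != second.get(k)}
        if diffs:
            diagnose_and_remedy(diffs)        # operation 208
```

The bounded `cycles` parameter stands in for the flowchart's unbounded loop so the sketch terminates.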
At operation302, the operational control flow300receives an inquiry from a subscriber of a service provided by the service provider network whose service is affected by an impairment that causes the service to not perform as expected. As described above, the impairment can be present anywhere within the service provider network, for example, one or more subscriber premises, such as one or more customer home networks within the one or more subscriber premises, a service provider system, a service personnel workstation, and/or a communication network of the service provider network. In some embodiments, a customer service representative of the service provider network operating the administrative workstation116can receive an inquiry, such as a telephone call, a short message service (SMS) text message, or an electronic mail (email) message to provide some examples, from the subscriber whose service is affected by the impairment. For example, the inquiry can indicate that the subscriber is experiencing a black screen, pixelization of a movie or a television program, lack of sound for the movie or television program, intermittent connectivity, slow speed, no internet connectivity, no dial-tone, and/or an inability to receive electronic mail (email) messages to provide some examples. At operation304, the operational control flow300develops a first network record in response to the inquiry from operation302. The first network record can represent an exemplary embodiment of one of the network records118that are described above inFIG.1. The operational control flow300can develop the first network record in response to the inquiry from operation302to record the performance of the service provider network. In some embodiments, the first network record can include a “snapshot” of various characteristics, parameters, and/or attributes of the service provider network that characterize the performance of the service provider network. 
In these embodiments, the operational control flow300can measure, estimate, and/or select one or more characteristics, parameters, and/or attributes of the service provider network in response to the inquiry from operation302to develop the first network record. At operation306, the operational control flow300identifies the impairment from operation302based upon a difference between the first network record from operation304and a second network record that was developed prior to receiving the inquiry from operation302. In some embodiments, the operational control flow300can determine which one or more characteristics, parameters, and/or attributes differ between the first network record from operation304and the second network record. In these embodiments, these differences between the first network record from operation304and the second network record can be characterized as being related to, for example, causing, the impairment from operation302. At operation308, the operational control flow300can diagnose and/or remedy the impairment from operation302. In some embodiments, the operational control flow300can diagnose one or more mechanical, electrical, and/or electromechanical devices within the service provider network causing the impairment from operation302. In this example, the operational control flow300can cause these mechanical, electrical, and/or electromechanical devices to be repaired, for example, by dispatching a service technician, and/or replaced, for example, by causing delivery of a new mechanical, electrical, and/or electromechanical device, to remedy the impairment. Alternatively, or in addition to, the operational control flow300can signal the service provider network of the presence of the impairment from operation302. 
In some embodiments, this signaling can include dispatching a service technician to diagnose and/or remedy the impairment from operation302and/or alerting the subscriber from operation302, or another subscriber of the service, of the presence of the impairment from operation302. In these embodiments, the alerting can include sending an electronic mail (email) message and/or a short message service (SMS) text message to the subscriber from operation302, or the other subscriber, to provide some examples. FIG.4illustrates a third flowchart of a third exemplary operation for diagnosing and/or remedying the impairment within the exemplary service provider network according to some exemplary embodiments of the present disclosure. The disclosure is not limited to this operational description. Rather, it will be apparent to ordinary persons skilled in the relevant art(s) that other operational control flows are within the scope and spirit of the present disclosure. The following discussion describes an exemplary operational control flow400for diagnosing and/or remedying the impairment within a service provider network, such as the service provider network100. The operational control flow400can be executed by one or more computer systems, such as the service personnel workstation106as described above inFIG.1to provide some examples. At operation402, the operational control flow400dispatches a service technician of the service provider network to a subscriber premises to diagnose and/or remedy an impairment that causes a service provided by the service provider network to not perform as expected. The subscriber premises can represent an exemplary embodiment of one of the subscriber premises102that are described above inFIG.1. 
In some embodiments, a customer service representative of the service provider network can receive an inquiry, such as a telephone call, a short message service (SMS) text message, or an electronic mail (email) message to provide some examples, from the subscriber whose service is affected by the impairment. For example, the inquiry can indicate that the subscriber is experiencing a black screen, pixelization of a movie or a television program, lack of sound for the movie or television program, intermittent connectivity, slow speed, no internet connectivity, no dial-tone, and/or an inability to receive electronic mail (email) messages to provide some examples. In this example, the customer service representative can dispatch the service technician to the subscriber premises to diagnose and/or remedy the black screen, the pixelization of the movie or the television program, the lack of sound for the movie or television program, the intermittent connectivity, the slow speed, the lack of internet connectivity, the lack of a dial-tone, and/or the inability to receive electronic mail (email) messages to provide some examples. At operation 404, the operational control flow 400 develops a first network record in response to the service technician from operation 402 arriving at the subscriber premises. The first network record can represent an exemplary embodiment of one of the network records 118 that are described above in FIG. 1. The operational control flow 400 can develop the first network record to record the performance of the service provider network upon the service technician from operation 402 arriving at the subscriber premises. In some embodiments, the first network record can include a "snapshot" of various characteristics, parameters, and/or attributes of the service provider network that characterize the performance of the service provider network upon the service technician from operation 402 arriving at the subscriber premises.
In these embodiments, the operational control flow 400 can measure, estimate, and/or select one or more characteristics, parameters, and/or attributes of the service provider network upon the service technician from operation 402 arriving at the subscriber premises to develop the first network record. At operation 406, the operational control flow 400 identifies the impairment from operation 402 based upon a difference between the first network record from operation 404 and a second network record that was developed prior to the service technician from operation 402 arriving at the subscriber premises. In some embodiments, the operational control flow 400 can determine which one or more characteristics, parameters, and/or attributes differ between the first network record from operation 404 and the second network record. In these embodiments, these differences between the first network record from operation 404 and the second network record can be characterized as being related to, for example, causing, the impairment from operation 402. At operation 408, the operational control flow 400 can diagnose and/or remedy the impairment from operation 402. In some embodiments, the operational control flow 400 can diagnose one or more mechanical, electrical, and/or electromechanical devices within the service provider network that are causing the impairment from operation 402. In this example, the operational control flow 400 can cause the service technician from operation 402 to repair and/or replace these mechanical, electrical, and/or electromechanical devices to remedy the impairment. At operation 410, the operational control flow 400 develops a third network record in response to the service technician from operation 402 departing from the subscriber premises. The third network record can represent an exemplary embodiment of one of the network records 118 that are described above in FIG. 1.
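The record-comparison step at operations 306 and 406 can be sketched in code. This is a minimal illustration, assuming a network record is a simple field-to-value mapping; the field names and the helper function below are hypothetical and are not taken from the patent figures.

```python
# Hypothetical sketch: identify an impairment by diffing the snapshot taken
# at arrival (first record) against an earlier baseline (second record).
# Field names and values are illustrative only.

def diff_network_records(current: dict, baseline: dict) -> dict:
    """Return fields whose values differ, as {field: (baseline, current)}."""
    return {
        f: (baseline.get(f), current.get(f))
        for f in set(current) | set(baseline)
        if current.get(f) != baseline.get(f)
    }

baseline = {"access_point_online": "PASS", "access_point_rf_level": "PASS"}
current = {"access_point_online": "PASS", "access_point_rf_level": "FAIL"}

# The differing fields are characterized as being related to the impairment.
impairment_fields = diff_network_records(current, baseline)
# impairment_fields == {"access_point_rf_level": ("PASS", "FAIL")}
```

A service technician or customer service representative could then focus diagnosis on just the returned fields rather than the whole record.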
The operational control flow 400 can develop the third network record to record the performance of the service provider network upon the service technician from operation 402 departing from the subscriber premises. In some embodiments, the third network record can include a "snapshot" of various characteristics, parameters, and/or attributes of the service provider network that characterize the performance of the service provider network upon the service technician from operation 402 departing from the subscriber premises. In these embodiments, the operational control flow 400 can measure, estimate, and/or select one or more characteristics, parameters, and/or attributes of the service provider network upon the service technician from operation 402 departing from the subscriber premises to develop the third network record. The third network record allows other service technicians of the service provider network to verify that the impairment from operation 402 has been remedied in operation 408.

Exemplary Network Record that can be Utilized within the Exemplary Service Provider Network

FIG. 5 graphically illustrates an exemplary network record that can be utilized within the exemplary service provider network according to some exemplary embodiments of the present disclosure. One or more computer systems of a service provider network, such as the administrative server 112 of the service provider network 100 to provide an example, can develop network records, such as one or more of the network records 118 as described above in FIG. 1 to provide an example, to record the performance of a subscriber premises, such as one or more of the subscriber premises 102 to provide an example, in delivering a service provided by the service provider network at various instances in time.
In some embodiments, these computer systems can compare these network records among each other to detect for the presence of an impairment within the service provider network that can cause the service to not perform as expected in a substantially similar manner as described above in FIG. 1. The discussion of FIG. 5 to follow describes an exemplary embodiment of one or more of these network records. However, those skilled in the relevant art(s) will recognize that other network records are possible without departing from the spirit and scope of the present disclosure. In the exemplary embodiment illustrated in FIG. 5, a network record 500 includes network record fields 502 having corresponding values from among network record values 504. Generally, the network record fields 502 represent exemplary characteristics, parameters, and/or attributes of the subscriber premises that characterize the performance of the subscriber premises in delivering the service at a specific instance in time. However, the exemplary characteristics, parameters, and/or attributes of the service provider network are for exemplary purposes only. Those skilled in the relevant art(s) will recognize that other characteristics, parameters, and/or attributes of the subscriber premises are possible for the network record 500 without departing from the spirit and scope of the present disclosure. Generally, the network record values 504 can be characterized as being various alphabetical, numerical, and/or alphanumerical values. In some embodiments, the network record values 504 can be assigned various color codes to allow a customer service representative and/or a service technician viewing the network record 500 to quickly identify the characteristics, parameters, and/or attributes of the service provider network that are causing the impairment.
For example, the network record values 504 can be assigned a first color code, such as green, when the network record values 504 are conducive to delivering the service or a second color code, such as red, when the network record values 504 are not conducive to delivering the service. Exemplary green and/or red color codes for the network record values 504 are illustrated in FIG. 5. As illustrated in FIG. 5, the network record 500 represents a "snapshot" of network record fields 502 that characterize the performance of the subscriber premises at various instances in time. In the exemplary embodiment illustrated in FIG. 5, the network record fields 502 can include an <<access point radio frequency (RF) level check>> field 506, an <<access point online check>> field 508, a <<WiFi extender status>> field 510, a <<DOCSIS RF parameters>> field 512, and/or a <<Video STB status>> field 514. In the exemplary embodiment illustrated in FIG. 5, the <<access point RF level check>> field 506 indicates whether the signal strength of the radio waves that are received by an access point of the subscriber premises is sufficient to deliver the service. Generally, the radio waves should have a signal strength, for example, greater than −55 dBm to ensure the subscriber premises delivers the best service. In some embodiments, a signal strength, for example, greater than −70 dBm is acceptable but may result in a degraded service, for example, a poor video experience, being delivered by the subscriber premises. In some embodiments, a signal strength less than, for example, −70 dBm often results in a severely degraded experience being delivered by the subscriber premises.
As illustrated in FIG. 5, the network record values 504 for the <<access point RF level check>> field 506 can be a <<PASS>> value to indicate that the radio waves received by the access point have a signal strength, for example, greater than −70 dBm or a <<FAIL>> value to indicate that the radio waves received by the access point have a signal strength, for example, less than −70 dBm. It should be noted that the various signal strengths, for example, −70 dBm, referred to in the description of FIG. 5 are for illustrative purposes only and not limiting. Those skilled in the relevant art(s) will recognize that other signal strengths are possible without departing from the spirit and scope of the present disclosure. As an example, different wireless technologies, for example, WiFi 6, WiFi 6E, and/or WiFi 7, can have different signal strengths than those described within the description of FIG. 5. In the exemplary embodiment illustrated in FIG. 5, the <<access point online check>> field 508 identifies whether the access point of the subscriber premises is online to deliver the service. As illustrated in FIG. 5, the network record values 504 for the <<access point online check>> field 508 can be a <<PASS>> value to indicate that the subscriber premises is online to deliver the service or a <<FAIL>> value to indicate that the subscriber premises is offline and cannot deliver the service. In the exemplary embodiment illustrated in FIG. 5, the <<WiFi extender status>> field 510 identifies one or more parameters, characteristics, and/or attributes relating to one or more WiFi signal extenders within the subscriber premises.
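The <<access point RF level check>> evaluation and the green/red color coding described above can be sketched as follows. The −70 dBm cutoff is the example threshold given in the text (not a normative value), and the function names are assumptions for this illustration.

```python
# Sketch, assuming the example thresholds from the text:
#   > -55 dBm: best service; > -70 dBm: acceptable but possibly degraded;
#   <= -70 dBm: severely degraded experience -> FAIL.

def rf_level_check(signal_dbm: float) -> str:
    """Classify the access point's received signal strength as PASS/FAIL."""
    return "PASS" if signal_dbm > -70 else "FAIL"

def color_code(value: str) -> str:
    """Map a record value to the green/red color coding of FIG. 5."""
    return "green" if value == "PASS" else "red"

status = rf_level_check(-60)   # acceptable range -> "PASS"
display = color_code(status)   # "green"
```

A record-building routine could apply `color_code` to every field value so a representative viewing the record can spot failing fields at a glance.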
As illustrated in FIG. 5, the <<WiFi extender status>> field 510 can include an <<on account>> field 516 having a numerical value from among the network record values 504 indicating the number of WiFi signal extenders within the subscriber premises and an <<online>> field 518 having a numerical value from among the network record values 504 indicating the number of WiFi signal extenders within the subscriber premises that are online. In the exemplary embodiment illustrated in FIG. 5, the <<WiFi extender status>> field 510 can further include one or more parameters, characteristics, and/or attributes for each of the WiFi signal extenders, denoted as WiFi signal extenders 520.1 through 520.a, within the subscriber premises. In some embodiments, the <<WiFi extender status>> field 510 can further include <<device media access control (MAC) address>> fields 522, <<placement>> fields 524, and/or <<connection status>> fields 526 for each of the WiFi signal extenders 520.1 through 520.a within the subscriber premises. As illustrated in FIG. 5, the network record values 504 for the <<device media access control (MAC) address>> fields 522 can be alphanumerical values. As illustrated in FIG. 5, the network record values 504 for the <<placement>> fields 524 can be an <<OPTIMAL>> value to indicate that the corresponding WiFi signal extender is optimally placed within the subscriber premises, a <<FAIL—TOO FAR>> value to indicate that the corresponding WiFi signal extender is too far from the access point of the subscriber premises in terms of distance, or a <<FAIL—TOO CLOSE>> value to indicate that the corresponding WiFi signal extender is too close to the access point of the subscriber premises in terms of distance.
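The <<placement>> determination for a WiFi signal extender might look like the following sketch. The patent gives no numeric distance thresholds, so the default values below are invented purely for illustration, and the label strings mirror (with plain hyphens) the FIG. 5 values.

```python
# Hypothetical sketch of deriving the <<placement>> value for an extender.
# The 3 m / 15 m thresholds are invented for illustration only.

def extender_placement(distance_m: float,
                       too_close_m: float = 3.0,
                       too_far_m: float = 15.0) -> str:
    """Classify an extender's distance from the access point."""
    if distance_m < too_close_m:
        return "FAIL - TOO CLOSE"
    if distance_m > too_far_m:
        return "FAIL - TOO FAR"
    return "OPTIMAL"
```

In practice the distance might be inferred from signal measurements rather than measured directly; the function only illustrates the three-way classification.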
As illustrated in FIG. 5, the network record values 504 for the <<connection status>> fields 526 can be an <<ONLINE>> value to indicate that the corresponding WiFi signal extender is online to deliver the service or an <<OFFLINE>> value to indicate that the corresponding WiFi signal extender is offline and cannot deliver the service. In the exemplary embodiment illustrated in FIG. 5, the <<DOCSIS RF parameters>> field 512 identifies one or more parameters, characteristics, and/or attributes relating to the delivery of electronic information, such as video, audio, and/or data to provide some examples, to the subscriber premises in the downstream direction and/or from the subscriber premises in the upstream direction. As illustrated in FIG. 5, the <<DOCSIS RF parameters>> field 512 includes a <<downstream direction>> field 528.1 and an <<upstream direction>> field 528.2. In some embodiments, the <<downstream direction>> field 528.1 includes a <<number of downstream channels>> field 530.1 that identifies the number of downstream channels being used to deliver the electronic information in the downstream direction, a <<number of downstream channels impaired>> field 532.1 that identifies the number of downstream channels that are impaired, an <<average received power>> field 534.1 that identifies the signal strength of the signals carrying the electronic information in the downstream direction, and/or a <<minimum downstream signal-to-noise ratio (SNR)>> field 536.1 that identifies the signal-to-noise ratio of the signals carrying the electronic information in the downstream direction. Generally, the <<average received power>> field 534.1 should have a signal strength, for example, between −8 dBm and 8 dBm to ensure the subscriber premises delivers the best service. In some embodiments, a signal strength, for example, between −15 dBm and 15 dBm for the <<average received power>> field 534.1 is acceptable.
In some embodiments, a signal strength, for example, less than −15 dBm and/or greater than 15 dBm often results in a severely degraded experience being delivered by the subscriber premises. Generally, a received signal strength indicator (RSSI) value for the <<minimum downstream SNR>> field 536.1 should be, for example, greater than 30 dB. In some embodiments, the RSSI value for the <<minimum downstream SNR>> field 536.1 being, for example, less than 25 dB often results in a severely degraded experience being delivered by the subscriber premises. In some embodiments, the <<upstream direction>> field 528.2 includes a <<number of upstream channels>> field 530.2 that identifies the number of upstream channels being used to deliver the electronic information in the upstream direction, a <<number of upstream channels impaired>> field 532.2 that identifies the number of upstream channels that are impaired, an <<average received power>> field 534.2 that identifies the signal strength of the signals carrying the electronic information in the upstream direction, and/or a <<minimum upstream signal-to-noise ratio (SNR)>> field 536.2 that identifies the signal-to-noise ratio of the signals carrying the electronic information in the upstream direction. Generally, the <<average received power>> field 534.2 should have a signal strength, for example, between 40 dBm and 50 dBm to ensure the subscriber premises delivers the best service. In some embodiments, a signal strength, for example, between 35 dBm and 55 dBm for the <<average received power>> field 534.2 is acceptable. Generally, a received signal strength indicator (RSSI) value for the <<minimum upstream SNR>> field 536.2 should be, for example, greater than 30 dB. In some embodiments, the RSSI value for the <<minimum upstream SNR>> field 536.2 being, for example, less than 25 dB often results in a severely degraded experience being delivered by the subscriber premises.
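The DOCSIS RF checks above can be sketched with the example ranges given in the text (downstream power: −8 to 8 dBm best, −15 to 15 dBm acceptable; upstream power: 40 to 50 dBm best, 35 to 55 dBm acceptable; SNR above 30 dB good, below 25 dB severely degraded). The three-level labels, and the "MARGINAL" label for the 25–30 dB gap the text leaves unspecified, are assumptions.

```python
# Sketch of grading the <<average received power>> and <<minimum SNR>> fields
# using the example ranges from the text. Labels are illustrative.

def grade_power(dbm, best, acceptable):
    """Grade a power reading against (low, high) best and acceptable ranges."""
    lo_b, hi_b = best
    lo_a, hi_a = acceptable
    if lo_b <= dbm <= hi_b:
        return "BEST"
    if lo_a <= dbm <= hi_a:
        return "ACCEPTABLE"
    return "SEVERE"

def grade_snr(db):
    """Grade an SNR reading; 25-30 dB is labeled MARGINAL as an assumption."""
    if db > 30:
        return "GOOD"
    return "SEVERE" if db < 25 else "MARGINAL"

downstream = grade_power(5, best=(-8, 8), acceptable=(-15, 15))   # "BEST"
upstream = grade_power(53, best=(40, 50), acceptable=(35, 55))    # "ACCEPTABLE"
```

The same `grade_power` helper serves both directions because only the ranges differ between the <<downstream direction>> and <<upstream direction>> fields.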
In the exemplary embodiment illustrated in FIG. 5, the <<video STB status>> field 514 identifies one or more parameters, characteristics, and/or attributes relating to one or more video set-top boxes (STBs) within the subscriber premises. As illustrated in FIG. 5, the <<video STB status>> field 514 can include one or more parameters, characteristics, and/or attributes for each of the one or more video STBs, denoted as video STBs 530.1 through 530.b, within the subscriber premises. In some embodiments, the <<video STB status>> field 514 can include <<device media access control (MAC) address>> fields 532, <<status>> fields 534, <<connection status>> fields 536, <<connection type>> fields 538, <<placement>> fields 540, <<RSSI>> fields 542, and <<WIFI band>> fields 544 for each of the video STBs 530.1 through 530.b within the subscriber premises. As illustrated in FIG. 5, the network record values 504 for the <<device media access control (MAC) address>> fields 532 can be alphanumerical values. As illustrated in FIG. 5, the network record values 504 for the <<status>> fields 534 can be a <<PASS>> value to indicate that the corresponding video STB is online to deliver the service or a <<FAIL>> value to indicate that the corresponding video STB is offline and cannot deliver the service. As illustrated in FIG. 5, the network record values 504 for the <<connection status>> fields 536 can be an <<ONLINE>> value to indicate that the corresponding video STB is online to deliver the service or an <<OFFLINE>> value to indicate that the corresponding video STB is offline and cannot deliver the service.
As illustrated in FIG. 5, the network record values 504 for the <<connection type>> fields 538 can be wired or wireless. As illustrated in FIG. 5, the network record values 504 for the <<placement>> fields 540 can be an <<OPTIMAL>> value to indicate that the corresponding video STB is optimally placed within the subscriber premises or a <<FAIL—TOO FAR>> value to indicate that the corresponding video STB is too far from the access point of the subscriber premises in terms of distance. As illustrated in FIG. 5, the network record values 504 for the <<RSSI>> fields 542 can be a numerical value indicating the received signal strength of the radio waves that are received by the corresponding video STB. As illustrated in FIG. 5, the network record values 504 for the <<WIFI band>> fields 544 can be <<5 GHz>> when the corresponding video STB is operating in the 5 GHz unlicensed band, <<2.4 GHz>> when the corresponding video STB is operating in the 2.4 GHz unlicensed band, or <<NA>> if the <<connection type>> fields 538 indicate the corresponding video STB is wired.

Exemplary Computer System that can be Utilized within the Exemplary Service Provider Network

FIG. 6 graphically illustrates a simplified block diagram of a computer system suitable for use with embodiments described herein, as well as circuit design and circuit embodiments of the technology, according to an exemplary embodiment of the present disclosure. The various electronic devices, for example, the service provider system 202 and/or the portable diagnostic system 116, as described above can be implemented in hardware, firmware, software, or any combination thereof. The discussion of FIG. 6 to follow describes an exemplary computer system 610 that can be used for these electronic devices. In the exemplary embodiment illustrated in FIG. 6, the computer system 610 typically includes at least one processor 614 which communicates with a number of peripheral devices via a bus subsystem 612.
Typically, the at least one processor 614 can include, or can be, any of a microprocessor, graphics processing unit, or digital signal processor, and their electronic processing equivalents, such as an Application Specific Integrated Circuit ("ASIC") or Field Programmable Gate Array ("FPGA"). As used herein, the term "processor" signifies a tangible data and information processing device that physically transforms data and information, typically using a sequence of transformations (also referred to as "operations"). Data and information can be physically represented by an electrical, magnetic, optical, or acoustical signal that is capable of being stored, accessed, transferred, combined, compared, or otherwise manipulated by the processor. The term "processor" can signify a singular processor as well as multi-core systems or multi-processor arrays, including graphics processing units, digital signal processors, digital processors, or combinations of these elements. The processor can be electronic, for example, comprising digital logic circuitry (for example, binary logic), or analog (for example, an operational amplifier). The processor may also operate to support performance of the relevant operations in a "cloud computing" environment or as a "software as a service" (SaaS). For example, at least some of the operations may be performed by a group of processors available at a distributed or remote system, these processors accessible via a communications network (e.g., the Internet) and via one or more software interfaces (e.g., an application program interface (API)). The computer system typically includes an operating system, such as Microsoft's Windows, Sun Microsystems' Solaris, Apple Computer's macOS, Linux, or UNIX. The computer system also typically can include a Basic Input/Output System (BIOS) and processor firmware. The operating system, BIOS, and firmware are used by the processor to control subsystems and interfaces coupled to the processor.
Typical processors compatible with these operating systems include the Pentium and Itanium from Intel, the Opteron and Athlon from Advanced Micro Devices, and the ARM processor from ARM Holdings. As illustrated in FIG. 6, these peripheral devices may include a storage subsystem 624, comprising a memory subsystem 626 and a file storage subsystem 628, user interface input devices 622, user interface output devices 620, and a network interface subsystem 616. The input and output devices allow user interaction with the computer system 610. In the exemplary embodiment illustrated in FIG. 6, the network interface subsystem 616 provides an interface to outside networks, including an interface to a communication network 618, and is coupled via the communication network 618 to corresponding interface devices in other computer systems or machines. The communication network 618 may comprise many interconnected computer systems, machines, and communication links. These communication links may be wired links, optical links, wireless links, or any other devices for communication of information. The communication network 618 can be any suitable computer network, for example a wide area network such as the Internet, and/or a local area network such as Ethernet. The communication network 618 can be wired and/or wireless, and the communication network can use encryption and decryption methods, such as those available with a virtual private network. The communication network uses one or more communications interfaces, which can receive data from, and transmit data to, other systems. Embodiments of communications interfaces typically include an Ethernet card, a modem (e.g., telephone, satellite, cable, or ISDN), an (asynchronous) digital subscriber line (DSL) unit, a FireWire interface, a USB interface, and the like. One or more communications protocols can be used, such as HTTP, TCP/IP, RTP/RTSP, IPX, and/or UDP.
The user interface input devices 622 may include an alphanumeric keyboard, a keypad, pointing devices such as a mouse, trackball, touchpad, stylus, or graphics tablet, a scanner, a touchscreen incorporated into the display, audio input devices such as voice recognition systems or microphones, eye-gaze recognition, brainwave pattern recognition, and other types of input devices. Such devices can be connected by wire or wirelessly to a computer system. In general, use of the term "input device" is intended to include all possible types of devices and ways to input information into the computer system 610 or onto the communication network 618. The user interface input devices 622 typically allow a user to select objects, icons, text, and the like that appear on some types of user interface output devices, for example, a display subsystem. The user interface output devices 620 may include a display subsystem, a printer, a fax machine, or non-visual displays such as audio output devices. The display subsystem may include a cathode ray tube (CRT), a flat-panel device such as a liquid crystal display (LCD), a projection device, or some other device for creating a visible image such as a virtual reality system. The display subsystem may also provide non-visual display such as via audio output or tactile output (e.g., vibration) devices. In general, use of the term "output device" is intended to include all possible types of devices and ways to output information from the computer system 610 to the user or to another machine or computer system. The memory subsystem 626 typically includes a number of memories including a main random-access memory ("RAM") 630 (or other volatile storage device) for storage of instructions and data during program execution and a read-only memory ("ROM") 632 in which fixed instructions are stored.
The file storage subsystem 628 provides persistent storage for program and data files, and may include a hard disk drive, a floppy disk drive along with associated removable media, a CD-ROM drive, an optical drive, a flash memory, or removable media cartridges. The databases and modules implementing the functionality of certain embodiments may be stored by the file storage subsystem 628. The bus subsystem 612 provides a device for letting the various components and subsystems of the computer system 610 communicate with each other as intended. Although the bus subsystem 612 is shown schematically as a single bus, alternative embodiments of the bus subsystem may use multiple busses. For example, RAM-based main memory can communicate directly with file storage systems using Direct Memory Access ("DMA") systems.

CONCLUSION

The Detailed Description referred to accompanying figures to illustrate exemplary embodiments consistent with the disclosure. References in the disclosure to "an exemplary embodiment" indicate that the exemplary embodiment described can include a particular feature, structure, or characteristic, but every exemplary embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same exemplary embodiment. Further, any feature, structure, or characteristic described in connection with an exemplary embodiment can be included, independently or in any combination, with features, structures, or characteristics of other exemplary embodiments whether or not explicitly described. The Detailed Description is not meant to be limiting. Rather, the scope of the disclosure is defined only in accordance with the following claims and their equivalents. It is to be appreciated that the Detailed Description section, and not the Abstract section, is intended to be used to interpret the claims.
The Abstract section can set forth one or more, but not all, exemplary embodiments of the disclosure, and thus is not intended to limit the disclosure and the following claims and their equivalents in any way. The exemplary embodiments described within the disclosure have been provided for illustrative purposes and are not intended to be limiting. Other exemplary embodiments are possible, and modifications can be made to the exemplary embodiments while remaining within the spirit and scope of the disclosure. The disclosure has been described with the aid of functional building blocks illustrating the implementation of specified functions and relationships thereof. The boundaries of these functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternate boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Embodiments of the disclosure can be implemented in hardware, firmware, software application, or any combination thereof. Embodiments of the disclosure can also be implemented as instructions stored on a machine-readable medium, which can be read and executed by one or more processors. A machine-readable medium can include any mechanism for storing or transmitting information in a form readable by a machine (e.g., computing circuitry). For example, a machine-readable medium can include non-transitory machine-readable mediums such as read-only memory (ROM), random-access memory (RAM), magnetic disk storage media, optical storage media, flash memory devices, and others. As another example, the machine-readable medium can include transitory machine-readable mediums such as electrical, optical, acoustical, or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.). Further, firmware, software applications, routines, and instructions can be described herein as performing certain actions.
However, it should be appreciated that such descriptions are merely for convenience and that such actions in fact result from computing devices, processors, controllers, or other devices executing the firmware, software application, routines, instructions, etc. The Detailed Description of the exemplary embodiments fully revealed the general nature of the disclosure such that others can, by applying knowledge of those skilled in the relevant art(s), readily modify and/or adapt such exemplary embodiments for various applications, without undue experimentation and without departing from the spirit and scope of the disclosure. Therefore, such adaptations and modifications are intended to be within the meaning and range of equivalents of the exemplary embodiments based upon the teaching and guidance presented herein. It is to be understood that the phraseology or terminology herein is for the purpose of description and not of limitation, such that the terminology or phraseology of the present specification is to be interpreted by those skilled in the relevant art(s) in light of the teachings herein.
11863376 | DETAILED DESCRIPTION

In the following detailed description of the invention, numerous details, examples, and embodiments of the invention are set forth and described. However, it will be clear and apparent to one skilled in the art that the invention is not limited to the embodiments set forth and that the invention may be practiced without some of the specific details and examples discussed. Some embodiments provide methods for enabling multiple smart NICs of the same host computer to operate as a single entity (e.g., as a teamed set of smart NICs). In some embodiments, the smart NICs each execute a smart NIC operating system that performs virtual networking operations (and/or other operations, such as virtual storage operations) for a set of data compute nodes (e.g., virtual machines (VMs), containers, etc.) executing on the host computer. In some embodiments, the smart NICs are connected by a private communication channel in order to share dynamic state information, share configuration data (so that one of the smart NICs can act as a single point of contact for a network management and control system), and/or pass between each other data messages sent to and from the data compute nodes (DCNs) that require virtual network processing. By executing a smart NIC operating system, the smart NICs are able to perform various tasks that would otherwise be performed by the host computer software (e.g., the hypervisor of the host computer). These tasks can include virtual network processing for data messages (i.e., performing virtual switching and/or routing, firewall operations, etc.), virtual storage operations, etc. In order for multiple smart NICs to perform these operations that would otherwise be performed entirely by a single entity (e.g., the hypervisor), communication may be required between the smart NICs. FIG. 1 conceptually illustrates a host computer 100 with multiple physical smart NICs 105 and 110 that perform network virtualization operations.
As shown, the host computer100includes multiple DCNs (in this case, virtual machines)115-125that connect to the smart NICs105and110in a passthrough mode (i.e., without having any sort of network virtualization processing applied within the virtualization software130of the host computer100). Each of the VMs115-125has an associated virtual NIC (vNIC)135-145that connects to a different virtual function (VF)161-164of one of the smart NICs105and110via a Peripheral Component Interconnect Express (PCIe) fabric165(a motherboard-level interconnect that connects the physical processor of the host computer100to the physical interfaces of the smart NICs105and110). Each vNIC135-145, and thus each VM115-125, is bound to a different VF of one of the smart NICs105or110. The VFs161-164, in some embodiments, are virtualized PCIe functions exposed as interfaces of the smart NICs. Each VF is associated with a physical function (PF), which is a physical interface of the smart NIC that is recognized as a unique PCIe resource. In this case, the smart NIC105has one PF170and the smart NIC110has one PF175, but in many cases each smart NIC will have more than one PF. The PF170is virtualized to provide at least the VFs161-162while the PF175is virtualized to provide at least the VFs163-164. In some embodiments, the VFs are provided so that different VMs can each connect to a different virtual interface of the smart NICs. In some embodiments, VF drivers150-160execute in each of the VMs115-125to manage their respective connections to the VFs. As shown, in some embodiments, each VM115-125is associated with a vNIC135-145that is provided by the virtualization software130as a software emulation of the NIC. In different embodiments, the VMs115-125access the VFs either through their respective vNICs135-145or directly in a passthrough mode (in which the virtualization software130is not involved in most network communications).
In yet other embodiments, the VMs115-125can switch between this passthrough mode and accessing the VFs via their respective vNICs135-145. In either case, the virtualization software130is involved in allocating the VFs161-164to the VMs115-125and enabling the VFs to be accessible from the VF drivers150-160. It should also be noted that although in this case all of the network virtualization operations have been shifted from the virtualization software130of the host computer to the smart NICs105and110, in other embodiments virtual switch(es) provided by the virtualization software130can connect directly to the PFs170and175. In some such embodiments, data traffic is sent from a VM via a vNIC to the virtual switch, which provides the traffic to the PF. In this case, the virtual switch performs basic switching operations but leaves the network virtualization operations to the smart NIC. The smart NICs105and110also include physical network ports181-184. In different embodiments, smart NICs may each include only a single physical network port or multiple (e.g., 2, 3, 4, etc.) physical network ports. These physical network ports181-184provide the physical communication to a datacenter network for the host computer100. In addition, a private communication channel180is shown between the two smart NICs105and110, which allows these smart NICs to communicate. As described further below, this communication channel180may take various forms (e.g., direct physical connection, logical connection via the existing network, connection via PCIe messages). Finally,FIG.1illustrates that the smart NICs105and110perform network virtualization operations185. In some embodiments, these operations can include logical switching and/or routing operations, distributed firewall operations, encapsulation, and other networking operations that are often performed in the virtualization software of host computers. 
In some embodiments, all of the smart NICs of a given host computer are provided with the same virtual networking configuration. Though not shown in the figure, in some embodiments each smart NIC is a NIC that includes (i) a packet processing circuit, such as an application specific integrated circuit (ASIC), (ii) a general-purpose central processing unit (CPU), and (iii) memory. The packet processing circuit, in some embodiments, is an I/O ASIC that handles the processing of data messages forwarded to and from the DCNs in the host computer and is at least partly controlled by the CPU. In other embodiments, the packet processing circuit is a field-programmable gate array (FPGA) configured to perform packet processing operations or a firmware-programmable processing core specialized for network processing (which differs from the general-purpose CPU in that the processing core is specialized and thus more efficient at packet processing). The CPU executes a NIC operating system in some embodiments that controls the packet processing circuit and can run other programs. In some embodiments, the CPU configures the packet processing circuit to implement the network virtualization operations by configuring flow entries that the packet processing circuit uses to process data messages. When a data message is sent by one of the VMs115-125, that data message is (in software of the host computer100) sent via the corresponding vNIC135-145. The data message is passed through the PCIe bus165to the corresponding VF161-164of the appropriate smart NIC. The smart NIC ASIC processes the data message to apply the configured network virtualization operations185, then (so long as the data message does not need to be sent to the other smart NIC of the host computer and the destination for the data message is external to the host computer) sends the data message out of one of its physical ports181-184. 
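To make the flow-entry model described above concrete, the following minimal Python sketch (not the actual smart NIC implementation; the class, field names, and addresses are all invented for illustration) shows a CPU-installed match/action table of the kind the packet processing circuit might consult:

```python
# Illustrative sketch only: a CPU-configured flow table that a smart NIC's
# packet processing circuit might consult. All names are assumptions.

class FlowTable:
    def __init__(self):
        self.entries = []  # ordered list of (match_fn, action) pairs

    def install(self, match_fn, action):
        # The smart NIC CPU installs flow entries; the packet processing
        # circuit matches incoming data messages against them.
        self.entries.append((match_fn, action))

    def process(self, packet):
        # First matching entry wins; default action is to drop.
        for match_fn, action in self.entries:
            if match_fn(packet):
                return action
        return "drop"

table = FlowTable()
# Deliver frames for a locally bound VM to its virtual function.
table.install(lambda p: p["dst_mac"] == "aa:bb:cc:00:00:01", "output:VF1")
# Everything else goes out a physical network port.
table.install(lambda p: True, "output:phys0")

print(table.process({"dst_mac": "aa:bb:cc:00:00:01"}))  # output:VF1
print(table.process({"dst_mac": "ff:ff:ff:ff:ff:ff"}))  # output:phys0
```

The ordered first-match semantics here stand in for whatever lookup structure the ASIC actually uses; the point is only that the general-purpose CPU programs the table while the packet processing circuit consults it per data message.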
It should be noted that, whileFIG.1illustrates a host computer with virtualization software on which various VMs operate, the discussion of smart NICs herein also applies to host computers hosting other types of virtualized DCNs (e.g., containers) as well as bare metal computing devices (i.e., devices on which no virtualization software executes). In the latter case, the bare metal computing device will typically directly access the PFs of multiple smart NICs rather than any VFs. That is, the smart NICs are used to provide network virtualization (or other operations, such as storage virtualization) without the software on the computing device being aware of these operations. FIG.2conceptually illustrates the collective operation of a set of smart NICs205-210of a single host computer of some embodiments. Each of these smart NICs includes multiple VFs for communication with VMs of the host computer as well as multiple physical ports for communication with a datacenter network (e.g., to which other host computers, that may or may not use smart NICs, also connect). Each of the smart NICs runs (i.e., on the CPU of the respective smart NIC) a smart NIC operating system215-220. Each smart NIC operating system215-220controls the ASIC of the smart NIC and performs additional operations, such as network virtualization operations225and storage virtualization operations230. These operations225and230(and, in other embodiments, other types of operations) are distributed across the various smart NICs205-210of the host computer such that the smart NICs appear to operate as a single entity (i.e., in the same way as the virtualization software of the host computer is a single entity). The network virtualization operations225, as indicated above, include performing logical switching and/or routing of data messages for one or more logical forwarding elements, applying distributed firewall rules, performing network address translation, and other networking features.
If each of the smart NICs205-210is configured to perform the same network virtualization operations, then any of the smart NICs can receive a data message directed to or sent from one of the DCNs executing on the host computer and properly process this data message. Similarly, if the storage virtualization operations230are configured across all of the smart NICs, then a VM can be bound to any of the smart NICs, which can handle I/O requests from the VM to the virtual storage network. Whereas VMs are bound to smart NIC network adapter VFs for networking operations, the VFs to which the VMs are bound for the purpose of storage virtualization are storage VFs (e.g., non-volatile memory express (NVMe) devices or small computer system interface (SCSI) devices). In order for multiple smart NICs to perform these operations as though operating as a single entity (similar to a hypervisor of the host computer), communication may be required between the smart NICs. Therefore, in some embodiments, a private communication channel is set up between the smart NICs to enable communication between the smart NICs. The private communication channel, in some embodiments, is a physically separate channel. For instance, in some embodiments the smart NICs are connected via a set of physical cables that only carries communication between the smart NICs.FIG.3conceptually illustrates some embodiments in which the smart NICs of a host computer300are connected serially. As shown, each of the smart NICs305-320is connected to two other smart NICs. That is, the smart NIC305is connected to smart NICs320and310, the smart NIC310is connected to smart NICs305and315, the smart NIC315is connected to smart NICs310and320, and therefore the smart NIC320is connected to smart NICs315and305. Depending on the physical arrangement of the smart NICs, some embodiments do not directly connect the smart NICs on the end (i.e., in the example the smart NICs305and320would not be connected).
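The serial wiring ofFIG.3can be modeled as a ring that routes around a failed link. The sketch below is illustrative only, assuming smart NICs indexed 0 through N−1 with bidirectional links between neighbors:

```python
# Sketch of next-hop selection in a full-ring private channel (FIG. 3),
# assuming NICs indexed 0..n-1; link (i, (i+1) % n) connects neighbors.
def ring_path(src, dst, n, failed_links=frozenset()):
    """Return the sequence of NIC indices from src to dst, trying the
    clockwise direction first and falling back to counter-clockwise."""
    down = {frozenset(link) for link in failed_links}

    def walk(step):
        path, cur = [src], src
        while cur != dst:
            nxt = (cur + step) % n
            if frozenset((cur, nxt)) in down:
                return None  # this direction crosses a failed link
            path.append(nxt)
            cur = nxt
        return path

    return walk(1) or walk(-1)

print(ring_path(1, 3, 4))                         # [1, 2, 3]
print(ring_path(1, 3, 4, failed_links={(2, 3)}))  # [1, 0, 3]
```

With the full ring, traffic from NIC 1 to NIC 3 that would normally cross the 2-3 link simply travels the other way around, which mirrors the failure scenario the text describes.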
Having a full ring connection (as shown inFIG.3) allows for any of the smart NICs305-320to communicate with any of the other smart NICs in case of a failure of one of the communication links (or one of the smart NICs themselves). For instance, if the link between smart NIC310and smart NIC315failed, then smart NIC310could still reach smart NIC315via the other two smart NICs305and320. Similarly, if the smart NIC310itself failed, then smart NIC305could still reach smart NIC315via smart NIC320. For even more robust failure protection, some embodiments include private communication channel links between each pair of smart NICs (i.e., a full mesh of connections).FIG.4conceptually illustrates some embodiments in which each smart NIC of a host computer400is connected directly to each other smart NIC of the host computer. As shown, each of the smart NICs405-420is connected directly to each of the other three smart NICs405-420. In such a setup, if there are N smart NICs for a host computer, then each smart NIC requires N−1 direct connections to the other smart NICs. This sort of setup is reasonable for a host computer with a reasonably small number of smart NICs (e.g., 3-5 smart NICs), but becomes more difficult for larger numbers. These connections may use a separate purpose-built channel for inter-NIC communication in some embodiments. In other embodiments, if the smart NICs have enough physical ports, the connections can repurpose the physical network ports of the NICs (e.g., using Ethernet cables—if there are more than two smart NICs, though, this can require two of the network ports). Yet other embodiments use management ports of the smart NICs if these ports are available and if the bandwidth of the management ports is high enough to handle the expected communications between the smart NICs. In some embodiments, the smart NIC components that enable the private communications channel are isolated from the other components of the smart NIC. 
In this case, even if the other smart NIC components are non-operational (e.g., due to a firmware or software bug, hardware failure, etc.), the smart NIC is still able to at least relay traffic between the smart NICs. Rather than have the smart NICs connected to each other directly (whether serially or in a mesh), in other embodiments these smart NICs connect via a separate physical switch so that each smart NIC can directly communicate with any other smart NIC through the physical switch.FIG.5conceptually illustrates an example of the smart NICs of a host computer500connected through a separate physical switch505. As shown, each of the smart NICs510-520of the host computer500connects to an isolated physical switch505. This physical switch505, in some embodiments, only handles inter-smart NIC communication (i.e., it is not part of a datacenter network that handles data messages between DCNs and/or management and control traffic). In fact, this physical switch might not even use the same switching technology (e.g., Ethernet or Infiniband) used to carry networking traffic within the datacenter. As in the previous example, these connections may use a separate purpose-built channel, a physical network port, or a management port. In addition, for redundancy, some embodiments use two separate isolated switches (or more than two), with each smart NIC510-520connecting to each of these isolated switches. Rather than use a separate physical channel for private communications between smart NICs (e.g., if there is no separate purpose-built channel and the network ports cannot be spared for this use), in some embodiments the smart NICs communicate via a logically private communication channel that uses existing physical connections. For instance, all of the smart NICs of a host computer will generally connect to the same physical datacenter network, so a private communication channel can be overlaid on that network. 
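As a hedged illustration of such an overlay-based channel (VXLAN/Geneve-style), the sketch below wraps an inner message addressed with logical smart NIC addresses inside an outer header carrying the physical-port addresses; the VNI and all addresses are made-up example values, not anything specified by the text:

```python
# Illustrative overlay encapsulation for the private channel: the outer
# header uses the smart NICs' physical-port addresses while the inner
# message uses logical addresses of the smart NIC operating systems.
def encapsulate(inner_msg, vni, outer_src_ip, outer_dst_ip):
    return {"outer": {"src": outer_src_ip, "dst": outer_dst_ip, "vni": vni},
            "inner": inner_msg}

def decapsulate(frame, expected_vni):
    # A frame on the wrong overlay segment is not part of this channel.
    if frame["outer"]["vni"] != expected_vni:
        raise ValueError("frame does not belong to this private channel")
    return frame["inner"]

# Logical addresses (inner) and physical-port addresses (outer) are examples.
msg = {"src": "10.0.0.1", "dst": "10.0.0.2", "payload": "state-sync"}
frame = encapsulate(msg, vni=5001,
                    outer_src_ip="192.168.1.11", outer_dst_ip="192.168.1.12")
assert decapsulate(frame, expected_vni=5001) == msg
```

Because the VNI isolates each host computer's channel, the two sets of smart NICs inFIG.6can reuse the same inner logical addresses without conflict, as the text notes.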
FIG.6conceptually illustrates two host computers605and610with three smart NICs each that connect to a datacenter network600and use overlays on that datacenter network as their respective private communication channels. As shown, the first host computer605includes three smart NICs615-625that connect to the datacenter network600while the second host computer610also includes three smart NICs630-640that connect to the datacenter network600. Each of these respective sets of smart NICs uses a different overlay network (e.g., using encapsulation) as a private communication channel. The first set of smart NICs615-625uses a first overlay network645and the second set of smart NICs630-640uses a second overlay network650. These overlay networks used as private communication channels may be VXLAN networks, Geneve networks, etc. In some embodiments, the encapsulation network addresses used are those associated with the physical network ports of the smart NICs (i.e., the same network addresses used for encapsulating data traffic between DCNs on their respective host computers) while the underlying overlay network addresses are logical addresses associated with the smart NIC operating systems (in fact, the first set of smart NICs615-625could use the same set of overlay network addresses as the second set of smart NICs630-640). The use of overlay networks requires only that all of the smart NICs of a host computer be attached to the same layer 3 network (but not necessarily the same subnet). Thus, if one of the smart NICs is connected only to a physically separate management network but the others are connected to a data network within a datacenter (and not to the management network), then the smart NICs cannot communicate via such an overlay network. Some other embodiments use a dedicated VLAN as the private communication channel if all of the smart NICs for a host computer connect to the same data link layer (layer 2) network.
However, if this existing physical layer 2 network has numerous other host computers with their own sets of smart NICs that require separate VLANs and also carries data messages for the DCNs on these host computers, then the maximum number of VLANs (4094) available on a single network may be reached. In still other embodiments, the smart NICs of a host computer communicate via a private communication channel through that host computer. As described above, smart NICs typically connect to the PCIe subsystem of the host computer, which can be used for the private communication channel.FIG.7conceptually illustrates a host computer700with three smart NICs705-715that communicate through a PCIe fabric720of the host computer. Communicating through the PCIe subsystem typically allows any smart NIC to talk directly to any of the other smart NICs. In different embodiments, the smart NICs use the standard peer-to-peer transfer feature of PCIe, leverage the PCIe switching fabric, or use other enhancements on top of PCIe (e.g., Compute Express Link (CXL)). As mentioned, one use of the private communication channel is for a first smart NIC to pass a data message (e.g., a data message sent to or from the host computer or a DCN executing on the host computer) to a second smart NIC. The smart NICs operate as a single entity in that their smart NIC operating systems collectively implement a set of virtual networking operations (e.g., implementation of logical switches and/or routers, firewalls, etc.). However, each smart NIC has its own interfaces to which the DCNs of the host computer are bound (e.g., physical functions and virtual functions) as well as its own physical network ports. FIG.8conceptually illustrates a process800of some embodiments for processing a data message at a smart NIC that is one of multiple smart NICs for a host computer. 
As described above, each of the smart NICs has one or more interfaces, and DCNs operating on the host computer are each bound to a different interface. In addition, the virtual networking operations performed on the data messages have been pushed into the smart NIC operating system (as opposed to being performed by forwarding elements executing in the hypervisor of the host computer). The process800will be described in part by reference toFIGS.9-11, which illustrate examples of data messages being processed by a smart NIC. As shown, the process800begins by receiving (at805) a data message at a smart NIC. This data message could have been received from a datacenter network through a physical port of the smart NIC (e.g., as inFIG.9) or from the host computer (e.g., from a DCN executing on the host computer) through an interface of the smart NIC that binds to the host computer or one or more DCNs on the host computer (e.g., as inFIGS.10and11). It should be understood that the terms data message, packet, data packet, or message are used herein to refer to various formatted collections of bits that may be sent between network endpoints (e.g., between DCNs in a host and/or across a physical network), such as Ethernet frames, IP packets, TCP segments, UDP datagrams, etc. While the examples herein refer to data messages, packets, data packets, or messages, it should be understood that the invention should not be limited to any specific format or type of data message. The process800then applies (at810) network virtualization operations to the received data message based on the data message headers. 
These operations, as described, may include logical switching (e.g., based on a logical destination MAC address of the data message), logical routing (e.g., based on a logical destination IP address of the data message), distributed firewall operations (based on, e.g., a connection five-tuple of the data message, including source and destination IP addresses, transport layer protocol, and source and destination transport layer ports), network address translation, encapsulation (if required), and other operations that are commonly performed by hypervisors of the host computer. If the smart NICs collectively implement the virtual networking operations, then the smart NIC that first receives the data message performs this processing. When a first smart NIC receives the data message from a second smart NIC through the private communication channel, the second smart NIC will typically have already performed the required network virtualization operations (or the majority of these operations) and the first smart NIC can determine the destination of the data message with minimal additional processing. Based on these network virtualization operations, the smart NIC is able to determine a destination for the data message. It should be understood that the process800is a conceptual process and does not necessarily reflect the specific operations performed by a smart NIC. For instance, rather than perform a series of determinations regarding whether the destination is of a particular type (i.e., those shown in operations815,825, and840), the smart NIC will typically just identify a matching record (e.g., a flow record) for the data message and perform an action specified by that matching record. It should also be noted that this process does not cover the full spectrum of data message processing options. For instance, in some embodiments the smart NIC may block and/or drop data messages due to firewall rules, congestion, etc. 
The process800determines (at815) whether the destination for the data message is a DCN that is bound to the current smart NIC (i.e., the smart NIC performing the process800). This could be the case for data messages received from external networks or from other DCNs on the host computer (which may be bound to any of the smart NICs). When the destination is such a DCN bound to the current smart NIC, the process outputs (at820) the data message from the smart NIC via the interface to which the destination DCN is bound. In some embodiments, the data message is then handled by the host computer (e.g., sent to the DCN either via a vNIC or directly to the VF driver executing on the DCN without additional network virtualization processing in the hypervisor of the host computer). When the destination for the data message is not a DCN bound to the current smart NIC, the process800determines (at825) whether the destination is a DCN bound to a different smart NIC of the host computer. This could be the case for data messages received from external networks or from other DCNs on the host computer that are bound to the current smart NIC. Further, if the private communication channel does not have direct communication between every pair of smart NICs, then a first smart NIC might receive a data message from a second smart NIC and need to send that data message to a third smart NIC (e.g., in the example shown inFIG.3). When the destination is such a DCN bound to another smart NIC, the process800sends (at830) the data message to that other smart NIC (or an intermediary smart NIC if the NICs are connected serially) via the private communication channel between the smart NICs. FIG.9conceptually illustrates the path of two data messages910and915received at a first smart NIC900via the physical port905of that smart NIC. In this example, each of the smart NICs900and920of a host computer925has a single physical port.
At least two VMs930and935execute on the host computer925, and for simplicity only the VFs bound to these VMs are shown. The first smart NIC900provides a VF940to which the first VM930is bound (e.g., via its vNIC, which is not shown) while the second smart NIC920provides a VF945to which the second VM935is bound (also via its vNIC). Each of the smart NICs900and920performs network virtualization operations950, and a private communication channel955connects the two smart NICs. This private communication channel may be any of the types described above (e.g., a separate physical channel, a VLAN or overlay network on the physical network to which the physical ports905and960of the smart NICs connect, or a connection through the PCIe subsystem). The smart NIC900performs network virtualization operations950on each of the data messages910and915. Because the destination address for the first data message910is that of VM1930which is bound to that smart NIC900, the smart NIC900outputs the data message910via VF940to the VM930. On the other hand, the network virtualization operations950applied to the second data message915identify that the destination address for this data message915is that of VM2935, which is bound to the second smart NIC920. As such, the first smart NIC900passes this data message915to the second smart NIC920via the private communication channel955. In some embodiments, the first smart NIC900also provides context information to the second smart NIC920regarding processing of the data message by the network virtualization operations950, so that this processing does not need to be fully repeated at the second smart NIC920. The second smart NIC920, in some embodiments, applies network virtualization operations950to evaluate this context and determine that the data message915should be sent to the VM2935. As such, the smart NIC920outputs the data message915via VF945to the VM935. 
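The branching applied at operations815-830, as exercised in theFIG.9example, can be sketched as a small dispatch function. This is illustrative only: the binding tables and the placeholder port-selection rule are invented for the example, and the real smart NIC would identify a matching flow record rather than perform these checks in sequence:

```python
# Hedged sketch of the dispatch decisions in process 800: after virtual
# network processing resolves a destination, a smart NIC either delivers
# locally, forwards over the private channel, or sends out a physical port.
def dispatch(dest, local_bindings, peer_bindings, local_ports):
    if dest in local_bindings:
        # Destination DCN is bound to this smart NIC (op 820).
        return ("interface", local_bindings[dest])
    if dest in peer_bindings:
        # Destination DCN is bound to another smart NIC (op 830).
        return ("private_channel", peer_bindings[dest])
    # External destination: pick one of this NIC's physical ports
    # (a placeholder choice standing in for the real port selection).
    port = local_ports[hash(dest) % len(local_ports)]
    return ("physical_port", port)

local = {"VM1": "VF1"}          # VM1 bound to this NIC's VF (as in FIG. 9)
peers = {"VM2": "smartNIC2"}    # VM2 bound to the other smart NIC
print(dispatch("VM1", local, peers, ["p1"]))  # ('interface', 'VF1')
print(dispatch("VM2", local, peers, ["p1"]))  # ('private_channel', 'smartNIC2')
```

The two calls correspond to the two data messages inFIG.9: one delivered locally via a VF, the other handed across the private communication channel.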
Returning toFIG.8, if the destination for the data message is not a DCN on the host computer, then (assuming the data message is not to be dropped or blocked) the destination is external to the host computer. As such, the process800identifies (at835) the physical network output port for the data message. In some cases, all of the ports of all of the smart NICs are teamed in a link aggregation group (LAG) or other teaming mechanism. In this case, the connections for a single DCN bound to a particular smart NIC are load balanced across all of the physical output ports of all of the smart NICs and not just output by the smart NIC that receives the data message. In other cases, the different smart NIC ports might have different connectivity such that data messages for certain destinations need to be output from one smart NIC and data messages for other destinations need to be output from another smart NIC (irrespective of any load balancing operations). As such, the process800determines (at840) whether the identified physical network output port is on another smart NIC or the current smart NIC. If the output port for the data message is a port of another smart NIC, then the process800sends (at830) the data message to the other smart NIC (or an intermediary smart NIC if the NICs are connected serially) via the private communication channel between the smart NICs. On the other hand, if the identified output port is a port of the current smart NIC, then the process800outputs (at845) the data message to the physical network via the identified output port. After outputting the data message to either a DCN (via an interface of the current smart NIC), the physical network, or another smart NIC via the private communication channel, the process800ends. FIG.10conceptually illustrates the path of two data messages1005and1010received at the first smart NIC900from the first VM930via the VF940to which that VM is bound. 
As shown, the first data message1005is directed to a first destination (Dest1) while the second data message1010is directed to a second destination (Dest2). The smart NIC900processes both of these data messages according to the configured network virtualization operations950, which in this case (i) determines that both data messages should be output to the physical network and (ii) includes load balancing across multiple output ports (e.g., in a LAG). In some embodiments, only the first data message in a connection has the load balancing operations applied, while for subsequent operations a cached result directs the data message to the same physical output port. Based on these operations, the smart NIC900outputs the first data message1005to the physical network via its own physical port905. The second data message1010, however, is sent to the second smart NIC920via the private communication channel955. In some embodiments, the first smart NIC900also provides context information indicating that network virtualization operations have been performed on the data message1010and that it should be output via the physical port960of the second smart NIC920. The second smart NIC920receives the second data message1010via the private communication channel955and outputs this data message1010to the physical network via its physical port960. As described above by reference toFIG.8, the private communication channel is also used when a DCN bound to an interface of one smart NIC sends a data message to a DCN bound to an interface of another smart NIC. Rather than the first smart NIC outputting the data message onto the physical network to be switched and/or routed back to the host computer via the second smart NIC, the private communication channel allows for the data message to be sent directly between the smart NICs. FIG.11conceptually illustrates the path of a data message1100sent from the first VM930operating on the host computer925to the second VM935. 
Here, the first smart NIC900receives the data message1100via the VF940to which the source VM930is bound. The smart NIC900applies network virtualization operations950to the data message1100to determine that the destination of the data message is a VM bound to the second smart NIC920. Based on this determination, the smart NIC900sends the data message1100to the second smart NIC920via the private communication channel955between the smart NICs. As in the other examples, in some embodiments the first smart NIC900also provides context information indicating that network virtualization operations have been performed on the data message1100and that the data message is directed to a DCN bound to the smart NIC920(though without necessarily indicating to which interface the DCN is bound). The second smart NIC920receives the data message1100via the private communication channel955and outputs the data message to the VM935via the VF interface945. The above-described process800as well as the examples shown inFIGS.9-11relate to unicast data messages (i.e., data messages with a single destination). In some embodiments, a single data message (e.g., a broadcast or multicast data message) might be sent along multiple paths by the network virtualization operations performed within one smart NIC. FIG.12conceptually illustrates the paths of a multicast data message1200received at the first smart NIC900via the physical port905of that smart NIC. In this example, the first smart NIC900applies network virtualization operations950to the multicast data message1200and determines that both the first VM930and the second VM935are in the multicast group to which the data message1200is sent. Based on this, the smart NIC900(i) outputs a first copy of the multicast data message1200via the VF940to the first VM930and (ii) passes a second copy of the data message1200to the second smart NIC920via the private communication channel955. 
In some embodiments, the first smart NIC900also provides context information to the second smart NIC920regarding processing of the data message by the network virtualization operations950, so that this processing does not need to be fully repeated at the second smart NIC920. The second smart NIC920, in some embodiments, applies network virtualization operations950to evaluate this context and determine that the multicast data message1200should be sent to the second VM935. As such, the smart NIC920outputs the data message1200via VF945to the VM935. In some embodiments, if multiple destinations of a multicast data message are bound to the second smart NIC920, only one copy of the data message is passed via the communication channel955, allowing the second smart NIC920to generate and output the necessary copies of the data message. Similarly, if one of the VMs attached to a first smart NIC sends a broadcast or multicast data message, the recipient smart NIC may process the data message and generate any copies necessary to send the data message to other VMs attached to the first smart NIC, output the data message via its physical port(s), and/or pass the data message to other smart NICs (either to send the data message to VMs bound to those smart NICs, output the data message via their physical port(s), or a combination thereof). Another situation that can require the use of the private communication channel for passing a data message between smart NICs occurs if all of the physical network ports of a smart NIC have become inoperable but the smart NIC itself is still operable. In this case, the smart NIC may still perform virtual networking operations on data messages sent from the DCNs bound to that smart NIC but will need to send those data messages to other smart NICs for output to the physical network irrespective of whether the ports operate in a LAG or not. 
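The multicast fan-out ofFIG.12, including the one-copy-per-peer optimization just described, can be sketched as follows (an illustrative model with made-up VM and NIC names, not the patent's implementation):

```python
# Sketch of multicast replication (FIG. 12): one copy per locally bound
# group member, and at most one copy over the private channel per peer
# smart NIC hosting members (the peer replicates further for its own VMs).
def replicate(group_members, local_bindings):
    local_out, remote_nics = [], set()
    for vm, nic in group_members.items():
        if nic == "local":
            local_out.append(local_bindings[vm])   # deliver via the VM's VF
        else:
            remote_nics.add(nic)                   # dedupe: one copy per peer
    return local_out, sorted(remote_nics)

# VM1 is bound to this smart NIC; VM2 and VM3 are both bound to "nic2",
# so only a single copy crosses the private communication channel.
members = {"VM1": "local", "VM2": "nic2", "VM3": "nic2"}
print(replicate(members, {"VM1": "VF1"}))  # (['VF1'], ['nic2'])
```

Deduplicating by peer smart NIC keeps private-channel traffic proportional to the number of NICs involved rather than the number of group members, matching the optimization the text describes.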
When the ports do operate in a LAG or the smart NICs are configured in a NIC team using another teaming mechanism, connections that have been previously assigned to an inoperable physical port are moved to another physical port (e.g., on another smart NIC). FIG. 13 conceptually illustrates the path of a data message for one of the connections shown in FIG. 10 after the physical port 905 of the first smart NIC 900 has gone down. This could occur due to a problem with the smart NIC itself, the physical cable that connects the port 905 to the datacenter network becoming disconnected, etc. As shown, another data message 1300 directed to Dest1 is sent from the VM 930 to the smart NIC 900 via the VF 940. The smart NIC 900 processes the data message according to the configured network virtualization operations 950, which determine that the data message should be output to the physical network but that the physical port 905 previously used for this connection is no longer up. As such, the connection is now re-balanced to use the other physical port 960 of the second smart NIC 920. The data message 1300 is thus sent to the second smart NIC 920 via the private communication channel 955. In some embodiments, the first smart NIC 900 also provides context information indicating that network virtualization operations have been performed on the data message 1300 and that it should be output via the physical port 960 of the second smart NIC 920. The second smart NIC 920 receives the second data message 1300 via the private communication channel 955 and outputs this data message 1300 to the physical network via its physical port 960.

In many situations, the smart NICs receive configuration data for the virtual networking operations from a network management and control system. Such a network management and control system, in some embodiments, receives data defining networking operations (e.g., defining logical networks), security operations, etc.
from a user (e.g., networking and/or security administrators) and uses this definitional data to generate configuration data for the various network elements (e.g., forwarding elements such as virtual switches and routers, middlebox elements such as distributed firewalls, etc.) and provide the configuration data to the network elements so that the network elements can implement the various networking and security operations. Such network elements include the smart NICs that perform network virtualization operations. Each of the ports of the different smart NICs (possibly including a management port) has its own network address, but many network management and control systems treat each host computer as a single entity. For instance, for host computers that do not use smart NICs for network virtualization operations, the network management and control systems of some embodiments communicate with an agent in the hypervisor of the host computer. The network management and control system uses a single management network address for each host computer and thus should not directly communicate with all of the multiple smart NICs of a host computer.

In some embodiments, the smart NICs use clustering technology in order to appear to the network management and control system as a single entity for the host computer. For instance, in some embodiments, the smart NICs of a host computer perform a leader election to determine a single one of the smart NICs that communicates with the network management and control system. In some such embodiments, each of the smart NIC operating systems runs a deterministic algorithm that selects one of the smart NICs as the point of contact. Any messages needed for this leader election are communicated over the private communication channel. FIG. 14 conceptually illustrates a process 1400 of some embodiments for configuring multiple smart NICs to perform network virtualization operations.
In some embodiments, the process 1400 is performed independently by each of a set of smart NICs of a host computer when the smart NICs are brought online (e.g., when the host computer is booted up). The process 1400 may also be performed (again, independently by each smart NIC of the host computer) in response to a change in smart NIC team membership (e.g., addition of a smart NIC to the NIC team or removal of a smart NIC from the NIC team, whether by external action or NIC failure). In some embodiments, the smart NICs monitor team membership using beaconing or keep-alive messages (e.g., sent via the private communication channel). The process 1400 is described in part by reference to FIGS. 15 and 16, which illustrate operations of a pair of smart NICs 1500 and 1505 of a host computer (not shown).

FIG. 15 illustrates that each of the smart NICs 1500 and 1505 respectively executes a smart NIC operating system 1510 and 1515. The smart NIC operating systems include multiple modules, such as network virtualization operations 1520 and 1525, control agents 1530 and 1535, and leader election modules 1540 and 1545. A private communication channel 1550 connects the two smart NICs and allows communication between the smart NICs (e.g., for sending data messages, configuration data, etc.). The control agents 1530 and 1535, in some embodiments, communicate with a network management and control system that configures network virtualization operations on numerous host computers in a datacenter (e.g., by provisioning these host computers to perform switching and/or routing to implement logical networks). The control agents 1530 and 1535 receive configuration data from this network management and control system and use the configuration data to properly configure their respective network virtualization operations 1520 and 1525. The control agents 1530 and 1535 are able to communicate with each other via the private communication channel 1550.
The leader election modules 1540 and 1545 perform leader election to assign one of the smart NICs as the leader for a particular task (e.g., communication with the network management and control system). The leader election modules 1540 and 1545 may communicate via the private communication channel 1550 in order to confirm leader elections for a task, share identifying information so that each leader election module is aware of all of the smart NICs of a host computer that can be chosen as the leader for a task, etc.

As shown, the process 1400 begins by using (at 1405) a leader election algorithm to determine which smart NIC is the single point of communication for the network management and control system. In some embodiments this leader election algorithm is a deterministic algorithm performed separately on each individual smart NIC of the group of smart NICs for a host computer. That is, if there are five smart NICs, then each of the five smart NICs runs the leader election algorithm to arrive at the same elected leader. An example of such an algorithm is a hash-based decision that hashes identifiers for the five smart NICs and computes the resultant hash modulo five (the number of smart NICs) to determine the leader. In other embodiments, the leader election algorithm involves communication and/or negotiation between the smart NICs to arrive at an elected leader smart NIC that is designated to communicate with the network management and control system. Once this election has been completed, the process 1400 determines (at 1410) whether the current smart NIC (i.e., the smart NIC performing this process) is elected as the point of contact. It should be understood that the process 1400 is a conceptual process and that each smart NIC does not necessarily make such a specific determination. Rather, the smart NIC that is elected as the leader performs a first set of operations and the other smart NICs perform a different set of operations after the leader election.
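The hash-based variant of this election can be sketched in a few lines. Because every smart NIC hashes the same sorted set of identifiers, each one independently computes the same leader with no negotiation. The hash function, identifier format, and the task salt are illustrative assumptions; the salt anticipates electing different leaders for different tasks, as the specification describes later for MP and CCP communication.

```python
import hashlib

def elect_leader(nic_ids, task="mgmt"):
    """Deterministic, hash-based leader election run independently on each NIC.

    Hashes the full set of smart NIC identifiers (salted by task, so different
    tasks can elect different leaders) and takes the result modulo the number
    of NICs, as in the hash-modulo scheme described in the text.
    """
    members = sorted(nic_ids)  # identical ordering on every smart NIC
    digest = hashlib.sha256((task + "|".join(members)).encode()).hexdigest()
    return members[int(digest, 16) % len(members)]

nics = ["nic-0", "nic-1", "nic-2", "nic-3", "nic-4"]
leader = elect_leader(nics)
# Every NIC, running the same computation, agrees on the same leader.
assert all(elect_leader(nics) == leader for _ in nics)
assert leader in nics
```

Note that the result is independent of the order in which a given smart NIC learned about its peers, since the identifiers are sorted before hashing.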
In the example of FIG. 16, which illustrates the distribution of configuration data to multiple smart NICs, the leader election module 1540 is bolded to indicate that the smart NIC 1500 has been elected as the leader to communicate with the network management and control system 1600. For smart NICs that are not the elected point of contact with the network management and control system, the process 1400 eventually receives (at 1415) configuration data via a private communication channel from the elected smart NIC. It should be noted that this will not occur until the elected smart NIC receives this configuration data from the network management and control system and distributes that data to the other smart NICs.

At the smart NIC that is elected as the point of contact with the network management and control system, the process establishes (at 1420) communications with the network management and control system using an assigned management IP address for the host computer. In some embodiments, each host computer is treated as a single entity by the network management and control system, which may not be concerned with the internal networking implementation on each host computer. To establish communications, in some embodiments the elected smart NIC sends a message or set of messages from the management IP address to the network management and control system. In some embodiments, the network management and control system will automatically use the assigned IP address, but the elected smart NIC needs to advertise to the datacenter network that messages sent to that IP address should be directed to a particular one of its ports that uses the IP address. Once communication is established, the process receives (at 1425) configuration data from the network management and control system. This configuration data, in some embodiments, specifies how data messages should be handled by smart NICs.
The configuration data can include routing tables, virtual switch configuration, firewall rules, network address translation rules, load balancing rules, etc. In some embodiments, the configuration data is in a particular format for the particular type of network virtualization software running on the smart NIC operating system. In other embodiments, the configuration data is in a generic format and the controller agent on each smart NIC is responsible for converting the data into the particular format for the network virtualization software. FIG. 16 illustrates that the network management and control system 1600 provides configuration data 1605 to the control agent 1530 of the smart NIC 1500 that has been elected as the point of contact for the network management and control system 1600.

Next, the process shares (at 1430) the received configuration data with the other smart NICs (i.e., those smart NICs that do not communicate directly with the network management and control system). This data is provided to the other smart NICs via the private communication channel between the smart NICs. It is also at this point that the other smart NICs reach operation 1415 in their own processes, as they are now able to receive the configuration data. The process 1400 (whether being performed on the elected smart NIC or on one of the other smart NICs) next configures (at 1435) the network virtualization operations on that smart NIC based on the configuration data. As mentioned, in some embodiments the control agent uses the configuration data received from the network management and control system (e.g., as a first set of data tuples) to generate the configuration data for the network virtualization operations (e.g., as a second set of data tuples). In some embodiments, the network virtualization operations and/or the control agent in the smart NIC operating system also program the data message processing ASIC of the smart NIC based on this configuration data.
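The leader-distributes, everyone-configures flow of operations 1425-1435 can be modeled as below. The dictionaries standing in for the "data tuples", the conversion step, and the channel log are simplifying assumptions for illustration, not the wire format of any real management system.

```python
def convert_config(generic_tuples):
    """Control-agent step: convert generic config tuples into the local
    network virtualization software's own format (illustrative)."""
    return {"rules": sorted(generic_tuples)}

def distribute_and_configure(leader, nics, generic_tuples, channel_log):
    """The elected leader shares the config over the private channel;
    every smart NIC (leader included) then configures itself from it."""
    configured = {}
    for nic in nics:
        if nic != leader:
            # One hop over the private communication channel per peer NIC.
            channel_log.append((leader, nic, "config"))
        configured[nic] = convert_config(generic_tuples)
    return configured

channel = []
cfg = [("route", "10.0.0.0/24", "vf1"), ("fw", "allow", "tcp/443")]
state = distribute_and_configure("nic-1500", ["nic-1500", "nic-1505"], cfg, channel)
assert state["nic-1500"] == state["nic-1505"]  # identical config everywhere
assert channel == [("nic-1500", "nic-1505", "config")]
```

The point of the sketch is that only the leader talks to the management system, the private channel carries exactly one configuration hop per non-leader NIC, and every NIC ends up with the same local configuration.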
The process 1400 then ends, although in practice the elected smart NIC will receive updates regularly from the network management and control system as configuration changes are provided to the system. FIG. 16 shows that the control agent 1530 on the elected smart NIC 1500 provides the configuration data 1605 to the control agent 1535 on the second smart NIC 1505 (e.g., via the private communication channel 1550). The control agent 1530 also uses this configuration data 1605 to configure the network virtualization operations 1520 on its smart NIC 1500, while the control agent 1535 on the second smart NIC 1505 uses the configuration data 1605 to configure its respective network virtualization operations 1525.

In addition to disseminating the configuration data from the network management and control system, in some embodiments the leader smart NIC receives information from the other smart NICs via the private communication channel. In some embodiments, this information includes statistics (e.g., data message processing statistics), status/monitoring information, and other data. In some embodiments, the elected leader smart NIC performs various monitoring tasks based on this information (e.g., ensuring that the various smart NICs are currently operable and sending messages to other smart NICs if one of the smart NICs goes down). In some embodiments, some of the shared information is reported to the network management and control system. FIG. 17 conceptually illustrates the elected leader smart NIC 1500 collecting statistics from itself and the other smart NIC 1505 and reporting those statistics to the network management and control system 1600. As shown, the control agents 1530 and 1535 collect statistics from their respective sets of network virtualization operations 1520 and 1525. The control agent 1535 provides these statistics via the private communication channel 1550 to the control agent 1530 at the leader smart NIC 1500.
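The leader-side aggregation described here — merging per-NIC counters so the management system sees a single entity per host — can be sketched as a simple merge; the counter names are hypothetical.

```python
def aggregate_stats(per_nic_stats):
    """Merge per-smart-NIC counters into one host-level report."""
    totals = {}
    for stats in per_nic_stats.values():
        for counter, value in stats.items():
            totals[counter] = totals.get(counter, 0) + value
    return totals

report = aggregate_stats({
    "nic-1500": {"rx_msgs": 1200, "tx_msgs": 980},  # leader's own stats
    "nic-1505": {"rx_msgs": 640, "tx_msgs": 710},   # received via channel
})
assert report == {"rx_msgs": 1840, "tx_msgs": 1690}
```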
At least some of these statistics from both smart NICs are sent from the control agent 1530 to the network management and control system 1600. In some embodiments, the control agent 1530 or another module on the elected leader smart NIC 1500 aggregates the statistics so that the network management and control system 1600 is provided information that appears to come from a single entity. This collected information may be used by the network management and control system 1600 to monitor the host computer and/or individual smart NICs. The network management and control system may also use this information to modify the virtual networking configuration for the smart NICs, in which case the network management and control system provides configuration updates to the leader smart NIC that in turn distributes these updates to the other smart NICs via the private communication channel.

In some embodiments, the network management and control system includes multiple components that perform different functions and provide different configuration data to the host computers (in addition to receiving different data from the host computers). For instance, the network management and control system of some embodiments includes both a management plane (MP) and central control plane (CCP). The MP receives the configuration data from administrators, persists this data, and provides certain configuration information to host computers. In addition, in some embodiments, the host computers provide statistics, status, and other real-time data to the MP. The CCP, in some embodiments, receives network configuration data from the MP, determines the host computers (and other forwarding elements, such as gateways) that require each portion of the network configuration data, and provides this data to agents on these host computers. In some embodiments, the smart NICs elect multiple different leaders for multiple different tasks.
For instance, some embodiments elect one leader for receiving configuration data, another leader for collecting flow statistics, a third leader for collecting monitoring data, etc. In some embodiments, one leader is elected for communication with the MP and a second leader is elected for communication with the CCP. These leader elections may use different hash functions or different inputs to the same hash function in order to arrive at different smart NICs as the elected leader. In some embodiments, if a smart NIC is elected for communication with the MP then that smart NIC is removed from consideration for communication with the CCP, so as to ensure the load is shared.

FIG. 18 conceptually illustrates three smart NICs 1805-1815 of a host computer (not shown) that respectively operate smart NIC operating systems 1820-1830. As in the previous figures, each of the smart NIC operating systems 1820-1830 includes a respective control agent 1835-1845 and leader election module 1850-1860. In addition, each of the smart NICs 1805-1815 is connected to the other smart NICs via a private communication channel 1865. In addition, a network management and control system 1800 that includes both an MP 1870 and a CCP 1875 communicates with the smart NICs 1805-1815. Here, the leader election modules 1850-1860 have designated the first smart NIC 1805 as the point of contact for the MP 1870 and have designated the third smart NIC 1815 as the point of contact for the CCP 1875. As such, the control agent 1835 on the first smart NIC 1805 communicates with the MP 1870 and the control agent 1840 on the third smart NIC 1815 communicates with the CCP 1875. In some embodiments, each of the smart NIC operating systems actually runs separate MP agents and CP agents, with the elected MP agent communicating with the MP 1870 and the elected CP agent communicating with the CCP 1875.

For various purposes, the smart NICs also use the private communication channel to synchronize dynamic state information in some embodiments.
That is, when a first smart NIC receives or creates a set of dynamic state information, that first smart NIC uses the private communication channel to provide the same set of dynamic state information to one or more of the other smart NICs. Different types of state may be shared with a single other smart NIC or multiple (or all) other smart NICs of a given host computer. The synchronization of dynamic state information allows for that information to be preserved if one of the smart NICs fails, rather than the state information being lost. A smart NIC might fail due to an electrical short, disconnection, overheating, etc. As mentioned, an elected leader smart NIC among the group of smart NICs for a host computer might collect monitoring data from all of the other smart NICs. Either this collected data or data generated from the collected data could include dynamic state information that is synchronized to at least one backup smart NIC. Therefore, if the leader smart NIC fails, the monitoring state information is available for the next leader to retrieve.

In addition, when performing virtual networking processing, the smart NICs may need to store dynamic state information and share that data with each other. FIG. 19 conceptually illustrates two smart NICs 1905 and 1910 that share connection state. As in previous figures, each of the smart NICs (e.g., within the smart NIC operating system) performs network virtualization operations 1915 and 1920. These operations include switching & routing 1925 and 1930 as well as firewall engines 1935 and 1940 that perform firewall operations (i.e., determining whether to allow, block, or drop data messages based on the headers of those data messages). The firewall operations are stateful, in some embodiments, and thus use information from respective connection trackers 1945 and 1950. The connection trackers 1945 and 1950 store information about open connections that are processed by the smart NICs.
As shown, some embodiments store, for each open connection, at least a 5-tuple (source and destination IP addresses, source and destination transport layer ports, transport layer protocol), the current state of the connection, and a congestion window for the connection. This connection information is dynamic state that the connection trackers 1945 and 1950 synchronize over the private communication channel 1955 between the smart NICs. As shown, the connection tracker 1945 on the first smart NIC 1905 stores information for two open connections (cxn1 and cxn2), along with a congestion window for these open connections. Other embodiments may also store additional data (e.g., a receiver window). The firewall engines 1935 and 1940 use this dynamic connection state information from their respective connection trackers to process data messages sent to and from the DCNs on their host computer. Information as to whether a particular connection has been opened (e.g., completed a three-way handshake) allows the firewall engines 1935 and 1940 to determine whether a data message should be allowed or not. The congestion window is a dynamic state variable determined by the connection endpoints (and learned by the smart NICs) that limits the amount of data for a particular connection that can be sent onto the network (i.e., from a physical port of one of the smart NICs), and typically starts out small and increases up to a maximum (which may be set by the receiver window). If connection state were to be lost for an ongoing connection (e.g., because the smart NIC storing that connection state in its connection tracker failed), then depending on the firewall engine settings, either all of the traffic for that connection would be blocked by the firewall engine of the smart NIC that picked up the connection or the firewall engine on that smart NIC would need to re-learn the connection state from the endpoints.
In the first option, not only would the connection need to be re-established, but the congestion window would start out small again, limiting the amount of data that could be transmitted. The latter option avoids dropping the connection but at the cost of a window of lax security enforcement. As such, the connection trackers 1945 and 1950 share their dynamic state information with each other to avoid requiring either of these options. At this point, the state information for cxn1 and cxn2 has already been shared; these connections could be processed by either of the smart NICs 1905 and 1910. Meanwhile, a VM 1900 is in the process of opening a new connection (cxn3) and sending data message(s) 1960 for this connection to the network virtualization operations 1915 on the first smart NIC 1905 (i.e., the smart NIC to which the VM 1900 is bound). Accordingly, the connection tracker 1945 also synchronizes this connection state data 1965 to the connection tracker 1950. In some embodiments each smart NIC synchronizes its connection state data (or other state data) only to one other smart NIC, while in other embodiments each smart NIC synchronizes its connection state data (or other state data) to all of the other smart NICs.

Different embodiments synchronize dynamic state information at different intervals. Some embodiments synchronize each change through the private communication channel, while other embodiments synchronize state data at regular time intervals (e.g., every 1 ms, every 100 ms, every second, every 5 seconds, etc.). If the private communication channel is a purpose-built channel, then this may enable very fast (e.g., every 1 ms or so) synchronization. In addition, some embodiments use a mechanism in the smart NIC to write connection state (or other synchronized data) to a specific memory region in that smart NIC with this write automatically mirrored to a peer memory region on another smart NIC, enabling even faster synchronization (e.g., a delay of less than 10 μs).
If the synchronization interval is longer (a higher delay) such that the congestion window cannot be accurately synchronized, some embodiments only synchronize the basic connection state (i.e., whether the connection is open and allowed). In the case of failure of a first smart NIC that processes a particular connection, the new smart NIC that starts processing that connection allows traffic for the connection until that new smart NIC has learned the congestion window for the connection. While the VM 1900 is bound to the first smart NIC 1905 (and assuming that this connection is sent to and from a physical port of this first smart NIC 1905), the second smart NIC 1910 does not actually have any use for this information. However, FIG. 20 illustrates that the first smart NIC 1905 has become inoperable while these connections remain open, and so the VM 1900 is now bound to an interface of the second smart NIC 1910. This does not mean that the VM needs to restart all of its connections, because this information has been synchronized from the first smart NIC 1905. Depending on the configuration, if there are more than two smart NICs, all of the VMs that were bound to the now-inoperable smart NIC move over to the same smart NIC or are balanced across all of the remaining smart NICs in different embodiments. As shown, the VM 1900 continues sending data messages 2000 (now to the second smart NIC 1910) for cxn3. Because the current state of this connection is that it is now open with a congestion window of 3 (prior to the failure of the first smart NIC 1905), the firewall engine 1940 is able to process these data messages without requiring that the connection or its congestion window restart. This sort of state sharing may also be used by smart NICs that are performing operations other than virtual networking (or that perform multiple types of operations for which state sharing is used).
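The connection-tracker synchronization and failover described across FIGS. 19 and 20 can be modeled with a small sketch. The class below mirrors each update to a peer tracker, so that after a failover the surviving smart NIC's stateful firewall can keep allowing an established connection (with its learned congestion window) without restarting it. The class and field names are assumptions for illustration, not an actual smart NIC data structure.

```python
class ConnectionTracker:
    """Per-smart-NIC tracker that mirrors connection state to a peer."""

    def __init__(self):
        self.connections = {}  # 5-tuple -> {"state": str, "cwnd": int}
        self.peer = None       # peer tracker reached via the private channel

    def update(self, five_tuple, state, cwnd):
        entry = {"state": state, "cwnd": cwnd}
        self.connections[five_tuple] = entry
        if self.peer is not None:
            # Synchronize each change over the private communication channel.
            self.peer.connections[five_tuple] = dict(entry)

    def allows(self, five_tuple):
        """Stateful firewall check: only known, open connections pass."""
        entry = self.connections.get(five_tuple)
        return entry is not None and entry["state"] == "open"

# cxn3 is opened through the first smart NIC and mirrored to the second.
tracker_1945, tracker_1950 = ConnectionTracker(), ConnectionTracker()
tracker_1945.peer = tracker_1950
cxn3 = ("10.0.0.5", "192.0.2.9", 49152, 443, "tcp")
tracker_1945.update(cxn3, "open", 3)

# After the first smart NIC fails, the second already has the state:
assert tracker_1950.allows(cxn3)
assert tracker_1950.connections[cxn3]["cwnd"] == 3
```

A real implementation would batch or mirror these writes at one of the intervals discussed above rather than call the peer directly, but the invariant is the same: the peer tracker holds a usable copy of every open connection's state before failover occurs.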
If storage virtualization operations are handled by the smart NICs, then in some embodiments the storage virtualization functions include running a network stack to manage a transport layer (e.g., TCP) connection to the storage. In this case, connection information should again be shared between smart NICs in case of failover, so that these connections are not reset if one of the smart NICs fails.

FIG. 21 conceptually illustrates an electronic system 2100 with which some embodiments of the invention are implemented. The electronic system 2100 may be a computer (e.g., a desktop computer, personal computer, tablet computer, server computer, mainframe, a blade computer, etc.), phone, PDA, or any other sort of electronic device. Such an electronic system includes various types of computer readable media and interfaces for various other types of computer readable media. Electronic system 2100 includes a bus 2105, processing unit(s) 2110, a system memory 2125, a read-only memory 2130, a permanent storage device 2135, input devices 2140, and output devices 2145. The bus 2105 collectively represents all system, peripheral, and chipset buses that communicatively connect the numerous internal devices of the electronic system 2100. For instance, the bus 2105 communicatively connects the processing unit(s) 2110 with the read-only memory 2130, the system memory 2125, and the permanent storage device 2135. From these various memory units, the processing unit(s) 2110 retrieve instructions to execute and data to process in order to execute the processes of the invention. The processing unit(s) may be a single processor or a multi-core processor in different embodiments. The read-only memory (ROM) 2130 stores static data and instructions that are needed by the processing unit(s) 2110 and other modules of the electronic system. The permanent storage device 2135, on the other hand, is a read-and-write memory device.
This device is a non-volatile memory unit that stores instructions and data even when the electronic system 2100 is off. Some embodiments of the invention use a mass-storage device (such as a magnetic or optical disk and its corresponding disk drive) as the permanent storage device 2135. Other embodiments use a removable storage device (such as a floppy disk, flash drive, etc.) as the permanent storage device. Like the permanent storage device 2135, the system memory 2125 is a read-and-write memory device. However, unlike storage device 2135, the system memory is a volatile read-and-write memory, such as a random-access memory. The system memory stores some of the instructions and data that the processor needs at runtime. In some embodiments, the invention's processes are stored in the system memory 2125, the permanent storage device 2135, and/or the read-only memory 2130. From these various memory units, the processing unit(s) 2110 retrieve instructions to execute and data to process in order to execute the processes of some embodiments.

The bus 2105 also connects to the input and output devices 2140 and 2145. The input devices enable the user to communicate information and select commands to the electronic system. The input devices 2140 include alphanumeric keyboards and pointing devices (also called “cursor control devices”). The output devices 2145 display images generated by the electronic system. The output devices include printers and display devices, such as cathode ray tubes (CRT) or liquid crystal displays (LCD). Some embodiments include devices such as a touchscreen that function as both input and output devices. Finally, as shown in FIG. 21, bus 2105 also couples electronic system 2100 to a network 2165 through a network adapter (not shown). In this manner, the computer can be a part of a network of computers (such as a local area network (“LAN”), a wide area network (“WAN”), or an Intranet), or a network of networks, such as the Internet.
Any or all components of electronic system 2100 may be used in conjunction with the invention. Some embodiments include electronic components, such as microprocessors, storage, and memory, that store computer program instructions in a machine-readable or computer-readable medium (alternatively referred to as computer-readable storage media, machine-readable media, or machine-readable storage media). Some examples of such computer-readable media include RAM, ROM, read-only compact discs (CD-ROM), recordable compact discs (CD-R), rewritable compact discs (CD-RW), read-only digital versatile discs (e.g., DVD-ROM, dual-layer DVD-ROM), a variety of recordable/rewritable DVDs (e.g., DVD-RAM, DVD-RW, DVD+RW, etc.), flash memory (e.g., SD cards, mini-SD cards, micro-SD cards, etc.), magnetic and/or solid state hard drives, read-only and recordable Blu-Ray® discs, ultra-density optical discs, any other optical or magnetic media, and floppy disks. The computer-readable media may store a computer program that is executable by at least one processing unit and includes sets of instructions for performing various operations. Examples of computer programs or computer code include machine code, such as is produced by a compiler, and files including higher-level code that are executed by a computer, an electronic component, or a microprocessor using an interpreter.

While the above discussion primarily refers to microprocessors or multi-core processors that execute software, some embodiments are performed by one or more integrated circuits, such as application-specific integrated circuits (ASICs) or field-programmable gate arrays (FPGAs). In some embodiments, such integrated circuits execute instructions that are stored on the circuit itself. As used in this specification, the terms “computer”, “server”, “processor”, and “memory” all refer to electronic or other technological devices. These terms exclude people or groups of people.
For the purposes of this specification, the terms “display” or “displaying” mean displaying on an electronic device. As used in this specification, the terms “computer readable medium,” “computer readable media,” and “machine readable medium” are entirely restricted to tangible, physical objects that store information in a form that is readable by a computer. These terms exclude any wireless signals, wired download signals, and any other ephemeral signals.

This specification refers throughout to computational and network environments that include virtual machines (VMs). However, virtual machines are merely one example of data compute nodes (DCNs) or data compute end nodes, also referred to as addressable nodes. DCNs may include non-virtualized physical hosts, virtual machines, containers that run on top of a host operating system without the need for a hypervisor or separate operating system, and hypervisor kernel network interface modules. VMs, in some embodiments, operate with their own guest operating systems on a host using resources of the host virtualized by virtualization software (e.g., a hypervisor, virtual machine monitor, etc.). The tenant (i.e., the owner of the VM) can choose which applications to operate on top of the guest operating system. Some containers, on the other hand, are constructs that run on top of a host operating system without the need for a hypervisor or separate guest operating system. In some embodiments, the host operating system uses name spaces to isolate the containers from each other and therefore provides operating-system level segregation of the different groups of applications that operate within different containers. This segregation is akin to the VM segregation that is offered in hypervisor-virtualized environments that virtualize system hardware, and thus can be viewed as a form of virtualization that isolates different groups of applications that operate in different containers. Such containers are more lightweight than VMs.
A hypervisor kernel network interface module, in some embodiments, is a non-VM DCN that includes a network stack with a hypervisor kernel network interface and receive/transmit threads. One example of a hypervisor kernel network interface module is the vmknic module that is part of the ESXi™ hypervisor of VMware, Inc. It should be understood that while the specification refers to VMs, the examples given could be any type of DCNs, including physical hosts, VMs, non-VM containers, and hypervisor kernel network interface modules. In fact, the example networks could include combinations of different types of DCNs in some embodiments. While the invention has been described with reference to numerous specific details, one of ordinary skill in the art will recognize that the invention can be embodied in other specific forms without departing from the spirit of the invention. In addition, a number of the figures (including FIGS. 8 and 14) conceptually illustrate processes. The specific operations of these processes may not be performed in the exact order shown and described. The specific operations may not be performed in one continuous series of operations, and different specific operations may be performed in different embodiments. Furthermore, the process could be implemented using several sub-processes, or as part of a larger macro process. Thus, one of ordinary skill in the art would understand that the invention is not to be limited by the foregoing illustrative details, but rather is to be defined by the appended claims.
11863377 | DETAILED DESCRIPTION This section describes some embodiments in detail. The invention is defined by the appended claims. FIG. 2 shows computer architecture 200 that will be used to illustrate some network configuration aspects of the present invention. Any server 110 or switch 120 may have the architecture of FIG. 2, or may have some other architecture. Computer 200 includes a subsystem 204 connected to baseboard management controller (BMC) 206. Subsystem 204 includes one or more computer processors 210 executing computer programs stored in memory 220. Memory 220 is also used for data storage. Memory 220 may include volatile and/or non-volatile memories implemented in semiconductor, magnetic, optical, or other technologies. The computer programs stored in memory 220 include BIOS (Basic Input/Output System) 230, which is a boot-strapping program executed automatically when the subsystem 204 is powered up. In this disclosure, the term "BIOS" is used broadly, to include any bootstrapping technology, for example UEFI (Unified Extensible Firmware Interface). Memory 220 stores an operating system (OS) 234. In some embodiments, the OS is loaded into memory 220 in response to BIOS instructions. The OS is loaded from a storage location specified by OS image location identifier 236. The OS image storage can be part of computer 200, or can be remote (i.e., accessible over a network). BMC 206 provides remote access to computer 200 even when the processors 210 are down and/or memory 220 is corrupt. BMC 206 may include its own computer processors and/or memory (not shown), and/or may share processors or memory or other resources with subsystem 204. Exemplary BMC types are the Dell Remote Access Controller (DRAC) and integrated DRAC (iDRAC), which are available from Dell Corporation of the United States of America. See for example the following documents incorporated herein by reference: US Pre-Grant Patent Application US 2019/0020540 A1, published Jan. 17, 2019 (inventors: Yen et al.); US 2014/0215030 A1, published Jul.
31, 2014 (inventors: Terwilliger et al.); US 2014/0208133 A1, published Jul. 24, 2014 (inventors: Gopal et al.). Subsystem 204 and BMC 206 are connected to the network via one or more ports P1, P2, . . . , each of which is implemented by a Network Interface Card (NIC) 250. A port may include multiple subports, a subport may include multiple slots, and each slot may provide a separate physical connection to the network. Also, a single NIC 250 may implement multiple ports or subports or slots. For simplicity, the term "port" will be used herein to refer to a port, a subport, a slot, or any other physical interface to the network, unless a different meaning is indicated. Also, for ease of description, we will assume that each NIC 250 corresponds to a single port; but the invention is not so limited. Different ports may be used for different roles. For example, port P1 may be used for out-of-band (OOB) communications, i.e., communications with BMC 206. Ports P2, P3, P4 may be used for data communications (i.e., client communications and/or non-management communications). In the example of FIG. 2, ports P2, P3, P4 are configured in different VLANs (Virtual Local Area Networks), shown as VLANs 10, 20, 30. Thus, the role of port P2 is limited to the traffic in VLAN 10; the role of port P3 is limited to the traffic in VLAN 20; and the role of port P4 is limited to the traffic in VLAN 30. A role may be defined by one or more limitations to be met simultaneously, or in the alternative, or according to some other formula. One or more of ports P2, P3, P4 can also be used for in-band management, e.g., to transmit statistical or other data to a management computer (not shown), or to receive configuration data for configuring the computer 200, e.g., to configure VLANs on the ports 250, or to configure Virtual Machines (VMs), or to load the OS 234 from an OS image, or for other types of configuration.
An in-band management role and other roles can be defined in any way meaningful for a particular application of the network. FIG. 3A illustrates an exemplary network 310 that will be used to illustrate some network configuration aspects of some embodiments. Network 310 includes servers 110 (110.1, 110.2, 110.3) and switches 120 (120.1, 120.2, . . . ). Network 310 is connected to the Internet and/or other networks 150 for communication with outside computers 140, as in FIG. 1. Servers 110 may be storage units, computational units, or other types. Network 310 includes a management station ("Management Solution" or MS) 320 and Virtual Machine Manager (VMM) 330. Each of nodes 110, 120, 320, 330 may have some or all of the components of computer system 200 of FIG. 2, and/or other components. Network links 335 include links 335o (shown by thick dashed lines) carrying OOB traffic; links 335d carrying data traffic; and links 335i (thin dashed lines) carrying in-band management traffic. A link may have multiple uses, e.g., carry both in-band management and data traffic. A link 335 may be a physical link (an electric or optical cable for example, or a string of cables). A link 335 may be a virtual link, possibly traversing different networks. The ports interconnected by a virtual link communicate as if they were interconnected by a physical link. For example, if the ports execute a Link Layer Discovery Protocol (LLDP), they treat each other as neighbors. Switches 120.1 and 120.2 are part of the OOB network, i.e., dedicated to OOB traffic. (A switch may or may not be dedicated to OOB traffic or some other kind of traffic.) Switch 120.1 is immediately (directly) connected to ports P1 of servers 110.1, 110.2, 110.3. As used herein, an immediate (direct) connection is a connection by a link 335. Switches 120.3 through 120.6 are used for in-band management and data traffic. The switches 120.3 and 120.4 are immediately (directly) connected to servers 110.
In some embodiments, servers 110 and switches 120.1, 120.3, 120.4 are mounted on a single rack (not shown), and switches 120.3 and 120.4 are "top of the rack" switches (TORs). In some embodiments, one or more servers 110 are configured for Storage Spaces Direct operation; see e.g. "Dell EMC Solutions for Microsoft Azure Stack HCI", Dell Inc., 2019, Rev. A05, incorporated herein by reference. In some embodiments, one or more servers are configured for VSAN operation; see e.g. U.S. Pat. No. 8,862,799, issued Oct. 14, 2014, incorporated herein by reference. These details are exemplary and not limiting. FIG. 3B shows an exemplary interconnection between switch 120.3 and servers 110. Switch ports P30, P40, P50 are immediately connected to ports P30 of respective servers 110.1, 110.2, 110.3; switch ports P60, P70, P80 are immediately connected to ports P40 of respective servers 110.1, 110.2, 110.3; switch ports P90, P92, P94 are immediately connected to ports P50 of respective servers 110.1, 110.2, 110.3. In this example, servers 110 will be configured to use their ports P30 for in-band management traffic (and possibly data traffic); ports P40 for data traffic on a VLAN 10; and ports P50 for data traffic on a VLAN 20. Such configuration can be at least partially automated by a process shown in FIG. 4. This process can be performed, for example, to deploy any one or more of servers 110 when they are newly connected ("bare metal" servers) or after a period of inactivity. In some embodiments, this process is performed before the servers execute or even load their respective OS 234 (FIG. 2). The process of FIG. 4 will now be illustrated on the example of server 110.1 of FIG. 3B. At step 410, switches 120 are deployed using any suitable, possibly conventional, techniques. Of note, each port of each switch 120 has a MAC (Media Access Control) address, which is typically a physical address burned into the port's NIC 250 (FIG. 2). Alternatively, the MAC address may be a logical address.
In the switch deployment process, each switch port may be associated with a VLAN (e.g. VLAN 10 for switch ports P60, P70, P80 in FIG. 3A). A switch port may also be associated with properties such as maximum frame size (also called maximum transmission unit or MTU); duplex setting (full or half duplex); QoS; whether or not a particular protocol, e.g. Spanning Tree Protocol, is enabled on the port; and/or others. Table 1 below shows an exemplary configuration of ports P30 through P94 of switch 120.3 at the end of deployment process 410:

TABLE 1 - Ports of Switch 120.3

PORT                 MAC address          VLAN ID   MTU and other properties
P30 (FastEth1/3)     00:50:56:9c:5d:4a    None      MTU = 1500; . . .
P40 (FastEth1/4)     00:50:56:9c:5d:4b    None      MTU = 1500; . . .
P50 (FastEth1/5)     00:50:56:9c:5d:4c    None      MTU = 1500; . . .
P60 (FastEth1/6)     00:50:56:9c:5d:4d    10        MTU = 9162; . . .
P70 (FastEth1/7)     00:50:56:9c:5d:4e    10        MTU = 9162; . . .
P80 (FastEth1/8)     00:50:56:9c:5d:4f    10        MTU = 9162; . . .
P90 (FastEth1/9)     00:50:56:9c:5d:5a    20        MTU = 9162; . . .
P92 (FastEth1/10)    00:50:56:9c:5d:5b    20        MTU = 9162; . . .
P94 (FastEth1/11)    00:50:56:9c:5d:5c    20        MTU = 9162; . . .

In this example, the ports are assigned logical names (such as "FastEth1/3" for port P30) to simplify port management for human administrators. In-band management ports P30, P40, P50 are configured with MTU = 1500. The other ports are configured for data traffic with MTU = 9162. At step 420, server 110.1 is powered up. At step 430, switches 120 execute a discovery protocol, e.g. Link Layer Discovery Protocol (LLDP), possibly over the in-band network (which may include the in-band management links 335i and/or data links 335d and/or other links, and the ports and computers interconnected by such links). The servers' NICs 250 can be pre-configured to respond to discovery protocol messages from the switches even if the servers have not yet been deployed. During discovery, each server's NIC 250 informs the immediately connected switch port of the server NIC's MAC address and possibly other properties, for example MTU etc.
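The per-port configuration of Table 1 can be modeled as a simple record type. The following Python sketch is purely illustrative — the class and field names are not taken from the patent — but it shows the kind of per-port data (logical name, MAC, VLAN, MTU) the deployment step produces:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SwitchPort:
    """One row of Table 1: the deployed configuration of a switch port.
    Class and field names are illustrative, not from the disclosure."""
    name: str                # logical name, e.g. "FastEth1/6"
    mac: str                 # the port's own MAC address
    vlan_id: Optional[int]   # None when no VLAN is assigned
    mtu: int                 # maximum transmission unit

# A few rows of Table 1 for switch 120.3:
PORTS = [
    SwitchPort("FastEth1/3", "00:50:56:9c:5d:4a", None, 1500),  # P30
    SwitchPort("FastEth1/6", "00:50:56:9c:5d:4d", 10,   9162),  # P60
    SwitchPort("FastEth1/9", "00:50:56:9c:5d:5a", 20,   9162),  # P90
]

# Ports configured in VLAN 10:
vlan10 = [p.name for p in PORTS if p.vlan_id == 10]
```

A query like `vlan10` above mirrors how the management station later selects ports by their VLAN or property values.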
Some properties, including the MTUs, may be negotiated between the switch port and the server port in the discovery process, possibly changing the server settings obtained in step 410. At the end of step 430, switch 120.3 may store, in its memory 220 (FIG. 2), a database such as shown in Table 2 below. Table 2 has the same information as Table 1, plus an additional column, "Adjacent MAC", showing the MAC address for each immediately connected port, which can be a server port, a switch port, or a port of some other device. (Of note, a switch port may be immediately connected to multiple ports of other devices, and the "Adjacent MAC" entry may include multiple MACs.) The Adjacent MACs and possibly other information in Table 2 may be included in the switch's forwarding tables.

TABLE 2 - Database in Switch 120.3 after Step 430

PORT                 MAC address          VLAN ID   MTU and other properties   Adjacent MAC
P30 (FastEth1/3)     00:50:56:9c:5d:4a    None      MTU = 9162 . . .           00:50:56:9c:6d:4a
P40 (FastEth1/4)     00:50:56:9c:5d:4b    None      MTU = 9162 . . .           00:50:56:9c:6d:4b
P50 (FastEth1/5)     00:50:56:9c:5d:4c    None      MTU = 9162 . . .           00:50:56:9c:6d:4c
P60 (FastEth1/6)     00:50:56:9c:5d:4d    10        MTU = 9162 . . .           00:50:56:9c:6d:4d
P70 (FastEth1/7)     00:50:56:9c:5d:4e    10        MTU = 9162 . . .           00:50:56:9c:6d:4e
P80 (FastEth1/8)     00:50:56:9c:5d:4f    10        MTU = 9162 . . .           00:50:56:9c:6d:4f
P90 (FastEth1/9)     00:50:56:9c:5d:5a    20        MTU = 1500 . . .           00:50:56:9c:6d:5a
P92 (FastEth1/10)    00:50:56:9c:5d:5b    20        MTU = 1500 . . .           00:50:56:9c:6d:5b
P94 (FastEth1/11)    00:50:56:9c:5d:5c    20        MTU = 1500 . . .           00:50:56:9c:6d:5c

At step 440, MS 320 communicates with other nodes' BMCs 206 (FIG. 2) over the OOB network, to collect the NIC inventory. In particular, for each node 110, 120, and/or other nodes in network 310, MS 320 obtains the node's MAC addresses and, possibly, port identifiers and/or other information. At step 444, MS 320 uses a discovery protocol (e.g. LLDP) to identify switches 120 and the entire topology of network 310.
MS 320 then requests the switches 120, possibly using SNMP or some other network protocol, possibly over the in-band network, to provide operation parameters for each switch port. The operation parameters may include, for each switch port, the Adjacent MACs, the VLAN IDs, and the Properties (see Table 2). In sending these requests to the switches, MS 320 may use the switches' MAC addresses as destination addresses. At step 450, for each server port MAC address obtained at step 440, MS 320 identifies the immediately connected switch port ("SW's Adjacent Port") and the Properties configured on the switch port. For example, MS 320 may look up the server port MAC address in the Adjacent MAC column in Table 2, and obtain the corresponding entries in the same row, which include the immediately connected switch port's MAC address (in the "MAC address" column), the VLAN ID, and the Properties ("MTU and other properties" column). At step 460, MS 320 obtains a solution blueprint for network 310. The blueprints may be stored in any suitable database 340 (FIG. 3A), possibly in a customer support computer 140 outside of network 310, or they can be stored in network 310. A blueprint is a template providing general information about the physical and virtual components in network 310, including the roles. Exemplary roles may be: OOB; in-band management; data traffic; data traffic for a storage unit (e.g. for a server 110) and/or a particular VLAN ID; data center bridging link; uplink; and possibly others. The invention is not limited to any specific roles. The blueprint does not necessarily associate a role with a specific server port or switch port. The blueprint may associate a role with properties and/or VLANs. For example, the blueprint may associate the data traffic role with the MTU property value of 9162, and the in-band management role with an MTU of 1500. In another example, the blueprint may associate the data traffic role with a VLAN ID of 10.
At step 470, for a role specified in the blueprint, MS 320 reads the blueprint's corresponding parameters, such as VLAN IDs or Properties, and MS 320 matches these parameters against the data received from the switches (step 450). If parameters match, MS 320 assigns the role to the corresponding switch port(s), and to the adjacent server ports. For example, if the blueprint associates a role with a VLAN ID, then all the switch ports associated with the same VLAN ID at step 450, and their adjacent server ports, will be assigned the same role. Many implementations of this process are possible. For example, in some embodiments, instead of looping through the roles, MS 320 may loop through the server ports (obtained at step 444), and for each server port, may determine the Adjacent switch port and the corresponding VLAN ID and/or Properties (as in step 450; steps 450 and 470 may be merged). If the blueprint specifies a role for the VLAN ID, and/or for any of the Properties, MS 320 will assign the role to the server port. For example, suppose the blueprint associates the data traffic role (or corresponding VLAN IDs) with an MTU of 9162, and the in-band management traffic (or corresponding VLAN IDs) with an MTU of 1500. In configuring the server 110.1 (FIG. 3B), at step 450, MS 320 can determine (see Table 2) that the server port P30 is adjacent to switch port P30 associated with an MTU of 1500; and the server ports P40 and P50 are adjacent to respective switch ports P60 and P90 with an MTU of 9162. At step 470, since the blueprint provides an MTU of 9162 for data traffic (or for corresponding VLAN IDs), MS 320 associates the server ports P40 and P50 with the data traffic role. Since the blueprint provides an MTU of 1500 for in-band management traffic (or corresponding VLAN IDs), MS 320 associates the server port P30 with the in-band management role. If the roles are inconsistent, e.g.
one role is associated with a VLAN ID configured on a switch port, and a different role is associated with an MTU value on the same switch port, the inconsistency may be resolved by assigning multiple roles to the Adjacent server port, and/or assigning the roles based on some priority or other default mechanism, and/or by getting a human administrator's input. Alternatively or in addition, MS 320 may resolve the inconsistency using the history data for the server port or switch port roles. For example, if the server port or the adjacent switch port had the data traffic role in the most recent use, MS 320 may assign the data traffic role to the server port, or may show the history data to the user to help the user determine the role. MS 320 may keep the history in its memory 220 for example, or the history may be kept, for each switch or server, in the switch's or server's memory and provided to MS 320 upon request from the MS. In some embodiments, the administrator may be shown (e.g. on a computer monitor) different roles as possible candidates based on the inconsistent roles and/or history roles, and may be requested to pick the role using a user interface device (keyboard, mouse, touch-screen technology, voice recognition, or other suitable types). The same techniques can be used if the blueprint specifies the same parameter values (e.g. the same MTU value) for different roles, so that the parameters cannot be used to determine a port's role. For example, if all the roles are associated with the MTU value of 9162 and with no other parameters, and if all the switch ports have the MTU value of 9162, the server ports' roles cannot be determined from the process of FIG. 4. In this case, MS 320 may seek a human administrator's input and/or use the switch port history as described above for the inconsistent roles. At step 480, the server port roles are used to configure the server 110, possibly by conventional techniques.
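The matching of steps 450-470 can be sketched roughly as follows. This is an illustrative Python outline only, not code from the disclosure; the port data follow the narrative example for server 110.1 (switch port P30 at MTU 1500, switch ports P60 and P90 at MTU 9162), and all identifier names are assumptions:

```python
# Switch operation data as gathered at step 450, keyed by switch port:
# each entry holds the adjacent server port's MAC (from Table 2) and the
# switch port's parameters per the narrative example.
SWITCH_DATA = {
    "P30": {"adjacent_mac": "00:50:56:9c:6d:4a", "mtu": 1500},
    "P60": {"adjacent_mac": "00:50:56:9c:6d:4d", "mtu": 9162},
    "P90": {"adjacent_mac": "00:50:56:9c:6d:5a", "mtu": 9162},
}

# The blueprint associates roles with parameters, not with specific ports.
BLUEPRINT = [
    {"role": "in-band management", "mtu": 1500},
    {"role": "data traffic",       "mtu": 9162},
]

def assign_server_port_roles(switch_data, blueprint):
    """For each adjacent server port, collect every blueprint role whose
    parameters match the switch port's parameters (step 470). A port may
    collect several roles (the inconsistent case) or none."""
    roles = {}
    for params in switch_data.values():
        matched = [entry["role"] for entry in blueprint
                   if all(params.get(key) == value
                          for key, value in entry.items() if key != "role")]
        roles[params["adjacent_mac"]] = matched
    return roles
```

A server port that collects several roles, or none, would then fall back to the priority, history, or administrator-input mechanisms described above.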
For example, in some embodiments, MS 320 writes the server ports' roles in the OS image to be loaded into the server's memory 220. MS 320 then automatically powers down the server 110 (or powers down the server subsystem 204) via an OOB command to BMC 206. Then MS 320 powers the server (or subsystem 204) back up to cause BIOS execution. BIOS 230 then loads the OS image containing the updated server port roles into memory 220. In another embodiment, BIOS 230 is designed to load the OS 234 through an in-band management port. Upon discovering the server ports' roles, MS 320 writes the roles to server memory 220 at a location coordinated with the BIOS. Then MS 320 powers down each server and powers it up again, to cause the server to execute the BIOS 230. BIOS 230 uses the roles written by MS 320 to identify the in-band management port, and loads the OS through this port. Some embodiments of the present invention are defined by the following clauses:

Clause 1 defines a method for remotely discovering (i.e. discovering over a network), by a management computer system (e.g. 320; the management computer system may include a distributed system of multiple computers interconnected over a network, including computers outside of network 310), one or more roles of one or more network interfaces (a network interface can be a physical port, subport, slot, or some other type) of a first computer system (e.g. a server 110 or some other server or non-server computer) in a first network comprising one or more switches, the method comprising: obtaining (e.g. as in step 460), by the management computer system, a blueprint specifying one or more roles, at least one role being associated by the blueprint with one or more network parameters (parameters can be a VLAN ID, or MTU value, or a duplex setting (e.g. full or half duplex), or QoS, or whether the Spanning Tree Protocol or some other protocol should be enabled or disabled, or other parameters or combinations of parameters); obtaining (e.g.
as in 430), by the management computer system, switch operation data from one or more switches in the first network, the switch operation data comprising one or more network parameters for one or more network interfaces of one or more of the switches; determining (e.g. as in 450), by the management computer system, from the switch operation data, at least one switch interface adjacent to at least one interface of the first computer system; matching, by the management computer system, the network parameters obtained from the switch operation data for said at least one adjacent switch interface, against the network parameters of the blueprint, to determine one or more roles associated with at least one matched parameter by the blueprint; and using the determined one or more roles to determine at least one role for the at least one interface of the first computer system.
2. The method of clause 1, further comprising configuring the first computer system by the management computer system to use said at least one interface of the first computer system according to the determined at least one role. (Configuring may involve updating the server's BIOS or OS image; see step 480 for example.)
3. The method of clause 2, wherein said configuring comprises configuring the first computer system's BIOS.
4. The method of clause 2 or 3, wherein said configuring comprises configuring the first computer system's operating system image.
5. The method of any preceding clause, wherein said at least one role is in-band management.
6. The method of any preceding clause, wherein said at least one role is data traffic.
7. The method of any preceding clause, wherein at least one matched parameter identifies a maximum transfer unit (MTU).
8. The method of any preceding clause, wherein at least one matched parameter identifies a VLAN ID.
9. The method of any preceding clause, wherein the switch operation data comprise data obtained by the one or more switches performing network discovery.
10.
A method for remotely discovering, by a management computer system, one or more roles of one or more network interfaces of a first computer system in a first network comprising one or more switches, the method comprising:
(1) obtaining, by the management computer system, switch operation data from one or more switches in the first network, the switch operation data comprising, for at least one network interface of the switch: (1)(a) an identification of each adjacent interface, and (1)(b) at least one of: (1)(b)(i) one or more network interface properties (e.g. MTU, duplex setting, or other); (1)(b)(ii) one or more VLAN IDs;
(2) obtaining, by the management computer system, a blueprint specifying one or more roles and, for at least one role, at least one of: (2)(a) one or more network interface properties; (2)(b) one or more VLAN IDs;
(3) for at least one interface of the first computer system, determining an adjacent interface of a switch using the data (1)(a), and determining the corresponding at least one of (2)(a), (2)(b);
(4) for said at least one interface, searching the blueprint for the determined at least one of (2)(a), (2)(b), to obtain one or more corresponding roles.
11. The method of clause 10, wherein the at least one of (2)(a) and (2)(b) is (2)(a).
12. The method of clause 10, wherein the at least one of (2)(a) and (2)(b) is (2)(b).
The invention includes switches, servers, management computer systems, and other computers for performing the methods described herein, and includes computer readable media with software for causing the computers to perform methods described herein. The invention is defined by the appended claims. Although illustrative embodiments have been shown and described, a wide range of modification, change and substitution is contemplated in the foregoing disclosure, and in some instances, some features of the embodiments may be employed without a corresponding use of other features.
Accordingly, it is appropriate that the appended claims be construed broadly and in a manner consistent with the scope of the embodiments disclosed herein.
11863378 | DESCRIPTION OF EXAMPLE EMBODIMENTS Overview This disclosure describes techniques for automating the provisioning, configuring, and onboarding of network devices into a cloud management platform. The techniques may include a first method performed by an endpoint device (e.g., server, an I/O Module fabric extender, etc.). The first method may include generating, at the endpoint device, an Internet Protocol version 6 (IPv6) link-local address using a Media Access Control (MAC) address of the endpoint device, and receiving, at the endpoint device, an advertisement message that was sent using a discovery protocol. The first method may further include identifying, from the advertisement message, contact information associated with contacting a discovery service associated with the network fabric. Generally, the discovery service provides connectivity information for connecting to a cloud management platform. Further, the first method may include using the contact information, obtaining the connectivity information from the discovery service, and establishing a connection between the endpoint device and the cloud management platform using the connectivity information and the IPv6 link-local address. In some instances, the techniques may include a second method performed by an endpoint device (e.g., server, an I/O Module fabric extender, etc.). The second method may include receiving, at the endpoint device and from a fabric interconnect, an advertisement message that was sent using a discovery protocol. The second method may further include receiving, from the fabric interconnect, a signed security digest that has been signed by a private key associated with the fabric interconnect. The second method may further include identifying, from the advertisement message, contact information associated with contacting a discovery service associated with the network fabric. 
Generally, the discovery service provides connectivity information for connecting to a cloud management platform. Further, the second method may include using the contact information, obtaining the connectivity information from the discovery service, and establishing a connection between the endpoint device and the cloud management platform using the connectivity information. Further, the second method may include sending the signed security digest to the cloud management platform. Additionally, the techniques described in this disclosure may be performed as a method and/or by a system having non-transitory computer-readable media storing computer-executable instructions that, when executed by one or more processors, performs the techniques described above. EXAMPLE EMBODIMENTS This disclosure describes techniques for automating the provisioning, configuring, and onboarding of network devices into a cloud management platform. The cloud management platform can be used to manage network devices that are provisioned in on-premise environments, cloud environments, and/or hybrid environments. However, it can be a cumbersome and error-prone process for a user to manually configure each of the network devices with connectivity settings needed to be managed by the cloud management platform. The techniques described herein provide an automated process to distribute connectivity information to the network devices to allow them to be managed by the cloud management platform. Once connected to the cloud management platform, the techniques described herein further include automating the process for attaching the network devices with the appropriate user account registered with the cloud management platform. When a server is connected to another device (e.g., a switch) in a UCS domain, such as a data center, the server needs to be configured with connectivity settings that enable the server to communicate with the cloud management platform. 
To automate the process for configuring and registering network devices with a cloud management platform, a network device that is connected into a network fabric may self-assign an IP version 6 (IPv6) link-local address using a media access control (MAC) address of the network device. For instance, when a server or host boots up, it may create an IPv6 link-local address from a MAC identifier of the server according to various techniques described in the Request for Comments (RFC) 4291 published by the Internet Engineering Task Force (IETF). Generally, a UCS domain in a data center may include one or more fabric interconnects (e.g., switches, I/O module fabric extenders, etc.) behind which are disposed a plurality of servers and/or blade servers. When a server and/or blade server (referred to herein as "server") is connected to a fabric interconnect, the server may self-assign an IPv6 link-local address and listen on the connection to the fabric interconnect. The fabric interconnect may utilize discovery protocols, such as Link Layer Discovery Protocol (LLDP), Satellite Discovery Protocol (SDP), etc., to advertise various information about an Endpoint Discovery Service (EPDS) that is running in the switched network fabric. For instance, the fabric interconnect may advertise one or more LLDP packets that include attributes such as one or more Type-Length-Values (TLVs) and/or sub-TLVs that are used to propagate or advertise contact information that is usable to contact the EPDS. The EPDS may be running on any device in the switching fabric, including the fabric interconnect itself. Upon receiving the advertisement message(s), the server may identify the contact information that is usable to contact the EPDS, such as a network used to contact the EPDS (e.g., a virtual local area network (VLAN)), an IP address of the EPDS, and/or a port of the EPDS.
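The self-assignment mentioned above — deriving a link-local address from the MAC via RFC 4291's modified EUI-64 method — can be sketched as follows. This is a minimal illustration, not code from the disclosure:

```python
import ipaddress

def ipv6_link_local_from_mac(mac: str) -> str:
    """Derive an IPv6 link-local address from a 48-bit MAC address using
    the modified EUI-64 method of RFC 4291: flip the universal/local bit
    of the first octet, insert 0xFFFE in the middle, and prefix the
    resulting 64-bit interface identifier with fe80::/64."""
    octets = [int(part, 16) for part in mac.replace("-", ":").split(":")]
    octets[0] ^= 0x02                       # flip the universal/local bit
    eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]
    groups = ["{:02x}{:02x}".format(eui64[i], eui64[i + 1])
              for i in range(0, 8, 2)]
    # Normalize to the conventional compressed textual form.
    return str(ipaddress.IPv6Address("fe80::" + ":".join(groups)))

# e.g. ipv6_link_local_from_mac("00:50:56:9c:6d:4a")
#      -> "fe80::250:56ff:fe9c:6d4a"
```

With such an address, a server can listen on its fabric-interconnect link before it has been assigned any routable address.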
In some instances, the EPDS may be a web service that is embedded in an agent running on the fabric interconnect, but in some instances, the EPDS may be hosted outside the fabric interconnect. Generally, the EPDS acts or serves as a directory service that provides cloud management platform connectivity information to the endpoints/devices connected in the switching fabric (e.g., connected to fabric interconnects). The server may use the contact information to reach the EPDS by setting up a management interface on the advertised VLAN, and obtains connectivity information from the EPDS that is usable to establish a connection with the cloud management platform. The server may then establish a connection with the cloud management platform using the connectivity information received from, or obtained from, the EPDS. In some instances, this disclosure may include techniques for automating and streamlining the onboarding of devices into a user account that is registered with the cloud management platform. For instance, when the server is started up, the server (e.g., child) may request "parent" configuration details from the fabric interconnect. The parent configuration details may include a Domain Name Service (DNS) of the cloud management platform, IP and port information for a proxy running on that fabric interconnect, a unique identifier of the parent FI that is used by the cloud management platform, and a time-bound security digest that has been signed by the private key of the parent fabric interconnect. This information allows the child/server to inherit connectivity information from the parent FI as well as a means to authenticate itself to the cloud management platform. For instance, the child/server uses the parent configuration to connect to the cloud management platform DNS via the proxy. The child/server is connected or attached directly to the parent FI, and thus can gain access to the configuration details of the parent FI.
The cloud management platform is then able to authenticate the connection request from the child/server by using the public key of the parent FI to validate the signed security digest sent from the child/server. Then, the server/child is registered and claimed into the same user account as the parent FI in the cloud management platform. In this way, each server or other network device that is introduced to a switching fabric can be registered and claimed into the same user account as the parent FI devices such that users do not need to manually authenticate and claim their devices that are being provisioned. To manage all of the devices for a user, the devices must be onboarded with a user account that is registered with the cloud management platform. It is critical that parent devices (e.g., FIs, switches, etc.) are onboarded in or registered with the same user account as child devices (e.g., servers, blade servers, etc.). In order to ensure that a device is installed, set up, and being managed by the cloud management platform on behalf of a user, the devices need to be claimed by the user account (e.g., onboarded into the account). While users can manually claim a device, this can take a significant amount of time when many devices need to be claimed. The techniques described herein include techniques for automating the onboarding of devices with the correct user account (e.g., the user account with which the parent devices are onboarded). Certain implementations and embodiments of the disclosure will now be described more fully below with reference to the accompanying figures, in which various aspects are shown. However, the various aspects may be implemented in many different forms and should not be construed as limited to the implementations set forth herein. The disclosure encompasses variations of the embodiments, as described herein. Like numbers refer to like elements throughout. 
FIG.1illustrates a system-architecture diagram100of an example network architecture102(e.g., switched fabric) in which server devices are automatically provisioned and configured for management by a cloud management platform. Generally, the network architecture102may include devices that are housed or located in one or more data centers104that may be located at different physical locations. For instance, the network architecture102may be supported by networks of devices in a public cloud computing platform, a private/enterprise computing platform, a hybrid computing platform, and/or any combination thereof. The one or more data centers104may be physical facilities or buildings located across geographic areas that are designated to store networked devices that are part of the network architecture102. The data centers104may include various networking devices, as well as redundant or backup components and infrastructure for power supply, data communications connections, environmental controls, and various security devices. In some examples, the data centers104may include one or more virtual data centers which are a pool or collection of cloud infrastructure resources specifically designed for enterprise needs, and/or for cloud-based service provider needs. Generally, the data centers104(physical and/or virtual) may provide basic resources such as processor (CPU), memory (RAM), storage (disk), and networking (bandwidth). However, in some examples the devices in the network architecture102may not be located in explicitly defined data centers104and, rather, may be located in other locations or buildings. The switched fabric102may include a domain of network devices located in one or more data centers104, including various hardware devices and/or virtualized components. 
For instance, the switched fabric102may include one or more fabric interconnects108A,108B, etc., where the fabric interconnects108provide network connectivity and management capabilities to attached devices. The attached devices may include one or more servers located in one or more server racks110and one or more blade servers116disposed in one or more chassis114. The fabric interconnects108may be various types of devices, such as switches, network extenders, and so forth. Generally, the devices in the domain(s) of the data center(s)104may each run an agent118A-118D where the agent acts as a device connector that enables the devices to communicate with, and be managed by, a cloud management platform106. The agents118generally enable the devices in the UCS domain (e.g., fabric interconnects108, servers112, blade servers116, etc.) to be managed and monitored by the cloud management platform106. The cloud management platform106may generally be a management system or platform that delivers visualization, optimization, and orchestration for applications and infrastructure of users' computing environments. In order to register the devices in the data center(s)104with the cloud management platform106, the devices generally need various connectivity settings configured, such as proxy settings, and need to be provided with connectivity information. To automate the process for configuring and registering the servers112/116(and/or other network devices) with the cloud management platform106, the servers112/116that are connected in the switched fabric102may self-assign IPv6 link-local addresses using respective MAC addresses of the servers112/116. For instance, when a server or host boots up, it may create an IPv6 link-local address from a MAC identifier of the server according to various techniques described in the Request for Comments (RFC) 4291 published by the Internet Engineering Task Force (IETF). 
When a server112/116is connected to a fabric interconnect108, the server112/116may self-assign an IPv6 link-local address and listen on the connection to the fabric interconnect108. The fabric interconnect108may utilize discovery protocols, such as Link Layer Discovery Protocol (LLDP)124(e.g., for servers112), Satellite Discovery Protocol (SDP)126(e.g., for blade servers116), etc., to advertise various information about an Endpoint Discovery Service (EPDS)120A/120B that is running in the switched fabric102. For instance, the agents118A/118B running on the fabric interconnects108may advertise one or more LLDP packets that include attributes such as one or more Type-Length-Values (TLVs) and/or sub-TLVs that are used to propagate or advertise contact information that is usable to contact the discovery service120. The discovery service120may be running on any device in the switched fabric102, including the fabric interconnects108themselves (e.g., running in the agents118). Upon receiving the advertisement message(s), the server112/116may identify the contact information that is usable to contact the discovery service120, such as a network used to contact the EPDS (e.g., a VLAN), an IP address of the discovery service120, and/or a port of the discovery service120. In some instances, the discovery service120may be a web-service that is embedded in the agents118that are running on the fabric interconnects108, but in some instances, the discovery service120may be hosted outside the fabric interconnects108. Generally, the discovery service120acts or serves as a directory service that provides cloud management platform106connectivity information to the endpoints/devices connected in the switched fabric102(e.g., connected to fabric interconnects108). 
The server112/116may use the contact information to reach the discovery service120by setting up a management interface on the advertised VLAN and obtain connectivity information from the discovery service120that is usable to establish a connection with the cloud management platform106. The server112/116may then establish a connection with the cloud management platform106using the connectivity information received from, or obtained from, the discovery service120. Generally, in order to establish a connection to the cloud management platform106, the servers112/116may utilize a local proxy122A/122B that is running in or embedded in the agent118. The proxy122A/122B may extend the web-socket and TLS connectivity to one or more external networks128, thereby providing connectivity to the cloud management platform106. The proxy122A/122B may be configured to proxy communications from the link-local addressing of the switched fabric102to communicate over the external network(s)128. The proxy122A/122B may, in some examples, be chained behind a Hypertext Transfer Protocol (HTTP) proxy that provides access outside of the data center(s)104in restricted environments. The external network(s)128include one or more networks implemented by any viable communication technology, such as wired and/or wireless modalities and/or technologies. The external network(s)128may include any combination of Personal Area Networks (PANs), Local Area Networks (LANs), Campus Area Networks (CANs), Metropolitan Area Networks (MANs), extranets, intranets, the Internet, short-range wireless communication networks (e.g., ZigBee, Bluetooth, etc.), Wide Area Networks (WANs)—both centralized and/or distributed—and/or any combination, permutation, and/or aggregation thereof. The external network(s)128may include devices, virtual resources, or other nodes that relay packets from one network segment to another by nodes in the computer network. 
In some examples, the switched fabric102may include various types of devices configured to communicate using various communication protocols (e.g., VPN, SSL, TLS, DTLS, and/or any other protocol) over the external network(s)128. For instance, the endpoints may comprise personal user devices (e.g., desktop computers, laptop computers, phones, tablets, wearable devices, entertainment devices such as televisions, etc.), network devices (e.g., servers, routers, switches, access points, etc.), and/or any other type of computing device. FIG.2illustrates an example Link Layer Discovery Protocol (LLDP) packet200having one or more Type-Length-Values (TLVs) or sub-TLVs that convey contact information for communicating with an endpoint discovery service associated with a switched fabric. Generally, the LLDP packet200(or packets) may be used to advertise identity information, connectivity information, capability information, and/or other information with other devices. Generally, each LLDP packet200may be sent between devices from each of their interfaces at a fixed interval in the form of Ethernet frames. Each Ethernet frame contains one LLDP Data Unit (LLDPDU) that is a sequence of type-length-value (TLV) structures. According to the techniques described herein, an LLDP mechanism may be used and/or modified such that TLVs (and/or sub-TLVs) can be used in the LLDP packet200to provide information for contacting the discovery service120. As illustrated, the LLDP packet200may include a sub-TLV that includes a network indication (VLAN)202over which the discovery service120may be reached, a sub-TLV that indicates an IP address204at which the discovery service120may be reached, and a sub-TLV that indicates a port206on which the discovery service120may be reached. 
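The packet layout described above (sub-TLVs carrying the VLAN202, IP address204, and port206) can be sketched with a simplified encoder and decoder. The subtype codes and the one-byte type/length framing below are assumptions for readability; actual LLDP TLVs use a 7-bit type and 9-bit length, with organizationally specific TLVs (type 127) carrying vendor sub-TLVs.

```python
import socket
import struct

# Hypothetical sub-TLV subtype codes for the EPDS contact information;
# the real values would be defined by the implementation.
SUBTLV_VLAN, SUBTLV_IP, SUBTLV_PORT = 1, 2, 3

def pack_subtlv(subtype: int, value: bytes) -> bytes:
    # Simplified framing: 1-byte subtype, 1-byte length, then the value.
    return struct.pack("!BB", subtype, len(value)) + value

def pack_epds_contact(vlan: int, ip: str, port: int) -> bytes:
    """Encode the EPDS contact information (VLAN, IP, port) as a
    sequence of sub-TLVs, as FIG. 2 sketches for the LLDP packet."""
    return (pack_subtlv(SUBTLV_VLAN, struct.pack("!H", vlan))
            + pack_subtlv(SUBTLV_IP, socket.inet_aton(ip))
            + pack_subtlv(SUBTLV_PORT, struct.pack("!H", port)))

def unpack_subtlvs(data: bytes) -> dict:
    """Walk the buffer and recover each sub-TLV, as the server does
    upon receiving the advertisement."""
    out, i = {}, 0
    while i < len(data):
        subtype, length = struct.unpack_from("!BB", data, i)
        out[subtype] = data[i + 2:i + 2 + length]
        i += 2 + length
    return out

payload = pack_epds_contact(vlan=4044, ip="10.0.0.5", port=8443)
fields = unpack_subtlvs(payload)
print(struct.unpack("!H", fields[SUBTLV_VLAN])[0])  # 4044
print(socket.inet_ntoa(fields[SUBTLV_IP]))          # 10.0.0.5
print(struct.unpack("!H", fields[SUBTLV_PORT])[0])  # 8443
```

The self-describing type/length framing is what lets a receiver skip sub-TLVs it does not understand, which is why extending LLDP this way does not disturb devices that ignore the new fields.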
Thus, sub-TLVs and/or TLVs may be used to propagate contact information for a server112/116to contact a discovery service120in order to get connectivity information to connect to the cloud management platform106. Although not illustrated, a similar extension may be made to the SDP for communicating with the FI108and blade servers116in instances where SDP is utilized. FIG.3illustrates a system-architecture diagram300of an example switched fabric in which server devices are automatically provisioned, configured, and onboarded with the cloud management platform106. The techniques ofFIG.3streamline device claim processes for devices that are attached to a clustered pair of FIs108that form a domain. The agents118C running on devices connected to the FIs108are considered as child-agents. The logical agent118A/118B running on a clustered pair of FIs108is considered as the parent-agent. The child-agents118C receive an advertisement from the FI108via LLDP or DCBX TLV containing the FI agent's IP address, port number, and infra VLAN over which to communicate to the FI-agent118A/118B (described with respect toFIG.1andFIG.2). Upon startup, the child-agent118C requests the “parent configuration” from the FI-agent118A/118B. The parent configuration includes the cloud management platform106DNS (which could be cloud or appliance), the FI proxy IP/port, the parent-agent118unique identifier (e.g., the unique identifier of the parent agent in the cloud management platform106), and a time-bound security digest304signed by the private key302of the parent-agent118. At “1,” the agent118B running on a fabric interconnect108B may use a private key302to sign a security digest and create a signed security digest304. 
In some instances, the server112(e.g., child) may request the parent configuration information from the FI108, and the FI108may provide the signed security digest304to the server112at “2.” In other examples, the signed security digest304may be provided to the server112as part of the response to the get-parent request. At “3,” the server112(e.g., agent118C) may send the signed security digest304to the cloud management platform106as a means to authenticate itself with the cloud management platform and to inherit connectivity information from the fabric interconnect108. The connectivity information in the signed security digest304is used to connect to the DNS of the cloud management platform106via the proxy122B, and the signed security digest304is also used to authenticate the server112as in fact being a child device to the FI108B. The child-agent's118C device (e.g., server112, IOM, etc.) is directly attached to the parent-agent's118B device (FI108B), and only the child-agent118C can gain access to the parent configuration. At “4,” the cloud management platform106may authenticate the child-agent's118C connection request by using the public key306of the parent-agent118B to validate the child-agent's118C security digest304. At this point, the child-agent118C is registered and automatically claimed directly into the user account310(and/or in some examples from an onboarding account308) of the parent-agent118B. As illustrated, an endpoint identifier312N (corresponding to the server112/agent118C) may be moved from a general onboarding account308and into the same user account310as the fabric interconnect identifier314(e.g., corresponding to the FI108B/agent118B). 
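The signing and validation steps at “1” and “4” can be sketched as follows. The disclosure uses the parent FI's private key302for signing and a public key306for validation; purely so this sketch runs with the standard library, it substitutes a shared-secret HMAC, which preserves the structure being illustrated: a payload carrying an expiry, plus a signature the platform can verify.

```python
import hashlib
import hmac
import json
import time

# Assumed stand-in for the FI's private key; the disclosure uses an
# asymmetric key pair, not a shared secret.
PARENT_KEY = b"parent-fi-secret"

def sign_digest(parent_id: str, ttl_seconds: int = 300) -> dict:
    """Step '1': the parent FI produces a time-bound signed digest."""
    payload = {"parent_id": parent_id, "expires": int(time.time()) + ttl_seconds}
    blob = json.dumps(payload, sort_keys=True).encode()
    sig = hmac.new(PARENT_KEY, blob, hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": sig}

def verify_digest(digest: dict) -> bool:
    """Step '4': the cloud management platform checks the signature
    and the expiry before claiming the child device."""
    blob = json.dumps(digest["payload"], sort_keys=True).encode()
    expected = hmac.new(PARENT_KEY, blob, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, digest["signature"]):
        return False
    return digest["payload"]["expires"] > time.time()

digest = sign_digest("FI-108B")
print(verify_digest(digest))              # True
digest["payload"]["parent_id"] = "rogue"  # tampering breaks the signature
print(verify_digest(digest))              # False
```

The time bound limits how long a leaked digest is useful, and the signature binds the claim request to a specific parent FI, which is what lets the platform place the child into the parent's user account without user involvement.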
In this way, the signed security digest304, which may be time-bound, can be used to distribute connectivity information for endpoints to connect to the cloud management platform, and also as a way for endpoints to authenticate themselves as indeed being children to a FI108B by having a signed piece of data that is verifiable by the cloud management platform106as being signed by a particular FI108. FIGS.4and5illustrate flow diagrams of example methods that illustrate various aspects of the techniques of this disclosure. The logical operations described herein with respect toFIGS.4and5may be implemented (1) as a sequence of computer-implemented acts or program modules running on a computing system and/or (2) as interconnected machine logic circuits or circuit modules within the computing system. The implementation of the various components described herein is a matter of choice dependent on the performance and other requirements of the computing system. Accordingly, the logical operations described herein are referred to variously as operations, structural devices, acts, or modules. These operations, structural devices, acts, and modules can be implemented in software, in firmware, in special-purpose digital logic, and in any combination thereof. It should also be appreciated that more or fewer operations might be performed than shown inFIGS.4and5and described herein. These operations can also be performed in parallel, or in a different order than those described herein. Some or all of these operations can also be performed by components other than those specifically identified. Although the techniques described in this disclosure are described with reference to specific components, in other examples, the techniques may be implemented by fewer components, more components, different components, or any configuration of components. FIG.4illustrates a flow diagram of an example method400for automatically provisioning and configuring an endpoint for management by a cloud management platform. 
At402, an endpoint may generate an Internet Protocol version 6 (IPv6) link-local address using a Media Access Control (MAC) address of the endpoint device. That is, the endpoint may self-assign an IPv6 link-local address using its own MAC address such that there will not be overlapping IPv6 link-local addresses in the local domain of the endpoint. At404, the endpoint device (e.g., server, blade server, IOM, etc.) may receive an advertisement message that was sent using a discovery protocol. In some instances, the discovery protocol is LLDP, SDP, and/or any other type of discovery protocol. At406, the endpoint device may identify, from the advertisement message, contact information associated with contacting a discovery service associated with the network fabric. Generally, the discovery service120provides connectivity information for connecting to a cloud management platform106. The contact information may include an indication of a network (e.g., VLAN) usable to connect to the discovery service120, an IP address associated with the discovery service120, and an indication of a port of the discovery service120. At408, the endpoint may, using the contact information, obtain the connectivity information from the discovery service. At410, the endpoint may establish a connection with the cloud management platform using the connectivity information. In some instances, the method400may further include receiving, from the fabric interconnect, a signed security digest that has been signed by a private key associated with the fabric interconnect, and sending the signed security digest from the endpoint to the cloud management platform for authentication as being connected to the fabric interconnect. FIG.5illustrates a flow diagram of an example method500for automatically provisioning, configuring, and onboarding an endpoint with a cloud management platform. At502, an endpoint (e.g., server, blade server, IOM, etc.) 
may receive, from a fabric interconnect, an advertisement message that was sent using a discovery protocol. The discovery protocol may be LLDP, SDP, and/or any other discovery protocol running at any layer. At504, the endpoint may receive, from the fabric interconnect, a signed security digest that has been signed by a private key associated with the fabric interconnect. In some instances, the signed security digest may include, be included with, or otherwise be associated with the advertisement message. At506, the endpoint may identify, from the advertisement message, contact information associated with contacting a discovery service associated with the network fabric. Generally, the discovery service provides connectivity information for connecting to a cloud management platform106. The contact information may include an indication of a network usable to connect to the discovery service, an Internet Protocol (IP) address associated with the discovery service, and an indication of a port of the discovery service. At508, the endpoint may, using the contact information, obtain the connectivity information from the discovery service. For instance, the endpoint may reach out to the discovery service running in the fabric to obtain connectivity information for connecting to the cloud management platform106. At510, the endpoint may establish a connection with the cloud management platform using the connectivity information, such as by using one or more proxies and/or a tunneling protocol (e.g., SSL, TLS, etc.). At512, the endpoint may send the signed security digest to the cloud management platform. The cloud management platform106may then use a public key of the fabric interconnect to verify that the signed security digest was signed using a private key of the fabric interconnect. The cloud management platform may then automatically register the endpoint with the user account of the fabric interconnect. 
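The method500sequence can be traced end to end with plain dictionaries standing in for the fabric interconnect, the discovery service, and the cloud management platform; every name and value below is illustrative rather than taken from the disclosure.

```python
def run_method_500(fi, epds, cloud):
    """Trace steps 502-512 with stubbed-out components."""
    contact = fi["advertisement"]          # 502/506: advertisement with VLAN/IP/port
    digest = fi["signed_digest"]           # 504: signed digest from the parent FI
    # 508: use the contact info to look up connectivity at the discovery service.
    connectivity = epds[(contact["vlan"], contact["ip"], contact["port"])]
    # 510/512: connect to the platform and present the digest for verification.
    if cloud["verify"](digest):
        cloud["claimed"].append(connectivity["endpoint_id"])
        return True
    return False

fi = {"advertisement": {"vlan": 4044, "ip": "10.0.0.5", "port": 8443},
      "signed_digest": "signed-by-FI-108B"}
epds = {(4044, "10.0.0.5", 8443): {"endpoint_id": "server-112"}}
cloud = {"verify": lambda d: d.endswith("FI-108B"), "claimed": []}

print(run_method_500(fi, epds, cloud))  # True
print(cloud["claimed"])                 # ['server-112']
```

The sketch makes the dependency order explicit: the endpoint needs nothing preconfigured, since every input (contact info, connectivity, proof of parentage) arrives from the fabric interconnect or the discovery service at runtime.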
In this way, endpoints are automatically onboarded into the appropriate user accounts without manual user intervention. FIG.6illustrates a computing system diagram illustrating a configuration for a data center600that can be utilized to implement aspects of the technologies disclosed herein. The example data center600shown inFIG.6includes several server computers602A-602F (which might be referred to herein singularly as “a server computer602” or in the plural as “the server computers602”) for providing computing resources. In some examples, the resources and/or server computers602may include, or correspond to, any type of networked device described herein. Although described as servers, the server computers602may comprise any type of networked device, such as servers, switches, routers, hubs, bridges, gateways, modems, repeaters, access points, etc. The server computers602can be standard tower, rack-mount, or blade server computers configured appropriately for providing computing resources. In some examples, the server computers602may provide computing resources604including data processing resources such as VM instances or hardware computing systems, database clusters, computing clusters, storage clusters, data storage resources, database resources, networking resources, VPNs, and others. Some of the servers602can also be configured to execute a resource manager606capable of instantiating and/or managing the computing resources. In the case of VM instances, for example, the resource manager606can be a hypervisor or another type of program configured to enable the execution of multiple VM instances on a single server computer602. Server computers602in the data center600can also be configured to provide network services and other types of services. In the example data center600shown inFIG.6, an appropriate LAN608is also utilized to interconnect the server computers602A-602F. 
It should be appreciated that the configuration and network topology described herein has been greatly simplified and that many more computing systems, software components, networks, and networking devices can be utilized to interconnect the various computing systems disclosed herein and to provide the functionality described above. Appropriate load balancing devices or other types of network infrastructure components can also be utilized for balancing a load between data centers600, between each of the server computers602A-602F in each data center600, and, potentially, between computing resources in each of the server computers602. It should be appreciated that the configuration of the data center600described with reference toFIG.6is merely illustrative and that other implementations can be utilized. In some examples, the server computers602and/or the resources604may each execute/host one or more tenant containers and/or virtual machines to perform techniques described herein. In some instances, the data center600may provide computing resources, like tenant containers, VM instances, VPN instances, and storage, on a permanent or an as-needed basis. Among other types of functionality, the computing resources provided by a cloud computing network may be utilized to implement the various services and techniques described above. The computing resources604provided by the cloud computing network can include various types of computing resources, such as data processing resources like tenant containers and VM instances, data storage resources, networking resources, data communication resources, network services, VPN instances, and the like. Each type of computing resource604provided by the cloud computing network can be general-purpose or can be available in a number of specific configurations. For example, data processing resources can be available as physical computers or VM instances in a number of different configurations. 
The VM instances can be configured to execute applications, including web servers, application servers, media servers, database servers, some or all of the network services described above, and/or other types of programs. Data storage resources can include file storage devices, block storage devices, and the like. The cloud computing network can also be configured to provide other types of computing resources604not mentioned specifically herein. The computing resources604provided by a cloud computing network may be enabled in one embodiment by one or more data centers600(which might be referred to herein singularly as “a data center600” or in the plural as “the data centers600”). The data centers600are facilities utilized to house and operate computer systems and associated components. The data centers600typically include redundant and backup power, communications, cooling, and security systems. The data centers600can also be located in geographically disparate locations. One illustrative embodiment for a data center600that can be utilized to implement the technologies disclosed herein will be described below with regard toFIG.6. FIG.7illustrates a computer architecture diagram showing an example computer hardware architecture700for implementing a computing device that can be utilized to implement aspects of the various technologies presented herein. The computer hardware architecture700may be a conventional server computer, computing resource, network device (e.g., router, load balancer, data store, etc.), workstation, desktop computer, laptop, tablet, network appliance, e-reader, smartphone, or other computing device, and can be utilized to execute any of the software components presented herein. The computer700may, in some examples, correspond to at least one of a server112, a blade server/component116, and/or a system of computers700may make up the cloud management platform106. 
The computer700may comprise networked devices such as servers, switches, routers, hubs, bridges, gateways, modems, repeaters, access points, etc. The computer700includes a baseboard702, or “motherboard,” which is a printed circuit board to which a multitude of components or devices can be connected by way of a system bus or other electrical communication paths. In one illustrative configuration, one or more central processing units (“CPUs”)704operate in conjunction with a chipset706. The CPUs704can be standard programmable processors that perform arithmetic and logical operations necessary for the operation of the computer700. The CPUs704perform operations by transitioning from one discrete, physical state to the next through the manipulation of switching elements that differentiate between and change these states. Switching elements generally include electronic circuits that maintain one of two binary states, such as flip-flops, and electronic circuits that provide an output state based on the logical combination of the states of one or more other switching elements, such as logic gates. These basic switching elements can be combined to create more complex logic circuits, including registers, adders-subtractors, arithmetic logic units, floating-point units, and the like. The chipset706provides an interface between the CPUs704and the remainder of the components and devices on the baseboard702. The chipset706can provide an interface to a RAM708, used as the main memory in the computer700. The chipset706can further provide an interface to a computer-readable storage medium such as a read-only memory (“ROM”)710or non-volatile RAM (“NVRAM”) for storing basic routines that help to startup the computer700and to transfer information between the various components and devices. The ROM710or NVRAM can also store other software components necessary for the operation of the computer700in accordance with the configurations described herein. 
The computer700can operate in a networked environment using logical connections to remote computing devices and computer systems through a network, such as the network106. The chipset706can include functionality for providing network connectivity through a Network Interface Controller (NIC)712, such as a gigabit Ethernet adapter. The NIC712is capable of connecting the computer700to other computing devices over the network106. It should be appreciated that multiple NICs712can be present in the computer700, connecting the computer to other types of networks and remote computer systems. In some examples, the NIC712may be configured to perform at least some of the techniques described herein, such as packet redirects and/or other techniques described herein. The computer700can be connected to a storage device718that provides non-volatile storage for the computer. The storage device718can store an operating system720, programs722, and data, which have been described in greater detail herein. The storage device718can be connected to the computer700through a storage controller714connected to the chipset706. The storage device718can consist of one or more physical storage units. The storage controller714can interface with the physical storage units through a serial attached SCSI (“SAS”) interface, a serial advanced technology attachment (“SATA”) interface, a fiber channel (“FC”) interface, or other type of interface for physically connecting and transferring data between computers and physical storage units. The computer700can store data on the storage device718by transforming the physical state of the physical storage units to reflect the information being stored. The specific transformation of physical state can depend on various factors, in different embodiments of this description. 
Examples of such factors can include, but are not limited to, the technology used to implement the physical storage units, whether the storage device718is characterized as primary or secondary storage, and the like. For example, the computer700can store information to the storage device718by issuing instructions through the storage controller714to alter the magnetic characteristics of a particular location within a magnetic disk drive unit, the reflective or refractive characteristics of a particular location in an optical storage unit, or the electrical characteristics of a particular capacitor, transistor, or other discrete component in a solid-state storage unit. Other transformations of physical media are possible without departing from the scope and spirit of the present description, with the foregoing examples provided only to facilitate this description. The computer700can further read information from the storage device718by detecting the physical states or characteristics of one or more particular locations within the physical storage units. In addition to the mass storage device718described above, the computer700can have access to other computer-readable storage media to store and retrieve information, such as program modules, data structures, or other data. It should be appreciated by those skilled in the art that computer-readable storage media is any available media that provides for the non-transitory storage of data and that can be accessed by the computer700. In some examples, the operations performed by the network106and/or any components included therein, may be supported by one or more devices similar to computer700. Stated otherwise, some or all of the operations performed by the servers112, blade servers116, and/or any components included therein, may be performed by one or more computer devices700operating in a cloud-based arrangement. 
By way of example, and not limitation, computer-readable storage media can include volatile and non-volatile, removable and non-removable media implemented in any method or technology. Computer-readable storage media includes, but is not limited to, RAM, ROM, erasable programmable ROM (“EPROM”), electrically-erasable programmable ROM (“EEPROM”), flash memory or other solid-state memory technology, compact disc ROM (“CD-ROM”), digital versatile disk (“DVD”), high definition DVD (“HD-DVD”), BLU-RAY, or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information in a non-transitory fashion. As mentioned briefly above, the storage device718can store an operating system720utilized to control the operation of the computer700. According to one embodiment, the operating system comprises the LINUX operating system. According to another embodiment, the operating system comprises the WINDOWS® SERVER operating system from MICROSOFT Corporation of Redmond, Washington. According to further embodiments, the operating system can comprise the UNIX operating system or one of its variants. It should be appreciated that other operating systems can also be utilized. The storage device718can store other system or application programs and data utilized by the computer700. In one embodiment, the storage device718or other computer-readable storage media is encoded with computer-executable instructions which, when loaded into the computer700, transform the computer from a general-purpose computing system into a special-purpose computer capable of implementing the embodiments described herein. These computer-executable instructions transform the computer700by specifying how the CPUs704transition between states, as described above. 
According to one embodiment, the computer 700 has access to computer-readable storage media storing computer-executable instructions which, when executed by the computer 700, perform the various processes described above with regard to FIGS. 1-5. The computer 700 can also include computer-readable storage media having instructions stored thereupon for performing any of the other computer-implemented operations described herein. The computer 700 can also include one or more input/output controllers 716 for receiving and processing input from a number of input devices, such as a keyboard, a mouse, a touchpad, a touch screen, an electronic stylus, or other type of input device. Similarly, an input/output controller 716 can provide output to a display, such as a computer monitor, a flat-panel display, a digital projector, a printer, or other type of output device. It will be appreciated that the computer 700 might not include all of the components shown in FIG. 7, can include other components that are not explicitly shown in FIG. 7, or might utilize an architecture completely different than that shown in FIG. 7. As described herein, the computer 700 may comprise one or more of a server 112, a blade server 116, or a system of devices that make up the cloud management platform 106 or a network device (e.g., server computer, computing resource, etc.). The computer 700 may include one or more hardware processors 704 (processors) configured to execute one or more stored instructions. The processor(s) 704 may comprise one or more cores. Further, the computer 700 may include one or more network interfaces configured to provide communications between the computer 700 and other devices, such as the communications described herein as being performed by the client devices 106 and computing resources 114. The network interfaces may include devices configured to couple to personal area networks (PANs), wired and wireless local area networks (LANs), wired and wireless wide area networks (WANs), and so forth. 
For example, the network interfaces may include devices compatible with Ethernet, Wi-Fi™, and so forth. The programs 722 may comprise any type of programs or processes to perform the techniques described in this disclosure for determining connectivity in multi-hop paths using BFD Echo packet(s). The programs 722 may enable the computing resources 114 and/or the load balancers 112 of the computing resources 114 to perform various operations. While the invention is described with respect to the specific examples, it is to be understood that the scope of the invention is not limited to these specific examples. Since other modifications and changes varied to fit particular operating requirements and environments will be apparent to those skilled in the art, the invention is not considered limited to the example chosen for purposes of disclosure, and covers all changes and modifications which do not constitute departures from the true spirit and scope of this invention. Although the application describes embodiments having specific structural features and/or methodological acts, it is to be understood that the claims are not necessarily limited to the specific features or acts described. Rather, the specific features and acts are merely illustrative of some embodiments that fall within the scope of the claims of the application. | 44,742 |
11863379 | DETAILED DESCRIPTION The following discussion is directed to various examples of the disclosure. The examples disclosed herein should not be interpreted, or otherwise used, as limiting the scope of the disclosure, including the claims. In addition, the following description has broad application, and the discussion of any example is meant only to be descriptive of that example, and not intended to intimate that the scope of the disclosure, including the claims, is limited to that example. Throughout the present disclosure, the terms “a” and “an” are intended to denote at least one of a particular element. In addition, as used herein, the term “includes” means includes but not limited to. The term “based on” means based at least in part on. Some computing systems employ ‘containerization’. Containerization can take place at the operating system level. In some examples, mutually isolated computing instances, known as containers (or in some examples, by other terms such as virtualisation engines or partitions), operate as separate computers from the point of view of programs deployed thereon. While a deployed program may utilize, and be aware of, the resources of its container, it will generally be unaware of the resources of any other container, even where an underlying physical resource is shared. Thus, a computing resource such as a computer, a server, or the like, may have part of its resources allocated to one container and another part allocated to another. Programs running within containers (and in some examples, there may be several programs running within each container) have access only to the resources allocated to the container. Such computing resources allow for ease of scalability and accessibility of the same underlying resource by mutually distrusting instances with little additional overhead. An example of a container manager and deployment system is Kubernetes. 
In examples described herein, a processing resource may include, for example, one processing resource or multiple processing resources included in a single computing device or distributed across multiple computing devices. As used herein, a “processing resource” may be at least one of a central processing unit (CPU), a semiconductor-based microprocessor, a graphics processing unit (GPU), a field-programmable gate array (FPGA) configured to retrieve and execute instructions, other electronic circuitry suitable for the retrieval and execution of instructions stored on a machine-readable storage medium, or a combination thereof. In examples described herein, entities may communicate with one another via direct connections, via one or more computer networks, or a combination thereof. In examples described herein, a computer network may include, for example, a local area network (LAN), a virtual LAN (VLAN), a wireless local area network (WLAN), a virtual private network (VPN), the Internet, or the like, or a combination thereof. In examples described herein, a memory resource may include, for example, Random Access Memory (RAM), including any variant thereof (e.g. DRAM, SRAM, etc.). In examples described herein, a “node” entity is a virtualised processing resource, which may run on all or part of a computing device, such as a server, storage array, storage device, desktop or laptop computer, switch, router, or any other processing device or equipment including a processing resource. In some examples herein, a node may forward requests for services provided by one or more containers, which may be organised into sub-clusters or ‘pods’, as is described in greater detail below. FIG. 1 is an example of a container cluster management system 100 comprising nodes 102a and 102b (also referred to generally or collectively as node(s) 102) and a redistribution manager 104. A node 102a includes a utilization monitor 106. 
In some examples, each node 102 of the system 100 may comprise a utilization monitor 106. In other examples, some nodes of the system 100 may comprise a utilization monitor 106 whereas other nodes of the system 100 may not. The node 102a has an external IP address 108a allocated to it. In some examples, the external IP address 108a may initially be allocated to the node 102a by the redistribution manager 104. In some examples, the container cluster management system 100 is to manage a plurality of container sub-clusters, each sub-cluster comprising a plurality of containers and having a sub-cluster IP address; and the nodes 102 are to forward service requests associated with the external IP address to a container sub-cluster by translating the external IP address to a sub-cluster IP address. In use of the system 100, the utilization monitor 106 provides data relating to the utilization level of the node 102a to the redistribution manager 104. This data may comprise ‘health status’ data, and may be indicative of the loading of the node 102a. In some examples, the data relating to the utilization level of the node 102a relates to at least one of processing resource usage, memory usage and data for mapping requests per second to utilization. In some examples, the utilization monitor 106 continuously monitors a utilization status of the node 102a; however, in some examples the utilization monitor 106 acquires the data relating to the utilization level periodically. In some examples the utilization monitor 106 shares the utilization data of the node 102a with the redistribution manager 104 periodically. Once utilization of a node 102 reaches a certain maximum level (i.e. 100% utilization), failure of the node 102 may occur, causing the node to become unresponsive. The redistribution manager 104 determines, based on the data from the utilization monitor 106, whether the utilization level of the node 102a has exceeded a predetermined threshold. 
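As an illustration of the periodic health-status reporting described above, a minimal sketch in Python might look like the following (the class shape, field names, and sampling callback are assumptions for illustration, not part of this disclosure):

```python
import time

class UtilizationMonitor:
    """Samples a node's processing and memory utilization and packages it
    as 'health status' data for the redistribution manager."""

    def __init__(self, node_id, sample_fn):
        self.node_id = node_id
        self.sample_fn = sample_fn  # returns (cpu_percent, mem_percent)

    def report(self):
        # One health-status record; in practice this would be produced
        # periodically and shared with the redistribution manager.
        cpu, mem = self.sample_fn()
        return {"node": self.node_id, "cpu": cpu, "mem": mem,
                "timestamp": time.time()}

monitor = UtilizationMonitor("node-102a", lambda: (42.0, 61.5))
status = monitor.report()
```

The redistribution manager would consume a stream of such records to decide whether any node has crossed its threshold.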
In some examples, the predetermined threshold may be set below a level where failure of the node 102a is likely to occur. In some examples, the predetermined threshold may be a value representing between 80% and 95% of maximum utilization of the node 102a, where maximum utilization represents the maximum amount of requests per second that can be handled by a processing resource or a memory resource. For example, the predetermined threshold may be set at 90% of maximum utilization of the node 102a, and if either or both of the processing resource utilization or memory resource utilization reaches 90% of maximum then the predetermined threshold has been reached. In some examples, the threshold value can be configured by a user of the system 100. In response to determining that the utilization level of the node 102a has exceeded the predetermined threshold, the redistribution manager 104 reallocates the external IP address 108a from the node 102a to a different node 102b of the container cluster management system. In some examples, reallocating the external IP address involves updating a Virtualised Router-to-IP Address (VRID-to-IP) mapping table and sending it to an API server associated with the containerized computing system. This provides load redistribution (also referred to as load balancing) between nodes of the container cluster management system 100, which may reduce instances of node failure caused by high utilization level, while maintaining servicing of the external IP addresses 108 so that there is no outage in the reachability of a service or application associated with an external IP address 108. The system 100 also enables the dynamic load redistribution of applications or services which have already been deployed and are running on the nodes by redistributing the already configured external IP addresses from highly loaded nodes to less loaded nodes. 
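The threshold test and reallocation step might be sketched as follows. This is a simplified illustration: the 90% figure follows the example above, and the plain dictionary standing in for the VRID-to-IP mapping table is an assumption.

```python
THRESHOLD = 90.0  # percent of maximum utilization, per the example above

def exceeds_threshold(cpu_pct, mem_pct, threshold=THRESHOLD):
    # The threshold is reached if either or both resources hit it.
    return cpu_pct >= threshold or mem_pct >= threshold

def reallocate(vrid_to_ip, external_ip, to_node):
    # Reallocation amounts to rewriting the mapping-table entry for the
    # external IP; the updated table would then be sent to the API server.
    vrid_to_ip[external_ip] = to_node
    return vrid_to_ip

table = {"203.0.113.8": "node-102a"}
if exceeds_threshold(92.0, 70.0):
    table = reallocate(table, "203.0.113.8", "node-102b")
```

After the update, service requests to the external IP address are handled by the less loaded node while the address remains reachable throughout.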
Each of the redistribution manager 104, nodes 102a, 102b and the utilization monitor 106 may be any combination of hardware and programming to implement the described functionalities. In examples described herein, such combinations of hardware and programming may be implemented in a number of different ways. For example, programming may be processing resource executable instructions stored on at least one non-transitory machine-readable storage medium and the hardware may include at least one processing resource to execute those instructions. In some examples, the hardware may also include other electronic circuitry to at least partially implement at least one of the redistribution manager 104, nodes 102a, 102b and the utilization monitor 106. In some examples, the at least one machine-readable storage medium may store instructions that, when executed by the at least one processing resource, at least partially implement some or all of the redistribution manager 104, nodes 102a, 102b and the utilization monitor 106. In such examples, a computing device at least partially implementing the redistribution manager 104 and/or a node 102a, 102b may include the at least one machine-readable storage medium storing the instructions and the at least one processing resource to execute the instructions. In other examples, the redistribution manager 104, the nodes 102a, 102b and the utilization monitor 106 may be implemented by electronic circuitry. FIG. 2 is an example of a container cluster management system 200 comprising a plurality of nodes 202a-c, each comprising a utilization monitor 206a-c. The cluster management system 200 further comprises a cluster manager 212 including a redistribution manager 204. In use of the system 200, the cluster manager 212 provides access to services provided by containers within the system 200. The cluster manager 212 may be any combination of hardware and programming to implement the described functionalities. 
A service may comprise a predetermined set of ‘pods’ 214a-d, where a pod 214 is a logical host of a set of containers 216a-m or, expressed another way, a pod 214 comprises a sub-cluster of related containers 216. For example, the containers 216 of a particular pod 214 (e.g. with reference to FIG. 2, the containers 216a-d of pod 214a, the containers 216e-h of pod 214b, the containers 216i-k of pod 214c or the containers 216l-m of pod 214d) may be co-located and co-scheduled, and run in a shared context. The pods 214 may be configured independently of one another and may provide different services. Containers 216 within a pod 214 may share an IP address and/or port space, and may be able to communicate with one another (whereas, generally, containers 216 in different pods 214 may have distinct IP addresses and are not typically in direct communication with one another, instead communicating via Pod IP addresses and the like). Applications deployed within a pod 214 may have access to shared ‘volumes’, which are usually directories, in some examples holding data. Such volumes may be logically considered to be part of a pod, and may have the same life cycle as a pod. To consider a particular example, a pod 214 may comprise frontend and backend containers 216, where the frontend containers may provide user interfaces and the like and the backend containers may provide databases, data processing and the like. The containers 216 of a pod 214 may work together to provide a service. A pod (as well as an individual container) may be a temporary configuration. Pods 214 may be created, assigned a unique ID, and scheduled to at least one node 202 where they remain until termination (according to restart policy) or deletion. If a node 202 fails, the pods scheduled to that node 202 may be scheduled for deletion, for example after a timeout period. In some examples, in use of the system 200, the node 202 forwards a service request for a first service received via the cluster manager 212 to at least one container sub-cluster (i.e. 
in the example of FIG. 2, one of pods 214a and 214b) by translating the external IP destination address of the service request to an IP address of a container sub-cluster (which may comprise one or more pods 214). For example, this may utilize Destination Network Address Translation (DNAT) and redirect the incoming traffic to the pod or pods which make up the service identified by the IP address. In some such examples, a pod's reply may be routed back to a service IP address, i.e. the node 202, and then forwarded thereby to a client. In other words, the method may be carried out at what may be termed a ‘worker node’ of a containerised computing system. Such nodes may comprise resources to run container sub-clusters (for example, pods), and may redirect the requests, but it may be noted that the nodes do not themselves carry out the requested computing. Thus, in such examples, the utilization of the containers/pods ‘behind’ each node may be balanced effectively indirectly by considering the utilization level of the node which redirects requests to the container sub-cluster. Each node of the plurality of nodes 202a-c shown in FIG. 2 has been allocated an external IP address 208a-c. One of the nodes, 202a, has additionally been allocated a further external IP address 208d. In some examples, some or all of the nodes 202a-c may be allocated a plurality of external IP addresses or some of the nodes may only be allocated one, or no, external IP address. In some examples, the external IP addresses may be allocated to the nodes by the cluster manager 212. As shown in FIG. 2, each node 202 is associated with at least one pod 214. A first node 202a is to receive service requests sent to external IP addresses 208a and 208d, and to forward those service requests to, respectively, a first pod 214a and a second pod 214b. A second node 202b is to receive service requests sent to external IP address 208b, and to forward those service requests to a third pod 214c. 
A third node 202c is to receive service requests sent to external IP address 208c, and to forward those service requests to a fourth pod 214d. In other examples, however, there may be other arrangements and the relationship between external IP addresses and services need not be one-to-one as shown in this example. In use of the system 200, the utilization monitor 206 provides data relating to the utilization level of the nodes 202a-c to the redistribution manager 204. In some examples, the utilization monitor 206 monitors a utilization status or level of each of the nodes 202a-c and periodically populates a table, termed herein a ‘health status table’, stored in a memory of the redistribution manager 204, with data relating to the utilization level of each of the nodes 202a-c. In some examples, the redistribution manager 204 may traverse such a health status table to determine if the utilization level of any of the nodes 202a-c has exceeded the predetermined threshold. In response to determining that the utilization level of a node 202a has exceeded the predetermined threshold, and that therefore the node 202a is considered unhealthy, the redistribution manager 204 reallocates an external IP address 208 from the node 202a to a different node 202 of the container cluster management system 200. In some examples, reallocating the external IP address involves updating a VRID-to-IP map (or lookup table) for the health status table. In some examples, this updated map may be sent to an API server associated with the container cluster system. The system 200 is robust as it prevents outage in load distribution of network traffic among backend members of the container sub-clusters even when a node becomes unhealthy. FIG. 3 is a flowchart showing an example of a method 300, which may be a method of managing a container-based computing cluster. 
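The DNAT-style forwarding performed by each node can be pictured as a lookup from an external service IP to the IP addresses of the pods backing that service. The sketch below is an assumption for illustration: the addresses are invented, and the round-robin rotation among backing pods is one plausible distribution policy, not one the disclosure specifies.

```python
# Hypothetical mapping from external service IPs to sub-cluster (pod) IPs.
EXTERNAL_TO_POD = {
    "203.0.113.10": ["10.1.0.4", "10.1.0.5"],  # two pods backing a service
    "203.0.113.11": ["10.2.0.7"],
}
_rr_counters = {}  # per-service round-robin position

def dnat(dst_ip):
    """Rewrite an incoming request's destination to one of the pod IPs
    backing the service, rotating among them round-robin."""
    pods = EXTERNAL_TO_POD[dst_ip]
    i = _rr_counters.get(dst_ip, 0)
    _rr_counters[dst_ip] = i + 1
    return pods[i % len(pods)]
```

A pod's reply would be translated back through the same node, so the client only ever sees the external service IP.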
Block 302 comprises receiving, at a redistribution manager of a container cluster system, utilization data of a first node of the container cluster system which has an allocated external IP address. In some examples, the nodes may forward service requests associated with the external IP address to a container sub-cluster by translating the external IP address to a sub-cluster IP address. Block 304 comprises determining whether the utilization data of the node indicates that the utilization level of the node has exceeded a predetermined threshold and that therefore the node is at risk of becoming unresponsive. If this is not the case, the method returns to block 302 and the redistribution manager continues to receive utilization data for the node. However, if the utilization data has exceeded the predetermined threshold, the method proceeds to block 306, which comprises reallocating the external IP address originally assigned to the first node to a different node of the container cluster system by the redistribution manager, thereby reducing the utilization level of the first node. In some examples, the method 300 may be carried out by a redistribution manager 104 of a system 100 as described in relation to FIG. 1. FIG. 4 is a flowchart showing another example of a method 400, which may be a method of managing a container-based computing cluster. In some examples, the method 400 may be carried out by a redistribution manager 204 as described in relation to FIG. 2. Similarly to the method described in relation to FIG. 3, block 402 comprises receiving utilization data from a utilization monitor of a node and block 404 comprises determining whether the utilization level of the node exceeds a predetermined threshold. 
For example, referring back to FIG. 2, the redistribution manager 204 of FIG. 2 may receive data from utilization monitor 206a of node 202a and may determine that node 202a is unhealthy because the utilization level exceeds a predetermined threshold, which may be a percentage utilization (for example, 90% of maximum possible utilization). At block 406, an external IP address of the unhealthy node is selected, for example by selecting from the health status table. For example, redistribution manager 204 may select an external IP address 208a to potentially reallocate to a different node. At block 408 the least loaded node in the plurality of nodes is determined and is allocated as a target node to receive the reallocation of the external IP address. In some examples the least loaded node is determined from utilization data stored in a health status data table. For example, referring back to FIG. 2, the redistribution manager 204 may determine that node 202c has the lowest utilization level and is therefore the least loaded node. The redistribution manager 204 may therefore allocate node 202c as a target node to potentially receive a reallocated external IP address 208a. In other examples, any node having a utilization level below a threshold may be selected. This means that the third node 202c would now perform address translation for accessing services provided by the first pod 214a in place of the first node 202a. In some cases, the least loaded node (or any target node for reallocation) could become overloaded and fail if the external IP address is reallocated to it. Therefore, in this example, at block 410, the redistribution manager performs a check to determine whether the utilization level of the target node will exceed a predetermined threshold if the external IP address is reallocated to it. 
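Selecting the target node at block 408 can be illustrated with a small sketch. The health-table layout used here — node id mapped to a single utilization percentage — is an assumption for illustration.

```python
def least_loaded(health_table, exclude=()):
    """Return the node with the lowest utilization from a health status
    table, skipping any nodes in `exclude` (e.g. the unhealthy node)."""
    candidates = {n: u for n, u in health_table.items() if n not in exclude}
    return min(candidates, key=candidates.get)

health = {"node-202a": 92.0, "node-202b": 75.0, "node-202c": 40.0}
target = least_loaded(health, exclude=("node-202a",))
```

With the figures above, node 202c (40%) is chosen as the target, matching the example in the text.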
In some examples, performing the check comprises determining the number of requests received per second for the external IP address, calculating the average utilization level increase per request, and multiplying the number of requests by the average utilization increase per request, thereby calculating an estimate for the total utilization level increase that will occur due to reallocating the external IP address. In some examples, the average utilization level increase per request may be based on an average detected utilization level due to a known number of actual requests that have been received by the nodes. In some examples, determining the utilization level increase includes determining the increase in both memory resource usage and the increase in processor resource usage of a node. If the system determines that the utilization level of the target node would exceed the predetermined threshold for the target node if the external IP address were to be reallocated, the method 400 proceeds to block 412, which comprises maintaining the allocation of the external IP address to the original node and optionally sending a notification to a user or admin of the container cluster management system requesting node scaling (i.e. the addition of one or more nodes to the system). In some examples, if the external IP address cannot be reallocated without the utilization threshold of the target node being exceeded, the system determines, at block 416, whether the unhealthy node has any further external IP addresses allocated to it. If this is the case, the method returns to block 406 and a second external IP address is selected, and the system then determines if it would be possible to reallocate the second external IP address without exceeding the predetermined threshold for the target node. If so, the second external IP address is reallocated to the target node. 
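The estimate computed at block 410 — request rate multiplied by the average utilization increase per request, added to the target node's current load — can be written out directly. The numbers below are invented for illustration.

```python
def projected_utilization(current_util, requests_per_sec, avg_increase_per_req):
    # Estimated total increase = request rate x average increase per request.
    return current_util + requests_per_sec * avg_increase_per_req

def safe_to_reallocate(current_util, requests_per_sec,
                       avg_increase_per_req, threshold=90.0):
    return projected_utilization(
        current_util, requests_per_sec, avg_increase_per_req) < threshold

# A target node at 40% utilization, 500 requests/s for the external IP,
# each request adding roughly 0.05% utilization: 40 + 500 * 0.05 = 65 < 90.
ok = safe_to_reallocate(40.0, 500, 0.05)
```

In a fuller version the same projection would be made separately for processor and memory usage, since the text says both increases are determined.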
If reallocation of the second external IP address would cause the target node to become unhealthy, then the system determines whether the unhealthy node has a third allocated external IP address for potential reallocation. This process is iterated until either a suitable external IP address is found or all of the external IP addresses allocated to the unhealthy node have been checked. If none of the external IP addresses of the unhealthy node can be reallocated, then the method continues to block 412 and the original allocation of external IP addresses is maintained. If the system determines that the utilization level of the target node would not exceed the threshold if the external IP address were to be reallocated, then the method 400 proceeds to block 414, which comprises reallocating the external IP address to the target node. For example, redistribution manager 204 may reallocate external IP address 208a to node 202c. FIG. 5 is an example of a tangible and non-transitory machine readable medium 500 in association with a processor 502. The machine readable medium stores instructions 504 which, when executed, cause the processor to carry out certain processes. The instructions 504 comprise instructions 506 to cause the processor to receive health status data providing an indication of a utilization level of a node having an allocated external IP address; instructions 508 to cause the processor to determine, based on the health status data, that a utilization of the node exceeds a predetermined threshold; and instructions 510 to, in response to this determination, reallocate an external IP address of the node to a different node. In some examples, the machine readable medium stores further instructions which, when executed, cause the processor to carry out a process described in relation to FIG. 3 or FIG. 4. Examples in the present disclosure can be provided as methods, systems or machine readable instructions, such as any combination of software, hardware, firmware or the like. 
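The iteration over the unhealthy node's external IP addresses described above might be sketched as follows; the fitness test (would the target node stay under threshold?) is passed in as a callable, and the addresses are invented.

```python
def find_reallocatable_ip(unhealthy_node_ips, fits_on_target):
    """Walk the unhealthy node's external IPs in order and return the
    first one whose reallocation would keep the target node under its
    threshold, or None if every candidate would overload it."""
    for ip in unhealthy_node_ips:
        if fits_on_target(ip):
            return ip
    return None

ips = ["203.0.113.8", "203.0.113.9", "203.0.113.10"]
# Suppose only the second address's traffic is light enough:
choice = find_reallocatable_ip(ips, lambda ip: ip == "203.0.113.9")
```

When the function returns None, the original allocation is kept (block 412) and node scaling may be requested instead.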
Such machine readable instructions may be included on a machine readable storage medium (including but not limited to disc storage, CD-ROM, optical storage, etc.) having machine readable program codes therein or thereon. The present disclosure is described with reference to flow charts and block diagrams of the method, devices and systems according to examples of the present disclosure. Although the flow diagrams described above show a specific order of execution, the order of execution may differ from that which is depicted. Blocks described in relation to one flow chart may be combined with those of another flow chart. It shall be understood that at least some blocks in the flow charts and/or block diagrams, as well as combinations of the blocks in the flow charts and/or block diagrams can be realized by machine readable instructions. While the method, apparatus and related aspects have been described with reference to certain examples, various modifications, changes, omissions, and substitutions can be made without departing from the spirit of the present disclosure. It is intended, therefore, that the method, apparatus and related aspects be limited by the scope of the following claims and their equivalents. It should be noted that the above-mentioned examples illustrate rather than limit what is described herein, and that those skilled in the art will be able to design many alternative implementations without departing from the scope of the appended claims. Features described in relation to one example may be combined with features of another example. The word “comprising” does not exclude the presence of elements other than those listed in a claim, “a” or “an” does not exclude a plurality, and a single processor or other processing resource may fulfil the functions of several units recited in the claims. The features of any dependent claim may be combined with the features of any of the independent claims or other dependent claims, in any combination. 
| 24,700 |
11863380 | SUMMARY Embodiments include systems, apparatuses, methods, computer readable media and other means for providing, among other things, a community internet drive having data storage, distributed data management and/or social networking functionality, and providing a virtual file system across potentially heterogeneous devices. Some users may be “members” of the “community” meaning that these users have opted-in to using the community internet drive. The community internet drive may be built from various “community machines” maintained and controlled by members of the community internet drive. The community machines may be networked together (e.g., using the Internet and/or other networks) to virtually connect all their personal hard drives (or at least a portion thereof) into one or more virtual drives that may each have a size that scales with the size of the community. At least some of the community machines may be computers, tablets, and/or other machines that are designed to be used predominately for personal and/or non-commercial purposes by their users (such as, e.g., desktop computers, laptop computers, smart phones, etc.), as opposed to commercial-grade servers and databases often used to implement cloud-computing functionality. As such, there may be no barrier to connecting hundreds, thousands, millions, and/or billions of individual drives, providing in aggregate exabytes, zettabytes, or any other amount of data for the community internet drive. For example, in some embodiments, a first user machine, such as a personal computer, tablet or smart phone, can be configured to generate a request to be included in the community internet drive. The request can be transmitted to a remote machine, such as a server and/or other machine that is configured to function as a managed hub. The first user machine can then receive configuration data from, for example, the remote machine. 
In some embodiments the configuration data may come from another user machine via a direct connection, such as those sometimes used for peer-to-peer communications (as opposed to client-server communications). The first user machine can then be configured to execute the configuration data, including partitioning its local storage device into a private portion and a shared portion. This partitioning may be performed by enabling the first user of the first machine to select one or more local directories (e.g., one or more folders included in the first computer's hard drive) to be mounted to (or otherwise added to) the community internet drive. While the local directories that are mounted to the community internet drive can be considered “a shared portion” of the first machine's local storage device, other directories that are not mounted to the community internet drive can be considered the private portion of the local storage device that is excluded from the community internet drive. Additionally (or alternatively), some embodiments discussed herein provide various solutions to problems related to sharing data between or among users. For example, the managed hub can be configured to cause data to be transferred to a person or place where it is needed. Each user may also identify other users in particular and/or criteria for selecting other users to define a user-selected subset community within the larger community. For example, within a certain user base of the self-selected community, embodiments discussed herein can manage the sharing of data according to the social mapping of the user-selected subset community, while it also feeds the development of the larger community with user-generated content (“UGC”). Embodiments may also be easy to use, provide data security, be fair, be reliable, and provide benefit through the enabled quality of service. Data from a user's “in-group” can be inherently more interesting than that published by strangers. 
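The partitioning step described above — the user selecting which local directories join the community internet drive — can be sketched as a simple split. The directory names and function shape are assumptions for illustration.

```python
def partition(local_dirs, shared_selection):
    """Split a machine's directories into the shared portion (mounted to
    the community internet drive) and the private portion (excluded)."""
    shared = [d for d in local_dirs if d in shared_selection]
    private = [d for d in local_dirs if d not in shared_selection]
    return shared, private

dirs = ["/home/user/photos", "/home/user/docs", "/home/user/music"]
shared, private = partition(dirs, {"/home/user/photos", "/home/user/music"})
```

Everything in `shared` becomes visible to the community drive; `private` stays local-only.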
As used herein, a user's “in-group” may include one or more other users that the user has identified (in the user's profile and/or otherwise) as having a relationship with, such as a friendship, family relationship, business relationship, and/or any other type(s) of relationship. Some types of in-group relationships may require the other user to accept a request to be in the group. For example, a first user may indicate that a second user is a friend and include the second user's email address in the user's profile, but the second user may not be considered a “friend” of the first user by the system until the second user indicates or otherwise confirms that the first user is in fact a friend (e.g., by affirmatively accepting the first user's friendship and/or by not denying the friendship request within a given period of time). For example, an in-group may be created by a user machine or be created by the managed hub in response to a user machine providing data for a user profile associated with a user. The first user profile can be configured to include data related to, for example, the user's personal information, the user's machines and/or the user's social networks. For example, the user profile may include the user's birthday, username, address, credit card information, laptop's internet protocol (“IP”) address, cellular phone's IP address, FaceBook username/password, the user's email address, a friend's email address, a business partner's email address, etc. The friend's email address, business partner's email address and/or FaceBook information can be used to then create one or more in-groups (such as a friends' group, family group, business group, and/or any other type of group). The first user profile can then be provided to the remote machine (such as, e.g., a server functioning as the managed hub). 
While the first user machine can be configured to store user data (e.g., files, pictures, movies, websites, and/or other content associated with the first user) in the first user machine's shared portion of the local storage device, all the user data can be encrypted such that only machines associated with in-group users can access the user data. For example, the first user machine may also receive and store a stranger's user data (because the first user machine is part of the community internet drive), but if the stranger user is unassociated with any of the user's in-group(s), the stranger's user data can be encrypted such that the first user machine is unable to decipher the stranger's user data despite the stranger's data being physically stored on the user's computer (e.g., in the shared portion of the local storage device). In accordance with some embodiments, published drive contents can have uniform resource identifiers (URIs) constructed from the Uniform Resource Locator (URL) of the community internet drive (such as, e.g., w3disk.com). Drive contents can be linked to in the same way (or a similar way) that a website is linked to, and drive contents and/or community websites can operate with features similar to a local and/or networked disk drive and/or other storage device. For example, opening links to folders can include displaying directory listings, and opening links to files can include showing the file details. Each user can be enabled by the system to register a namespace. In some embodiments, namespaces may be organized and structured for usability and value, where higher-value namespaces can indicate something about the language of posted content, geographic location of the community member, business area or an associated internet domain, any subject domain where specialized content will be aggregated, and/or naming of personal relevance to the user. In some embodiments, URI construction can include creating an identifier to an instance of data. 
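One way such a URI might be constructed is sketched below. The text names only the drive URL (w3disk.com) and the registered namespace; the scheme, path layout, and the fragment used as an instance token are assumptions made for this example:

```python
from urllib.parse import quote

def build_uri(namespace, path, instance_id, drive_url="w3disk.com"):
    """Hypothetical sketch: build a URI from the community drive's URL,
    the publisher's registered namespace, the path under their mount
    point, and a token identifying one exact instance of the data on
    one particular community machine."""
    safe_path = "/".join(quote(part) for part in path.split("/"))
    return f"https://{drive_url}/{quote(namespace)}/{safe_path}#{instance_id}"

uri = build_uri("alice-photos", "vacation/beach.jpg", "machine42-rev1")
print(uri)
# https://w3disk.com/alice-photos/vacation/beach.jpg#machine42-rev1
```

Note that, consistent with the next paragraph, such an identifier names one instance of the data but does not by itself rule out duplicates or other URIs pointing at identical copies.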
The URI can be unique in the sense of identifying one exact instance of data on one particular community machine, but it can be possible to generate more than one URI to that instance of data in some embodiments. In other words, in some embodiments, the URI may not guarantee that there are not multiple copies of the data, that another different URI is not pointing to an identical copy, or that contents do not change over time. This differs from other types of URIs/UUIDs that may be unique and one-to-one based on file contents, or that are designed to identify the same logical file entity even as it passes through revisions. The centralized management functionality of the community internet drive may be provided by a managed hub, which may be implemented in hardware, firmware and/or software. The managed hub can configure systems to inspect the URI and/or route the request to connect some number of community machines as peers to satisfy the request. The managed hub can also be configured to track duplicates (through, e.g., metadata comparisons, direct file or data inspection, and/or simply tracking associations generated from past successful copy events, among other ways). Each copy made is traceable to its progenitor, the instance initially introduced to the community internet drive, and each duplicate can potentially be used as another available peer. Adding a central hub (such as a networked server and/or database) to a peer-based community internet drive can enable additional services and functionality that may not be available in a purely peer-to-peer-based system or in a purely cloud-based system. The central hub can, for example, track access control lists (ACLs) and in-group lists to restrict consumer views even when the publisher is offline. The central hub can also be configured to protect the publisher's data and IP addresses by checking access permissions before connecting peers. 
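The duplicate tracking described above can be sketched as a registry the hub keeps of copy events. This is a hypothetical illustration (class and machine names are invented): every successful copy records another replica holder, each copy remains traceable to its progenitor, and any online replica holder can serve as a peer.

```python
class DuplicateTracker:
    """Hypothetical sketch of the hub's duplicate tracking via
    associations generated from past successful copy events."""

    def __init__(self):
        self.progenitor = {}  # data_id -> machine that introduced the data
        self.replicas = {}    # data_id -> set of machines holding a copy

    def introduce(self, data_id, machine):
        self.progenitor[data_id] = machine
        self.replicas[data_id] = {machine}

    def record_copy(self, data_id, machine):
        """A successful copy event makes the receiver a replica holder."""
        self.replicas[data_id].add(machine)

    def available_peers(self, data_id, online):
        """Any replica holder that is currently online can serve as
        another available peer for this data."""
        return self.replicas[data_id] & online

hub = DuplicateTracker()
hub.introduce("photo-123", "alice-laptop")
hub.record_copy("photo-123", "bob-desktop")
print(hub.available_peers("photo-123", {"bob-desktop", "carol-phone"}))
# {'bob-desktop'}  (the progenitor alice-laptop is offline; the duplicate serves)
```

This is also the basis for the reliability claim later in the text: when the original is unavailable, a tracked duplicate can be delivered instead.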
As yet another example, the central hub can also manage individual point-to-point connections, where the “publisher” (the community member publishing data to the community internet drive) provides keys to specific individuals for specific data or files. The central hub can also be configured to control the namespace to protect against infringement. In some embodiments, the central hub can also or instead be configured to manage encryption and file splitting (even below the byte level in some embodiments), so that backups and duplicates are not readable by the backup host, but using hub information the files can be reconstructed at the authenticated “consumer” (e.g., the user receiving and/or otherwise accessing the data provided by the publisher), even when multiple blind hosts are the only sources uploading the content. In some embodiments, only the authenticated downloader may be given enough information to reconstruct the original file. The central hub can also be configured to gate access to services provided to various types of users and/or enforce fairness among users. As used herein, “fairness” includes giving a user access proportional to the user's level of participation. For example, fairness can cause a particular user to receive expanded or diminished services compared to those provided by the central hub to other users. For user satisfaction, the user can also use the system to prioritize how the user's service credits are applied, such as the number of redundant backups of particular files of the user, and/or the number of uploading peers activated as they download. The hub can persist or cache data and files from community members for efficiency and reliability on other community members' machines. The hub can persist metadata about published materials for better exposure to searching. The hub can improve reliability by delivering duplicate or derivative files when the original is unavailable. 
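The file-splitting idea, where no blind host alone can read the content but the authenticated consumer can reconstruct it, can be illustrated with a minimal two-share XOR split. This is a toy sketch, not the patent's scheme: each share alone is indistinguishable from random bytes, and only a party holding both shares (per the text, as directed by hub information) recovers the original.

```python
import secrets

def split(data: bytes):
    """Split data into two shares: a random pad and the data XORed
    with that pad. Neither share alone reveals the data."""
    pad = secrets.token_bytes(len(data))
    masked = bytes(a ^ b for a, b in zip(data, pad))
    return pad, masked  # one share per blind host

def reconstruct(share_a: bytes, share_b: bytes) -> bytes:
    """The authenticated consumer XORs the shares back together."""
    return bytes(a ^ b for a, b in zip(share_a, share_b))

original = b"publisher's secret file contents"
share1, share2 = split(original)  # stored on two different blind hosts
print(reconstruct(share1, share2) == original)  # True
```

A production system would extend this to more shares, integrity checks, and keyed encryption, but the sketch captures why a backup host storing only one share cannot read the file it hosts.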
The hub can persist user preferences for how pages under their mount point display, including but not limited to background images, fonts, styles, and messages. The hub can provide commerce features, where community members set a price for any of their published content. Although some embodiments of the managed hub may include both the central hub and control functionality distributed among community machines under the control of community members, other embodiments can provide greater or lesser services when the central hub is not functional, has reduced functionality, and/or is omitted from the system altogether. As such, at least some of the functionality discussed herein can be implemented by an exclusively peer-based managed hub. An exclusively peer-based managed hub, however, may not have complete or up-to-date published directories for offline community members. Other than providing a framework of central hub pages and container pages where mount points are listed, the rest of the contents of the virtual community internet drive are data driven. The users as independent publishers control their mount point, and any files or data they drop into the hierarchy of subdirectories under their mount point. As they drop in files, the drive grows. As they add messages, metadata, previews, and any other descriptive content, the content becomes richer and more discoverable. Additionally, some embodiments may include a first user machine being further configured to access decrypted data associated with another user whose machines publish data. The first user machine can be further configured to receive a notification that the other user machine has accepted an invitation to be in a group with the first user prior to the first user machine being configured to access the decrypted data associated with the other user. 
The first user machine can be further configured to send the second user machine a notification in response to new data being added, by the first machine, to the shared portion of the first machine's local storage device. The system can also be configured to maintain control over namespaces. In response to a user machine, such as an independent publisher, requesting control of a namespace from the server and/or other managed hub, the independent publisher may receive control of the namespace from the remote machine. In some embodiments, a fee may be charged for various premium namespaces before control is awarded to an independent publisher. Some embodiments also include a system and/or method of managing data among a plurality of machines, comprising: receiving a first request from a first machine to join a community internet drive managed at least partially by a remote machine, wherein whether the first machine is online is dependent on a first user; in response to receiving the first request, transmitting data initiating the configuration of the first machine to be part of the community internet drive; receiving a second request from a second machine to join the community internet drive managed at least partially by the remote machine; in response to receiving the second request, transmitting data initiating the configuration of the second machine to be part of the community internet drive; receiving a third request from a third machine to join the community internet drive managed at least partially by the remote machine; in response to receiving the third request, transmitting data initiating the configuration of the third machine to be part of the community internet drive; receiving an indication of first data associated with the first machine to be stored in the community internet drive, wherein the first data is encrypted; and in response to receiving the indication of the first data: causing at least a first portion of the first data, as encrypted, to be 
stored on the second machine, wherein the second machine is unable to decrypt the first data and wherein whether the second machine is online is under the control of a second user that is different than the first user; and causing at least a second portion of the first data, as encrypted, to be stored on the third machine, wherein the third machine is unable to decrypt the first data and wherein whether the third machine is online is under the control of a third user that is different than the first user and the second user. Further, the method can comprise generating profile information associated with the first user of the first machine; determining from the profile information that the first machine is associated with the first user; determining a fourth machine is associated with a fourth user, wherein the first user has indicated in the profile information that the fourth user is in a group authorized to access the first data; and enabling the fourth machine to access the first data as unencrypted. In some embodiments, the method can also comprise: receiving a request from the fourth machine to access the first data; determining the fourth machine is authorized by the first user to access the first data; in response to receiving the request, determining the first machine is offline; in response to determining the first machine is offline, determining the second machine and the third machine are online; and in response to determining the second machine and the third machine are online, enabling the first data to be transferred from the second machine and the third machine to the fourth machine. A determination can be made that the fourth machine has accessed and downloaded a first copy of the first data. 
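The offline-publisher path just described, checking that the requester is authorized, noticing the first machine is offline, and serving the encrypted portions from the second and third machines instead, can be sketched as a single routing function. This is a hypothetical illustration; the function, ACL shape, and machine names are invented for this example:

```python
def route_request(requester, data_id, acl, holders, online):
    """Hypothetical sketch: check access permissions first (protecting
    the publisher's data), then return the online machines holding
    portions of the data that can serve as peers."""
    if requester not in acl.get(data_id, set()):
        return []  # access denied: no peers are connected
    return [m for m in holders.get(data_id, []) if m in online]

acl = {"file-1": {"machine-4"}}                     # fourth machine authorized
holders = {"file-1": ["machine-1", "machine-2", "machine-3"]}
online = {"machine-2", "machine-3", "machine-4"}    # publisher machine-1 offline
print(route_request("machine-4", "file-1", acl, holders, online))
# ['machine-2', 'machine-3']
```

The ACL check happening before any peer is connected reflects the earlier point that the hub protects the publisher's data and IP addresses even when the publisher is offline.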
Then a determination can be made whether the fourth machine has modified the first copy of the first data; and in response to determining the copy of the first data is unmodified by the fourth machine, progenitor data can be generated that is associated with the first copy indicating the first copy is substantively identical to the first data. In response to determining the copy of the first data was modified by the fourth machine, progenitor data can be generated that is associated with the first copy indicating the first copy is a modified version of the first data. 

DETAILED DESCRIPTION 

The present invention now will be described more fully hereinafter with reference to the accompanying drawings, in which some, but not all, embodiments of the inventions are shown. Indeed, these inventions may be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will satisfy applicable legal requirements. Like numbers refer to like elements throughout. Some embodiments discussed herein can be used as a community internet drive. For example, system 100 shown in FIG. 1A can be configured to leverage and/or otherwise integrate storage devices across a heterogeneous set of machines and operating systems into one cohesive whole using various network infrastructure. In some embodiments, data from one or more storage devices, or even every storage device in the network community of system 100, can be transferred and shared as in a single virtual file system. In combination, the processing and features bundled as discussed herein can create a powerful and useful tool that is relatively easy to use for solving long-standing problems with enabling data sharing and availability among various machines. 
For example, system 100 can include network 102, which may comprise the public Internet, private network(s), cellular network(s), direct connection(s), and/or satellite network(s), among other types of networks. The direct connections may be any suitable type of connection, such as one or more wired connections (e.g., universal serial bus (“USB”) connections, Ethernet connections, etc.) and/or wireless connections (e.g., BlueTooth® connections, WiFi connections, etc.). As such, network 102 may include, for example, one or more servers, switches, routers, processors and/or other hardware configured to facilitate the transmission of data and/or aid in providing the other functionality discussed herein, including that related to the management and storage of data on various storage devices. While network 102 may include various servers, etc., FIG. 1A also shows server 104, which may in-and-of-itself comprise one or more servers, databases, and/or other machines used to provide management-related functionality for data maintained on the distributed, heterogeneous storage devices, sometimes referred to herein as “independent publishers.” Server 104 may also be configured to provide, for example, cloud-computing functionality (or something similar thereto), virtual file system functionality (or something similar thereto) and/or social networking functionality (or something similar thereto), among other things. In other words, server 104 may be configured to be and/or otherwise implement the functionality associated with the managed hub discussed herein, which may be the aggregate of processes, such as those discussed in connection with FIGS. 6-11, and in some embodiments can include a centralized framework based on one or more networked servers. In this regard, server 104 may aid in coordinating communications, data transfers, user profile permissions, and/or any other type of management functionality related to the distributed storage devices included in the community internet drive. 
As noted above, the distributed storage devices used to form the community internet drive may be included in one or more user machines, sometimes referred to herein as independent publishers, some examples of which are computer 106, personal computer 108, tablet 110, laptop computer 112, and/or cellular device 114, among others (such as gaming consoles, etc.). Example circuitry that may be included in one or more of server 104, computer 106, personal computer 108, tablet 110, laptop computer 112, and/or cellular device 114, is discussed in connection with FIG. 1B. Server 104, computer 106, personal computer 108, tablet 110, laptop computer 112, and/or cellular device 114, among others, are each sometimes referred to herein as a “community machine” as each can be part of some embodiments of the community internet drive discussed herein. In some embodiments, the managed hub can be implemented in a decentralized fashion, rather than being consolidated in server 104. For example, the managed hub functionality can be provided by any and/or all of the community machines operating collectively. Hence, at least some embodiments may include a decentralized framework based on one or more networked servers, computers, cellular devices, tablets, and/or other machines. The solid lines between the network 102 and each of the community machines represent centrally managed connections (such as, e.g., broadband connections) that can be predominately used for control and reporting communications flow in accordance with some embodiments discussed herein. The dashed lines of FIG. 1A between various community machines represent connections for peer-to-peer (P2P) communications (e.g., connections between two machines that are considered peers as opposed to connections between a client and server) in accordance with some embodiments discussed herein, with the peer-to-peer communications predominately designated for data transfers. 
While the centrally managed hub and P2P communications can exist in isolation and be implemented individually, when implemented collectively, the peer-to-peer communications are assembled to create a virtual file system. In some embodiments, P2P and/or centrally managed connections can be used to facilitate the transfer of any type of data, such as, e.g., data about system status and operations, among other things. In this regard, system 100 may enable “trusted peer machines” included in the community of machines, to elect to share contents and descriptive information from their local drives with each other directly and with others in the community. As referred to herein, “trusted peer machines” refers to community machines that are under control of an authenticated community member who makes their machine open for storing and sharing community data. It follows that the community internet drive is composed of storage devices, or portions thereof, of various community machines that are used to store data for the trusted peer machines of system 100, and that function to securely disseminate data between authorized members. The managed hub of system 100 can be a virtually centralized hub of various processes executed by system 100. For example, the managed hub can allow system 100 to be configured to, for example, collectively manage the namespace used by the community machines of system 100, distribute tasks performed by the community machines of system 100, track and report statuses of community machines included in system 100, and/or present a unified view of the virtual file system provided by the community internet drive distributed among the community machines of system 100, among other things. FIG. 1B shows circuitry, logic, and other components that may be included in community machine 116 in accordance with some embodiments discussed herein. 
Community machine 116 is shown as a generic example of a community machine and the components and functionality discussed in connection with community machine 116 may also be included in or otherwise provided by one or more other machines or other types of apparatuses, such as server 104, computer 106, personal computer 108, tablet 110, laptop computer 112, and/or cellular device 114. Although community machine 116 is generally discussed in connection with the machines included in system 100 shown in FIG. 1A, a variety of other devices (such as, for example, an email server, proxy, other type of user device (such as network-attached storage device and/or gaming console), and/or any other type(s) of networked device) may provide some or all of the components and/or functionality discussed in connection with FIG. 1B. Also, community machine 116 may include one or more additional components and/or components shown in FIG. 1B may be combined or divided without departing from the spirit of the invention. For example, the elements of community machine 116 can be used to provide the managed hub, or at least a portion thereof. A combination of community machines may be configured to work with one or more other community machines, such as those shown in FIG. 1A, to provide the managed hub and/or other services and functionalities discussed herein. At least some of the other community machines may include at least some of the same or similar components as discussed in connection with community machine 116. Alternatively or additionally, the circuitry and other components discussed in connection with FIG. 1B may be employed within a combination of apparatuses or other types of machines. Accordingly, some embodiments of the present invention may be embodied wholly at a single device and/or by devices networked together. Furthermore, it should be noted that the devices and/or other elements described herein may not be mandatory and thus some may be omitted in certain machines of various embodiments. 
One or more of the community machines may include or otherwise be in communication with community management circuitry 118A that is configured to perform data transfer, processing, application execution and other processing associated with the distributed data storage and management functionality discussed herein. Community management circuitry 118A may include managed hub processor 120A and one or more storage devices, such as local internet drive 122A and/or trusted peer machines memory 124A (discussed below). Local internet drive 122A and/or trusted peer machines memory 124A may be used to facilitate various features provided by some embodiments discussed herein. For example, local internet drive 122A can be included in a particular community machine and be configured to support managed hub functionality of system 100's community internet drive. As such, local internet drive 122A may be configured to store community data, including data provided by and associated with one or more of the other, remotely-located community machines, data used to implement the managed hub, and associated shared and communal data for the virtual file system the local machine is configured to share with other community machines, among other kinds of data. Trusted peer machines memory 124A can be configured to store data associated with other community machines, including data that identifies (e.g., IP address information of) the machines that are included in the community of machines, data that identifies which of the community machines is utilizing local internet drive 122A of community machine 116, data that identifies which of the community machines is authorized to utilize local internet drive 122A, and data that identifies which of the community machines is prohibited from utilizing local internet drive 122A, among other things. 
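The authorized/prohibited bookkeeping described for the trusted peer machines memory, together with the read/write grants discussed next, can be sketched as a small access table. This is a hypothetical illustration (the class, permission names, and machine identifiers are invented for this example):

```python
class PeerAccessTable:
    """Hypothetical sketch of per-machine authorization data: grant
    lists distinguish read from write access, and a deny list records
    machines prohibited from utilizing the local internet drive."""

    def __init__(self):
        self.grants = {}    # machine_id -> set of granted permissions
        self.denied = set() # machines prohibited outright

    def grant(self, machine_id, *perms):
        self.grants.setdefault(machine_id, set()).update(perms)

    def deny(self, machine_id):
        self.denied.add(machine_id)

    def allowed(self, machine_id, perm):
        """Prohibition wins over any grant."""
        if machine_id in self.denied:
            return False
        return perm in self.grants.get(machine_id, set())

table = PeerAccessTable()
table.grant("bob-desktop", "read", "write")
table.grant("carol-phone", "read")
table.deny("mallory-pc")
print(table.allowed("carol-phone", "read"))   # True
print(table.allowed("carol-phone", "write"))  # False
print(table.allowed("mallory-pc", "read"))    # False
```

The deny-overrides-grant rule is one simple policy choice; other orderings are possible, and the patent text itself does not prescribe one.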
There can be a variety of trusted relationships and levels, on both the publishing and consuming side, which the managed hub may aid in maintaining, managing and/or otherwise facilitating. On the publishing side, a community machine can have a level of trust as a content provider either by establishing a community rating or based on the personal relationship between the members. On the consuming side, either individuals or members of a defined in-group (such as friends and family) can be granted read and/or write access to content. Community management circuitry 118A and/or managed hub processor 120A may be embodied in a number of different ways. For example, managed hub processor 120A may be embodied as various processing means such as a microprocessor or other processing element, a coprocessor, a controller or various other computing or processing devices including integrated circuits such as, for example, an ASIC (application specific integrated circuit), an FPGA (field programmable gate array), or the like. In some example embodiments, managed hub processor 120A may be configured to execute instructions, such as those discussed in connection with FIGS. 6-11, stored in storage device 126 or other storage device(s), such as local internet drive 122A and/or trusted peer machines memory 124A, that is otherwise accessible to managed hub processor 120A. As such, whether configured by hardware, firmware and/or software, or by a combination thereof, managed hub processor 120A and/or community management circuitry 118A may represent an entity (e.g., physically embodied in circuitry hardware) capable of performing operations according to embodiments of the present invention while configured accordingly. Thus, for example, when managed hub processor 120A is embodied as an ASIC, FPGA or the like, managed hub processor 120A may be specifically configured hardware for conducting the operations described herein, including those discussed in connection with FIGS. 6-11. 
Alternatively, as another example, when managed hub processor 120A is embodied as an executor of software instructions, the instructions may specifically configure managed hub processor 120A to perform the operations described herein. Regardless of how managed hub processor 120A is embodied, it may be configured to work with other managed hub processors, independent publisher processors and/or any other type of circuitry that may be included in one or more other community devices embodied in the same and/or different forms. In some embodiments, community machine 116 may instead or also be configured to function as an independent publisher. For example, rather than or in addition to functioning as a managed hub, community machine 116 may be designed to be used predominately for personal and/or non-commercial purposes by its user, as opposed to commercial-grade servers and databases that may be used to implement managed hub functionality. As such, instead of community management circuitry 118A, managed hub processor 120A, local internet drive 122A and/or trusted peer machines memory 124A, community machine 116 may include community management circuitry 118B, independent publisher processor 120B, local internet drive 122B and/or trusted peer machines memory 124B. Community management circuitry 118B, independent publisher processor 120B, local internet drive 122B and/or trusted peer machines memory 124B may be embodied in hardware, firmware and/or software similar to or the same as that discussed above in connection with community management circuitry 118A, managed hub processor 120A, local internet drive 122A and/or trusted peer machines memory 124A, respectively. 
However, unlike that discussed above, community management circuitry 118B, independent publisher processor 120B, local internet drive 122B and/or trusted peer machines memory 124B may be optimized to be used as, for example, a network device used to generate data to be published to the community internet drive and/or used to consume data that is published onto the community internet drive. Community management circuitry 118B may also be configured to perform data processing, application execution and other processing associated with the distributed data transfer, storage and management functionality discussed herein. In some embodiments, for example, community management circuitry 118B may be included in a user machine, such as a personal computer, tablet or smart phone, that can be configured to generate a request to be included in the community internet drive. The request can be transmitted to a remote machine, such as a server and/or other machine that is configured to function as a managed hub. The first user machine can then receive configuration data from, for example, the remote machine. In some embodiments, the configuration data may come from another user machine via a direct connection, such as those sometimes used for peer-to-peer communications (as opposed to client-server communications). In some embodiments, processor 128 may also or instead be included in community machine 116. Processor 128 may function as a general processor and/or provide some or all of the functionality associated with managed hub processor 120A and/or independent publisher processor 120B. Like managed hub processor 120A and/or independent publisher processor 120B, processor 128 may be implemented in a number of different ways. 
For example, processor 128 may be embodied as various processing means such as a microprocessor or other processing element, a coprocessor, a controller or various other computing or processing devices including integrated circuits such as, for example, an ASIC (application specific integrated circuit), an FPGA (field programmable gate array), or the like. In some example embodiments, processor 128 may be configured to execute instructions, such as those discussed in connection with FIGS. 6-11, stored in storage device 126 and/or other storage device that is otherwise accessible to processor 128. As such, whether configured by hardware, firmware and/or software, or by a combination thereof, processor 128 may represent an entity (e.g., physically embodied in circuitry hardware) capable of performing operations according to embodiments of the present invention while configured accordingly. Thus, for example, when processor 128 is embodied as an ASIC, FPGA or the like, processor 128 may be specifically configured hardware for conducting the operations described herein, including those discussed in connection with FIGS. 6-11. Alternatively, as another example, when processor 128 is embodied as an executor of software instructions, the instructions may specifically configure processor 128 to perform the operations described herein. In some embodiments, storage device 126 may include one or more tangible, non-transitory memory devices such as, for example, volatile and/or non-volatile memory that may be either fixed or removable. Storage device 126 may be configured to store information, data, applications, instructions or the like for enabling each machine of system 100 to carry out various functions in accordance with exemplary embodiments of the present invention. For example, storage device 126 can be configured to buffer input data for processing by processor 128, community management circuitry 118A and/or community management circuitry 118B. 
Additionally or alternatively, storage device 126 could be configured to store instructions for execution by processor 128 and/or community management circuitry 118A and/or community management circuitry 118B, such as those discussed in connection with FIGS. 6-11. Storage device 126 may also include some or all of local internet drive 122A/B and/or trusted peer machines memory 124A/B in addition to or instead of community management circuitry 118A and/or community management circuitry 118B. As such, local internet drive 122A/B and/or trusted peer machines memory 124A/B may be included in storage device 126, community management circuitry 118A, community management circuitry 118B, and/or any other component(s). As yet another example, processor 128, community management circuitry 118A and/or community management circuitry 118B may store data in one or more remote, centrally-networked databases, as well as a variety of files, contents, and/or data sets (including lists of trusted peer machines, encryption algorithms, and/or other data useful for implementing embodiments discussed herein), among other things. The contents of storage device 126 and/or one or more databases may include instructions that are stored for execution by processor 128, community management circuitry 118A and/or community management circuitry 118B to carry out functionality associated with each respective application. Processor 128, community management circuitry 118A and/or community management circuitry 118B may be in communication with or otherwise be configured to control user interface 130 and communications interface 132. User interface 130 may be in communication with processor 128, community management circuitry 118A and/or community management circuitry 118B to receive an indication of a user input at user interface 130 and/or to provide an audible, visual, mechanical or other output to a user. 
As such, user interface 130 may include, for example, a keyboard, a mouse, a joystick, a display, a touch screen, a microphone, a speaker, a cell phone, and/or one or more other input/output mechanisms. In exemplary embodiments, user interface 130 may include interface options for changing parameters and other configurations of one or more machines included in system 100. Communications interface 132 may include one or more interface mechanisms for enabling communications with other devices and/or other types of machines. In some embodiments, communications interface 132 may comprise any means embodied in hardware, firmware, software, or combination thereof that is configured to receive and/or transmit data from/to a network and/or any other device or module in communication with processor 128, community management circuitry 118A and/or community management circuitry 118B. In this regard, communications interface 132 may include, for example, an antenna (or multiple antennas) and supporting hardware (e.g., circuitry, communication ports, etc.), firmware and/or software for enabling communications with a wireless communication network and/or a communication modem or other hardware/software for supporting communication via cable, digital subscriber line (DSL), universal serial bus (USB), Ethernet and/or other means for communication. In situations where communications interface 132 communicates with a network, such as network 102, the network may be any of various examples of wireless or wired communication networks such as, for example, data networks like a Local Area Network (LAN), a Metropolitan Area Network (MAN), and/or a Wide Area Network (WAN), such as the Internet.
Processor 128, community management circuitry 118A and/or community management circuitry 118B, and/or any other circuitry that may be incorporated into one or more machines in accordance with some embodiments discussed herein, may operate under control of a computer program product and be used to control mechanical components and/or exchange transitory signals containing data. For example, a computer program product can be implemented on a computer-readable storage medium, such as storage device 126. As will be appreciated, any such computer program instructions may be loaded onto a computer or other programmable apparatus, e.g., processor 128, community management circuitry 118A and/or community management circuitry 118B, to produce a machine, such that the instructions which execute on the computer or other programmable apparatus create means for implementing the functions described herein. These computer program instructions may also be stored in a computer-readable memory that may cause a computer or other programmable apparatus to be configured to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means to implement the functions described herein. The computer program instructions may also be loaded onto a computer or other programmable apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer-implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions described herein. In this regard, community machine 116 may include any type of circuitry to facilitate the functionality discussed herein.

FIG. 2 shows a conceptual example of a virtual file system that may be implemented by a community internet drive, such as community internet drive 200.
As noted above, community internet drive 200 may be distributed among a plurality of community machines, each having one or more local storage devices configured to function as local internet drives and/or trusted peer machines memory, among other things. Community internet drive 200 can be configured to store data associated with, for example, a community namespace, and also provide a root file 202 to the virtual file system. In some embodiments, some or all of community internet drive 200 may be implemented by networked, centrally-located servers and/or databases that are implemented by a community machine under the control of a network administrator as opposed to a community member, publisher, or other type of user. In some embodiments, community internet drive 200 may be implemented as a hierarchical namespace as shown in FIG. 2. In other additional or alternative embodiments, community internet drive 200 can include the degenerate case of a hierarchy of a single level. Community internet drive 200 can include one or more folders of various types that can be linked, mounted or otherwise associated with each other in various ways. For example, connections 204A-204H can be included in network 102 and/or the dashed lines shown in FIG. 1A. Community internet drive 200 may also include one or more data folders, such as mount points 206A-206H. One or more data folders, such as those in box 208, may represent the centralized framework of the community internet drive 200 provided by a central server. The folders in box 208 may be provided as a service for the community, and mount points 206A-206H can include content published by the community members using community machines.
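The hierarchical namespace with member-published mount points described above can be sketched as follows. This is a minimal illustration only, not part of any described embodiment; the class names, folder names, and machine path are all hypothetical:

```python
# Minimal sketch of a hierarchical virtual namespace whose folders can act
# as mount points pulling in member-published data (cf. FIG. 2).
# All names and paths here are hypothetical illustrations.

class Folder:
    def __init__(self, name):
        self.name = name
        self.children = {}   # child name -> Folder
        self.mount = None    # optional reference to member-published content

    def add_child(self, folder):
        self.children[folder.name] = folder
        return folder

def resolve(root, path):
    """Walk a slash-separated path from the root of the virtual file system."""
    node = root
    for part in [p for p in path.split("/") if p]:
        node = node.children[part]
    return node

# Build a tiny namespace: a central "photos" category (box 208 analogue)
# with one member mount point (mount point 206A-206H analogue).
root = Folder("root")
photos = root.add_child(Folder("photos"))
member_subtree = photos.add_child(Folder("alice"))
member_subtree.mount = "alice-machine:/home/alice/public/photos"
```

A consumer browsing `/photos/alice` would then transparently reach content stored on the member's own machine rather than on the central server.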
The folders of box 208, although being configured to operate as a virtual file system in some embodiments, can also or instead be included in one or more community internet drives (such as community internet drive 122) maintained by community members (as opposed to network administrators) and/or be configured to contain data associated with categorization for namespacing, data for generated pages describing or controlling web presentations, and/or public domain data hosted as a service by community internet drive 200, among other things. One or more mount points, such as mount points 206A-206H, can be a subdirectory of one or more namespace folders of box 208 and be attached to pull in community member data. In some embodiments, the connections to those community members can include outside wiring in a datacenter used to implement the networked managed hub and/or the connections shown in FIG. 1A. Box 210 and box 212 shown in FIG. 2 are also shown in more detail in FIG. 3 and FIG. 4, respectively, and illustrate an example of how community members can publish from their personal file systems in accordance with some embodiments. For example, inside box 212, connection 204H and mount point 206H illustrate how links to other elements of the community internet drive 200 can be embedded within a subtree. Among other things, such symbolic links can be to other devices owned by the same community member (e.g., stored on the same user's community machine(s), such as on a user's tablet computer, cloud-based storage device and/or laptop computer). As such, private or public data from a laptop, desktop, and smart phone for the user can be aggregated under one directory tree in one virtual file system in some embodiments.
FIG. 3 shows box 210, which is an example of an independent publisher's local internet drive (included in the independent publisher's public portion of its local storage device) as it can be integrated into the community internet drive, which is then spread out across multiple community machines, such as at least one community machine 116 discussed in connection with FIGS. 1B and 2. The drive root folder, folder 302, of the user's file system can be a locally stored folder in the community machine's local storage device (e.g., storage device 126 of computer 106) unrelated to root file 202 of community internet drive 200 (which may be stored in storage device 126 of server 104). The managed hub can be configured to enable a user to choose a folder, such as mount point 206B, to contain a subtree to publish in association with the user. In response to the user indicating a desire to use a locally stored folder as a mount point in the community internet drive, community machine 116 can be configured to make the data within the chosen folder (and/or any subfolders) shared data. As such, the data within the chosen folder can be considered "transferred" and/or added to the "shared portion" (e.g., local internet drive 122A/B) of the storage device (e.g., storage device 126) from the "private portion" of the storage device (e.g., the portion of the storage device 126 that is excluded from the community internet drive). In the example shown by FIG. 3, the user's published data, in the shared portion of the storage device, includes a link 304 to other content on the community internet drive associated with the user. Link 304 can refer to any other file or folder, whether from the shared portion of the member's storage, elements of the otherwise private portion of local storage that are shared individually through that reference, or even other data within the community internet drive from other members (where the linking member must always have access to the linked content).
Link 304 can be implemented in any suitable way or combination thereof, including without limitation using operating system features for writing a file, making an entry in a file containing this and other settings, and/or adding a record to a database, among other things. In some embodiments, software executed by the member's community machine, such as community machine 116, can cause the member's community machine to be configured to connect across path 306, so that the destination point of that symbolic link appears within the published space. The published folders and data in box 210, including the resolved link, are associated with the shared portion of the community machine's local storage device and, therefore, may appear within community internet drive 200.

FIG. 4 shows box 212, which represents another example of the logical organization of another independent publisher's local internet drive (included in the independent publisher's public portion of its local storage device) and as it can be integrated into the community internet drive. Drive root folder 402 of the second member's community storage device can again be unrelated to root file 202 of community internet drive 200 (which may be stored at the managed hub device), and/or the drive root file of other community members, such as folder 302 of the first member's machine discussed in connection with FIG. 3. In the example shown in FIG. 4, the second member's machine (i.e., the user of drive root folder 402) has chosen a different directory, namely mount point 206D, for the second member's published subtree from the second member's local storage device.
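The link-resolution behavior described above (a symbolic link such as link 304 whose destination appears within the published space) can be sketched as follows. All paths, dictionary names, and payload strings are hypothetical illustrations rather than part of the described implementation:

```python
# Hedged sketch: resolving a symbolic link embedded in a published subtree,
# analogous to link 304 resolving across path 306. Names are illustrative.

published = {
    # normal published file
    "photos/2020/beach.jpg": "binary-data-1",
    # link to other content the linking member has access to
    "photos/2020/link-to-videos": ("LINK", "videos/2020"),
}
other_shared = {
    "videos/2020": "binary-data-2",
}

def read(path):
    """Read a published entry, transparently following link entries."""
    entry = published.get(path)
    if isinstance(entry, tuple) and entry[0] == "LINK":
        # The linking member must always have access to the linked content.
        return other_shared[entry[1]]
    return entry
```

From a consumer's perspective, both entries behave like ordinary files in the published space, which is the property the resolved link provides.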
Within the second member's subtree, community internet drive 200 can be configured to include (e.g., in response to receiving a user indication and/or automatically), based on the second member's interests and preferences, a connection 204H to another folder, such as mount point 206H, elsewhere on community internet drive 200 (e.g., on another community machine included in the system). With another connection (not shown) to another storage device below mount point 206H, a free form assemblage of content under namespaces based on individual preferences can be provided by community internet drive 200. Such an organization of community internet drive 200 is another advantage of some embodiments discussed herein, both to the users who act as publishers expressing themselves and to the users who act as consumers who can benefit from the organization during data discovery. In this regard, the published contents, for example, of box 210 and box 212 can be files, folders and/or application data, among other things, included in the shared portion of a community machine's local storage device. As used herein, "files" can be any collections of binary data, but are typically collections of formatted data, such as documents, videos, movies, pictures, music, games and databases, among other things. "Folders" refer to collections of files and/or other sub-folders that are used to organize files. For example, box 210 and box 212 can include files and/or folders, among other things.

FIG. 5 is a block diagram showing how the community internet drive can be enabled by the managed hub functionality of, for example, server 104 when combined with the storage devices and other circuitry of distributed heterogeneous machines, such as at least one computer 106, personal computer 108, tablet 110, laptop computer 112, and/or cellular device 114, among others.
The managed hub machine can be connected to one or more independent publishers via connection 502, which may include, for example, network 102, one or more client-server connections and/or direct connections (such as those between networked peer machines). As such, the community internet drive may be provided and used to store and manage data stored among a plurality of community machines while also (or instead) providing novel data access, security and sharing features similar to those sometimes associated with social-networking systems. In some embodiments, the community internet drive may also provide web publishing and namespace hosting and assigning functionality. For example, one or more community machine(s), such as server 104, can include managed hub data and aid in coordinating the availability of data made available on the Internet (or other network) by the independent publishers. In practice, some embodiments can include the shifting of processing to achieve differing optimizations, such as having quality and/or reliability of service for operations more at the managed hub side, and efficiencies and scalability achieved by distributing or consolidating processing loads with the independent publishers. Embodiments consistent with or similar to that shown in FIG. 5 can combine several synergistic processes implemented by various modules included in the managed hub and/or modules incorporated into the independent publishers. "Module," as used herein, can include hardware (such as circuitry discussed in connection with FIG. 1B), firmware and/or software. Collectively, the modules shown in FIG. 5 can enable the community internet drive to provide at least some of the example functionality discussed herein. For example, the managed hub can include module 504 as a core component, which can be configured to construct and manage namespace(s) for the managed hub.
The namespace and structure overseen by module 504 specifies and identifies the contents of the virtual file system as illustrated in system 200. In some embodiments, module 504 can be hierarchical by default. In other embodiments, module 504 can include flat namespaces as a degenerate case. While a hierarchical namespace may be better in some embodiments for data organization and discovery, even a flat namespace can allow assigning a unique uniform resource identifier ("URI") to published content. As sometimes used herein, a "unique URI" refers to identifying a unique asset based on the mount point of the data instance within the community internet drive, as opposed to the opposite direction of a unique URI based on file contents regardless of location. Community machine(s) executing managed hub functions can track the instances of data, especially the common copies of each progenitor file introduced into the virtual file system. For example, consumers can request an instance of the data based on a URI, and the managed hub can match-make to all accessible copies of the data, to the limits of service level for that member and the associated access lists. Generally speaking, at least to some practical limit, the more copies uploading simultaneously from publishers, the faster the download to the consumer can be completed. The managed hub can also include module 506, which may configure the managed hub to authenticate users. Each user of the community internet drive can be identified upon requesting and/or accessing the community internet drive (an example of which is discussed in connection with, e.g., FIG. 6). The user's identity can be determined even if that identification just marks the user as a temporary and/or anonymous user (as opposed to a registered community member and/or other type of known user).
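The location-based URI and instance-tracking behavior described above can be sketched as follows. The URI scheme (`cid://`), machine names, and paths are hypothetical illustrations, not identifiers drawn from the described system:

```python
# Sketch of location-based URIs: an asset is identified by its mount point
# within the community internet drive, and the hub tracks every registered
# copy so a consumer can be match-made to multiple simultaneous sources.
# The "cid://" scheme and all machine names are assumptions for illustration.

instances = {}  # URI -> list of (machine, local_path) copies

def register_copy(uri, machine, local_path):
    """Record that a machine hosts a copy of the asset identified by the URI."""
    instances.setdefault(uri, []).append((machine, local_path))

def sources_for(uri):
    """Return every known copy; more sources can mean a faster download."""
    return instances.get(uri, [])

register_copy("cid://photos/alice/beach.jpg", "alice-laptop", "/pub/beach.jpg")
register_copy("cid://photos/alice/beach.jpg", "bob-desktop", "/cache/beach.jpg")
```

A real hub would additionally filter this list by the consumer's service level and access lists, as described above, before revealing any sources.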
Users who are known, returning community members can be provided access to data based on and/or specific to the users' identity and any profile that may be associated with each of the community members. Module 506 can be configured to work in conjunction with other modules, such as module 508 and/or module 524 discussed below. Module 508 can be configured to manage access and permissions for one or more community machines to provide managed hub functionality. The access and permissions managed by module 508 are the access control for the virtual file system as illustrated in system 200. For example, module 508 can enable the managed hub (e.g., server 104) to track permissions from each publishing user of the independent publishers (e.g., computer 106, personal computer 108, tablet 110, laptop computer 112, and/or cellular device 114), so that the view presented to a consumer end user is limited to match the end user's level of access. The independent publishers may use module 524 to manage access lists and groups by, for example, restricting data using rules setting various degrees of security. For example, the levels of security may range between publicly available data and private data and/or data that is specific to one or more particular users. For example, public data can be exposed to everyone (e.g., similar to a public Internet web page), private data can be defaulted to be restricted to personal use and, in between, levels of semi-private data can be configured such that the data can be exposed only to people's machines, where the people have, for example, been granted access individually or have been placed in an in-group with access, such as friends and family. Specific relationships associated with the data can be assigned to the data in response to a user entry and/or in response to a preconfigured setting of the system.
For example, a relationship can be configured to enable particular users to be granted write access to create files in specific directories on a publisher's local drive. Also, in some embodiments, module 508 and module 524 can be configured to enable relationships that provide an individual end user and/or some set of end users access to particular content. Module 524, which may be implemented by the independent publishers to manage friend and access lists, can be configured to function in conjunction with, for example, module 526 and/or module 508. In some embodiments, after module 524 is used by a community member user to specify the user's desired relationships to be associated with the user's data, the managed hub can be configured to use module 508 to manage access and permissions even when that community member is offline. For example, to limit spam, one or more users may configure their respective portions of the community internet drive to block or accept communications through module 526 that come from users that are not in-group members, or to otherwise receive notifications of changes associated with that user's data. For example, when a first user's communications (such as mail, messaging and chat data) are authorized for other users that the first user assigned to an in-group, those communications can be welcome or unwelcome to the other users depending on whether the other users also see the first user as belonging to an in-group of theirs. When the relationship has two members mutually in in-groups, module 526 has permissions to operate most fully in communication and notifications. For example, a notification can be sent to in-group members in response to one of the users adding data to the user's shared folder on the community internet drive. In this regard, the community internet drive can be configured to provide value when the shared content and the communications are following connections that map personal relationships.
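The three security levels described above (public, semi-private with in-group access, and private) can be sketched as a simple permission check. All user names and the function signature are hypothetical illustrations:

```python
# Illustrative permission check covering the three levels of security
# described above: public, semi-private (in-group), and private data.
# Names and data structures are assumptions for the sketch.

PUBLIC, SEMI_PRIVATE, PRIVATE = "public", "semi-private", "private"

def can_access(level, owner, requester, in_groups):
    """in_groups maps an owner to the set of users the owner has admitted."""
    if level == PUBLIC:
        return True                      # exposed to everyone
    if level == PRIVATE:
        return requester == owner        # restricted to personal use
    # semi-private: owner plus individually granted / in-group members
    return requester == owner or requester in in_groups.get(owner, set())
```

A managed hub applying such a check per request could enforce a publisher's access lists even while the publisher's machine is offline, as described above.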
As another example, other types of data (such as posts) can be treated by embodiments discussed herein as an announcement of public content. In some instances, access seeds can be sent to module 506 to leverage the location-based URIs of the system. As sometimes referred to herein, "access seeds" are combinations of keys, which may be bundled as a file, to allow a particular user access to particular data. A member of the general public, for example, might be granted access to data generally restricted to a user's friends, for instance when the data is determined by the community internet drive to be somehow associated with the member of the public (e.g., the data is a picture including the member of the public), and the user having an in-group association might be granted access to some data that was otherwise private. In some embodiments, a particular member of the public might be granted access to a user's private data (e.g., an accountant or attorney may be granted access to tax return data) through an access seed. When a file is added to the community internet drive from an external source, that file may become an original or progenitor for all future chains of copies. Module 510 of the managed hub can be leveraged to, for example, track and manage duplicate and/or derivative files of that progenitor. For example, the community internet drive can be configured to have an awareness of each file download the drive mediates. Until modified, each downloaded file can be considered in the community internet drive to be a duplicate of the original. Being configured to consider downloaded files as duplicates provides a number of advantages, some of which are discussed below.
As a first example, when the original data is unavailable (because, e.g., the independent publisher hosting the data has been powered down or otherwise disconnected from the network), the duplicate copy of the downloaded file can be provided to the end user when that end user has been authenticated as having been assigned the appropriate permissions by the independent publisher of the duplicate file. The duplicate file can, in some embodiments, still continue to exist after the original has been deleted. As another example, the duplicate file can be used for internal operating efficiency. In such embodiments, multiple sources for an upload can act to speed up the download to the end user, where each uploader is participating in sharing their bandwidth to aid the downloader. Derivative files tracked by module 510 can arise after the duplicates are edited and/or otherwise substantively modified. Awareness of both duplicate and derivative files by the infrastructure implementing the community internet drive can aid in mitigating against link rot and provide other advantages when the original file is lost and/or destroyed. As such, some embodiments of the community internet drive, despite being dependent on machines that can be powered down and/or otherwise have no centralized control over their availability (because control rests in each machine's user), may achieve a higher reliability than might be achieved with a conventional website where each site redesign can destroy saved links.

FIG. 5 also shows module 518 as part of the independent publishers, which may be configured to host backups and duplicates. In some embodiments, module 518 may be configured to operate in conjunction with module 510 of the managed hub. When module 518 services a download, that member's community machine is donating bandwidth to the benefit of the consumer.
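The duplicate-versus-derivative distinction described above can be sketched with content hashing. Hashing is an assumed mechanism here (the description does not specify one), and the member names and file contents are hypothetical:

```python
# Sketch of duplicate vs. derivative tracking for a progenitor file: a
# downloaded copy remains a duplicate until its content changes, after
# which it would be recorded as a derivative. SHA-256 hashing is an
# assumption for illustration; members and contents are hypothetical.

import hashlib

def digest(data):
    return hashlib.sha256(data).hexdigest()

progenitor = b"original contents"

# Two members download the file; one later edits her local copy.
copies = {"bob": progenitor, "carol": progenitor}
copies["carol"] = b"original contents, edited"

duplicates  = [m for m, d in copies.items() if digest(d) == digest(progenitor)]
derivatives = [m for m, d in copies.items() if digest(d) != digest(progenitor)]
```

With such bookkeeping, a hub could still serve Bob's duplicate when the progenitor's host is powered down, while knowing Carol's copy no longer matches the original.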
If that content was hosted in response to module 510 requesting module 520 to host a backup or duplicate file, module 518 can be configured to enable the community member to donate, e.g., storage and/or bandwidth. Publishing users of independent publishers can agree to, for example, hosting data from other publishing users of independent publishers by using the managed hub to coordinate the distribution of data among strangers and/or other users that may not otherwise know each other. In such instances, there may be symbiotic advantage achieved through blind collaboration. For example, users may mutually agree to provide their community machines to enable data redundancy to others' data, thereby improving the reliability and bandwidth of the community internet drive as a whole. In this regard, the managed hub can be configured to provide a fair management and distribution of each community member user's data. Two functions that may be included in some embodiments of the community internet drive are: one, uploading content and other data for publishing, and two, downloading content and other data for the end user (also sometimes referred to herein as the "consumer"). Several modules shown in FIG. 5 can be configured to directly or indirectly support those operations. For example, users of the independent publishers can choose which of their content to publish (as discussed in connection with, e.g., FIGS. 2-4). When content has associated metadata (e.g., descriptive data) and/or the user associated with the content is a registered user (e.g., "community member") that has descriptive data, module 522 can be configured to manage the landing page, searchable information, and/or other metadata associated with the content. The metadata can be published as well in some embodiments by module 522.
As another example, module 516 can be configured to facilitate the managed hub functionality by notifying the managed hub and/or in-group machines (directly or through the managed hub) of (newly) available published data, propagating through a social network of in-group connections. In some embodiments, the metadata may include descriptions of the content, time stamps, author information, thumbnails (for image and video data), other types of preview data (for, e.g., large or commercial content files), and/or any other type(s) of metadata. Previews, for example, can be created automatically based on an application association (such as generating a low resolution or streamed copy), and/or the publishing machine can generate custom previews. The managed hub may include module 514 configured to facilitate the providing of various services such as publishing, searching, discovery, and/or commerce, among other things. For example, module 514 can be configured to allow the consumer to find data the consumer wishes to download. As another example, module 514 can be configured to collect fees for commercial activity provided using one or more particular aspects of the managed hub functionality and/or any or all other functionality discussed herein. Charges can accrue to commercial customers for use of, for example, commercial namespace, the bandwidth provided and/or used, and the transaction(s) used for sales. Modules 512 and 514 operate under the guidance and constraints of module 508. As module 514 supports publishing, searching, and discovery, it will only expose and share content with members and community machines that are known by module 508 to have access. In the case of data restricted to an in-group, only those members with a personal relationship will have that content revealed. As module 512 seeks to match-make peers for a P2P exchange, it must locate peers that are both known by module 510 to be hosting a copy and known by module 508 to be accessible.
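The match-making constraint just described (module 512 may only pair a consumer with peers that module 510 knows host a copy AND that module 508 permits) reduces to a set intersection, sketched below. The URI, peer names, and data structures are hypothetical illustrations:

```python
# Sketch of the match-making constraint: candidate peers are the
# intersection of hosting knowledge (module 510 analogue) and access
# knowledge (module 508 analogue). All names here are illustrative.

# Which machines host a copy of each asset (module 510's knowledge).
hosting = {"cid://family/album": {"alice-laptop", "bob-desktop", "eve-server"}}

# Which machines each consumer is permitted to reach (module 508's knowledge).
accessible_peers = {"dave": {"alice-laptop", "bob-desktop"}}

def matchmake(consumer, uri):
    """Return only peers that both host the data and are accessible."""
    return hosting.get(uri, set()) & accessible_peers.get(consumer, set())
```

A consumer with no access simply receives an empty set, so no peer is ever revealed to an unauthorized requester.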
In the case of data restricted to an in-group, the peer community machines are making connections that follow the social relationships of the members that own those machines, increasing trust and security. Preferentially friends and family share with and download from friends and family, keeping within the in-group. In some embodiments, to enable authenticated users to share data, the content and/or other data can always be encrypted using the same technique and/or keys. In other embodiments, varying encryption and/or different encryption may be used. Module 518 may also be included in some embodiments and be configured to facilitate service downloads among the independent publishers and/or to other types of the consumers.

FIGS. 5A-5E show some example alternative combinations of modules that may be included in the managed hub and independent publishers to provide different types of community internet drives.

FIG. 5A shows an example where the community internet drive is implemented using fewer modules. The managed hub and the independent publishers can still be configured to provide the community internet drive as a heterogeneous set of community machines assembled to present a unified virtual file system. To that end, the managed hub may still contain module 504. The independent publishers can likewise support fewer and/or different features in embodiments consistent with FIG. 5A. For example, module 528 can be configured to report published contents, and be similar to but less powerful than module 514. For example, module 528 may omit descriptions that may be included in module 514. As such, the consumer may be limited when browsing for content. However, even with module 528 being used instead of module 514, once the consumer has found the desired content, the managed hub can still be configured to locate and connect peers while managing encryption using module 512 and the independent publishers can still be configured to service downloads using module 518.
While replacing module 514 with module 528 may reduce efficiency, ease of use, and synergistic benefits, such embodiments may still retain the reduction in barriers to publishing content as compared to known systems and methods. For example, users that are community members may not have to select and purchase a domain name, contract for content hosting, and/or install code to maintain their own web server, FTP server, or Tomcat instance in order to publish, so that a broader population can contribute data and files, among other things. Advantages of embodiments consistent with FIG. 5A (as compared to those consistent with FIG. 5) are simplicity in the implementation and lighter weight executables.

FIG. 5B shows an example system that includes the components and can be configured to provide the functionality discussed in connection with FIG. 5A, while also being configured to restore metadata and user-contributed content, such as file descriptions and rankings. Similar to the embodiments discussed above, the independent publishers can be configured to report published contents, descriptions and metadata, among other things, using module 516, service downloads using module 518 and manage landing page and searchable descriptions using module 522. The managed hub, as shown in FIG. 5B, can be configured to construct and manage namespace using module 504, locate and connect peers while managing encryption using module 512, and provide services (such as publishing, searching, discovering and commerce, among other things) using module 514. The synergistic advantages of searches across a broad and unified namespace can be restored using embodiments consistent with those shown in FIG. 5B. Similarly, a virtual file system, such as those discussed in connection with FIGS. 1A-4, can function similarly to that discussed above from a user's perspective, less some of the relatively more enhanced functionality discussed above.
FIG. 5C shows another example in accordance with some embodiments discussed herein that can be configured to implement a virtual file system. Similar to some of the embodiments discussed above, the independent publishers can be configured to report published contents using module 530, service downloads using module 518, and host backups and duplicates using module 520. Similarly, system 100 may be configured to provide various managed hub functionality, such as construct and manage namespace using module 504, track and manage duplicates and derivative files using module 510, locate and connect peers while managing encryption using module 512, and provide publishing services using module 528. Embodiments the same as or similar to that shown in FIG. 5C can restore the synergistic benefits of better availability when publishing users are unavailable, higher performance when more sources for upload are available, and recovery from link rot. Otherwise, the functionality of the community internet drive discussed above in connection with FIGS. 1A-4 has the same connectivity and user abstraction.

FIG. 5D shows another example in accordance with some embodiments discussed herein that can be configured to restore user authentication and access control in addition to the functionality discussed in connection with FIG. 5A. Because the managed hub can be configured to construct and manage namespace using module 504, authenticate users using module 506, manage access and permissions (based on, e.g., access control from the publishers and user identity) using module 508, locate and connect peers while managing encryption using module 512 and provide publishing services using module 528, many more synergistic benefits accrue. Coupled thereto, the independent publishers can enable the community internet drive to publish more than just public data.
For example, the independent publishers of embodiments consistent withFIG.5Dcan be configured to manage access lists and groups (to, e.g., limit access to particular content) using module524, report published contents using module530and service downloads using module518. As such, embodiments consistent withFIG.5Dcan be configured to vet connections before forwarding. A consumer may have acquired a link to content, but the community internet drive can be configured to determine whether that user has permission to access the linked-to content. If not, no P2P connection will be made to the independent publisher hosting the data to which the link directs, and the hosting independent publisher's IP address can remain private. In this regard, “vetting connections” can include checking whether a consumer has any legitimate purpose in contacting a publisher of data. Such vetting may aid in protecting against an unauthorized consumer stumbling on a link/URI to a file, and using it to download data that was not intended for that consumer's consumption. For example, if a consumer is not allowed access to that content based on in-group or access lists, the managed hub can simply deny the request, and the consumer will never be provided the IP address of the publisher. No P2P connection will ever be made. Privacy and security are protected. Additionally or alternatively, a consumer's view when browsing can be constrained to their permissions, and the consumer may not be able to get, for example, the IP address and open ports of publishing users when the publishing users have no content they have exposed to the general public. 
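The vetting step described above can be illustrated with a short sketch. This is a hypothetical hub-side helper, not an implementation from the disclosure: the class name, data structures, and addresses are all assumptions. The point is that the managed hub consults the access list before brokering any P2P connection, so an unauthorized consumer never learns the publisher's IP address.

```python
class AccessRegistry:
    """Hypothetical hub-side registry used to vet connections."""

    def __init__(self):
        self._acls = {}        # content URI -> set of permitted user IDs
        self._publishers = {}  # content URI -> publisher (ip, port)

    def register(self, uri, publisher_addr, permitted_users):
        self._acls[uri] = set(permitted_users)
        self._publishers[uri] = publisher_addr

    def vet_connection(self, uri, user_id):
        """Return the publisher's address only if the user is authorized.

        Unauthorized (or anonymous) consumers get None: the request is
        simply denied, no P2P connection is brokered, and the hosting
        publisher's IP address stays private.
        """
        permitted = self._acls.get(uri)
        if permitted is None or user_id not in permitted:
            return None
        return self._publishers[uri]
```

An in-group member presenting a valid link would receive the publisher's address, while a consumer who merely stumbled on the same link would receive nothing.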
Basically, the community internet drive provides a firewall function, where embodiments discussed herein may also be configured to prohibit unauthorized consumers and/or other unauthorized users from being provided data that would enable the unauthorized users to determine, for example, what content to ask for and/or the IP address for sending user datagram protocol ("UDP") messages, among other things. In some embodiments, the independent publishers can be configured to open transmission control protocol ("TCP") connections to the managed hub, where for reasons of security communication is initiated from the publisher side. For efficiency, and to simplify interactions with routers and firewalls, the file transfers themselves can be established with UDP tunneling, but the UDP tunnel will only be established when there is proper access and legitimate purpose. One of the synergistic advantages that may be realized with embodiments consistent with that shown inFIG.5Dis that shared authentication among community member users can protect privacy with access control while channeling interesting content and updates, among other things, to machines indicated to be used by in-group members of the community member. As such, embodiments discussed herein can be configured to enable the right people to get priority updates and the wrong people to be blocked from connecting. FIG.5Eshows embodiments that add hosting of more direct user communications, both point-to-point and broadcast, to embodiments consistent withFIG.5D. Point-to-point communications can leverage the user authentication functionality discussed in connection withFIG.5D. Embodiments of the community internet drive offering communications such as posts, mail, messaging, chat, and sending access seeds using module526, provide advantages of community building while adding immediacy to contacts among members as potentially long downloads are progressing.
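The anti-spoofing protection on the UDP tunnels can be sketched as follows. The disclosure does not specify a stamping algorithm, so this sketch assumes an HMAC-SHA256 tag derived from a per-member key; the function names are illustrative. Outgoing datagrams are stamped with the tag, and incoming datagrams are validated against it before being accepted.

```python
import hashlib
import hmac

def stamp(payload: bytes, member_key: bytes) -> bytes:
    """Prefix an outgoing UDP payload with a tag derived from the member's key."""
    tag = hmac.new(member_key, payload, hashlib.sha256).digest()
    return tag + payload

def validate(datagram: bytes, member_key: bytes):
    """Return the payload if the incoming tag checks out, otherwise None."""
    tag, payload = datagram[:32], datagram[32:]
    expected = hmac.new(member_key, payload, hashlib.sha256).digest()
    return payload if hmac.compare_digest(tag, expected) else None
```

A datagram stamped under one member's key fails validation under any other key, so traffic injected by a party without the key is discarded.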
Access seeds can bundle keys based on the instance and location tracking within the community internet drive with a key based on member identity to grant targeted individual access. WhileFIGS.5-5Eshow various embodiments, additional embodiments that are not shown may also be realized without departing from the spirit of the invention. For example, some embodiments may be a combination ofFIGS.5B and5C, combining the features of duplicate tracking, metadata, and user contributed descriptions. One of the advantages of such a combination is that the managed hub can be configured to merge and rank descriptions on the content to improve and build more searchability. As another example, some embodiments may be a combination ofFIGS.5B and5D, and provide advantages realized when user authentication and user contributed descriptions come together. More specifically, the combination ofFIGS.5B and5Dcan be used to identify a user that added content, while also allowing the system to track rankings for each community member's descriptions and posts, thereby allowing the community internet drive to present higher quality descriptions and suppress lower quality descriptions. As yet another example,FIGS.5B and5Emay be combined in accordance with some embodiments, which may include adding user communications, as could the embodiments ofFIGS.5CandFIG.5D, which may bring user authentication together with duplicate tracking. One of the advantages of combiningFIGS.5C and5Dis that publishing users can choose individuals or in-group members they wish to collaborate with, when they want to preferentially host backups and duplicates for people they know. The combination ofFIGS.5C and5Ecan add personal communications to the combination ofFIGS.5C and5D. As a final example, the combination ofFIGS.5B,5C, and5Dcan embody everything discussed herein except hosting user communications.
Other features not explicitly discussed inFIGS.5-5Emay also be included in accordance with some embodiments to further encourage and facilitate community involvement and participation. FIG.6shows a process which may be implemented to register a user using the machines discussed herein. Although the process is shown as being a sequential process, the user interfaces provided by machines discussed herein can combine many of these steps and allow users to navigate them in their own order. The process starts at602. At604, a determination can be made as to whether the user wishes to become a community member or be an anonymous user. As referred to herein, an anonymous user includes a user who cannot be identified by the system, a user who wishes to not be identified by the system, and/or any other user that the system determines wants to or should bypass regular user registration. This determination may be made based on one or more pieces of data received by the system. For example, the system may receive a signal that indicates the user has selected an option to remain anonymous or to create a profile. The system may cause anonymous users to bypass regular registration. As such, in response to determining at604that the user is an anonymous user, the system may be configured to provide, at606, the anonymous user open access to only public data on the community internet drive. In some embodiments, anonymous users may not have to create a regular account with the community internet drive, and can start downloading public files immediately at604. Anonymous users can, by the definition used herein, never be on an access list or be a member of any member's in-group. Anonymous users may be prohibited from publishing data to the community internet drive.
In other embodiments, anonymous users may still be required to provide at least some information that may or may not also be used to create an account with the community internet drive and/or various anonymous users may be treated differently (based on, e.g., the user's IP address, location, citizenship, other affiliations, and/or any other information associated with the user that the system is able to determine). In response to determining at604that the user wishes to become a regular, registered new user, the process may proceed to608and the system may be configured to create a user name and password at608to be used for future authentication. For example, at608, prompts may be generated and provided to a user, the data received from the user may be checked for conflicts (e.g., meeting predetermined rules associated with setting a username and/or password), and the user name and/or password may be saved. In some embodiments, the user name and password may instead or additionally be initially assigned to each user. Each user can also be associated with one or more mount points on the community internet drive, where the associated mount point(s) becomes embedded in the URI of all the user's published data. That mount point can be part of a namespacing process that is begun at610. Each user can be, in some embodiments, assigned a unique user name (“ID”) and a unique mount point in the logical hierarchy of the community internet drive. Uniqueness here is in the sense that each mount point is associated with a single user account, although it is possible for a single user to control more than one mount point. A centralized process for generating a namespace can help affix an often hierarchical meaning in that namespace while preventing collisions. The less flat the namespace (e.g., the less the names are in one group or at one hierarchical depth), the more useful the organization becomes for some embodiments. 
Hierarchical organization can be better for browsing and searching in some embodiments. As referred to herein, "collisions" happen when the namespace is not unique, such as when two members want to create a mountpoint with the same name. For example, Jon Doe and Jane Doe both want jdoe@their_domain.com to be his/her domain namespace. Even in embodiments that allow P2P protocols to be used to transfer data among the machines in the virtual file system, the managed hub can be configured to assign the namespace to only one of Jon Doe or Jane Doe, such as the first member who requested the namespace. As noted above, at610the system implementing the community internet drive may be configured to allow users to begin to select their namespace. A determination can be made at612whether the user's account is associated with an internet domain that is already registered. If so, the system can be configured to allow the user to opt to associate the mount point for the user's allotted portion of the community internet drive to that domain name. In response to determining at612that the user's account with the community internet drive is to be associated with the user's internet domain, the process can include configuring the system to further verify at614that the user has authorization to represent that domain. Users with existing internet domains may also be business publishers and/or other types of users and, in some embodiments, can be offered more options at616to define their data publishing, beyond those options of value to typical member users of the community internet drive.
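The first-requester collision rule described above can be sketched with a hypothetical hub-side registry (the class and method names are illustrative, not from the disclosure): each mount point is granted to exactly one account, and a later request for the same name is refused.

```python
class NamespaceRegistry:
    """Hypothetical centralized mount-point assignment at the managed hub."""

    def __init__(self):
        self._mount_points = {}  # mount point -> owning user ID

    def claim(self, mount_point: str, user_id: str) -> bool:
        """Grant the mount point if unclaimed; the first requester wins.

        Returns True if this user owns the mount point after the call,
        False if it already belongs to someone else (a collision).
        """
        owner = self._mount_points.setdefault(mount_point, user_id)
        return owner == user_id
```

In the Jon Doe / Jane Doe example above, whichever of the two requests jdoe@their_domain.com first is granted it; the other must choose a different namespace.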
For example, the community internet drive might redirect traffic to the community member's existing URL, or the business user might integrate an application programming interface (API) to allow more seamless operation of file sharing between the community internet drive and community member's non-community infrastructure (e.g., private portions of machines and/or entire machines that may be accessible to community members, but not be part of the community internet drive). In response to determining at612that the user's account with the community internet drive is to be associated with something other than a previously-registered internet domain, the process may proceed to618. At618, the system can be configured to determine whether the user has indicated a desire to create an impersonal account, which may be the default choice at618. If so, the community internet drive can be configured to generate a community namespace for the mount point at620, and the system can proceed to let the user publish. At622, the system can enable the user to choose a directory including content to be published, thereby causing the partitioning of the user's local storage device into a shared portion and a private portion. Selecting the local directory to mount at622can enable the system to manage publications and downloads of the user's content. Downloads happen as this member requests copies of shared files, and they are placed by the community internet drive into this directory, or into a sub-folder beneath it. Publishing is automatic for any files within or below that directory on this member's local storage device, and uploads can happen as other consumers request those files, although the consumer may only know the mountpoint on the community internet drive, and nothing about where that shared portion of that local storage device might be embedded within the private portion of the local storage device.
This distinction is implicitly and explicitly handled through namespace assignment, URI construction, and URI parsing. Namespace management is concerned with mapping an entry point for that user onto the community internet drive, typically as the leading portion of the URI. Typically the first element identifies the domain of the community internet drive, such as w3disk.com. In some embodiments, some number of folders and pages in the communal portion of the community internet drive website are traversed, as is shown in200. Finally the mountpoint itself is reached, which is reserved for and identified with the one individual user. Later elements of the URI are actually the path within the shared portion of the member's storage that is traversed in reaching the file or folder being uniquely identified. Thus the URI in some embodiments can traverse three or more systems, from domain through website to local storage in identifying and reaching content, often in the member's home or office, or even on a mobile device in their pocket. An effectively unlimited number of systems can be traversed physically as the URI embeds symbolic links304, or chains of symbolic links. At624, one or more keys can be generated that can be associated with the community member's account and used for future request validation(s) by the community member. Keys can be based on client and server identities, mount points, paths, user entered keys, and pass phrases, among other things. Among other things, the community internet drive uses those keys to validate incoming UDP traffic and stamp outgoing UDP traffic during P2P exchanges to protect against spoofing of identities. The community internet drive can be configured to function predominately by cooperation, where the community internet drive can enable users to opt into their degree of participation at626. 
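The URI parsing described above can be sketched as follows. This is a minimal illustration under assumed names: a mount table mapping each registered mount point to the publisher's local shared directory stands in for the hub's namespace state, and the helper splits a URI into its domain, mount point, and the path within the shared portion. The private portion of the publisher's storage above the shared directory is never exposed by the URI.

```python
def resolve_uri(uri: str, mount_table: dict):
    """Map a community-drive URI to (publisher_local_root, relative_path).

    mount_table maps a mount-point path on the drive to the publisher's
    local shared directory. Returns None when no mount point matches.
    """
    path = uri.split("://", 1)[-1]         # drop the scheme if present
    domain, _, rest = path.partition("/")  # e.g. 'www.w3disk.com'
    for mount_point, local_root in mount_table.items():
        if rest == mount_point or rest.startswith(mount_point + "/"):
            relative = rest[len(mount_point):].lstrip("/")
            return local_root, relative
    return None

# Hypothetical mount table for one member's shared directory.
mounts = {"us/ca/encinitas/nathan": "/home/nathan/shared"}
```

A request for a file beneath the mount point resolves to a path inside the shared directory only; a URI outside any registered mount point resolves to nothing.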
For example, the more a community member invests at626into the community internet drive, the more resources of the community internet drive may be allocated to the community member. As a more detailed example, the amount of storage on other users' community machines made available for copies of a user's data may be correlated to the amount of local storage device126that the user is willing to share as local internet drive122B (for hosting backups and duplicates, among other things on the community internet drive). A number of other benefits may also be provided by the community internet drive as an incentive (or for any other reason) to increase a user's level of participation. For example, the more space on the community internet drive that the user has access to, the more copies of the user's data may be maintained by the community internet drive. Having more copies of the user's data may enable the user's in-group and/or other people to have better access to the user's data due to improved data availability and faster delivery, even when the user decides to power down or otherwise remove the user's machine from the community internet drive. In some embodiments, some and/or all community members (e.g., those that attain a certain level of participation, receive a promotional level of access and/or pay a fee) may also get credit for bandwidth shared (in addition to or instead of storage space shared). Bandwidth shared may depend on the community member's uptime, their upload bandwidth, and the amount of drive space they share, among other things. Other benefits community members could receive for participation are additional system uploaders being activated to improve their personal download experience, and offsets for fees for maintaining custom/premium namespacing, among other things. “Upgrading” at626, as referred to herein, is what happens when a community member pays for upgraded service from the community internet drive. 
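The correlation between contributed and allotted resources described above can be sketched with a simple formula. The weights below are hypothetical, purely for illustration; the disclosure states only that the allocation may be correlated to shared storage, uptime, and upload bandwidth, not how.

```python
def backup_allowance_gb(shared_storage_gb: float,
                        uptime_fraction: float,
                        upload_mbps: float,
                        storage_weight: float = 2.0,
                        bandwidth_weight: float = 0.5) -> float:
    """Return GB of community storage credited to a member.

    Credit grows with the local storage shared for hosting backups and
    duplicates, plus a bandwidth term scaled by the member's uptime.
    Both weights are assumed values, not from the disclosure.
    """
    storage_credit = storage_weight * shared_storage_gb
    bandwidth_credit = bandwidth_weight * uptime_fraction * upload_mbps
    return storage_credit + bandwidth_credit
```

Under these assumed weights, a member sharing 100 GB with 50% uptime and 20 Mbps upload would be credited 205 GB of community storage for duplicates of their own data.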
Improved service levels from the community internet drive can be bundled with other incentives, for example usage of a custom/premium namespace or suppression of advertising. In addition to or instead of granting such benefits based on the user's commitment of resources under the user's control to the community internet drive, some embodiments discussed herein may be configured to provide some or all of the advantages discussed herein (as well as others not discussed explicitly herein) based on other criteria, such as in recognition of the user's role in aiding in the maintaining of the managed hub, the user receiving a promotional level of access, and/or the user paying a fee to receive some or all of the benefits discussed herein, among other things. At628, once the new user's account is fully defined, the system may be configured to generate and transmit invitations to friends to join the community internet drive. Invitations can be generated in a number of manners: most simply, by generating an email invitation; by searching for existing members to add directly to the in-group; or by generating and sharing keys when the person being invited prefers coming to a website directly over receiving an email. The invitations can be received and accepted by the system, so that those known friends and family can be added to an in-group and get access to the new user's restricted material. The process may then end at630. Returning to618, in response to the system determining the user has indicated a desire to create a more personal account/namespace (instead of the default account/namespace), the process can proceed to632. For example, a user can choose "no" at618in response to the system asking if the user would like to set up a default, impersonal account, and the process can then determine, at632, whether the user would like to construct a namespace that includes geographic information related to the user.
In response to determining at632that the user would like to include geographic information, such as country, state, city or zip code, the geographic information can be received by the system at634. The data associated with the geographic information can be used, among other things, to build the namespace to help more clearly identify that user at636. The managed hub can be configured to ensure that the combined namespace generated at636is not already owned by another user and is unique. Once the mount point is defined, the user can move on to publishing as discussed above. An example namespace including geographic information, such as Encinitas, CA, and a user name, such as Nathan, may be: "www.w3disk.com/us/ca/encinitas/nathan", as compared to a namespace that includes only user name (Sarah) information: "www.w3disk.com/sarah". The leading "us" may be geographical information (referring to the United States) determined based on the servers and other networking components used to route the user's requests, as opposed to being provided by the user at634. Different namespaces have differing advantages and value: some members prefer the prestige of a shorter namespace, or prefer not sharing their location, while to other members more explicit namespaces including location have more meaning. In other embodiments, only user-provided geographical information or data in general may be included in the namespace. The process may then proceed to622. In response to determining at632that the user would rather not include geographic information to build the namespace, the process may proceed to638and the system can be configured to receive subject-matter data to define the domain namespace. As such, systems in accordance with some embodiments can be configured to enable community members with particular interests to congregate around namespaces as they publish and share.
For example, content in "English" in the "music" domain for "teenagers" can have the namespace: en.w3disk.com/music/teen/aubrey. In some embodiments, such as for business purposes, variable fees might be associated with some or all of the available interest domains (and/or geographic data, among other things). In response to determining at638that the user has selected to define a domain namespace using subject matter information, the process can proceed to640to enable the user to select the language of the user's content and/or select from the supported subject domains at640. The process may then proceed to636discussed above. Another option that may be included in the process is a determination at642as to whether the user would like to define any other choices for building the user's namespace in the community internet drive. For example, in response to determining at642that the user would like to define other aspects of the namespace, the process may proceed to644and the user may find an open location in the community internet drive's virtual file system, selecting where to insert the user's mountpoint. In some embodiments, one or more namespaces may be tagged by the system as being premium namespaces for mount points and can be offered for a fee (e.g., a flat fee, subscription, etc.). In such embodiments, one or more additional steps may be included in the process that enables the verification of payment. Other steps may be included in the process and/or any other process discussed herein. Additionally or alternatively, one or more steps may be rearranged and/or combined. For example, while634,640and644are shown as being alternative functions in the process, the functions associated with634,640and644may be performed in series even when one or more of the decisions at632,638and/or642are affirmative. FIG.7is a flowchart showing process700which is an example method of how files may be published in accordance with some embodiments discussed herein.
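The namespace construction from optional geographic or subject-matter parts described above can be sketched as a small helper. This is an illustrative assumption about how the pieces are joined (the disclosure shows only the resulting paths, such as "www.w3disk.com/us/ca/encinitas/nathan"); for simplicity this sketch folds all parts into the path rather than using a language subdomain as in the en.w3disk.com example.

```python
def build_namespace(user_name: str,
                    domain: str = "www.w3disk.com",
                    geo_parts=None,
                    subject_parts=None) -> str:
    """Assemble a hierarchical mount-point path for a new member.

    geo_parts (e.g. country/state/city) and subject_parts (e.g.
    language/topic/audience) are optional; the user name always forms
    the final element, which becomes the member's mount point.
    """
    parts = [domain]
    parts += [p.lower() for p in (geo_parts or [])]
    parts += [p.lower() for p in (subject_parts or [])]
    parts.append(user_name.lower())
    return "/".join(parts)
```

The hub would still run the resulting string through its uniqueness check (as at636) before granting the mount point.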
Process700starts at702. In response to determining at704that the user would like to proceed with easy publishing, the community internet drive can be configured to provide various options to the user to save time and effort. For example, at706, the system may be configured to provide the user a one-click publishing option. The one-click options may include, for example, a right-click option and/or menu options installed for the operating system or the browser, among other things. The community internet drive can either make a copy of that file within a published folder, or it can create a link such as described at304, based on member preferences. Full copies are valuable in some circumstances because the data is more redundant and secure, but links are fast to create and use less space on the local hard drive. In response to receiving an indication at706that the user has selected a one-click publishing option, the community internet drive can execute, at708, the various functions needed to publish content for the user. For example, at708, the community internet drive can be configured to set default permissions and move or link the file in the published subtree, automatically fill in descriptive information to the extent possible, generate a preview/thumbnail, notify all members of the in-group, upload to in-group members who have requested their own local copies by default, among other things. At710, hub management components can be notified of the publication. For example, the independent publisher using module516can be configured to report the action and new content to both modules510and514at the managed hub. Based on user preferences, the managed hub, using module514, can propagate the news and notify all members of an in-group using module526.
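The notification flow at710can be sketched with a hypothetical hub object. The class below is an illustrative stand-in for the hub-side state kept by modules510and514and the member notifications sent via module526; none of its names come from the disclosure.

```python
class ManagedHub:
    """Illustrative stand-in for the hub's publication tracking."""

    def __init__(self):
        self.catalog = []  # (publisher_id, uri) records, as in modules 510/514
        self.outbox = []   # (member, message) notifications, as in module 526

    def report_publication(self, publisher_id, uri, in_group):
        """Record a newly published item and queue in-group notifications."""
        self.catalog.append((publisher_id, uri))
        for member in in_group:
            self.outbox.append((member, f"{publisher_id} published {uri}"))
```

After the independent publisher reports a one-click publication, the content is findable via the catalog and every in-group member has a pending notification.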
Returning to706, in response to determining that the one-click publishing option was not selected, the system can be configured to enable the user to, at712, drag and drop a file for publishing into the subtree on the local drive that was defined at622inFIG.6. In different implementations, these actions can be accessed and controlled through a web browser, through an application with a custom user interface, and/or by using the operating system directly. At714, the system can be configured to enable the user to set or adjust permissions, and/or to automatically set or adjust them, based on, for example, user preferences and/or the publishing folder's settings. In some embodiments, some or all of the permissions can be adjusted by the user at714. In some embodiments, a plug-in or other component can enable the community internet drive to provide the user a view of the published files offered to the user by the community internet drive, which may also be used as a confirmation to the user that the publishing is completed at that moment. The independent publisher can also be configured to generate and transmit (from module516to module510and module514) a notification of the publication, at710, such that the newly published content can be integrated into the managed hub functionality. For example, the managed hub may let the user, in-group users and/or other users find the newly published content. Returning to704, in response to determining the user would rather proceed with a custom or other type of non-easy publication, process700may proceed to716to identify the source within their storage system. At718, the system can enable the user to choose a destination location where it will appear on the community internet drive as part of the published subtree. The community internet drive may be configured to support multiple operating systems, for example Microsoft's Windows®, Apple Inc.'s Mac OS®, and Linux.
Local disk drive and file access can be part of a browser plug-in with permissions, or part of an independent application written in Java, C++, C#, or some other language. Local file access can be written using Boost or similar libraries to achieve OS independence at compilation, or custom clients can exist at the source code level for each platform. In practice trade-offs for performance, ease of use, and reliability may lead those skilled in the art to selections of different equivalents that achieve the same result of examining the local file system. At720, the system can be configured to confirm the user has the proper permissions and/or other authorizations to publish at that location. For example, the system can be configured to determine at720, among other things, that the user has ownership of the namespace the content would be published under. In some embodiments, a determination can be made at722whether or not one or more symbolic links should be created for files and/or folders. For example, files and/or folders that do not exist at the mirrored location on the local drive corresponding to the published location on the community internet drive can be created as symbolic links. In some embodiments, a full copy can be made instead or in addition to a symbolic link. When locations differ and a determination to generate a link is made at722, the independent publisher (and/or other component) can be configured to persist the data at724to restore that publishing between sessions. Another publishing decision that may be made is whether to share application data at726. The determination at726may be made after724and/or after determining at722not to generate a link. Application data may come from a file with an application association, such as word processing files, spreadsheet files, presentations, audio, video, and image files.
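The link-versus-copy decision at722can be sketched as a small staging helper. This is an illustrative sketch under assumed names (and assumes a filesystem that supports symbolic links): a symlink is fast and uses little local space, while a full copy is more redundant, matching the trade-off described above.

```python
import os
import shutil

def stage_for_publish(source: str, published_dir: str,
                      prefer_link: bool = True) -> str:
    """Mirror `source` under the published subtree.

    Creates a symbolic link when prefer_link is True (fast, space-saving)
    or a full copy otherwise (more redundant and secure), per member
    preference. Returns the path created inside the published subtree.
    """
    os.makedirs(published_dir, exist_ok=True)
    dest = os.path.join(published_dir, os.path.basename(source))
    if prefer_link:
        os.symlink(os.path.abspath(source), dest)
    else:
        shutil.copy2(source, dest)
    return dest
```

In either case the independent publisher would then persist the staging record (as at724) so the publication survives between sessions.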
The community internet drive can incorporate understanding of such file formats and applications so that some or all of the file can be streamed or published in part. In response to determining at726that application data should be shared, data can be accessed at728based on requests and queries from the consumer, including but not limited to file chunks, document pages, copies of music and video tracks, streamed music and video tracks, video sequences, and database queries. In some embodiments, independent publishers may be configured to persist the application association that can process requests for application data. Process700may then proceed to710after728or after determining at726that application data should not be shared. After710, process700may proceed to730and proceed to complete the publishing process. The file from the independent publishers may already be there to be discovered, but the publishing user and/or other users may want to pass around references to that data. A determination can be made at730whether to create a reference. In response to determining a reference link should be created at730, a link may be created at732. One of the more simple options is to generate a link as a URI. The community internet drive can be configured to use that link to retrieve the data when it is presented by an authorized user, even recovering from outages at the publisher or deletion of the original source, when copies are available. In some embodiments, process700may end after732at734. In some embodiments, one or more additional options may be provided to, for example, the publishing user and/or owner of the data. For example, in response to determining generation of a reference link is to be omitted at730, a determination can be made at736as to whether to create an access seed to the data. If an access seed is not to be generated, process700may end at734.
If an access seed is to be generated, process700can proceed to738to create one or more access seeds by combining URIs, keys, and user IDs, among other things, to grant individual access to data that would otherwise typically have higher restrictions. In some embodiments, granting individual private access to a public file may be harmless. The access seed creator can be configured to combine, at738, reference information like the URI and the user ID within the community internet drive along with security keys, and writes them into a file. At740, that file can be sent to the identified consumer. FIG.8shows an example of the user's perspective of downloading data from the community internet drive. The ease of use of combined features of the community internet drive can be a valuable aspect of some embodiments discussed herein. Users may be given three easy options for download. They can click on an access seed at802, click on a link at804, or, at806, discover the file or folder on the community internet drive by browsing or searching. Opening the access seed at802(e.g., with a software application) starts the download automatically. If the consumer has opened a link at804, the consumer can evaluate descriptions and metadata to decide whether to move on to saving that file or folder locally at808. If the destination folder already exists, the community internet drive can synchronize and patch contents rather than re-downloading data that is unchanged. If the consumer has discovered the file at806by exploring the community internet drive and thus has a view of the hierarchy exposed, the consumer can save to a default location or drag and drop at810the file or folder where they wish. If the newly created and downloaded copy is itself published, the user can set or adjust permissions at812, which may also occur after802and/or808. FIGS.8A-11show example processes that can function underneath the user's perspective ofFIG.8.
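The access seed construction at738described above can be sketched as follows. The JSON layout, the HMAC token, and all names here are assumptions for illustration; the disclosure says only that the seed combines reference information like the URI and user ID with security keys and writes them into a file.

```python
import hashlib
import hmac
import json

def create_access_seed(uri: str, grantee_id: str,
                       publisher_key: bytes, out_path: str) -> dict:
    """Bundle a URI, the target member's ID, and a derived token into a
    small seed file granting that individual access."""
    token = hmac.new(publisher_key,
                     f"{uri}|{grantee_id}".encode(),
                     hashlib.sha256).hexdigest()
    seed = {"uri": uri, "grantee": grantee_id, "token": token}
    with open(out_path, "w") as fh:
        json.dump(seed, fh)
    return seed
```

The resulting file would then be sent to the identified consumer (as at740), for example as an email attachment or via messaging within the community internet drive.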
For example,FIG.8Ashows an example process for initiating a download from a URI link. At814, a determination is made whether to open the URI from a browser, including any plug-in modules. Browsers excel at ease of use and an intuitive interface, while custom applications excel when more efficiency, access to hardware, and UI complexity are necessary. If a browser is used, clicking on the link and/or pasting it in the browser at816can function as a request for the file from the community internet drive. In some embodiments, client application software, firmware and/or hardware that services downloads can also be configured to offer a user interface (UI) for communicating with the managed hub. Instead of using a browser, a determination can be made at814to open that client application at818and then paste or type the link at820into the UI. In either case, the managed hub can be configured to authenticate the user at822and confirm the consumer's privileges at824to view the data. At this moment, the consumer is in the process of making a decision whether to download the content, and the managed hub can be configured to display any descriptive metadata it has stored. In response to determining at826that the consumer desires more detail, additional descriptive metadata can be requested. The community internet drive can be configured to examine the pool of data from all the community members and select the set of copies where the consumer has access permission. The managed hub can then request more information from independent publishers in that set. At828, the metadata including but not limited to file size, modification and creation dates, descriptions, directory listings, number of online peers in the set, number of downloads, and ratings are displayed to the user to inform the user's decision. Any previews of the contents can also be provided at828. 
These can be custom-built by the publisher, such as pages from a book, copies of tracks of music or video, streamed tracks of music or video, lower-resolution images or video, cropped pictures, or time-cut sequences. Associated applications can also automatically generate and provide low-resolution previews as copies or streams at 828. In response to determining at 830 not to download the file/folder, the process ends at 832 (by, e.g., proceeding to 812 of FIG. 8). In response to determining at 830 to download the file/folder, the storage location can be selected at 834 and the file/folder downloaded. For example, the user can be provided the option to accept a default location or select the storage location on their local drive. When the selected storage location is within a subtree already published to the community internet drive, the subtree can be selected based on information already available at the hub. To download the file/folder from a web browser, a temporary copy could be cached in a temporary space. To select a storage location for a long-term or larger copy, how to traverse the file system may vary based on the programming language and operating system at the community machine being used by the user. In this regard, the community internet drive can be configured to implement a multiplicity of traversal schemes, as the community internet drive can be configured to support multiple operating systems, be implemented as a web browser plug-in (e.g., with permissions), and/or be implemented as an independent application written in Java, C++, C#, and/or some other programming language. For example, in some embodiments, the community internet drive can be configured using Boost or similar libraries to achieve OS independence at compilation, and/or custom clients can exist at the source code level for each platform.
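As one concrete instance of such a traversal scheme, a Python client could enumerate a selected subtree with the standard library's portable `os.walk`; any equivalent per-platform scheme would serve the same purpose. This sketch is illustrative, not the implementation of any particular embodiment.

```python
import os

def enumerate_subtree(root):
    # Walk the selected storage subtree and yield every file path,
    # independent of the underlying operating system.
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            yield os.path.join(dirpath, name)
```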
In practice, trade-offs among performance, ease of use, and reliability lead those skilled in the art to select different equivalents that achieve the same result of traversing the local file system. The process may then end at 832 (by, e.g., proceeding to 812 of FIG. 8). FIG. 8B shows an example of initiating a download from an access seed as discussed in connection with, e.g., 802 of FIG. 8. The community machine can have received the access seed from any source, including email attachments or messaging within the community internet drive. A determination can be made at 836 as to whether to open the access seed with a browser. In response to determining at 836 that the access seed is to be opened by a web browser, the access seed may be opened at 838 using, for example, one or more plug-in modules. For example, a request for the file or folder from the community internet drive can cause a client application that services downloads to offer a user interface (UI) for communicating with the managed hub. In response to determining at 836 that the access seed should be opened in a manner other than using a browser, the access seed may be opened with a client application at 840, for example. In some embodiments, application associations based on file extensions can open the access seed and application together at once. After 838 or 840, the managed hub can be configured to authenticate the user at 842. The system can also be configured to confirm at 844 that the packaged keys are authorized to grant that particular user access to that data owned by the publisher. Because the consumer at this moment is still in the process of deciding whether to download the content in some instances, the system can be configured to display any descriptive metadata it has stored. In response to determining at 846 that the consumer desires more detail, additional descriptive metadata can be requested and displayed at 848.
For example, the community internet drive can be configured to examine the pool of data from one or more (including all) of the community members and select the set of copies where the consumer has access permission. The system can then be configured to request more information from any of the independent publishers that both hold a duplicate copy and allow access to the downloader. The metadata displayed at 848 may include, for example, file size, modification and creation dates, descriptions, directory listings, number of online peers in the set, number of downloads, and/or ratings to inform the user's decision. Any available previews of the contents can also be viewed at 848, when such previews are provided by the publisher and supported by an associated application. In response to determining to forgo obtaining descriptive data or after displaying the data, a determination can be made at 850 as to whether to download the file/folder. If so, the storage location can be selected at 852. If not, a default location can be used. If a default location is used and/or after 852, the process can end at 854 (by, e.g., proceeding to 812 of FIG. 8). FIG. 8C shows an example of using the community internet drive to discover content for downloading as discussed in connection with 806 of FIG. 8. At 856, a determination can be made as to whether a browser should be used to access the community internet drive. In response to determining that an application or something other than a web browser should be used, a client application, for example, can be opened at 858 to access the community internet drive. In response to determining that a web browser should be used and/or after accessing the community internet drive via another vehicle, the system can be configured to authenticate the user at 860.
Based on the user's identity, the system can be configured, at 862, to limit the user's view to content and other data that the user has permission to access, where typically that data will be personal and private, fully public, or limited within a social in-group. At 864, the system can be configured to display the user's home location as a starting point. Each user's home location may include, for example, content the user published and/or favorite connections to content and/or members of their in-group, among other things. From the home location, a determination can be made at 866 whether nodes should be traversed. For example, starting either from the root or from any of the user's preferred connections, one or more nodes can be traversed and the various files/folders can be explored at 868. In response to determining that nodes are not to be traversed at this time, a determination can be made at 870 as to whether a subtree should be searched and/or otherwise explored. If so, one or more subtrees can be explored at 868. If not, a determination can be made at 872 whether or not the exploring should be stopped (e.g., in response to receiving a user indication that the exploring is complete). If exploring is to be continued, 868 can be executed. In response to determining at 872 that the exploring is complete, a source file and/or folder of interest can be selected at 874. In some embodiments, 874 may be performed while the consumer is in the process of deciding whether to download the content, and the system can determine at 876 whether or not descriptive metadata and/or other data that has been stored should be displayed. In response to determining at 876 that descriptive data is to be displayed (e.g., in response to receiving an indication of the user's desire to view more detail), additional data can be requested and displayed at 878.
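The node and subtree exploration at 866-870, restricted by the permission filtering at 862, can be sketched with a toy namespace tree. The node shape used here (an access list plus named children) is an assumption for illustration only.

```python
def explore(node, user, path="/"):
    # Yield only the paths this user may see; a node the user cannot
    # access hides its whole subtree, mirroring the view limit at 862.
    if user not in node["acl"]:
        return
    yield path
    for name, child in node["children"].items():
        child_path = path.rstrip("/") + "/" + name
        yield from explore(child, user, child_path)
```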
For example, the community internet drive can be configured to examine the pool of data from all the community members and select the set of copies where the consumer has access permission. The system can also or instead be configured to request more information from any of the independent publishers that both hold a duplicate copy and allow access to the downloader. Metadata including but not limited to file size, modification and creation dates, descriptions, directory listings, number of online peers in the set, number of downloads, and ratings can be displayed at 878 to the user to inform their decision. Any available previews of the contents might also be presented at 878. In response to determining that descriptive data is not to be displayed or after displaying the descriptive data, a determination can be made at 880 as to whether or not a file/folder should be downloaded. If not, the user can continue to explore at 868. In some embodiments, an option may be provided enabling the process to end at any point (like other processes discussed herein). In response to determining at 880 that a file/folder is to be downloaded, an option may be provided at 882 enabling the user to select the storage location. A default location and/or a user-specified location may be selected for the download. The location may be, for example, on the user's community machine's local storage device, among other places. The process may then end at 884 (by, e.g., proceeding to 810 or 812 of FIG. 8). FIG. 9 shows an example process of how the managed hub can configure the system to arrange the P2P servicing of a consumer's download request. The process can start at 902 by receiving a request to download a file/folder from a peer. At 904, a determination can be made whether the first peer to contact (e.g., the peer associated with the independent publishing user referenced in the URI) is online. If so, the requesting user's community machine is connected by the system to the publishing user at 906.
Connecting peers at 906 may include sending different instructions to the requesting user (also referred to herein as the "consumer") and the publishing user (also referred to herein as the "publisher"), where a duplicate host can be acting for the publisher. The machines of both the consumer and the publisher may request and/or receive IP addresses and credentials for the other machine from the managed hub. Additionally, the managed hub provides both of the machines with the file identifier. Consumers may utilize decryption and file re-assembly instructions in some or all instances, again as instructed by the managed hub. In some embodiments, for reasons of privacy and security, a duplicate host may be uninformed of the encryption, decryption, and/or file splitting, among other things, of the data it is uploading on command from the managed hub. A publishing host, in some embodiments, can be given instructions for encryption and file splitting. File splitting may have any number of modes and, for example, may have two modes. When two modes are implemented, one mode can be used to reduce the size at the duplicate host, and the second can be used to add a barrier to the duplicate host snooping into the contents. Post-encryption, the system can be configured to enable some bit patterns to be removed from the data and stored on, for example, different hosts. Host recipients of the split files can be kept ignorant, in some embodiments, of each other, the combined data identity, any encryption, and/or whether they hold a complete file or some split, among other things. In response to determining at 904 that the URI source is not currently online, the download request can be queued at 908 for when that publishing user and/or the related machine comes back online. In some embodiments, other copies of the requested content that may be stored on other users' community machine(s) can be searched for the publishing user.
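The post-encryption removal of bit patterns can be illustrated with a simple byte-level scheme: every stride-th byte of the ciphertext is held out and stored on a different host, so the remainder alone cannot be decrypted. The stride-based pattern is an invented example; embodiments could remove any pattern, including at the bit level.

```python
def split_for_hosts(ciphertext, stride=4):
    # Hold out every `stride`-th byte for a separate host; the duplicate
    # host only ever sees the (smaller, unreadable) remainder.
    held_out = ciphertext[::stride]
    remainder = bytes(b for i, b in enumerate(ciphertext) if i % stride)
    return remainder, held_out

def reassemble(remainder, held_out, stride=4):
    # The downloader, instructed by the managed hub, interleaves the
    # parts back into the original ciphertext before decryption.
    out, rest = bytearray(), iter(remainder)
    for i in range(len(remainder) + len(held_out)):
        out.append(held_out[i // stride] if i % stride == 0 else next(rest))
    return bytes(out)
```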
After the requesting user's machine is connected, at 906, to a machine having the publishing user's content and/or after queuing the request at 908, a determination can be made at 910 whether there are in-group members online with permissions open for download to that consumer. In response to finding such in-group members, those in-group members can be connected at 912 (which may be the same as or similar to the functionality discussed in connection with 906). Those in-group members will then join in sharing use of their storage and bandwidth to service the download as described at modules 518 and 520, typically with a friend or family member. In response to determining at 910 that there are no in-group members online with permissions also open for download to that consumer, a download request for each of those in-group members can be queued at 914, similar to or the same as the queuing at 908. At 916, a determination can be made as to whether any public sources are online that may contain the same data. In response to finding such a public source at 916, the public source machine can be connected at 912 (which may be the same as or similar to the functionality discussed in connection with 906). In response to determining at 916 that such a public source is not currently online, some embodiments may be configured to queue, at 920, the public sources that were found at 916 to (also or instead) service the download. In some embodiments, whether the public sources are queued is correlated with the number of peers found at 904 and/or 910. The managed hub can make an estimate of the expected service to the consumer (essentially how long the download will take) based on other activity levels in the community internet drive plus the system load on the public uploaders, and make a determination of a fair allocation of resources and service. At 922, a determination can be made as to whether or not the download has successfully commenced. If so, download progress is reported to the user at 924.
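The staged source selection at 904-920 amounts to connecting whichever permitted sources are currently online and queuing the rest for when they return. A compressed sketch follows; the peer names and the list shape are illustrative assumptions.

```python
def arrange_download(sources):
    # `sources` lists (peer, online?) pairs in the order the hub tries
    # them: the URI's publisher first, then permitted in-group members,
    # then public copies. Online peers are connected (as at 906/912);
    # offline ones are queued (as at 908/914/920).
    connect = [peer for peer, online in sources if online]
    queue = [peer for peer, online in sources if not online]
    return connect, queue
```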
The system, for example, can be configured to report progress to the user's community machine whether the download was initiated from a browser, the application, or otherwise. The process may then end at 926. If at 922 it is determined that the download did not successfully commence, the user can be warned at 928 to leave the community machine online to wait for a queued download. The user can also be offered derivative copies at 930 if, for example, a publisher had a copy but has since modified it, or at least changed the modification date since the data's original download. The process may then end at 926. FIG. 10 shows an example of P2P communications that may happen behind FIG. 8. After the hub makes an introduction, the peers can contact each other at 1002. In some embodiments, the system may be configured such that either peer can initiate the communication without an effect on security. For example, some embodiments may be configured to protect the user identities and IP addresses except as authorized by each user. The two peers then handshake at 1004 based on the system's introduction at 1002. Handshaking can include, for example, a determination as to whether the peers have been associated with each other as mutual members of each other's in-groups, where keys and/or other identifying data were previously exchanged between the peers, or whether this is a one-time introduction authenticated at the managed hub. At 1006, a determination can be made as to whether the contact between the peers is based on special access bundled in an access seed. As such, the execution of 1006 may serve as a third level of identification. If so, the access seed can be verified at 1008. After 1008 and/or after a determination is made at 1006 that an access seed was not used as a basis for the peers to contact each other, a determination can be made at 1010 as to whether application data (rather than, e.g., a complete file or folder) is being requested.
As in 726 and 728, the community internet drive can parse or otherwise meaningfully examine file formats for known application types. In response to determining that application data is being requested, the publisher can be enabled to select the data of interest at 1012 based on, e.g., that application's view of the data. Examples include but are not limited to pages in a document, copies of video or music tracks, streamed video or music tracks, and/or records from a database or an Excel file, among other things. After 1012 and/or after determining that application data is not being requested at 1010, the peers may then agree to transfer the data, and start transferring that data in chunks at 1014. The integrity of each chunk of data can be validated at 1016. For example, each chunk can be encrypted as directed by the system for the transfer, which can have a special meaning when the receiving peer is hosting a backup or duplicate and is never given the key for decryption. For further protection, a backup host can be missing parts of the file, even at a bit level, that would interfere with the decrypting of the file should the key ever be obtained. Even though that backup host might have a partial and encrypted copy that the backup host cannot read, the system can be configured to aid the downloader to assemble a complete copy and provide the key for decryption. In some embodiments, if the user selected a file or folder (as opposed to, e.g., application data), the resulting data can be written to the local storage. As another example, when the peer is hosting a backup, the known duplicate can be written to local storage. In some embodiments, the community internet drive may be configured to implement a multiplicity of writing schemes used to execute the functionality discussed herein. At the completion of the data transfer, the exactness of the new copy can be confirmed at 1018.
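The chunked transfer at 1014 and per-chunk validation at 1016 can be sketched with digests standing in for the transfer encryption. The chunk size and digest choice here are illustrative assumptions, not those of any embodiment.

```python
import hashlib

def send_in_chunks(data, chunk_size=4):
    # Sender side: emit each chunk with a digest the receiver can check.
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        yield chunk, hashlib.sha256(chunk).hexdigest()

def receive_chunks(stream):
    # Receiver side: validate every chunk's integrity before keeping it;
    # a failed chunk would be re-requested (repeating 1014).
    assembled = bytearray()
    for chunk, digest in stream:
        if hashlib.sha256(chunk).hexdigest() != digest:
            raise ValueError("corrupt chunk; retransfer needed")
        assembled += chunk
    return bytes(assembled)
```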
Exactness verification can be included in the managed hub functionality of the system when, for example, the download included hosts with only partial and/or encrypted copies of the data. This may add protection from peers masquerading for access and counterfeiting their ownership of data when they do not have rights. In response to determining at 1018 that the copy was not exact, 1014 can be repeated. In response to determining at 1018 that the copy is exact, the connections can be closed and the managed hub of the system can be notified of the availability of another copy at 1020. The managed hub will associate this new copy with the original progenitor, so that future downloads can activate this peer as an identical data source for any instance or copy of that progenitor file. The process may then end at 1022. FIG. 11 shows example operations that may be performed at the managed hub when a file is published. The managed hub running on the system may be configured to receive a publication notice at 1102. Receiving such notice may cause the system to make the URI a defined element of the drive, and/or the data available for access. At 1104, the managed hub can cause the system to save the descriptive data associated with the published file, so that the published file remains discoverable when the publisher is offline. The descriptive data can also include previews of the published file in some embodiments. At 1106, account data for the publisher can be examined to determine if the publisher is entitled to a backup and/or any other services. The determination at 1106 may be made based on the publisher's level of participation in the community internet drive, any associated upgraded services, and/or the publisher's preferences. In response to determining at 1106 that the publisher is not entitled to the community internet drive backing up the publisher's data and/or performing any other services, the process may end at 1108.
In response to determining at 1106 that the publisher is entitled to an enhanced service level, such as the community internet drive storing one or more redundant copies of the publisher's data for better reliability and quality of service, a determination can be made at 1110 where to save the data. For example, at 1110, the managed hub can cause the system to determine whether the highest reliability is appropriate by saving the data at the server machines. Various factors may be considered in making the determination at 1110, including publisher participation, publisher service upgrades, rarity of data, freshness of data, and/or data size, among other things. If, for example, the managed hub determines there is sufficient value to the data, the data can be copied at 1112 to a networked database managed by the managed hub absent the control of a peer user. In response to determining at 1110 not to save the data to a networked database or after saving the data at the networked database at 1112, the data may be backed up to one or more community machines under one or more peers' control. In some embodiments, for scalability and/or other reasons, the data will not be saved at the hub at 1112, and backups at 1114 can be made instead. Factors considered for determining the number of and which peers' community machines to activate for the backup of the publisher's data may include, for example, publisher participation, publisher service upgrades, rarity of data, freshness of data, data size, and/or any privacy limitations expressed by the publisher if they do not trust encryption, among other things. For example, a publisher might only allow replication of their data onto community machines associated with the publisher's identified in-group. Similarly, the member hosting the duplicate, as explained at module 520, may be more generous in hosting data from an in-group member, who is typically a friend or family member.
The process may then end at 1108. The peer-to-peer communications of FIG. 10 may function the same when the managed hub initiates the data transfer for creating backup copies as they do during a regular member-initiated download. As such, a number of advantages may be realized by various embodiments discussed herein. For example, the centralization of community file and data management creates a whole greater than the parts. Local hard drives and other storage can become interconnected into a vast whole. Once interconnected, the ease and efficiency of transfers, searches, and comparisons improve relative to prior systems. Multiple internal operations can be aggregated and chained into single inclusive steps for the user. Ultimately, embodiments discussed herein can improve publishing, discovery, and transferring of data through combinations of features. When a community member joins through registration, the new member may receive a personal namespace (such as, e.g., through an internet URL) and file sharing functionality on the member's community machine(s). After that, the new member does not have to select and purchase a domain name, contract for content hosting, or install code to maintain a web server, FTP server, or Tomcat, so that a broader population can contribute data and files. Basic publishing can be exactly as simple as drag and drop. Basic downloading can be equally as simple with drag and drop. The pool of community data can be searchable and browsable for discovery, along with announcements to an in-group of new content. Massive files, such as high-definition video, can be shared beyond what can be sent with email, or what might fit on a compact disk, flash drive, and/or other removable storage devices. Physical copies such as to compact disks, digital versatile disks, Blu-ray disks, and/or flash drives can be obviated, and the data passes directly onto the consumer's storage (such as a hard drive or solid state drive).
Data management and user authentication provided by embodiments discussed herein can protect member identities and IP addresses when members would rather avoid sharing data with the broader public beyond the friends listed in an in-group. Because some embodiments provide bandwidth management for performance and scalability, that scalability allows the data variety and redundancy to create value. Because of the greater ease of use for publishing and downloading, users can link file addresses for data existing on their home drives in forums, blogs, diggs, SMS, and instant messenger traffic. Because copies and duplicates are tracked across the community internet drive, link rot is mitigated: an exactly equivalent copy can often be delivered. More consumer choice is enabled for what, when, and where to download, across a rich and browsable namespace. In some embodiments, the familiar view of a file path is meshed with the domain URL for the community internet drive. Hard drives, SSDs, and any other storage devices can be linked into one massive virtual drive across various geographical locations, such as across the planet, or to the limits of the internet and broadband communications, or to the limits set by one or more users. Files on personal storage devices can be linked and downloaded through a URL the consumer opens in the consumer's web browser. The community internet drive, or portions of it, can be integrated with the OS on the local machine to appear as a connected physical drive. Removing the barriers and gaps makes the entire system more intuitive for the user. As such, in some embodiments, the disjoint pieces become one coherent whole. Many modifications and other embodiments of the inventions set forth herein will come to mind to one skilled in the art to which these inventions pertain having the benefit of the teachings presented in the foregoing descriptions and the associated drawings.
For example, while the discussion herein often references examples based on reading and storing data, the participating members of the community internet drive may also manage editing access to certain of their data and storage without departing from the spirit of the invention. Therefore, it is to be understood that the inventions are not to be limited to the specific embodiments specifically disclosed herein and that modifications and other embodiments are intended to be included within the scope of the appended claims. Although specific terms are employed herein, they are used in a generic and descriptive sense only and not for purposes of limitation.
11863381

DESCRIPTION OF EMBODIMENTS

The following clearly and completely describes the technical solutions in the embodiments of the present disclosure with reference to the accompanying drawings in the embodiments of the present disclosure. Apparently, the described embodiments are merely some but not all of the embodiments of the present disclosure. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present disclosure without creative efforts shall fall within the protection scope of the present disclosure. The expression "and/or" used in the claims and the specification means at least one of the connected objects. The following description provides examples and does not limit the scope, applicability, or configuration set forth in the claims. Alterations may be made to the functions and arrangements of the discussed elements without departing from the spirit and scope of the present disclosure. In various examples, various procedures or components may be omitted, replaced, or added as appropriate. For example, the described methods can be performed in a different order from that described, and various steps can be added, omitted, or combined. In addition, features described with reference to some examples may be combined in other examples. Referring to FIG. 1, FIG. 1 is a structural diagram of a network system to which an embodiment of the present disclosure can be applied. As shown in FIG. 1, the network system includes a user terminal 11, a first base station 12, and a second base station 13. The user terminal 11 may be user equipment (User Equipment, UE), for example, a terminal side device such as a mobile phone, a tablet computer (Tablet Personal Computer), a laptop computer (Laptop Computer), a personal digital assistant (personal digital assistant, PDA), a mobile Internet device (Mobile Internet Device, MID), or a wearable device (Wearable Device).
It should be noted that a specific type of the user terminal 11 is not limited in this embodiment of the present disclosure. The first base station 12 and the second base station 13 may be base stations of 5G or later releases (for example, a gNB or a 5G NR NB), or base stations in other communications systems, or may be referred to as NodeBs, evolved NodeBs, transmitting receiving points (transmitting receiving point, TRP), or other terms in the art. Provided that the same technical effects are achieved, the base stations are not limited to specific technical terms. It should be noted that in the embodiments of the present disclosure, the 5G base station is merely used as an example, but specific types of the base stations are not limited. Embodiments of the present disclosure provide a reconfiguration method, applied to a terminal, where the terminal is connected to at least two base stations. To help a person skilled in the art better understand the technical solutions in the embodiments of the present disclosure, the following descriptions are provided first.

(1) Dual Connectivity or Multi-Connectivity

Dual connectivity is a technology introduced in the long term evolution (Long Term Evolution, LTE) system, and will also be used in new radio (New Radio, NR). Dual connectivity means that UE can connect to two base stations at the same time, and the two base stations provide data receiving and sending services for the user equipment or terminal (User Equipment, UE) at the same time. Since radio resources of the two base stations can be used at the same time, the transmission rate of the service data of the UE doubles. There is a signaling interface between the two base stations serving the same UE, so that the two base stations can exchange related configuration information of the UE.
The base stations serving the UE in dual connectivity may belong to the same radio access type (Radio Access Type, RAT), for example, two LTE eNBs, or may belong to different RATs, for example, one LTE eNB and one NR gNB. One of the base stations serving the UE in dual connectivity is a master base station (Master Node, MN), and the other is a secondary base station (Secondary Node, SN). Each base station can support carrier aggregation (Carrier Aggregation, CA). A network configures two special cells (special cell) for the UE in dual connectivity, that is, configures a serving cell of the MN as a primary cell (Primary Cell, PCell) of the UE, and configures a serving cell of the SN as a primary secondary cell (Primary Secondary Cell, PSCell) of the UE. Other cells of the MN and the SN that serve the UE are secondary cells (Secondary Cell, SCell) of the UE. Multi-connectivity means that more than two base stations serve the same UE, and is similar to dual connectivity. One of the base stations serving the UE in multi-connectivity is a master base station (Master Node, MN), and the others are secondary base stations (Secondary Node, SN). Each base station can support CA. A network configures multiple special cells (special cell) for the UE in multi-connectivity, that is, configures a serving cell of the MN as a primary cell (Primary Cell, PCell) of the UE, and configures a serving cell of each SN as a primary secondary cell (Primary Secondary Cell, PSCell) of the UE. Other cells of the MN and the SNs that serve the UE are secondary cells (Secondary Cell, SCell) of the UE.

(2) Carrier Aggregation

In LTE, the maximum system bandwidth of each cell is 20 MHz. One base station may manage multiple cells with different center frequencies.
When the UE with a CA capability needs a large bandwidth (for example, needs to download a large file at a high speed), the base station may configure the multiple cells with different frequencies that are managed by the base station (the UE needs to be within the coverage of the multiple cells with those frequencies) to transmit data for the UE at the same time. For example, five cells of 20 MHz are configured for the UE, so that the UE can transmit data in a 100 MHz bandwidth at the same time. The base station configures, for the UE in a connected state by using RRC signaling, a set of carriers that can be aggregated. Among the cells of the set of aggregated carriers, one cell is a primary cell (Primary Cell, PCell), and the other cells are secondary cells (Secondary Cell, SCell). An NR system also uses a carrier aggregation technology similar to that of LTE.

(3) RLM and RLF

In LTE and NR systems, the UE monitors whether there is a radio link failure (Radio Link Failure, RLF) through a radio link monitor (Radio Link Monitor, RLM) function. After determining that there is an RLF, the UE performs a corresponding link restoration procedure. The RLM is performed only in a PCell and a PSCell.

(3.1) RLM and RLF in a PCell

In the RLM function of LTE, the UE monitors a radio link by measuring the signal to interference plus noise ratio (SINR) of the cell reference signal (CRS) corresponding to the physical downlink control channel (PDCCH) of the PCell. When the physical layer (L1) of the UE obtains through measurement that the SINR of the CRS corresponding to the PDCCH of the PCell is lower than a threshold, it is considered that the radio link is "out of sync". The physical layer notifies an upper layer (RRC layer, L3) with an out-of-sync indication. If the RRC layer receives N310 consecutive out-of-sync indications, the RRC layer of the UE starts a timer T310.
If the measured SINR of the CRS corresponding to the PDCCH of the PCell is higher than a threshold, it is considered that the radio link is “in sync”. In this case, the physical layer notifies the upper layer (RRC layer) of an in-sync indication. If the RRC layer receives N311 consecutive in-sync indications while the timer T310 is running, the UE stops the timer T310. If the timer T310 expires, the UE determines that the UE has a radio link failure (RLF), and starts a timer T311. The UE tries to search for a suitable cell for RRC connection re-establishment while T311 is running. After the UE determines the RLF and before the re-establishment succeeds, the exchange of user plane data between the UE and the network is interrupted. If the re-establishment does not succeed before T311 expires, the UE switches from the RRC-connected (RRC-CONNECTED) state to the RRC-idle (RRC-IDLE) state. The values of N310 and N311 and the durations of T310 and T311 are all configured by the network. The RLM process of NR is similar to that of LTE. In NR, the RLM reference signal (RS) detected in the PCell is configured by the network. As can be seen from the foregoing descriptions, in the process of RRC connection re-establishment, the transmission that is being performed by the UE needs to be interrupted. In dual connectivity or multi-connectivity, a signaling message may also be transmitted between the UE and the SN (for example, through a split SRB1 or an SRB3). Therefore, when a radio link failure occurs between the UE and the MN, the re-establishment process may be skipped; instead, the SN that can still communicate reports radio connection failure information to the network, and the UE is reconfigured by the network.
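The N310/T310/N311/T311 interaction described above can be sketched as a small state machine. The following is an illustrative simplification, not the 3GPP implementation: the class name, method names, and the tick-based timers are assumptions for this sketch.

```python
class RlmMonitor:
    """Simplified sketch of LTE/NR radio link monitoring: N310 consecutive
    out-of-sync indications start T310; N311 consecutive in-sync indications
    while T310 runs stop it; T310 expiry declares RLF and starts T311."""

    def __init__(self, n310, n311, t310_ticks, t311_ticks):
        self.n310, self.n311 = n310, n311
        self.t310_ticks, self.t311_ticks = t310_ticks, t311_ticks
        self.oos = 0       # consecutive out-of-sync indications
        self.ins = 0       # consecutive in-sync indications while T310 runs
        self.t310 = None   # remaining ticks; None means not running
        self.t311 = None
        self.rlf = False

    def out_of_sync(self):
        self.ins = 0
        self.oos += 1
        if self.t310 is None and not self.rlf and self.oos >= self.n310:
            self.t310 = self.t310_ticks      # start T310

    def in_sync(self):
        self.oos = 0
        if self.t310 is not None:
            self.ins += 1
            if self.ins >= self.n311:
                self.t310 = None             # link recovered; stop T310
                self.ins = 0

    def tick(self):
        """Advance timers by one tick (the tick granularity is an assumption)."""
        if self.t310 is not None:
            self.t310 -= 1
            if self.t310 <= 0:               # T310 expiry: declare RLF, start T311
                self.t310 = None
                self.rlf = True
                self.t311 = self.t311_ticks
```

For example, with N310 = 2 and N311 = 1, two out-of-sync indications start T310, a single in-sync indication stops it, and letting T310 run out declares the RLF and starts T311.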
On this basis, as shown in FIG. 2, an embodiment of the present disclosure provides a reconfiguration method, applied to a terminal, where the terminal is connected to at least two base stations, and the reconfiguration method includes:
Step 201: When a radio connection failure occurs between the terminal and a master base station MN, report a radio connection failure indication of the MN to a secondary base station SN.
Herein, a radio connection failure includes the following cases:
a radio link failure occurs between the UE and an MN (for example, a timer T310 set by the UE to detect the downlink quality of the MN expires; the UE performs MAC layer RACH attempts the maximum number of times but fails; or the UE performs RLC layer AM mode retransmissions the maximum number of times but fails);
the UE has a handover failure;
signaling transmitted on an SRB1 or an SRB2 and received by the UE has an integrity check failure; or
the UE cannot execute an RRC reconfiguration instruction sent by the network (for example, a reconfigured parameter value exceeds a hardware capability of the UE).
That is, the reconfiguration method in this embodiment of the present disclosure is applied to at least one of the following cases: a radio link failure occurs between the UE and an MN, the UE has a handover failure, signaling transmitted on an SRB1 or an SRB2 and received by the UE has an integrity check failure, or the UE cannot execute an RRC reconfiguration instruction sent by the network. In the following description, a radio link failure is used as an example of a radio connection failure. Specifically, when a radio link failure between the terminal and the MN is detected, the terminal generates a radio link failure indication of the MN and reports the MN radio link failure indication to the SN. When the terminal monitors whether a radio link failure occurs between the terminal and the master base station MN, the method described above or other existing mechanisms may be adopted.
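The trigger conditions enumerated in Step 201 can be grouped into a simple cause classification. The following enumeration is purely illustrative; the class name, member names, and string values are assumptions, not identifiers defined by the disclosure or by 3GPP:

```python
from enum import Enum

class MnFailureCause(Enum):
    """Illustrative causes for the MN radio connection failure indication
    of Step 201 (names and values are assumptions)."""
    RLF_T310_EXPIRY = "t310-expiry"            # downlink quality timer expired
    RLF_RACH_FAILURE = "random-access-problem" # max MAC RACH attempts failed
    RLF_RLC_MAX_RETX = "rlc-max-retx"          # max RLC AM retransmissions failed
    HANDOVER_FAILURE = "handover-failure"
    INTEGRITY_FAILURE = "srb-integrity-failure"  # SRB1/SRB2 integrity check failed
    RECONFIG_FAILURE = "reconfig-failure"      # cannot execute RRC reconfiguration

def triggers_mn_failure_report(cause):
    """Each enumerated cause leads to reporting the MN failure indication
    to the SN instead of immediate RRC connection re-establishment."""
    return isinstance(cause, MnFailureCause)
```

Any of these causes puts the terminal on the reporting path of Step 201 rather than the legacy re-establishment path.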
Details are not repeated herein. The radio connection failure indication of the MN includes at least one of an MN radio connection failure reason and a measurement result of the terminal, where the measurement result of the terminal is used by the network to reselect a serving cell for the UE. When the radio connection failure is a radio link failure, the radio connection failure indication of the MN is a radio link failure indication.
Step 202: If receiving an RRC reconfiguration message before a target timer expires, perform reconfiguration processing according to the RRC reconfiguration message.
The RRC reconfiguration message is determined according to the MN radio link failure indication. In the embodiments of the present disclosure, in a process of generating or sending the radio connection failure indication of the MN, the terminal starts the target timer T. A specific start time of the target timer T includes:
a preset moment in a process of generating the radio connection failure indication of the MN, where the preset moment may be a start moment at which a radio resource control RRC layer generates the radio connection failure indication of the MN, or may be an end moment at which the RRC layer generates the radio connection failure indication of the MN, or may be any moment between the start moment and the end moment, and the preset moment is agreed on in a protocol;
a moment at which an RRC layer of the terminal submits the radio connection failure indication of the MN to a lower layer; or
a moment at which the radio connection failure indication of the MN is sent at an air interface.
In addition, the network may notify the UE of a timing length of the target timer T by using dedicated RRC signaling or a system message.
The RRC reconfiguration message is a reconfiguration message including a specific information element (Information Element, IE), and the specific IE may be a synchronous reconfiguration (reconfigurationWithSync) IE, a full configuration IE (fullConfig), a master cell group IE (masterCellGroup), or a failure indication response IE, or may be another specified IE in the RRC reconfiguration message, where the specific type of the specified IE is agreed on in a protocol; or
the RRC reconfiguration message is a reconfiguration message for instructing the terminal to modify a primary cell PCell; or
the RRC reconfiguration message is a reconfiguration message for instructing the terminal to modify a radio link monitoring reference signal RS of a primary cell PCell.
In the embodiments of the present disclosure, when the UE reports the MN connection failure indication to the SN, the network may be reconfiguring the UE at the same time. That is, the UE reports the radio connection failure indication of the MN at a moment T1, and at the same time, the network sends an RRC reconfiguration message. At a subsequent moment T2, the UE receives the reconfiguration message sent by the network. However, the reconfiguration message received by the UE at the moment T2 may not be intended by the network to resolve the problem of the MN radio connection failure. Therefore, the network needs to use a specific identifier/IE to notify the UE that the current reconfiguration can resolve the MN radio connection failure.
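The check that a received reconfiguration actually addresses the MN failure can be sketched as a simple IE inspection. This is a minimal sketch under the assumption that the message is modeled as a dictionary of IEs; the key names mirror the IEs named in the text but are illustrative, not the ASN.1 field names:

```python
# IEs that mark a reconfiguration as resolving the MN radio connection
# failure (names are assumptions mirroring the IEs named in the text).
SPECIFIC_IES = {"reconfigurationWithSync", "fullConfig",
                "masterCellGroup", "failureIndicationResponse"}

def resolves_mn_failure(reconfig_msg):
    """Return True if the reconfiguration message carries one of the
    specific IEs, or instructs the terminal to modify the PCell or the
    PCell's RLM reference signal."""
    if set(reconfig_msg) & SPECIFIC_IES:
        return True
    return bool(reconfig_msg.get("modifyPCell") or
                reconfig_msg.get("modifyPCellRlmRS"))
```

A reconfiguration lacking all of these markers (the T2 case above) would be treated as an ordinary reconfiguration unrelated to the reported failure.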
In the reconfiguration method in this embodiment of the present disclosure, when a radio connection failure occurs between the terminal and the master base station MN, the radio connection failure indication of the MN is reported to the secondary base station SN; and if the RRC reconfiguration message is received before a target timer expires, reconfiguration processing is performed according to the RRC reconfiguration message, to prevent the UE from initiating an RRC connection re-establishment process, and therefore avoid the problem of interruption of data receiving and sending of the UE. Further, the reconfiguration method in this embodiment of the present disclosure further includes: when the RRC layer receives the RRC reconfiguration message or performs the reconfiguration processing according to the RRC reconfiguration message, stopping the target timer. Herein, the RRC reconfiguration message includes an RRC connection reconfiguration (RRC Connection Reconfiguration) message of LTE and an RRC reconfiguration (RRC Reconfiguration) message of NR. In the embodiments of the present disclosure, if the connection between the terminal and the MN is restored, the terminal stops the target timer. Restoration of the radio connection may mean that the RRC layer receives the RRC reconfiguration message or performs the reconfiguration processing according to the RRC reconfiguration message. Further, restoration of the radio connection may also mean that the terminal receives a reconfiguration message including a synchronous reconfiguration IE and the terminal completes a random access process.
On this basis, when the RRC reconfiguration message is a reconfiguration message including a synchronous reconfiguration IE, after the performing reconfiguration processing according to the RRC reconfiguration message, the method further includes: initiating a random access process according to the RRC reconfiguration message; and if the RRC layer of the terminal receives, before the target timer expires, a random access success indication sent by a media access control MAC layer, stopping the target timer. Herein, if the terminal receives a synchronous RRC connection reconfiguration message (a reconfiguration message including a synchronous reconfiguration IE) while the target timer runs, after random access on the RACH succeeds, the terminal stops the target timer T. The RRC reconfiguration message includes an RRC connection reconfiguration (RRC Connection Reconfiguration) message of LTE and an RRC reconfiguration (RRC Reconfiguration) message of NR. Further, after the radio connection failure indication of the MN is reported to the secondary base station SN, the method further includes: if the target timer expires, initiating an RRC connection re-establishment process. Herein, when the UE reports the radio connection failure indication of the MN to the network, the target timer T is started. If the MN radio connection still has not been restored when T expires, the UE performs RRC connection re-establishment. In the reconfiguration method in this embodiment of the present disclosure, the terminal reports the MN connection failure to the SN, so that the network reconfigures the UE in time, to prevent the UE from initiating an RRC connection re-establishment process, and therefore avoid the problem of interruption of data receiving and sending of the UE.
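Putting Step 201, Step 202, and the timer handling above together, the terminal-side procedure can be sketched end to end. This is a simplified model, not the embodiment itself: `send_to_sn` and `wait_event` are assumed callbacks, `wait_event` returns a received reconfiguration message or None once per timer tick, and the returned strings are illustrative outcome labels:

```python
def ue_mn_failure_procedure(send_to_sn, wait_event, timer_ticks):
    """Sketch of the UE behavior after an MN radio connection failure:
    report the failure indication to the SN, start the target timer T,
    then either apply a received RRC reconfiguration (stopping T, and
    performing random access if the message carries a synchronous
    reconfiguration IE) or, on T expiry, fall back to RRC connection
    re-establishment."""
    send_to_sn({"mnFailureIndication": True})    # Step 201; target timer T starts
    for _ in range(timer_ticks):                 # loop bounded by T
        msg = wait_event()
        if msg is not None:                      # RRC reconfiguration received
            # Step 202: apply the reconfiguration; T is stopped.
            if "reconfigurationWithSync" in msg:
                # Synchronous reconfiguration: random access follows, and
                # T is stopped on the MAC random access success indication.
                return "apply-reconfiguration+random-access"
            return "apply-reconfiguration"
    return "rrc-reestablishment"                 # target timer T expired
```

In this model the interruption-free path (a reconfiguration arriving before T expires) never reaches the re-establishment fallback, which matches the stated benefit of the method.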
As shown in FIG. 3, an embodiment of the present disclosure further provides a reconfiguration method, applied to a base station, where the reconfiguration method includes:
Step 301: Obtain a radio connection failure indication of the MN sent by a terminal.
Herein, the base station is specifically a secondary base station SN connected to the terminal. The MN radio link failure indication is sent by the terminal to the secondary base station SN when the terminal detects that a radio connection failure occurs between the terminal and a master base station MN. Herein, a radio connection failure includes the following cases:
a radio link failure occurs between the UE and an MN (for example, a timer T310 set by the UE to detect the downlink quality of the MN expires; the UE performs MAC layer RACH attempts the maximum number of times but fails; or the UE performs RLC layer AM mode retransmissions the maximum number of times but fails);
the UE has a handover failure;
signaling transmitted on an SRB1 or an SRB2 and received by the UE has an integrity check failure; or
the UE cannot execute an RRC reconfiguration instruction sent by the network (for example, a reconfigured parameter value exceeds a hardware capability of the UE).
The radio connection failure indication of the MN includes at least one of an MN radio connection failure reason and a measurement result of the terminal, where the measurement result of the terminal is used by the network to reselect a serving cell for the UE.
Step 302: Send an RRC reconfiguration message to the terminal according to the radio connection failure indication of the MN.
Specifically, after receiving the radio connection failure indication of the MN, the secondary base station SN reports the radio connection failure indication of the MN to the master base station MN.
The master base station MN determines the RRC reconfiguration message according to the radio connection failure indication of the MN and sends the RRC reconfiguration message to the secondary base station SN, and then the secondary base station SN sends the RRC reconfiguration message to the terminal. Alternatively, after receiving the radio connection failure indication of the MN, the secondary base station directly determines the RRC reconfiguration message according to the radio connection failure indication of the MN and sends the RRC reconfiguration message to the terminal. In the embodiments of the present disclosure, the RRC reconfiguration message is a reconfiguration message including a specific information element (Information Element, IE), and the specific IE may be a synchronous reconfiguration (reconfigurationWithSync) IE, a full configuration IE (fullConfig), a master cell group IE (masterCellGroup), or a failure indication response IE, or may be another specified IE in the RRC reconfiguration message, where the specific type of the specified IE is agreed on in a protocol; or
the RRC reconfiguration message is a reconfiguration message for instructing the terminal to modify a primary cell PCell; or
the RRC reconfiguration message is a reconfiguration message for instructing the terminal to modify a radio link monitoring reference signal RS of a primary cell PCell.
It should be noted that the failure indication response IE indicates that the current reconfiguration is a response to the MN radio link failure indication reported by the terminal. In an optional implementation, the radio connection failure indication of the MN carries a number, and the failure indication response IE also carries a number. If the two numbers are the same, it indicates that the failure indication response IE is the IE corresponding to the radio connection failure indication of the MN.
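The SN-side handling of Steps 301 and 302, including the number matching used with the failure indication response IE, might look like the following sketch. All function and field names are assumptions; messages are again modeled as dictionaries:

```python
def sn_handle_failure_indication(indication, forward_to_mn=None):
    """Sketch of the SN behavior: either forward the MN failure indication
    to the MN and relay the MN's reconfiguration, or build the RRC
    reconfiguration locally. The response echoes the indication's number
    in a failureIndicationResponse IE so the terminal can match the
    response to its report."""
    if forward_to_mn is not None:
        reconfig = forward_to_mn(indication)         # MN decides the reconfiguration
    else:
        reconfig = {"reconfigurationWithSync": {}}   # SN decides directly
    reconfig["failureIndicationResponse"] = indication["number"]
    return reconfig

def matches_indication(reconfig, indication):
    """Terminal-side check: equal numbers mean this reconfiguration is the
    response to the reported MN radio connection failure."""
    return reconfig.get("failureIndicationResponse") == indication["number"]
```

The number matching resolves the T1/T2 ambiguity discussed above: a reconfiguration sent independently of the report carries no matching number and is not mistaken for the failure response.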
In the reconfiguration method in this embodiment of the present disclosure, the radio connection failure indication of the MN sent by the terminal is obtained, and the RRC reconfiguration message is sent to the terminal according to the radio connection failure indication of the MN, so that the terminal is reconfigured according to the RRC reconfiguration message, to prevent the UE from initiating an RRC connection re-establishment process, and therefore avoid the problem of interruption of data receiving and sending of the UE.
FIG. 4 is a schematic diagram of modules of a terminal according to an embodiment of the present disclosure. As shown in FIG. 4, an embodiment of the present disclosure further provides a terminal 400. The terminal is connected to at least two base stations, and includes:
a reporting module 401, configured to: when a connection failure occurs between the terminal and a master base station MN, report a radio connection failure indication of the MN to a secondary base station SN; and
a reconfiguration module 402, configured to: if receiving an RRC reconfiguration message before a target timer expires, perform reconfiguration processing according to the RRC reconfiguration message.
In the terminal in this embodiment of the present disclosure, a start time of the target timer includes:
a moment at which a radio resource control RRC layer of the terminal generates the radio connection failure indication of the MN; or
a moment at which an RRC layer of the terminal submits the radio connection failure indication of the MN to a lower layer; or
a moment at which the radio connection failure indication of the MN is sent at an air interface.
In the terminal in this embodiment of the present disclosure, the RRC reconfiguration message is a reconfiguration message including a specific information element IE, and the specific information element IE is a synchronous reconfiguration IE, a full configuration IE, a master cell group IE, or a failure indication response IE; or
the RRC reconfiguration message is a reconfiguration message for instructing the terminal to modify a primary cell PCell; or
the RRC reconfiguration message is a reconfiguration message for instructing the terminal to modify a radio link monitoring reference signal RS of a primary cell PCell.
The terminal in this embodiment of the present disclosure further includes:
a first control module, configured to: when the RRC layer receives the RRC reconfiguration message or performs the reconfiguration processing according to the RRC reconfiguration message, stop the target timer.
When the RRC reconfiguration message is a reconfiguration message including a synchronous reconfiguration IE, the terminal in this embodiment of the present disclosure further includes:
a first processing module, configured to: after the reconfiguration module performs reconfiguration processing according to the RRC reconfiguration message, initiate a random access process according to the RRC reconfiguration message; and
a second control module, configured to: if the RRC layer of the terminal receives, before the target timer expires, a random access success indication sent by a media access control MAC layer, stop the target timer.
The terminal in this embodiment of the present disclosure further includes:
a second processing module, configured to: after the reporting module reports the MN radio link failure indication to the secondary base station SN, if the target timer expires, initiate an RRC connection re-establishment process.
According to the terminal in this embodiment of the present disclosure, the MN radio link failure indication includes at least one of an MN radio connection failure reason and a measurement result of the terminal. When a radio connection failure occurs between the terminal and the master base station MN, the terminal in this embodiment of the present disclosure reports the radio connection failure indication of the MN to the secondary base station SN; and if the RRC reconfiguration message is received before a target timer expires, performs reconfiguration processing according to the RRC reconfiguration message, to prevent the UE from initiating an RRC connection re-establishment process, and therefore avoid the problem of interruption of data receiving and sending of the UE. An embodiment of the present disclosure further provides a terminal, including: a memory, a processor, and a computer program stored in the memory and executable on the processor. The computer program, when executed by the processor, implements the processes of the foregoing embodiments of the reconfiguration method applied to the terminal, and the same technical effects can be achieved. To avoid repetition, details are not described herein again. An embodiment of the present disclosure further provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program. The computer program, when executed by a processor, implements the processes of the foregoing embodiments of the reconfiguration method applied to the terminal, and the same technical effects can be achieved. To avoid repetition, details are not described herein again. The computer-readable storage medium is a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, an optical disc, or the like.
To better achieve the foregoing objectives, as shown in FIG. 5, an embodiment of the present disclosure further provides a terminal, including a memory 520, a processor 500, a transceiver 510, a user interface 530, a bus interface, and a computer program stored in the memory 520 and executable on the processor 500. The processor 500 is configured to read the program in the memory 520 to perform the following processes:
when a connection failure occurs between the terminal and a master base station MN, reporting a radio connection failure indication of the MN to a secondary base station SN; and
if receiving an RRC reconfiguration message before a target timer expires, performing reconfiguration processing according to the RRC reconfiguration message.
In FIG. 5, a bus architecture may include any quantity of interconnected buses and bridges, which are specifically connected together by various circuits of one or more processors represented by the processor 500 and a memory represented by the memory 520. The bus architecture may further connect together various other circuits such as those of a peripheral device, a voltage stabilizer, and a power management circuit, which are well known in the art and are not further described herein. The bus interface provides an interface. The transceiver 510 may include a plurality of elements, that is, include a transmitter and a receiver, and provide units for communication with various other apparatuses on a transmission medium. For different user equipment, the user interface 530 may alternatively be an interface for externally and internally connecting a required device. The connected device includes, but is not limited to, a keypad, a display, a speaker, a microphone, a joystick, and the like. The processor 500 is responsible for management of the bus architecture and general processing. The memory 520 may store data used by the processor 500 when operations are performed.
Optionally, a start time of the target timer includes:
a preset moment in a process of generating the radio connection failure indication of the MN; or
a moment at which a radio resource control RRC layer of the terminal submits the radio connection failure indication of the MN to a lower layer; or
a moment at which the radio connection failure indication of the MN is sent at an air interface.
Optionally, the RRC reconfiguration message is a reconfiguration message including a specific information element IE, and the specific information element IE is a synchronous reconfiguration IE, a full configuration IE, a master cell group IE, or a failure indication response IE; or
the RRC reconfiguration message is a reconfiguration message for instructing the terminal to modify a primary cell PCell; or
the RRC reconfiguration message is a reconfiguration message for instructing the terminal to modify a radio link monitoring reference signal RS of a primary cell PCell.
Optionally, the processor 500 reads the program in the memory 520 to further perform: when the RRC layer receives the RRC reconfiguration message or performs the reconfiguration processing according to the RRC reconfiguration message, stopping the target timer.
Optionally, when the RRC reconfiguration message is a reconfiguration message including a synchronous reconfiguration IE, the processor 500 reads the program in the memory 520 to further perform: initiating a random access process according to the RRC reconfiguration message; and if the RRC layer of the terminal receives, before the target timer expires, a random access success indication sent by a media access control MAC layer, stopping the target timer.
Optionally, the processor 500 reads the program in the memory 520 to further perform: if the target timer expires, initiating an RRC connection re-establishment process.
Optionally, the MN radio link failure indication includes at least one of an MN radio connection failure reason and a measurement result of the terminal.
FIG. 6 is a schematic structural diagram of hardware of a terminal implementing embodiments of the present disclosure. The terminal 600 includes, but is not limited to: a radio frequency unit 601, a network module 602, an audio output unit 603, an input unit 604, a sensor 605, a display unit 606, a user input unit 607, an interface unit 608, a memory 609, a processor 610, a power supply 611, and other components. A person skilled in the art may understand that the structure of the terminal shown in FIG. 6 does not constitute a limitation on the terminal. The terminal may include more or fewer components than those shown in the figure, or a combination of some components, or an arrangement of different components. In this embodiment of the present disclosure, the terminal includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted terminal, a wearable device, a pedometer, or the like. The processor 610 is configured to: when a connection failure occurs between the terminal and a master base station MN, report a radio connection failure indication of the MN to a secondary base station SN; and if the RRC reconfiguration message is received before a target timer expires, perform reconfiguration processing according to the RRC reconfiguration message.
In the technical solutions in the embodiments of the present disclosure, when a radio connection failure occurs between the terminal and the master base station MN, the radio connection failure indication of the MN is reported to the secondary base station SN; and if the RRC reconfiguration message is received before a target timer expires, reconfiguration processing is performed according to the RRC reconfiguration message, to prevent the UE from initiating an RRC connection re-establishment process, and therefore avoid the problem of interruption of data receiving and sending of the UE. It should be understood that, in this embodiment of the present disclosure, the radio frequency unit 601 may be configured to receive and transmit signals during information receiving and sending or a call. Specifically, the radio frequency unit 601 receives downlink data from a base station and transmits the downlink data to the processor 610 for processing, and in addition, transmits uplink data to the base station. Generally, the radio frequency unit 601 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 601 may further communicate with another device via a wireless communication system and a network. The terminal provides a user with wireless broadband Internet access through the network module 602, for example, helps the user send and receive emails, browse web pages, and access streaming media. The audio output unit 603 may convert audio data received by the radio frequency unit 601 or the network module 602 or stored in the memory 609 into an audio signal and output the audio signal as sound. In addition, the audio output unit 603 may also provide audio output related to a specific function performed by the terminal 600 (for example, call signal receiving sound or message receiving sound).
The audio output unit 603 includes a speaker, a buzzer, a telephone receiver, and the like. The input unit 604 is configured to receive audio or video signals. The input unit 604 may include a graphics processing unit (Graphics Processing Unit, GPU) 6041 and a microphone 6042. The graphics processing unit 6041 processes image data of a static picture or a video obtained by an image capturing apparatus (for example, a camera) in a video capturing mode or an image capturing mode. A processed image frame may be displayed on the display unit 606. The image frame processed by the graphics processing unit 6041 may be stored in the memory 609 (or another storage medium) or sent via the radio frequency unit 601 or the network module 602. The microphone 6042 may receive sound and process such sound into audio data. Processed audio data may be converted, in a telephone call mode, into a format that can be sent to a mobile communications network device via the radio frequency unit 601 for output. The terminal 600 further includes at least one sensor 605, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor and a proximity sensor, where the ambient light sensor may adjust the brightness of the display panel 6061 according to the brightness of ambient light, and the proximity sensor may turn off the display panel 6061 and/or the backlight when the terminal 600 moves towards the ear. As a type of motion sensor, an accelerometer sensor may detect accelerations in all directions (generally three axes), and may detect the magnitude and direction of gravity when the terminal is stationary. The accelerometer sensor may be configured to identify a terminal posture (for example, switching between a landscape mode and a portrait mode, related games, and magnetometer posture calibration), perform vibration identification-related functions (for example, a pedometer and a knock), and the like.
The sensor 605 may further include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, and the like. Details are not described herein again. The display unit 606 is configured to display information entered by a user or information provided for the user. The display unit 606 may include the display panel 6061, and the display panel 6061 may be configured in a form of a liquid crystal display (Liquid Crystal Display, LCD), an organic light-emitting diode (Organic Light-Emitting Diode, OLED), or the like. The user input unit 607 may be configured to receive inputted numeric or character information and generate key signal inputs related to user settings and function control of the terminal. Specifically, the user input unit 607 includes a touch panel 6071 and another input device 6072. The touch panel 6071, also called a touch screen, may collect a touch operation of the user on or near the touch panel 6071 (for example, an operation performed by the user with any suitable object or accessory such as a finger or a stylus on or near the touch panel 6071). The touch panel 6071 may include two parts: a touch detection apparatus and a touch controller. The touch detection apparatus detects a touch position of the user, detects a signal brought by the touch operation, and transmits the signal to the touch controller. The touch controller receives touch information from the touch detection apparatus, converts the touch information into contact coordinates, transmits the contact coordinates to the processor 610, receives a command sent by the processor 610, and executes the command. In addition, the touch panel 6071 may be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave. In addition to the touch panel 6071, the user input unit 607 may further include the another input device 6072.
Specifically, the another input device 6072 may include, but is not limited to, a physical keyboard, function keys (such as a volume control key and a switch key), a trackball, a mouse, and a joystick. Details are not described herein. Further, the touch panel 6071 may cover the display panel 6061. When detecting a touch operation on or near the touch panel 6071, the touch panel 6071 transmits the touch operation to the processor 610 to determine a type of a touch event. Then, the processor 610 provides corresponding visual output on the display panel 6061 based on the type of the touch event. In FIG. 6, the touch panel 6071 and the display panel 6061 are used as two independent components to implement the input and output functions of the terminal. However, in some embodiments, the touch panel 6071 and the display panel 6061 may be integrated to implement the input and output functions of the terminal. This is not specifically limited herein. The interface unit 608 is an interface connecting an external apparatus to the terminal 600. For example, the external apparatus may include a wired or wireless headphone port, an external power supply (or battery charger) port, a wired or wireless data port, a storage card port, a port used to connect to an apparatus having an identity module, an audio input/output (I/O) port, a video I/O port, and a headset port. The interface unit 608 may be configured to receive an input (for example, data information and power) from the external apparatus and transmit the received input to one or more elements in the terminal 600, or transmit data between the terminal 600 and the external apparatus. The memory 609 may be configured to store software programs and various data. The memory 609 may mainly include a program storage area and a data storage area. The program storage area may store an operating system, an application program required by at least one function (such as a sound playback function and an image playback function), and the like.
The data storage area may store data (such as audio data and a phone book) created based on use of the mobile phone, and the like. In addition, the memory 609 may include a high-speed random access memory or a nonvolatile memory, for example, at least one magnetic disk storage device, a flash memory, or another nonvolatile solid-state storage device. The processor 610 is a control center of the terminal: it connects various parts of the entire terminal by using various interfaces and circuits, and performs various functions of the terminal and processes data by running or executing software programs and/or modules stored in the memory 609 and invoking data stored in the memory 609, so as to monitor the terminal as a whole. The processor 610 may include one or more processing units. Preferably, the processor 610 may integrate an application processor and a modem processor. The application processor mainly processes an operating system, a user interface, an application program, and the like. The modem processor mainly processes wireless communication. It can be understood that the modem processor is not necessarily integrated in the processor 610. The terminal 600 may further include the power supply 611 (for example, a battery) supplying power to various components. Preferably, the power supply 611 may be logically connected to the processor 610 through a power management system, so as to implement functions such as managing charging, discharging, and power consumption through the power management system. In addition, the terminal 600 includes some functional modules not shown, and details are not described herein again. FIG. 7 is a schematic diagram of modules of a base station according to an embodiment of the present disclosure.
As shown in FIG. 7, an embodiment of the present disclosure further provides a base station 700, including: an obtaining module 701, configured to obtain a radio connection failure indication of the MN sent by a terminal; and a sending module 702, configured to send an RRC reconfiguration message to the terminal according to the radio connection failure indication of the MN. In the base station according to an embodiment of the present disclosure, the RRC reconfiguration message is a reconfiguration message including a specific information element IE, and the specific information element IE is a synchronous reconfiguration IE, a full configuration IE, a master cell group IE, or a failure indication response IE; or the RRC reconfiguration message is a reconfiguration message for instructing the terminal to modify a primary cell Pcell; or the RRC reconfiguration message is a reconfiguration message for instructing the terminal to modify a radio link monitoring reference signal RS of a primary cell Pcell. In the base station according to an embodiment of the present disclosure, the radio connection failure indication of the MN includes at least one of an MN radio connection failure reason and a measurement result of the terminal. An embodiment of the present disclosure further provides a base station, including: a memory, a processor, and a computer program stored in the memory and executable on the processor. The computer program, when executed by the processor, implements the processes of the foregoing method embodiment of the reconfiguration method applied to the base station, and the same technical effects can be achieved. To avoid repetition, details are not described herein again. An embodiment of the present disclosure further provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program.
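The two-module structure above can be sketched in code. This is a hedged illustration of the control flow only: the function names, dictionary fields, and IE labels are assumptions for demonstration, not 3GPP message definitions.

```python
# Illustrative sketch of base station 700: an obtaining module receives the MN
# radio connection failure indication from the terminal, and a sending module
# builds an RRC reconfiguration message carrying one of the specific IEs named
# in the text.

ALLOWED_IES = {
    "sync_reconfiguration",        # synchronous reconfiguration IE
    "full_configuration",          # full configuration IE
    "master_cell_group",           # master cell group IE
    "failure_indication_response", # failure indication response IE
}

def obtain_failure_indication(uplink_message):
    """Obtaining module 701: extract the MN failure reason and measurement result."""
    return {
        "failure_reason": uplink_message.get("failure_reason"),
        "measurement_result": uplink_message.get("measurement_result"),
    }

def build_rrc_reconfiguration(indication, specific_ie="sync_reconfiguration"):
    """Sending module 702: build an RRC reconfiguration message for the terminal."""
    if specific_ie not in ALLOWED_IES:
        raise ValueError("unsupported specific IE: " + specific_ie)
    return {"type": "rrc_reconfiguration", "ie": specific_ie,
            "based_on": indication["failure_reason"]}

ind = obtain_failure_indication({"failure_reason": "rlf", "measurement_result": [-105]})
msg = build_rrc_reconfiguration(ind, "full_configuration")
```

The alternative message forms described in the text (Pcell modification, radio link monitoring RS modification) would simply be further branches in the sending module.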
The computer program, when executed by the processor, implements the processes of the foregoing method embodiment of the reconfiguration method applied to the base station, and the same technical effects can be achieved. To avoid repetition, details are not described herein again. The computer-readable storage medium is a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, an optical disc, or the like. As shown in FIG. 8, an embodiment of the present disclosure further provides a base station 800, including a processor 801, a transceiver 802, a memory 803, and a bus interface. The processor 801 is configured to read a program in the memory 803 to perform the following process: obtaining a radio connection failure indication of the MN sent by a terminal; and sending an RRC reconfiguration message to the terminal according to the radio connection failure indication of the MN. In FIG. 8, a bus architecture may include any quantity of interconnected buses and bridges, which are specifically connected together by various circuits of one or more processors represented by the processor 801 and a memory represented by the memory 803. The bus architecture may further connect together various other circuits of a peripheral device, a voltage stabilizer, a power management circuit, and the like, which are well known in this art and are not further described herein. The bus interface provides an interface. The transceiver 802 may include a plurality of elements, that is, include a transmitter and a receiver, and provide units for communication with various other apparatuses on a transmission medium. The processor 801 is responsible for managing the bus architecture and common processing, and the memory 803 may store data used when the processor 801 performs an operation.
Optionally, the RRC reconfiguration message is a reconfiguration message including a specific information element IE, and the specific information element IE is a synchronous reconfiguration IE, a full configuration IE, a master cell group IE, or a failure indication response IE; or the RRC reconfiguration message is a reconfiguration message for instructing the terminal to modify a primary cell Pcell; or the RRC reconfiguration message is a reconfiguration message for instructing the terminal to modify a radio link monitoring reference signal RS of a primary cell Pcell. Optionally, the radio connection failure indication of the MN includes at least one of an MN radio connection failure reason and a measurement result of the terminal. It should be noted that in this specification, the terms “comprise”, “include”, and any other variants thereof are intended to cover non-exclusive inclusion, so that a process, a method, an article, or an apparatus that includes a series of elements not only includes these very elements, but may also include other elements not expressly listed, or also include elements inherent to this process, method, article, or apparatus. In the absence of more limitations, an element defined by “including a . . . ” does not preclude the existence of other identical elements in the process, method, article, or apparatus that includes the element. Based on the foregoing descriptions of the implementations, a person skilled in the art may clearly understand that the method in the foregoing embodiments may be implemented by software plus a necessary universal hardware platform, or by hardware only. In most circumstances, the former is a preferred implementation. Based on such an understanding, the technical solutions of the present disclosure essentially, or the part contributing to the prior art, may be implemented in a form of a software product.
The computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or a compact disc), and includes several instructions for instructing a terminal (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the methods described in the embodiments of the present disclosure. The embodiments of the present disclosure are described above with reference to the accompanying drawings. However, the present disclosure is not limited to the foregoing specific implementations. The foregoing specific implementations are merely examples and are not limiting. A person of ordinary skill in the art may make many variations without departing from the objective of the present disclosure and the protection scope of the claims.
11863382 | DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS The following detailed description refers to the accompanying drawings. The same reference numbers in different drawings may identify the same or similar elements. In fully-managed enterprise solutions, all aspects of device management, such as installation, configuration, administration, and run-time operations, are performed by a network service provider. With fully-managed solutions, a change management system is typically used in which customers (e.g., enterprise network administrators) submit proposals (e.g., change requests) to make changes to software-defined network (SDN) device configurations. The change request process typically includes a complex chain of sub-processes that can take several days or weeks to complete. For example, in existing change management processes, a customer may submit a change request ticket that is assigned to an operations team for the service provider. The operations team would schedule and reserve a time slot to address the ticket. At the scheduled time, the operations team may conduct an analysis for the ticket and change the configuration. After the configuration change, the team would validate the changes and notify the customer by closing the change request ticket. Customers and network service providers alike can benefit from a self-service model, referred to herein as a co-managed configuration service, which would allow customers to make policy changes for SDN devices that service an enterprise network. The co-managed configuration service would expedite and simplify some types of network changes over the current fully-managed change processes. One challenge to allowing enterprise customers to directly make changes to SDN devices is effectively limiting access to exposed network functions in the service provider network.
Another challenge is minimizing the possibility of such changes causing unintended behavior, which may lead to operational losses and violations of service level agreements (SLAs). Systems and methods described herein provide a co-managed configuration service that enables self-service management of network configurations for SDN devices. Co-managed configurations may include, for example, policy changes, changes to IP addresses, and static route additions. According to an implementation, a customer may make configuration changes directly, within a predefined scope, via an enterprise customer portal (e.g., a web-based change management portal). The customer may manage predefined and mutually-agreed-upon network and security changes through custom Application Programming Interfaces (APIs), in effect automating the change requests. Thus, customers of the co-managed configuration service may implement policy and other network changes in an enterprise network without the manual procedures and delays associated with typical managed network services. FIG. 1 is a diagram of an exemplary environment 100 in which the systems and/or methods described herein may be implemented. As shown in FIG. 1, environment 100 may include a provider network 110 that provides services to an enterprise network 160. According to other embodiments, environment 100 may include additional networks, fewer networks, and/or different types of networks than those illustrated and described herein. Environment 100 includes links between the networks and between the devices. Environment 100 may be implemented to include wired, optical, and/or wireless links among the devices and the networks illustrated. A communication connection via a link may be direct or indirect. For example, an indirect communication connection may involve an intermediary device and/or an intermediary network not illustrated in FIG. 1. Additionally, the number and the arrangement of links illustrated in environment 100 are exemplary.
Provider network 110 may generally include one or more wired, wireless, and/or optical networks that are capable of receiving and transmitting data, voice, and/or video signals. For example, provider network 110 may include one or more access networks, IP multimedia subsystem (IMS) networks, core networks, or other networks. The access network may include a wireless communications network that connects users/customers (e.g., using user device 180) to other portions of provider network 110 (e.g., the core network). In one example, the access network may include a Fifth Generation (5G) access network and/or a Long-Term Evolution (LTE) access network. Provider network 110 may further include one or more satellite networks, one or more packet-switched networks, such as an IP-based network, a local area network (LAN), a wide area network (WAN), a personal area network (PAN) (e.g., a wireless PAN), a wireless local area network (WLAN), an intranet, or another type of network that is capable of transmitting data. In an exemplary implementation, provider network 110 may represent a network associated with a service provider that provides various services, such as IP-related services, value-added services, etc. In the example of FIG. 1, provider network 110 may include network devices 120. Each network device 120 may be configured to perform network functions in provider network 110. For example, network device 120 may include a switch, a router, a firewall, a gateway, a Network Address Translation (NAT) device, a Reconfigurable Optical Add-Drop Multiplexer (ROADM), and/or another type of network device. Some or all of the functionality of network device 120 may be virtualized as a virtual network function (VNF) in provider network 110.
Depending on the implementation, provider network 110 may include other types of network devices 120, such as, for example, a base station (e.g., a next-generation NodeB, an evolved NodeB, etc.), a gateway device, a support node, a serving node, a core access and mobility management function (AMF), a session management function (SMF), a policy control function (PCF), as well as other network devices that provide various network-related functions and/or services, such as charging and billing, security, authentication and authorization, network policy enforcement, management of subscriber profiles, and/or other functions and/or services that facilitate the operation of the core network. Network devices 120 may receive, store, and enforce policies for end devices in enterprise network 160 (e.g., SDN device instances 168, described below) and other user devices (e.g., user device 180). According to implementations described herein, provider network 110 may also include an order planning system 130, a co-management service framework 140, and a customer portal 150. Order planning system 130, co-management service framework 140, and customer portal 150 may be used to implement a co-managed configuration service for enterprise network 160 and are described further below. Enterprise network 160 (also referred to herein as a “customer network”) may include a network that receives services from provider network 110. Enterprise network 160 may include a local area network (LAN), a WAN, or a combination of networks that provide network access to devices in provider network 110. In one implementation, enterprise network 160 may include a network interconnecting one or more physical network functions (PNFs) 162, virtual network functions (VNFs) 164 or cloud-native network functions (CNFs), and/or universal customer premises equipment (uCPE) 166 (referred to collectively herein as “SDN device instances 168” or “SDN devices 168”).
SDN device instances 168 may be provided by different suppliers/vendors for a service provider and may be configured using vendor-specific APIs. In another implementation, enterprise network 160 may include application servers for user devices 180 (e.g., machine-type communication (MTC) devices, mobile devices, etc.). The application servers may, for example, receive and process data from user devices 180. In another implementation, enterprise network 160 may include gateway (GW) routers (e.g., customer premises equipment) that act as a secure gateway for devices within enterprise network 160. As used herein, configuration changes for SDN devices 168 may also refer to changes to a firewall, WAN optimization, or other network changes associated with enterprise network 160. Order planning system 130 may configure available services for customers (e.g., enterprise network 160) of the co-managed configuration service. Order planning system 130 may include components to receive requests for a scope of available services, generate service orders and work orders, and configure network exposure and APIs for the co-managed configuration service. Co-management service framework 140 may include a collection of network tools and interfaces to activate the co-managed configuration service and apply policy changes initiated by a customer. Co-management service framework 140 may provide secure exposure of SDN device instances 168 for configuration by customers. Co-management service framework 140 may also provide automated change management with co-management capabilities. Co-management service framework 140 may enable flexible, domain-agnostic device access profiles and an API gateway with dynamic ingestion of control logic. Co-management service framework 140 is described further in connection with FIG. 3. Customer portal 150 may include network devices that provide a web-based interface for a customer (e.g., using user device 180) to access the co-managed configuration service.
Via user device 180, users (e.g., customers) of provider network 110 may access customer portal 150 to manage (e.g., introduce, configure, issue commands, update, monitor, etc.) policies for SDN device instances 168 associated with enterprise network 160, for example. Using customer portal 150, customers may manage their SDN device configurations for selected eligible parameters and make changes to SDN device instances 168 by changing the configuration that is managed by the service provider of provider network 110. User device 180 may include a computational or communication device that is capable of communicating with provider network 110. In one aspect, user device 180 may be used by an operator (e.g., a network administrator) to communicate with network devices 120, order planning system 130, and/or co-management service framework 140. In another aspect, user device 180 may enable a customer to access customer portal 150 or interact with devices in enterprise network 160. User device 180 may include, for example, a personal communications system (PCS) terminal (e.g., a smartphone that may combine a cellular radiotelephone with data processing and data communications capabilities), a tablet computer, a personal computer, a laptop computer, a gaming console, an Internet television, or other types of computation or communication devices. According to implementations described herein, parameters for the co-managed configuration service may be configured using instructions from order planning system 130. The parameters may be stored by co-management service framework 140 and identify exposed policies/services that may be changed by enterprise network 160 customers. An enterprise customer wishing to make network configuration changes, such as setting up static routes or opening a firewall policy, may use customer portal 150 to open a change request with provider network 110 in co-management service framework 140.
As described in more detail below, co-management service framework 140 may authenticate the change request and retrieve vendor director (e.g., a vendor-specific orchestration device or network management system) information for an SDN device (e.g., SDN device instances 168) applying the policy change. Co-management service framework 140 may invoke specific application programming interface (API) call(s) for implemented changes on the SDN device based on the vendor director information. While examples provided herein are described primarily in the context of policy changes for simplicity, the co-managed configuration service may also be used for other network configuration changes in both physical and virtual network functions. FIG. 2 is a diagram illustrating exemplary components of a device 200 that may correspond to one or more of the devices described herein. For example, device 200 may correspond to components included in network device 120, order planning system 130, SDN device instances 168, co-management service framework 140, customer portal 150, and user device 180. As illustrated in FIG. 2, according to an exemplary embodiment, device 200 includes a bus 205, processor 210, memory/storage 215 that stores software 220, a communication interface 225, an input 230, and an output 235. According to other embodiments, device 200 may include fewer components, additional components, different components, and/or a different arrangement of components than those illustrated in FIG. 2 and described herein. Bus 205 includes a path that permits communication among the components of device 200. For example, bus 205 may include a system bus, an address bus, a data bus, and/or a control bus. Bus 205 may also include bus drivers, bus arbiters, bus interfaces, and/or clocks.
Processor 210 includes one or multiple processors, microprocessors, data processors, co-processors, application-specific integrated circuits (ASICs), controllers, programmable logic devices, chipsets, field-programmable gate arrays (FPGAs), application-specific instruction-set processors (ASIPs), systems-on-chips (SoCs), central processing units (CPUs) (e.g., one or multiple cores), microcontrollers, and/or some other type of component that interprets and/or executes instructions and/or data. Processor 210 may be implemented as hardware (e.g., a microprocessor, etc.) or a combination of hardware and software (e.g., an SoC, an ASIC, etc.), and may include one or multiple memories (e.g., cache, etc.). Processor 210 may be a dedicated component or a non-dedicated component (e.g., a shared resource). Processor 210 may control the overall operation or a portion of operations performed by device 200. Processor 210 may perform operations based on an operating system and/or various applications or computer programs (e.g., software 220). Processor 210 may access instructions from memory/storage 215, from other components of device 200, and/or from a source external to device 200 (e.g., a network, another device, etc.). Processor 210 may perform an operation and/or a process based on various techniques including, for example, multithreading, parallel processing, pipelining, interleaving, etc. Memory/storage 215 includes one or multiple memories and/or one or multiple other types of storage mediums. For example, memory/storage 215 may include one or multiple types of memories, such as random access memory (RAM), dynamic random access memory (DRAM), cache, read-only memory (ROM), programmable read-only memory (PROM), static random access memory (SRAM), a single in-line memory module (SIMM), a dual in-line memory module (DIMM), flash memory (e.g., a NAND flash, a NOR flash, etc.), and/or some other type of memory.
Memory/storage 215 may include a hard disk (e.g., a magnetic disk, an optical disk, a magneto-optic disk, a solid state disk, etc.), a Micro-Electromechanical System (MEMS)-based storage medium, and/or a nanotechnology-based storage medium. Memory/storage 215 may include a drive for reading from and writing to the storage medium. Memory/storage 215 may be external to and/or removable from device 200, such as, for example, a Universal Serial Bus (USB) memory stick, a dongle, a hard disk, mass storage, off-line storage, network attached storage (NAS), or some other type of storage medium (e.g., a compact disk (CD), a digital versatile disk (DVD), a Blu-Ray disk (BD), etc.). Memory/storage 215 may store data, software, and/or instructions related to the operation of device 200. Software 220 includes an application or a program that provides a function and/or a process. Software 220 may include an operating system. Software 220 is also intended to include firmware, middleware, microcode, hardware description language (HDL), and/or other forms of instruction. For example, according to an implementation, software 220 may implement portions of co-management service framework 140 and customer portal 150. Communication interface 225 permits device 200 to communicate with other devices, networks, systems, and/or the like. Communication interface 225 includes one or multiple wireless interfaces and/or wired interfaces. For example, communication interface 225 may include one or multiple transmitters and receivers, or transceivers. Communication interface 225 may include one or more antennas. For example, communication interface 225 may include an array of antennas. Communication interface 225 may operate according to a protocol stack and a communication standard. Communication interface 225 may include various processing logic or circuitry (e.g., multiplexing/de-multiplexing, filtering, amplifying, converting, error correction, etc.). Input 230 permits an input into device 200.
For example, input 230 may include a keyboard, a mouse, a display, a button, a switch, an input port, speech recognition logic, a biometric mechanism, a microphone, a visual and/or audio capturing device (e.g., a camera, etc.), and/or some other type of visual, auditory, tactile, etc., input component. Output 235 permits an output from device 200. For example, output 235 may include a speaker, a display, a light, an output port, and/or some other type of visual, auditory, tactile, etc., output component. According to some embodiments, input 230 and/or output 235 may be a device that is attachable to and removable from device 200. Device 200 may perform a process and/or a function, as described herein, in response to processor 210 executing software 220 stored by memory/storage 215. By way of example, instructions may be read into memory/storage 215 from another memory/storage 215 (not shown) or read from another device (not shown) via communication interface 225. The instructions stored by memory/storage 215 cause processor 210 to perform a process described herein. Alternatively, for example, according to other implementations, device 200 performs a process described herein based on the execution of hardware (processor 210, etc.). FIG. 3 is a block diagram illustrating some exemplary logical components of co-management service framework 140. As shown in FIG. 3, co-management service framework 140 may include a co-management configuration database (DB) 300, an API gateway 310, a global change manager (GCM) 320, and a co-management platform 330. The components of FIG. 3 may be implemented, for example, by processor 210 in conjunction with memory/storage 215. Co-management configuration database 300 may store eligible parameters for enterprise network 160 that are available to be changed by a customer (e.g., via customer portal 150). The type/range of eligible parameters for a given customer may be pre-certified by the service provider.
Also, co-management configuration database 300 may store API profiles with role-based access control and the types of operations that can be performed. The API profile may extend to any domain (e.g., SD-WAN, firewall, WAN optimization, etc.). API gateway 310 may generally manage the receipt and initial routing of customer requests for the co-managed configuration service. API gateway 310 may direct requests to other logical components of co-management service framework 140. According to an implementation, API gateway 310 may receive change requests from customer portal 150. The change requests may identify a customer name, a customer role, and a network service category for a particular network change. API gateway 310 may store pre-configured APIs for the customer based on configurations via order planning system 130 and co-management platform 330. In response to change requests from customer portal 150, API gateway 310 may forward API calls to co-management platform 330. Global change manager 320 may log a change history for each transaction processed through the co-managed configuration service. Global change manager 320 may assign tracking numbers to transactions or groups of transactions, such that every change initiated by a customer is recorded into the change history of GCM 320. The change history may be retrieved to enable a network service provider to back-trace changes if problems occur in SDN device instances 168. Co-management platform 330 may authenticate user change requests and implement network changes for the co-managed configuration service. Co-management platform 330 may translate customer credentials to actual device credentials that can be used to implement changes for SDN device instances 168. Based on change requests with customer name, network service category, and customer role, co-management platform 330 may invoke device-specific APIs for different domains and vendor equipment.
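The validation-and-routing sequence the co-management platform performs can be sketched as follows. All identifiers (customer names, device IDs, operation names, director names) are illustrative assumptions, not values from the disclosure; the sketch only demonstrates the checks described in the text: the target device must belong to the customer network, and the requested operation must be within the pre-certified eligible parameters before the change is routed to the vendor-specific director.

```python
# Illustrative sketch of the co-management platform's validation sequence
# before invoking a vendor-specific API on an SDN device.

CUSTOMER_DEVICES = {"acme": {"fw-001", "sdwan-002"}}              # device inventory per customer
ELIGIBLE_OPERATIONS = {"acme": {"static_route_add", "firewall_policy_change"}}
VENDOR_DIRECTORS = {"fw-001": "vendorA-director", "sdwan-002": "vendorB-director"}

def validate_and_route(customer, device_id, operation):
    # Check 1: the target device must belong to the customer's network.
    if device_id not in CUSTOMER_DEVICES.get(customer, set()):
        return {"status": "rejected", "reason": "device not in customer network"}
    # Check 2: the operation must be within the pre-certified eligible parameters.
    if operation not in ELIGIBLE_OPERATIONS.get(customer, set()):
        return {"status": "rejected", "reason": "operation not pre-certified"}
    # Route the change to the vendor-specific orchestration endpoint.
    return {"status": "routed", "director": VENDOR_DIRECTORS[device_id]}

ok = validate_and_route("acme", "sdwan-002", "static_route_add")
bad = validate_and_route("acme", "fw-999", "static_route_add")
```

A production platform would additionally translate customer credentials to device credentials and confirm the incoming API call matches what was approved at the initial request, as the text notes.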
Co-management platform 330 may validate whether a target device of an API call actually belongs to the customer network, may verify whether the API call is to be allowed (e.g., is within the stored co-management parameters), and may validate that the incoming API call is the same as was approved at the initial request. In operation, an enterprise customer wishing to make network configuration changes may use customer portal 150 to open a change request with co-management service framework 140. GCM 320 may assign a unique ID to the change request and provide the unique ID to customer portal 150. GCM 320 may also trigger a control plane of co-management platform 330 to provide access approval to API gateway 310. Customer portal 150 would discover device inventory, such as firewalls, for enterprise network 160 and use DevOps tools to trigger device APIs, with additional details of the change request and the inventory ID of the device, onto API gateway 310. The change request may then be processed by co-management platform 330, which may validate the change request, confirm that the device to be changed is associated with that customer, verify whether the API is allowed for the device, and route the request to the exact device using the inventory ID. Once the change request is served, the API request, the response, and the associated customer access metadata are pushed to GCM 320. After a certain inactivity time, the change request may be closed with GCM 320. In FIGS. 4-6, communications are described for configuring and activating a co-managed configuration service (FIG. 4) and implementing policy changes using the co-managed configuration service (FIG. 5). FIG. 6 describes additional implementation details for a co-management platform 330 configuration that uses a separate data plane and control plane. FIG. 4 is a diagram illustrating exemplary communications for enabling a co-managed configuration service in a portion 400 of network environment 100.
Similar communications as those shown in FIG. 4 may be used for disabling the co-managed configuration service. FIG. 4 provides simplified illustrations of communications in network portion 400 and is not intended to reflect every signal or communication exchanged between devices/functions. As shown in FIG. 4, network portion 400 may include customer portal 150, API gateway 310, GCM 320, co-management platform 330, a purchase quoting (PQ) system 410, an order management (OM) system 420, a work order (WO) system 430, an enterprise service platform (ESP) 440, a virtual network services platform (VNSP) 450, a resource orchestrator 460, and SDN device instances 168. Purchase quoting system 410, order management system 420, work order system 430, ESP 440, VNSP 450, and resource orchestrator 460 may correspond to one or more of order planning system 130 or network devices 120. Collectively, purchase quoting system 410, order management system 420, work order system 430, ESP 440, VNSP 450, and resource orchestrator 460 may perform functions to configure a co-managed configuration service for a particular enterprise customer (e.g., for enterprise network 160). For example, as shown in FIG. 4, purchase quoting system 410 may provide a selection option for a customer to identify a line of services for the co-managed configuration service. Based on a customer selection 462, purchase quoting system 410 may generate and send a corresponding service order 464 to order management system 420. Order management system 420 may provide a corresponding work order 466 to work order system 430. Work order system 430 may divide work order 466 into a work order 470 for ESP 440 and a work order 468 for VNSP 450. Virtual network services platform 450 may include one or more network devices that sort NF instance information by customer. As indicated at reference 472, ESP 440 may inform VNSP 450 once services are operational, and VNSP 450 may share service data with co-management platform 330, as further described in FIG. 6, for example.
As indicated by reference474, VNSP450may use resource orchestrator460to manage both physical and virtual resources such as VNFs164, PNFs162, and uCPEs166, and deployment platforms such as uCPEs, private cloud platforms, public clouds, etc. For example, for VNFs164, resource orchestrator460may perform life cycle management and provisioning, while for PNFs162, resource orchestrator460may perform registration and onboarding of physical appliances. Customer APIs for the co-managed configuration service may be provided to API gateway310(e.g., via co-management platform330). With the configuration of the co-managed configuration service completed for enterprise network160, the customer may enable the co-managed configuration service. As further shown inFIG.4, the customer may use customer portal150to submit an enablement change request480. Based on customer input, customer portal150may send change request480to enable the co-managed configuration service. Change request480may include a unique change request reference number (e.g., CHG1) as an identifier for the change request to enable the co-managed configuration service. As indicated by reference482, API gateway310may receive change request480and forward the request to GCM320. GCM320may receive forwarded change request482. GCM320may mark the forwarded change request482with a policy co-management label and return a unique response reference number (e.g., CR1) to customer portal150via API gateway310, as indicated by references484and486. The policy co-management label is assigned to ensure that a network services team is able to identify the change request as being associated with the co-managed configuration service if tracking and/or troubleshooting is required. Additionally, GCM320may send a message488to co-management platform330to share the unique response reference number (e.g., CR1) with co-management platform330.
Message488may trigger co-management platform330to enable the co-managed configuration service for enterprise network160. In response to message488, co-management platform330may enable the co-managed configuration service and provide a message490to inform VNSP450that the co-managed configuration service is enabled. FIG.5is a diagram illustrating exemplary communications for implementing a co-managed configuration service in portion400of network environment100.FIG.5provides simplified illustrations of communications in network portion400and is not intended to reflect every signal or communication exchanged between devices/functions. Communications inFIG.5may take place, for example, after the communications for enabling a co-managed configuration service described inFIG.4. A customer may use customer portal150to submit a policy change request502. Policy change request502may start invoking/enforcing a policy API in accordance with the limits of eligible parameters for enterprise network160using the co-managed configuration service. Policy change request502may include a unique change request reference number (e.g., CHG2) as an identifier for the transaction. As indicated by reference504, API gateway310may receive policy change request502and forward the request to GCM320. GCM320may receive forwarded change request504. In response to forwarded change request504, GCM320may acknowledge the change request and return a unique response reference number (e.g., CR2) to customer portal150via API gateway310, as indicated by references506and508. Additionally, GCM320may send a message510to co-management platform330to share both the change request reference number (e.g., CHG2) and the unique response reference number (e.g., CR2) with co-management platform330. The change request reference number (e.g., CHG2) and the unique response reference number (e.g., CR2) in message510may be used to track subsequent API calls for the co-managed configuration service for enterprise network160.
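The GCM's handling of an incoming change request, as described above, can be sketched in a few lines. The names below (GlobalChangeManager, platform_inbox) are hypothetical illustrations, not terms from the description; the sketch only shows the bookkeeping: label the request, mint a response reference number, return it, and share it with the co-management platform.

```python
# Hedged sketch of the GCM bookkeeping described above: mark an incoming
# change request with a policy co-management label, return a unique
# response reference number, and share the reference numbers with the
# co-management platform. All names are invented for illustration.
import itertools

class GlobalChangeManager:
    def __init__(self):
        self._seq = itertools.count(1)
        self.requests = {}        # change request ID -> record
        self.platform_inbox = []  # messages shared with the co-management platform

    def handle_change_request(self, chg_id):
        cr_id = f"CR{next(self._seq)}"  # unique response reference number
        self.requests[chg_id] = {
            "response_ref": cr_id,
            # The label lets a network services team identify the request
            # as co-management related for tracking/troubleshooting.
            "label": "policy-co-management",
        }
        # Share both reference numbers with the co-management platform
        # so subsequent API calls can be tracked against them.
        self.platform_inbox.append({"chg": chg_id, "cr": cr_id})
        return cr_id

gcm = GlobalChangeManager()
cr = gcm.handle_change_request("CHG1")
```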
After receiving the unique response reference number (e.g., CR2)508, customer portal150may invoke one or more policy changes512. The policy change512may include the change request reference number (e.g., CHG2) and a DNS entity ID of a controller complex node associated with the device(s) managed by co-management platform330. Using the DNS entity ID, API gateway310may forward policy change512as message514to co-management platform330. Co-management platform330may check if the change request reference number (e.g., CHG2) is valid. For example, co-management platform330may confirm that a corresponding change request reference number (e.g., CHG2) was received from GCM320in message510. Assuming the change request reference number is valid, co-management platform330may invoke the specific policy APIs for the device(s) under consideration (e.g., SDN devices138), as indicated by reference516. According to an implementation, message514may trigger multiple different API calls516. Based on either message510or message514, co-management platform330may implement an inactivity timer for the change reference number (e.g., CHG2). The inactivity timer may include a defined time period (e.g., 8 hours, 12 hours, etc.) to accommodate multiple change requests while the change reference number remains active. Upon expiration of the inactivity timer, as indicated by message518, co-management platform330may push the request payload of API call516and any responses, mapped to both the change request reference number and the unique response reference number (e.g., CHG2/CR2), to GCM320. For example, communications for reference518may be conducted over a message queue. Message518may additionally inform GCM320to close/deactivate both the change request reference number and the unique response reference number (e.g., CHG2/CR2). GCM320may receive message518and process the closure of both the change request reference number and the unique response reference number (e.g., CHG2/CR2).
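The inactivity-timer behavior described above can be sketched as follows: the change reference number stays active across multiple API calls, and only once the window elapses with no new activity are the accumulated payloads/responses pushed to the GCM and the reference numbers closed. Every name here (ActiveChangeRequest, record_api_call, maybe_close) is invented for illustration.

```python
# Hedged sketch, with hypothetical names, of the inactivity timer for an
# active change reference number (e.g., CHG2/CR2) as described above.
class ActiveChangeRequest:
    def __init__(self, chg_id, cr_id, inactivity_window):
        self.chg_id = chg_id
        self.cr_id = cr_id
        self.window = inactivity_window  # e.g., 8 or 12 hours, in seconds
        self.last_activity = 0.0
        self.transactions = []           # API payloads/responses to push later
        self.closed = False

    def record_api_call(self, now, payload, response):
        self.transactions.append((payload, response))
        self.last_activity = now  # each call restarts the inactivity window

    def maybe_close(self, now, push_to_gcm):
        if not self.closed and now - self.last_activity >= self.window:
            # Push payloads/responses, mapped to CHG/CR, then close both.
            push_to_gcm(self.chg_id, self.cr_id, self.transactions)
            self.closed = True
        return self.closed

pushed = []
req = ActiveChangeRequest("CHG2", "CR2", inactivity_window=8 * 3600)
req.record_api_call(now=0, payload={"policy": "deny"}, response="ok")
req.maybe_close(now=3600, push_to_gcm=lambda *a: pushed.append(a))      # still active
req.maybe_close(now=9 * 3600, push_to_gcm=lambda *a: pushed.append(a))  # window expired
```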
As indicated by references520and522, GCM320may send, via API gateway310, a response to customer portal150to close the API transaction requests. FIG.6is a diagram illustrating the separation of data plane and control plane functions for co-management platform330. As shown inFIG.6, co-management platform330may include a data plane605and a control plane610. Control plane610may retrieve configuration information from VNSP450, an OPMS635, and/or an OPMS/Provisioning system640. For example, as indicated at references658, control plane610may retrieve data from VNSP450to map a change request name (e.g., a customer short name) with a particular director (e.g., NF manager625) for impacted vendor equipment. As indicated by references660and662, control plane610may also receive an indication of order events (e.g., via a message bus) from OPMS635and/or obtain NF profiles for each customer and order details from OPMS/provisioning system640. As shown in message664, control plane610may push the vendor director information to data plane605, and data plane605may store a configuration file with the mapped customer short name. Thus, data plane605may store a configuration file that maps a customer short name to a particular vendor director for SDN equipment that is impacted by that change request. As further indicated by message666, control plane610may update the customer role profile as needed, for subsequent validation of incoming change requests. As described above in connection withFIG.5, a customer may initiate a policy change, which may cause GCM320to generate a transaction number (e.g., CR2) and provide the transaction number to control plane610. API gateway310may then provide a change request message670to data plane/API proxy605of co-management platform330. Change request message670may correspond, for example, to message514ofFIG.5.
Message670may include the previously assigned transaction number (e.g., CR2) and an API call with a customer ID, a network service category, and a user role based on input from customer portal150(not shown inFIG.6). According to an implementation, API gateway310may provide to customer portal150a list of available network service categories. The network service category may identify a type of service for enterprise network160, such as SDWAN, firewall, LAN, or WLAN, for which co-managed changes may be implemented. In response to change request message670, data plane/API proxy605may validate the access policy of the endpoint (e.g., the impacted SDN device168) via OPA615, as indicated at reference672. Additionally, data plane605may retrieve674from control plane610the transaction number (e.g., CR2) that GCM320will have previously assigned. Assuming the role from OPA615is validated, data plane605may map the customer short name to a vendor director IP address (e.g., for NF manager625) and provide an API call676to NF manager625to implement the intent of the change request in message670. Along with providing API call676, data plane605may initiate logging of the co-managed policy transaction. For example, data plane605may post a transaction status680to message bus620, which may be periodically retrieved682by control plane610. Transaction status680may include a record of API call676associated with the transaction number (e.g., CR2) and customer identifier. Control plane610may store684the transaction status in a local cache645along with an inactivity timer, for example. Periodically, or upon expiration of the inactivity timer, control plane610may forward compiled transaction records688to GCM320, which GCM320may use to provide a transaction status or closure (e.g., message520) to API gateway310. FIG.7is a flow diagram illustrating an exemplary process700for invoking a change using a co-managed configuration policy service, according to an implementation described herein.
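The data-plane/API-proxy lookup described above, i.e., an OPA-style role check followed by mapping the customer short name to the vendor director address, can be sketched as below. The function and parameter names (route_change, role_policy, director_map) are assumptions made for illustration only.

```python
# Minimal sketch, with invented names, of the proxy-side routing step:
# validate the caller's role against the service category (the OPA-style
# access check), then map the customer short name to the vendor director
# IP address stored in the configuration file pushed by the control plane.
def route_change(request, role_policy, director_map):
    """Return the vendor director address to call, or raise on a policy failure."""
    # OPA-style access check: does this role allow this service category?
    allowed = role_policy.get(request["role"], set())
    if request["category"] not in allowed:
        raise PermissionError("role not permitted for service category")
    # Customer short name -> vendor director IP address.
    director = director_map.get(request["short_name"])
    if director is None:
        raise LookupError("no vendor director for customer")
    return director

director_ip = route_change(
    {"role": "netadmin", "category": "firewall", "short_name": "acme"},
    role_policy={"netadmin": {"firewall", "sdwan"}},
    director_map={"acme": "10.0.0.5"},
)
```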
In one implementation, process700may be implemented by co-management service framework140. In another implementation, process700may be implemented by co-management service framework140in conjunction with one or more other devices in network environment100. Process700may include establishing parameters for a co-managed configuration service (block710), and activating user access for the co-managed configuration service (block720). For example, as described in connection withFIG.4, purchase quoting system410, order management system420, work order system430, ESP440, VNSP450, and resource orchestrator460may perform functions to configure the co-managed configuration service for a particular enterprise customer. The parameters define the access and scope of predefined network and security policies that can be managed by an enterprise network customer. As further described inFIG.4, once the parameters are provisioned, the customer may submit an enablement change request480to enable the co-managed configuration service. Process700may further include receiving a policy change request (block730), validating the policy change request (block740), and retrieving information for device-specific instructions (block750). For example, as described inFIGS.5and6, API gateway310may receive a customer request and provide message514to co-management platform330. Co-management platform330may validate that the role of the requesting customer has access to change the endpoint. Once validated, co-management platform330may map a customer short name to a vendor director IP address for the impacted SDN device168. Process700may additionally include invoking the policy change on the SDN device (block760) and logging a transaction record for the policy change (block770). For example, co-management platform330may use the mapped vendor director information to generate a vendor-specific API call and invoke the requested policy change.
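The sequence of blocks710-770 can be expressed as a simple ordered pipeline. This is a non-limiting illustration; each step name mirrors a block of process700, and the function body is a placeholder rather than an implementation of the blocks themselves.

```python
# Hedged sketch of process 700 as an ordered pipeline; step names mirror
# blocks 710-770 above, and the bodies are placeholders for illustration.
def process_700(change_request, steps_log):
    for step in (
        "establish_parameters",   # block 710
        "activate_user_access",   # block 720
        "receive_policy_change",  # block 730
        "validate_policy_change", # block 740
        "retrieve_device_info",   # block 750
        "invoke_policy_change",   # block 760
        "log_transaction_record", # block 770
    ):
        steps_log.append(step)    # placeholder for the block's actual work
    return "closed"

log = []
status = process_700({"chg": "CHG2"}, log)
```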
Using the transaction number assigned at activation, co-management platform330may generate a transaction log and store transaction records for back tracing, if necessary. The foregoing description of implementations provides illustration and description, but is not intended to be exhaustive or to limit the invention to the precise form disclosed. Modifications and variations are possible in light of the above teachings or may be acquired from practice of the invention. For example, while a series of blocks have been described with regard toFIG.7, and message/operation flows with respect toFIGS.4-6, the order of the blocks and message/operation flows may be modified in other embodiments. Further, non-dependent blocks may be performed in parallel. Certain features described above may be implemented as “logic” or a “unit” that performs one or more functions. This logic or unit may include hardware, such as one or more processors, microprocessors, application specific integrated circuits, or field programmable gate arrays, software, or a combination of hardware and software. To the extent the aforementioned embodiments collect, store or employ personal information provided by individuals, it should be understood that such information shall be used in accordance with all applicable laws concerning protection of personal information. Additionally, the collection, storage and use of such information may be subject to consent of the individual to such activity, for example, through well known “opt-in” or “opt-out” processes as may be appropriate for the situation and type of information. Storage and use of personal information may be in an appropriately secure manner reflective of the type of information, for example, through various encryption and anonymization techniques for particularly sensitive information. 
Use of ordinal terms such as “first,” “second,” “third,” etc., in the claims to modify a claim element does not by itself connote any priority, precedence, or order of one claim element over another, the temporal order in which acts of a method are performed, the temporal order in which instructions executed by a device are performed, etc., but are used merely as labels to distinguish one claim element having a certain name from another element having a same name (but for use of the ordinal term) to distinguish the claim elements. No element, act, or instruction used in the description of the present application should be construed as critical or essential to the invention unless explicitly described as such. Also, as used herein, the article “a” is intended to include one or more items. Further, the phrase “based on” is intended to mean “based, at least in part, on” unless explicitly stated otherwise. In the preceding specification, various preferred embodiments have been described with reference to the accompanying drawings. It will, however, be evident that various modifications and changes may be made thereto, and additional embodiments may be implemented, without departing from the broader scope of the invention as set forth in the claims that follow. The specification and drawings are accordingly to be regarded in an illustrative rather than restrictive sense. | 39,532 |
11863383 | DESCRIPTION OF THE EXAMPLE EMBODIMENTS Overview An exemplary method is disclosed that facilitates the on-demand creation of an exemplary instrumented network device in a cloud infrastructure (or remote server, evaluation platform, or customized testing server) and the formation of a stack (e.g., using a stacking mechanism) between the instrumented network device (as a debug network device) and a target network device (e.g., a physical network switch or switch fabric equipment that includes base debugging capabilities). The control plane of the target network device then switches over, via a switchover operation (e.g., SSO), to the control plane of the debug network device, while the data-plane of the target network device continues to operate in a hitless or near-hitless manner using a control-plane and data-plane transport operation that transports updates from the control plane of the debug network device to the data plane of the target network device. Once switched over, the instrumentation (e.g., hardware or software) of the instrumented control plane or debug network device facilitates the debugging, optimization, profiling, and/or recovery of the physical network device, even in a live network. The control plane of the target network device may be recovered, in some embodiments, with or without a reboot of the target network device. The target network device can be a standalone non-redundant physical system such as a stackable non-redundant switch. The exemplary method is not necessarily restricted to physical network devices and may be performed on non-physical network devices such as software-based switches. The stack, formed by (i) a cloud server or an instrumented network equipment or server and (ii) the target network device, would operate equivalently to, and function like, a high-availability (HA) system.
The instrumented network device provides additional and temporary hardware and software resources for the debugging and profiling of the physical network device (e.g., switch). Similarly, the debugging stack may be implemented in a test or laboratory environment to provide a more robust testing platform to debug or profile network equipment under design or testing. Rather than a traditional stack, the switchover operation and the control-plane and data-plane transport operation facilitate control-plane updates by the instrumented control plane of the debug network device to maintain the operation of the target network device, specifically the data plane of the target network device. The control plane of the debug network device may be instrumented or coupled to instruments on the debug network device to execute the control-plane function for the data plane of the physical network device and allow such control-plane functions, as well as data-plane and network functions, among others, to be evaluated by the instrumentation. The instrumentation of the control plane, and, in some embodiments, instrumented hardware, can additionally provide debugging and profiling information for data-plane hardware, firmware, middleware, etc. The debugging and profiling operation may be performed while the target network device continues to operate in a near-hitless manner. The debugged or profiled data can be used to adjust the configuration of the control plane, the data plane, or the network to address the issue in the production network. Once the debugging and/or profiling is completed, the exemplary method and system facilitate the restoration of the control-plane operation to the control plane of the target network device. The debugged or profiled data can also be used to indicate or develop patches or fixes (e.g., in the switch OS or applications) to be addressed in a subsequent release of the software.
The term “switchover” as used herein (and used herein interchangeably with the term Stateful Switchover Operation or SSO) generally refers to the manually or automatically triggered switching of operation from one network device to a redundant or standby network device (e.g., due to failure of a control-plane process) and is used in the instant embodiments in a similar manner for the control planes between the debug network device and the target network device. However, rather than the data plane of the target network device switching to that of the debug network device, the data plane of the target network device is maintained. To this end, during the switchover operation, the control-plane operations of the target network device are switched from the active mode to the standby mode while the control plane of the debug network device is switched from the standby mode to the active mode, and the data-plane operation continues at the target network device. The switchover operation may preferably rely on conventional and/or existing switchover mechanisms and programs to perform the various hand-shaking operations and the synchronization of control-plane states. The switchover mechanisms are coupled with other operations described herein, including virtualization mechanisms, cloud infrastructure, and/or control-plane and data-plane transport operation, among others. In some embodiments, the control plane states between the debug network device and the target network device may be synchronized using a synchronization operation such as those used in Stateful Switchover (SSO) operations or those used in high availability (HA) or In-Service Software Upgrade (ISSU) technologies.
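The switchover semantics just described, where the target's control plane moves from active to standby, the debug device's control plane moves from standby to active, and the target's data plane keeps forwarding throughout, can be sketched as a small state transition. The class and attribute names below are illustrative assumptions, not elements of the disclosure.

```python
# Hedged sketch of the switchover described above: control-plane roles
# swap between the target and debug devices, while the target's data
# plane is NOT switched and continues forwarding. Names are invented.
class ControlPlane:
    def __init__(self, name, mode):
        self.name, self.mode = name, mode  # mode: "active" or "standby"

def stateful_switchover(target_cp, debug_cp, target_data_plane):
    # Precondition: target is active, debug is the synchronized standby.
    assert target_cp.mode == "active" and debug_cp.mode == "standby"
    # Swap control-plane roles only; the data plane stays on the target.
    target_cp.mode, debug_cp.mode = "standby", "active"
    return target_data_plane["forwarding"]  # data plane keeps operating

target_cp = ControlPlane("target", "active")
debug_cp = ControlPlane("debug", "standby")
still_forwarding = stateful_switchover(target_cp, debug_cp, {"forwarding": True})
```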
The term “control-plane data-plane transport” operation (also referred to herein as “control-plane data-plane interface transport” operation) refers to a virtual transport layer (e.g., implemented by a control-plane data-plane interface transport module described herein, also referred to herein as a “virtual PCI” or “VPCI module”) that is executed in each of the target network device and the debug network device, at the respective edge of the data-plane and control-plane, to transport bus transactions, e.g., associated with control-plane updates and data-plane updates, between the data plane of the target network device and the instrumented control plane of the debug network device over a communication link, e.g., a network tunnel. The switchover operations and control-plane and data-plane transport operations may be used in a selective and temporary manner to provide the control-plane operations to the data-plane of the target network device undergoing debugging or profiling of its control plane image and states, including network states. To this end, the target network device can maintain hitless, or near hitless, operation for its data plane even when its control plane is unavailable (e.g., when being rebooted). The term “selective” is used herein to refer to the selective use of the additional hardware (virtualized or non-virtualized) as a proxy of the control plane of the active target network device. The term “temporary” is used herein to refer to the limited duration that the instrumented control plane is used, though, in some embodiments, it is contemplated that the exemplary systems and methods described herein can be used in an on-going manner, say, to monitor the operation of the target network device. The term “data plane” (and data-plane) generally encompasses data-plane processor(s) and data-plane resource(s) configured to route packets from one port of the physical network device to another port.
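The transport idea above, serializing bus transactions produced by the remote control plane, carrying them over a tunnel, and replaying them against the local data plane as if they were local writes, can be sketched as follows. The encoding, register names, and function names are all hypothetical; the disclosure does not specify a wire format.

```python
# Hedged sketch of the VPCI-style transport: one side serializes a
# control-plane bus transaction for the tunnel; the peer decodes it and
# applies it to a data-plane resource (here, a plain dict of "registers").
# JSON is an arbitrary stand-in for whatever framing an implementation uses.
import json

def encode_bus_txn(register, value):
    # Serialize one control-plane update for transport over the tunnel.
    return json.dumps({"op": "write", "reg": register, "val": value}).encode()

def apply_bus_txn(frame, data_plane_regs):
    # Peer-side module: decode and apply to the data-plane resource,
    # as if it were a local bus write.
    txn = json.loads(frame.decode())
    if txn["op"] == "write":
        data_plane_regs[txn["reg"]] = txn["val"]
    return data_plane_regs

regs = {}
frame = encode_bus_txn("fib_entry_42", "10.1.0.0/16->port3")
apply_bus_txn(frame, regs)
```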
Data-plane processor (also referred to herein as data-plane devices) can include processing units involved in the switching and/or routing of packets in the physical network device such as network processors (NPUs), switching-ASICs (application-specific integrated circuit), switching FPGA (field-programmable gate array), CPLD (complex programmable logic device), and the like. Examples of data-plane resources may include, but are not limited to, MAC address table(s), FIB table(s), ACL table(s), and any other tables, register contents, content addressable memory (CAM) contents, ternary content-addressable memory (TCAM) contents, binary content-addressable memory (BCAM) contents, and memory contents (e.g., non-persistent, volatile, etc.) maintained or used by data-plane processors. The term “control plane” (and control-plane) generally refers to a group of functions and associated control packets or traffic that involve the configuration and management, protocol state-machine, state of the switch, etc., and is usually implemented in a host processor of a switch. Examples of such traffic may include Spanning Tree Protocol (STP), Hot Standby Router Protocol (HSRP), and control packets that are destined to the network device such as a switch, or sent from the network device or application layer protocols such as Secure Shell (SSH) and Simple Network Management Protocol (SNMP) that are typically handled by the host processor. The term “host processor”, as used herein, is used interchangeably with the term “host CPU” and generally refers to cores of a microprocessor or microcontroller, e.g., having RISC or CISC architecture, of a network device or equipment of interest, e.g., of a physical network device (e.g., switch), that are configured to execute computer instructions within the framework of an operating system in a networking device. 
In some embodiments, the debug network device is instantiated in a cloud or remote infrastructure (e.g., cloud servers or private servers). In some embodiments, rather than cloud or remote infrastructure, the stateful switchover operation is executed with a local computing device or a second physical network device with greater computing resources as compared to the physical network device or equipped with instrumentation. In some embodiments, the local computing device or the second physical network device has equivalent (or even less) processing resources as compared to the target network device but is executing only a subset of the system image or application executing on the target network device to provide those comparatively free resources for instrumentation. Indeed, the debug network device does not require its own data-plane components (though it may include them) and generally only needs to have the computing capability to manage an instrumented control-plane for the target network device (or is equipped with hardware instrumentation). In some embodiments, the debug network device may be implemented across multiple cores or processing units, e.g., using hyperthreading capabilities. In some embodiments, the debug network device is configured with the equivalent computer readable instructions associated with the system image (e.g., comprising the operating system and routing application) as that of the physical network device, but instrumented via external software with debugging or profiling operations to execute concurrently with the system image or control plane application. In such embodiments, the modularity also ensures that the stateful switchover operation can be performed for any system image as generated through current development processes and workflow without need for customization or modification of the system image for it to be used for this purpose.
In some embodiments, the software image and control-plane can be run on top of a virtual machine (VM) or virtual host, e.g., based on virtualization or containerization technology, which are used interchangeably herein. The term “stackable switch” (including “stackable non-redundant switch”) refers to a network switch that is fully functional when operating as a standalone device, but which can also be set up to cooperatively operate together with one or more other network switches as a group in which the group can be configured to show the characteristics of a single switch. Indeed, the stack may have a common set of one or more IP addresses for remote administration of the stack as a whole. Stackable switches, and similar classes of devices, can provide redundant operation for individual switches in switchover operations. The term “non-stackable switch” refers to a switch configured to operate as a standalone device. Stackable and non-stackable switches may be configured with modules to perform stateful switchover operations as described herein. In another aspect, the exemplary method (and associated system) is configured to establish a debugging cloud or remote infrastructure for a network device.
The method (of the system) comprises instantiating, at a remote or cloud computing device (e.g., cloud computing platform, evaluation test platform of a target switch, custom computing server), a debug network device with an operating image (e.g., non-instrumented or instrumented production image) of a target network device (e.g., a non-redundant switch or a redundant switch), wherein the debug network device is configured, by software instructions or hardware, to execute one or more debugging processes not executing on the target network device, and wherein the target network device comprises a first control plane and data plane that is receiving and forwarding network traffic in a network; joining the debug network device and the target network device in a stack configuration to synchronize states of the first control plane of the target network device to a second control plane of the debug network device, wherein the first control plane of the target network device is initially executing in an active stacked configuration, and wherein the second control plane of the debug network device is initially executing in a standby stacked configuration; and triggering a switchover operation, wherein the switchover operation switches the first control plane of the target network device from the active stacked configuration to the standby stacked configuration and disconnects the first control plane from updating the data plane of the target network device, wherein the switchover operation switches the second control plane of the debug network device from the standby stacked configuration to the active stacked configuration, and wherein the second control plane in the active stacked configuration is connected to the data plane over a network connection and updates the data plane, wherein the one or more debugging processes are operatively executed concurrently with the second control plane of the debug network device to evaluate at least one of: i) said second control 
plane, ii) a hardware or firmware configuration of the target network device, and iii) the network (e.g., to profile hardware, firmware, and/or timing characteristics of a live or non-live network device, to profile protocol operations of the network or timing of network devices therein). In some embodiments, the method includes updating the data plane of the target network device via the second control plane of the debug network device using a control-plane and data-plane transport operation. In some embodiments, one or more processes of the debug network device are used to restore the target network device from an error or invalid state associated with the data plane and/or the control plane (e.g., ASIC configuration, hardware configuration, firmware configuration, hardware-software interoperation configuration) (e.g., to recover and restore problematic switches in a live network). In some embodiments, the one or more debugging processes are associated with at least one of an analysis tool (e.g., a network, ASIC, custom, or system analysis tool, e.g., Valgrind, Callgrind), a call graph analyzer, a memory analyzer, and a cache profiler. In some embodiments, instantiating the debug network device is performed dynamically (e.g., automatically or manually to debug a problematic target network device), the method further comprising deleting the second control plane of the debug network device (e.g., at the remote or cloud computing device) following the evaluation and restoration of the target network device. In some embodiments, the one or more debugging processes are used to profile the target network device or the network to optimize the data plane configuration and/or control plane configuration of the target network device (e.g., ASIC configuration, hardware configuration, firmware configuration, hardware-software interoperation configuration).
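The three phases of the method, instantiating the debug device from the target's operating image, joining the stack to synchronize control-plane state, and triggering the switchover, can be condensed into one compact sketch. The function and field names here are assumptions for illustration; real orchestration would involve the image server, stacking protocol, and SSO machinery described elsewhere in this description.

```python
# Hedged, hypothetical orchestration of the claimed phases: instantiate
# a debug device with the target's operating image, synchronize
# control-plane state via the stack join, then switch control-plane roles.
def debug_session(target):
    # Phase 1: instantiate the debug device with the target's operating image.
    debug = {"image": target["image"], "cp_state": None, "mode": "standby"}
    # Phase 2: join in a stack configuration; synchronize control-plane state.
    debug["cp_state"] = dict(target["cp_state"])
    # Phase 3: switchover -- debug control plane becomes active, target
    # goes standby (the target's data plane keeps forwarding throughout).
    target["mode"], debug["mode"] = "standby", "active"
    return debug

target = {"image": "switch-os-17.3",        # hypothetical image name
          "cp_state": {"stp": "forwarding"},
          "mode": "active"}
debug = debug_session(target)
```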
In some embodiments, the one or more debugging processes are used to profile the target network device or the network to optimize network operation, a network policy, or a network configuration. In some embodiments, the second control plane in the active stacked configuration provides near-hitless operation for the target network device as the second control plane of the target network device is being debugged by the one or more debug processes executing at the debug network device. In some embodiments, the operating image comprises a non-instrumented production image. In some embodiments, the target network device comprises a non-redundant switch. In some embodiments, the target network device comprises a redundant switch. In some embodiments, the joining operation is performed via stacking technologies. In some embodiments, the switchover operation is performed via stateful switchover (SSO) operation or via that which is included in at least one of high availability (HA) operation or in-service software upgrade (ISSU) technologies. In some embodiments, the debug network device further comprises an instrumented data plane comprising data plane components and additional instrumentation to access the data plane components, and wherein the one or more debugging processes or the additional instrumentation are configured to transmit one or more debug packets into the data plane component, and wherein the instrumentation is configured to provide logging and/or analysis of said one or more debug packets. In some embodiments, the remote or cloud computing device comprises at least one of a cloud server, a remote server, an evaluation platform for the target network device, and a custom computing server comprising one or more debugging or evaluation sub-systems (e.g., FPGA cards, GPU cards, RTL emulators, hardware accelerators, PCIe analyzers). 
In some embodiments, instantiating the debug network device comprises: retrieving the operating image of the target network device from an image server; and orchestrating a virtualized environment (e.g., container or virtualization) with an operating system and environment using the retrieved operating image (e.g., wherein the operating image can be i) the same as that of the target switch or ii) an instrumented operating system that is equivalent to the same). In some embodiments, the method further includes establishing a tunnel connection or a direct connection between the debug network device and the target network device, wherein the tunnel connection or the direct connection is used as the network connection for the debug network device to update the data plane of the target network device. In another aspect, a system (e.g., non-redundant switch, controller (e.g., DNAC, SDN controller), remote/cloud system, remote terminal) is disclosed comprising a host processor; and a memory having computer readable instructions, wherein execution of the computer readable instructions causes the host processor to: instantiate, at a remote or cloud computing device (e.g., cloud computing platform, evaluation test platform of a target switch, custom computing server), a debug network device with an operating image (e.g., non-instrumented or instrumented production image) of a target network device (e.g., a non-redundant switch or a redundant switch), wherein the debug network device is configured, by software instructions or hardware, to execute one or more debugging processes not executing on the target network device, and wherein the target network device comprises a first control plane and data plane that is receiving and forwarding network traffic in a network; join the debug network device and the target network device in a stack configuration (e.g., via a stacking protocol) to synchronize states of the first control plane of the target network device to a second control 
plane of the debug network device, wherein the first control plane of the target network device is initially executing in an active stacked configuration, and wherein the second control plane of the debug network device is initially executing in a standby stacked configuration; and trigger a switchover operation (e.g., stateful switchover (SSO) operation), wherein the first control plane of the target network device is switched from the active stacked configuration to the standby stacked configuration and disconnected from updating the data plane of the target network device, and wherein the second control plane of the debug network device is switched from the standby stacked configuration to the active stacked configuration and connected, over a network connection, to update the data plane of the target network device, wherein the one or more debugging processes is operatively executed concurrently with the second control plane of the debug network device to evaluate at least one of: i) said second control plane (which reflects the operation of the first control plane), ii) a hardware or firmware configuration of the target network device, and iii) the network. In some embodiments, the instructions further cause the host processor to update the data plane of the target network device via the instrumented control plane using a control-plane and data-plane transport operation. In some embodiments, the instructions, when executed by the host processor of the debug network device, further cause the host processor to execute the one or more debugging processes to restore the target network device from a detected error or invalid state associated with the data plane and/or control plane of the target network device (e.g., ASIC configuration, hardware configuration, firmware configuration, hardware-software interoperation configuration) (e.g., to recover and restore problematic switches detected in a live/production network). 
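The join-then-switchover sequence recited above can be illustrated with a minimal Python sketch. This is not the claimed implementation; the class names, the `routes` state entry, and the role strings are hypothetical stand-ins for the stacked roles and synchronized control-plane state described in the claim:

```python
from dataclasses import dataclass, field

ACTIVE, STANDBY = "active", "standby"

@dataclass
class ControlPlane:
    name: str
    role: str = STANDBY                        # stacked role: "active" or "standby"
    state: dict = field(default_factory=dict)  # synchronized control-plane state

@dataclass
class Stack:
    """Toy model of the stacked configuration: a first (target) control
    plane and a second (debug) control plane."""
    target: ControlPlane  # first control plane, on the target switch
    debug: ControlPlane   # second control plane, on the debug device

    def join(self):
        # Target starts active; the debug device joins as standby and
        # receives a copy of the target's control-plane state.
        self.target.role, self.debug.role = ACTIVE, STANDBY
        self.debug.state = dict(self.target.state)

    def switchover(self):
        # Stateful switchover: roles swap, and the already-synchronized
        # second control plane takes over updating the target's data plane.
        self.target.role, self.debug.role = self.debug.role, self.target.role

target = ControlPlane("target-switch", state={"routes": 12000})
debug = ControlPlane("debug-device")
stack = Stack(target, debug)
stack.join()
stack.switchover()
```

After the switchover, the debug-side control plane holds the active role while the target's control plane sits in standby, mirroring the claimed configuration.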
In some embodiments, the system (e.g., via a host processor of the target network device or a second processor unit or logic circuit) is configured to: read a set of bus-interconnect transactions (e.g., originating from the data plane) from a bus interconnect of the target network device and transmit the set of bus-interconnect transactions (e.g., via the control-plane and data-plane transport operation) as a set of data-plane transaction messages to the debug network device over a network interface, wherein the debug network device is configured to use the set of data-plane transaction messages to write (e.g., via the control-plane and data-plane transport operation) the set of bus-interconnect transactions to the bus interconnect or to a host processor of the debug network device to update control plane states maintained by the debug network device; and write (e.g., via the control-plane and data-plane transport operation) a second set of bus-interconnect transactions to the bus interconnect of the target network device based on a second set of data-plane transaction messages received from the debug network device over the network interface, wherein the second set of bus-interconnect transactions updates a portion of a plurality of data-plane-associated tables of the target network device. 
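The read-transmit-replay flow above (bus-interconnect transactions carried as data-plane transaction messages) can be sketched as a simple serialize/replay pair. The JSON encoding, field names, and transaction shapes are illustrative assumptions, not the disclosed wire format:

```python
import json

def encode_transactions(txns):
    """Serialize bus-interconnect transactions (read at the target device)
    into data-plane transaction messages for transport over the network."""
    return [json.dumps(t).encode() for t in txns]

def decode_and_apply(messages, bus_write):
    """At the receiving device, decode each message and replay the
    transaction onto the local bus interconnect via `bus_write`."""
    for msg in messages:
        bus_write(json.loads(msg.decode()))

# Hypothetical transactions originating from the target's data plane.
txns = [{"addr": 0x1000, "op": "write", "value": 255},
        {"addr": 0x2000, "op": "read"}]

replayed = []
messages = encode_transactions(txns)
decode_and_apply(messages, replayed.append)
```

The same pair would run in the reverse direction for the second set of transactions that the debug device writes back to the target's data-plane-associated tables.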
In another aspect, a computer-readable medium is disclosed with instructions stored thereon, wherein execution of the instructions by a processor (e.g., of a non-redundant switch, controller (e.g., DNAC), remote/cloud system, TAC remote device) causes the processor to: instantiate, at a remote or cloud computing device (e.g., cloud computing platform, evaluation test platform of a target switch, custom computing server), a debug network device with an operating image (e.g., non-instrumented or instrumented production image) of a target network device (e.g., a non-redundant switch or a redundant switch), wherein the debug network device is configured, by software instructions or hardware, to execute one or more debugging processes not executing on the target network device, and wherein the target network device comprises a first control plane and data plane that is receiving and forwarding network traffic in a network; join the debug network device and the target network device in a stack configuration (e.g., via a stacking protocol) to synchronize states of the first control plane of the target network device to a second control plane of the debug network device, wherein the first control plane of the target network device is initially executing in an active stacked configuration, and wherein the second control plane of the debug network device is initially executing in a standby stacked configuration; and trigger a switchover operation (e.g., via high availability (HA) operations such as in-service software upgrade (ISSU) operations or stateful switchover (SSO) operation) wherein the first control plane of the target network device is switched from the active stacked configuration to the standby stacked configuration and disconnected from updating the data plane of the target network device, and wherein the second control plane of the debug network device is switched from the standby stacked configuration to the active stacked configuration and connected, over a network 
connection, to update the data plane of the target network device, wherein the one or more debugging processes is operatively executed concurrently with the second control plane of the debug network device to evaluate at least one of: i) said second control plane (reflecting the first control plane), ii) a hardware or firmware configuration of the target network device, and iii) the network. In yet another aspect, the exemplary system and method facilitate the profiling and debugging of a target system in a live network without affecting performance, throughput, and/or functionality of the target device. It can also be done in a hit-less manner (both for the control-plane and data-plane). In yet another aspect, the exemplary system and method enhance the debugging and profiling capability of a target system (for a short or extended time) and provide this capability on-demand by instantiating an instrumented image in the cloud or remote server. In yet another aspect, the exemplary system and method facilitate an on-demand creation of a control-plane on a debug switch (e.g., executing in the cloud or remote server) which can be executing a different software image (instrumented or otherwise) that can control the target node without altering the expected behavior and functionality of the target node. In other words, the debug switch may execute a different software image, but one that is functionally equivalent, and can do so dynamically and in a hit-less, or near hit-less, manner. In yet another aspect, the exemplary system and method facilitate an on-demand creation of a data-plane and the dynamic linkage and association of that data-plane to a debugging or profiling environment that may include various hardware accelerators, emulators, data models, etc., which can be used to debug/profile the data-plane (hardware, software, and/or middleware). 
In yet another aspect, the exemplary system and method facilitate a near-hitless recovery of a problematic system within a live network. The recovery can be performed without requiring a reboot and without network interruption or disruption. In yet another aspect, the exemplary system and method facilitate operation of a debugging switch (as a shadow system) configured, in some embodiments, to run in passive mode in which the debug switch, when in the standby role, receives the same set of control-plane updates, messages, and signals as the target node (which remains in the active role). In yet another aspect, the exemplary system and method are configured to employ orchestration mechanisms such as through a network controller (e.g., DNAC) or other controller or centralized control tools. In yet another aspect, the exemplary system and method are configured to provide protection against system failures during a live debug session of a target node. In such embodiments, when the active control-plane crashes, the standby control-plane of the other device is configured to switch over without manual intervention and take over operation of the data plane of the physical network device. The switchover is performed without any interruption to system operations. Indeed, failover operation from an active device to a standby device can be performed via the SSO mechanism, whether from the debug network device to the target network device, or vice versa. In yet another aspect, the exemplary system and method are configured to execute a debugging switch that runs a control-plane that updates the data-plane by at least one of (i) connecting to the data-plane in the target node, (ii) instantiating a separate data-plane locally using software models and/or RTL models that run on the same machine as the control-plane, and (iii) implementing a hardware data-plane within the virtual debug switch itself. 
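The automatic-failover protection described above (standby takes over without manual intervention when the active control plane crashes) can be sketched as a heartbeat watchdog. The class name, the heartbeat mechanism, and the miss threshold are illustrative assumptions rather than the disclosed SSO internals:

```python
class FailoverWatchdog:
    """Sketch of automatic failover: if the active control plane misses
    several consecutive heartbeats (e.g., it crashed during an intrusive
    debug operation), the standby control plane is promoted via the same
    switchover (SSO-style) mechanism, with no manual intervention."""

    def __init__(self, max_missed=3):
        self.max_missed = max_missed
        self.missed = 0
        self.events = []  # record of switchover actions taken

    def heartbeat(self, received: bool):
        # Reset the miss counter on a good heartbeat; otherwise count the
        # miss and, at the threshold, trigger the switchover.
        self.missed = 0 if received else self.missed + 1
        if self.missed >= self.max_missed:
            self.events.append("switchover: standby -> active")
            self.missed = 0

wd = FailoverWatchdog(max_missed=2)
for beat in (True, False, False):  # active crashes: two missed heartbeats
    wd.heartbeat(beat)
```

In the disclosed system this logic would run symmetrically, so failover works from the debug device back to the target device as well as the reverse.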
Example System 
FIG. 1 shows a diagram of a debug network device 102 that is created on-demand to form a stack 103 with a target network device 104 located in a network 106. The debug network device 102 executes a control plane 108 corresponding to the control plane 110 of the target network device 104 to facilitate the debugging, profiling, or recovery of the target network device 104. The control plane 108 of the debug network device 102 may be an instrumented control plane—that is, having instrumentation code—or the debug network device 102 may include additional hardware instrumentation. The stack 103 between the debug network device 102 and the target network device 104 is formed using a stacking mechanism or stacking protocol. The stacking operation synchronizes (e.g., via bulk synchronization and subsequent incremental synchronization) the various system states of the target network device 104 to the debug network device 102, which is instrumented for debugging and/or profiling. When the stack 103 is formed, the debug network device 102 is connected to the target network device over a communication link 116 (shown as network connection 116a or direct connection 116b) established between the target network device 104, e.g., at a port of the data plane 114, and a port 118 of the debug network device 102. In the stack 103, the target network device 104 is initially put in an active mode while the debug network device 102 is put in standby mode. In the active mode, the control plane 110 of the target network device 104 services the data plane 114 of the target network device 104. Once both the target network device 104 and the debug network device 102 are synchronized in state, a switchover operation is triggered to put the control plane 108 of the debug network device 102 in active mode, and it then takes over the role of servicing the data plane 114 of the target network device 104. 
Notably, following the switchover operation, the data plane 114 maintains operation throughout the debugging or profiling session—thus, from the perspective of the network and peer network devices, there is no change resulting from the switchover operation. While in this active state, the debug network device 102 (e.g., via the instrumented control plane, hardware instrumentation, or both) provides instrumentation operations to debug or profile the control plane operation, the data plane operation, and the network operation of the stack and its subcomponents. Most manufacturers have their own proprietary switchover technologies. Examples of switchover operations include stateful switchover (SSO) operation or other such operations as employed in high availability (HA) operations or in-service software upgrade (ISSU) technologies. For the control plane 108 of the debug network device 102 to service the data plane 114 of the target network device 104, a virtual transport layer may be implemented, e.g., by a control-plane data-plane interface transport module (also referred to herein as a "virtual PCI" or "VPCI module" and shown in FIG. 1 as a "control plane interface/data plane interface transport" (CPI/DPI Transport) 120 and 122). The virtual transport layer, in some embodiments, is executed at each of the target network device 104 and the debug network device 102, at the respective edges of the data-plane and control-plane, to transport bus transactions 126, e.g., associated with control-plane or data-plane updates, between the data plane 114 of the target network device 104 and the control plane 108 of the debug network device 102 over a communication link (116a or 116b). Further description of an example control-plane-data-plane transport module is provided in U.S. Patent Appl. No. 17/29559, filed Dec. 2, 2020, which is incorporated by reference herein in its entirety. 
The debug network device 102 can be one of different types of machines (e.g., shown in FIGS. 4A, 4B, and 4C) that executes a debug/instrumented image corresponding to the software release running on the target network device 104 (or includes instrumentation hardware). In FIG. 4A, the debug network device 102 (shown as 102a) is a cloud or remote server. In FIG. 4B, the debug network device 102 (shown as 102b) is an evaluation switch. In FIG. 4C, the debug network device 102 (shown as 102c) is a debugging/evaluation server. In some embodiments, the debug network device 102 includes additional instrumentation or simulators 124. The debug network device 102, generally being a high-end computing machine running an instrumented control plane of that of the target network device or instrumented with hardware for debugging/profiling, is configured to execute functionally equivalent operations of the control plane 110 of the target network device 104 while in an active mode of the stack 103, thus enabling profiling/debug data to be collected in any environment, including in a production network. Once the debug session is completed, the debug network device 102 reverses its temporary role in the stack 103 with the target network device 104, the control plane 110 of the target network device 104 is put in the active mode in the stack 103, and the debug network device 102 may then be disconnected and the instrumented control plane 108 deleted. In some embodiments, the debug network device 102 is implemented in a data center, public cloud, or similar environment using general or custom-designed computing hardware and software. Each product line for a given device manufacturer may be instrumented using these general and custom-designed resources. Technical assistance center (TAC) staff and field engineers may use one of the nodes on an on-demand basis to debug a target network device and release it when done. 
In other embodiments, the debug network device 102 is implemented in a standalone computing platform (e.g., another network device or a custom computing platform) that may be brought onto the site of a given network, to which technical assistance center (TAC) staff and field engineers can link the target network device through a network connection or direct connection to perform the debug/profiling operations described herein. In some embodiments, the debug network device 102 includes instrumentation 124 comprising debugging hardware, test boards, and simulators to operate in conjunction with the instrumented control plane 108. Instrumentation 124 may include commercial off-the-shelf or custom-built debugging hardware, test boards, simulators, and software. 
Method to Establish a Debug Network Device 
FIG. 2 is a diagram showing an exemplary method 200 to establish a debug network device 102 in accordance with an illustrative embodiment. The debug network device 102 is first instantiated (202) with the control plane 108. In some embodiments, the instantiation operation entails loading, or directing the loading of, a system image of the target network device onto the debug network device. The loaded system image may be instrumented, or instrumentation software may be executed, in an application space of the debug network device. In some embodiments, the debug network device 102 includes debugging or profiling hardware, which may be instantiated during process 202. The method 200 then includes establishing (204) a stack 103 between the debug network device 102 and the target network device 104. The control plane 110 of the target network device is initially designated to be in the active mode of the stack 103, and the control plane 108 of the debug network device is in the standby mode. A connection 116, in some embodiments, is established (204) between the debug network device 102 and the target network device 104. 
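The instantiation step (202), loading the target's system image into a virtualized environment, might be driven by container tooling. The sketch below only builds command lines and does not execute them; the use of `docker`, the registry host, and the image tag are all hypothetical choices, since the disclosure does not prescribe a specific orchestrator:

```python
def instantiation_commands(image_server, image_name, container="debug-switch"):
    """Build illustrative shell commands for step 202: pull the target's
    operating image from an image server and launch it in a container
    that will host the second (debug-side) control plane."""
    image = f"{image_server}/{image_name}"
    return [
        ["docker", "pull", image],
        ["docker", "run", "--detach", "--name", container,
         "--cap-add=NET_ADMIN",  # control-plane processes need network-config rights
         image],
    ]

cmds = instantiation_commands("images.example.com", "switch-os:17.3-instrumented")
```

In practice the same two steps could equally be realized with any VM or container orchestrator available at the remote or cloud computing device.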
The connection 116 (shown as 116a), in some embodiments, is a network tunnel established over the network 106 (shown as 106a) between the debug network device 102 and the target network device 104. In other embodiments, the connection 116 (shown as 116b) is established as a direct communication link between the debug network device 102 and the target network device 104. A direct communication link may include, but is not limited to, a direct serial communication link such as an Ethernet cross-over cable, Ethernet, USB, FireWire, an SVL link, Sonet/SDH, Frame Relay, X.25, T1/E1, and the like, that can sufficiently provide control plane and data plane updates. A direct communication link can also refer to wireless links such as WiFi (e.g., 802.11), LTE 4G/5G, WiMAX, and the like. After the formation of the stack 103, bulk synchronization, and/or incremental synchronization, of the stack protocol is performed to synchronize the states (e.g., control plane states) of the target network device 104 (which is in active mode) with the debug network device 102 (which is in standby mode). Once the states of both the target network device 104 and the debug network device 102 are synchronized, the method 200 includes switching (206), in a switchover operation, the instrumented control plane 108 of the debug network device 102 from the standby mode in the stack to the active mode, and switching the control plane of the target network device from the active mode to the standby mode. During and after the switchover operation (206), the data plane operation (shown via data plane 114) of the target network device 104 is maintained, and the target network device continues to actively route any traffic that it receives to the appropriate network node. 
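The two synchronization phases mentioned above, a bulk copy followed by incremental updates, can be sketched as follows. The dictionary representation of control-plane state and the example keys are illustrative assumptions, not the stack protocol's actual data model:

```python
def bulk_sync(active_state):
    """Phase 1: copy the full control-plane state of the active device
    to the standby device when the stack is first formed."""
    return dict(active_state)

def incremental_sync(standby_state, updates):
    """Phase 2: apply subsequent incremental state changes as they occur
    on the active device, keeping the standby device switchover-ready."""
    for key, value in updates:
        standby_state[key] = value
    return standby_state

# Hypothetical control-plane state on the active (target) device.
active = {"mac-table": ["aa:bb"], "route-count": 12000}
standby = bulk_sync(active)
standby = incremental_sync(standby, [("route-count", 12001)])
```

Keeping the standby copy current through phase 2 is what allows the switchover (206) to proceed without loss of control-plane state.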
Concurrent with the continued data plane operation (by the data plane 114) of the target network device 104, the control plane 108 of the debug network device 102, while in the active mode, takes over the control plane operation of the target network device 104 and provides, or services, any control plane updates (as well as management plane updates) to the data plane 114, e.g., via a virtual transport operation over the connection 116. At this point, the instrumented control plane 108 is established and operating in active mode to control the target network device 104. The method 200 may then include executing (210) debugging and/or profiling operations at the instrumented control plane 108 of the debug network device 102. While the debugging or profiling operations are ongoing, as noted above, the control plane 108 of the debug network device 102 continues to provide control plane updates to the data plane 114 of the target network device 104. In some embodiments, the instrumented control plane 108 may generate a notification to a user that the control plane 108 of the debug network device is in a debug-ready state. In some embodiments, the notification may be a message in a command line interface or a status window. During step 210, the debug network device 102 may be triggered to execute a profiler (e.g., run-time profilers or debuggers) such as cache simulation, branch predictors, call-graph analyzers, Valgrind, etc. The (instrumented) control plane 108 may generate trace-logs or execute memory analysis tools such as memory leak detectors, memory profilers, dynamic analysis tools, memcheck, and the like. The debugging operation may include adding instructions or commands to the execution of the system image or control plane application. The debugging operation may be performed over the course of a few hours, and the active mode is then switched over to the control plane 110 of the target network device 104 at the end of the debugging session. 
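Step 210 might wrap a control-plane process in one of the named off-the-shelf tools. The helper below only assembles a Valgrind command line (`--tool` and `--log-file` are real Valgrind options); the daemon name and config file are hypothetical:

```python
def profiler_command(tool, target_cmd, log_file):
    """Assemble a Valgrind invocation of the kind step 210 contemplates,
    e.g., memcheck for leak detection or callgrind for call-graph
    profiling, wrapped around a control-plane process."""
    return ["valgrind", f"--tool={tool}", f"--log-file={log_file}", *target_cmd]

# Hypothetical control-plane daemon and configuration file.
cmd = profiler_command("callgrind",
                       ["./control-plane-daemon", "--config", "cp.conf"],
                       "callgrind.log")
```

On the debug network device, the resulting command could be launched in the application space of the loaded system image, and the log file collected for offline analysis.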
In some embodiments, the debugging operation is maintained for an extended period of time (e.g., left running overnight and/or over the course of a few days or even weeks, e.g., for profiling). Concurrent with, prior to, or after the debugging operation 210, the control plane 108 of the debug network device 102 is configured to update (208) the control plane or system status via the control-plane data-plane interface transport module. Indeed, the debug network device 102 may take over the control plane operation of the target network device 104 in a hitless or near-hitless manner—that is, without disruption to its switching and protocol operations. 
Method to Debug/Profile Using the Debug Network Device 
FIGS. 3A, 3B, and 3C each show a diagram illustrating an exemplary method 300 (shown as 300a, 300b, and 300c, respectively) to perform debugging or profiling operations using the debug network device 102 in accordance with various illustrative embodiments. FIGS. 3A and 3C each show an example of operations to establish and debug the target network device 104 using the debug network device 102 in accordance with an illustrative embodiment. FIG. 3B shows another debugging operation in which the target network device 104 may be restored without network interruption or disruption in accordance with an illustrative embodiment. Notably, FIG. 3A provides an example operation that illustrates the benefit of this technology in facilitating the collection and/or profiling of control plane, data plane, and network operations of a target network device in its live network. FIG. 3B provides for the same (i.e., debugging capabilities) and further illustrates that the collection and/or profiling can be disruption-less to the target network device via use of a second switchover operation to restore the control plane of the target network device as the active device. FIG. 3C provides another, though more limited, debugging operation using passive mode. 
Debugging Operation Example #1. 
InFIGS.3A and3B, the method300a,300b,300ceach includes establishing and executing (shown via202-208) a debug network device102having a control plane108(e.g., an instrumented control plane or the debug network device is equipped with instrumentation hardware) in active mode to the target network device104and is performing a set of debugging operations, as for example described in relation toFIG.2. InFIG.3A, following the debug network device102being initialized and controlling the data plane114of the target network device104(steps202-208) and the debugging operation (step210) being performed, the method300afurther includes adjusting (302) a configuration of the control plane of the target network device using the acquired debug/profile data. The adjustment (302) may be to a configuration of the control plane configuration, the data plane, a network setting, or any of the configurable or reprogrammable features of the target network device, including ASICS and various hardware of the target network device104. In some embodiments, the adjustment (302) includes adjusting the designated boot-up system image of the target network device104as an upgrade to the control plane108. The method300athen includes rebooting (304) the control plane110of the target network device104, and the control plane110booting up without the stack configuration between the target network device104and the debug network device102. Upon being loaded, the target network device104resumes its operation with the bug or misconfiguration issue addressed. Additional debugging and profiling operations may be repeated until a desired result has been achieved. Following or concurrent with the reboot operation (304), the control plane108of the debug network device102may be disabled and, in some instances, deleted (306). 
During this closing process (306), in some embodiments, the control plane 108 of the debug network device 102 and/or its configurations may be stored for later retrieval, e.g., for analysis or usage. Indeed, method 300a may be performed while the target network device 104 continues to operate in a live environment with some, though minimal, disruption to the network 106 (i.e., the time to reboot the target network device). The methods 200, 300 may be similarly performed on a target network device 104 executing in a controlled testing or laboratory environment (e.g., during the design and/or testing of a network device). 
Debugging Operation Example #2—Recover Problematic Switches with Effectively No Service Impact. 
In some embodiments, to recover a network device with minimal service impact, the control plane 110 of the target network device 104 may be directed to resume active mode, and recovery can occur (with or without a reboot) with few or no service disruptions by the target network device 104 to the network 106. In FIG. 3B, following an adjustment (step 302) of the configuration of the control plane 110 of the target network device 104, for example, as described in relation to FIG. 3A, the method 300b further includes switching (308), via a second switchover operation, the instrumented control plane 108 from the active mode in the stack to the standby mode, and switching the control plane of the target network device from the standby mode to the active mode. Indeed, the control plane operation is now restored on the target network device 104 without any network disruption. Prior to the second switchover operation (step 308), the control plane 110 of the target network device 104 may be re-booted and reconnected to the stack, still in the standby mode. Then, the second switchover operation (308) can occur to put the control plane 110 of the target network device 104 in the active mode. 
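The recovery path just described (debug device stays active while the target reboots, rejoins, and is restored via a second switchover) can be summarized as a short ordered sequence. The function and log strings below are an illustrative model of that ordering, not disclosed code:

```python
def recovery_sequence(target_role="standby", debug_role="active"):
    """Model of the recovery path: while the debug control plane holds
    the active role, the target control plane reboots, rejoins the stack
    as standby, and a second switchover restores it to active, with the
    data plane forwarding throughout."""
    log = []
    log.append("reboot target control plane (data plane keeps forwarding)")
    target_role = "standby"  # target rejoins the stack in the standby role
    log.append("resynchronize target control plane from active debug device")
    target_role, debug_role = debug_role, target_role  # second switchover
    log.append("second switchover complete")
    return target_role, debug_role, log

target_role, debug_role, log = recovery_sequence()
```

The key property of the ordering is that at every step exactly one control plane holds the active role, which is what keeps the data plane serviced without interruption.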
As discussed above, the adjustment operation 302 may include an adjustment to a configuration of the control plane, the data plane, a network setting, or any of the configurable or reprogrammable features of the target network device, including designating a new boot-up system image for an upgrade to the control plane 108 of the target network device 104. Similar to FIG. 3A, following the second switchover, the instrumented control plane 108 may be disabled and, in some instances, deleted (306). In some embodiments, the instrumented control plane 108 and/or its configurations may be stored for later retrieval, e.g., for analysis or usage. Indeed, if a non-redundant switch in a customer network is affected with an uncorrectable/unrecoverable issue (e.g., partial traffic loss, uncorrectable memory error, hardware issues, control-plane/data-plane inconsistencies, etc.), and reloading is the only option, then the exemplary methods can be used to recover the target switch in certain cases with minimal impact. In some embodiments, the debug network device may be spawned in the cloud, which then forms a stack with the target network device (the problematic switch) and takes up the "active" role while the target switch goes through a reboot. The debug switch may collect additional data from the data-plane of the target network device for analysis before it is reset. In some embodiments, this operation provides near-hitless control-plane functionality and data-plane functionality. In other embodiments, this operation provides near-hitless control plane functionality and a short disruption (e.g., a few seconds or even sub-seconds) of data-plane functionality. The exemplary workflows (e.g., 300a, 300b, 300c, etc.), and others described herein, may be integrated with a network controller (e.g., DNAC) or other controller or centralized control tools, which may be used to coordinate or trigger the orchestration and switchover to configure the debug network device. 
In some embodiments, the trigger may be automatic, based on pre-defined rules or policies. In some embodiments, while a live-debug session is ongoing and engineers/TAC perform an intrusive debug operation which crashes the debug network device, the same switchover mechanism (e.g., SSO) can automatically trigger a switchover of the control plane 110 to the active mode to resume normal operation and maintain operation of the stack in an uninterrupted manner. 
Debugging Operation Example #3—Passive Mode Debugging. 
In some embodiments, a debug network device is instantiated and configured to run in passive mode with respect to the control plane 110 of the target network device 104. In passive mode, the control plane 108 of the debug network device 102 is updated by state changes that occur at the control plane 110 of the target network device 104. To this end, the debug network device 102, with its additional instrumentation (hardware and software), can still profile state changes of the control plane of the target network device. In FIG. 3C, the debug network device is instantiated (202) and a stack is established (204) between the target network device 104 and the debug network device 102. However, unlike FIGS. 3A and 3B, there is no switchover operation. Rather, the control plane 108 of the debug network device 102 operates in passive mode and receives updates from the control plane 110 of the target network device 104, which maintains control operation of the data plane 114. In some embodiments, hardware simulations (e.g., VHDL simulations of the ASIC) may be executed on the debug network device. 
Example Debug Network Device 
FIGS. 4A, 4B, and 4C each show an example debug network device 102 (shown as 102a, 102b, and 102c, respectively) in accordance with an illustrative embodiment. 
Cloud infrastructure. FIG. 4A shows the debug network device 102a implemented in a remote or cloud computing device (e.g., cloud computing platform). 
In some embodiments, the remote or cloud computing device may be implemented in Amazon AWS, Microsoft Azure, Cisco Cloud Solutions, Google GCP, or any public/private cloud or local/remote network. In FIG. 4A, the remote or cloud computing device 102a is preferably a modular, stripped-down, high-end general-purpose server computer. In some embodiments, the debug network device 102a is implemented on a high-end general-purpose computer. The computer may host powerful CPUs (e.g., 64/128/256 cores) with hundreds of GiB of RAM and be capable of running custom software models, RTL emulators, simulators, etc., for corresponding functionality of the network switches. The exemplary debug network device is generally configured with more computation power than the target network device, e.g., any one of a greater number of cores, greater memory resources, faster clock speed, larger caches, etc. With a scalable cloud system, the resources can be pooled from multiple distributed computing resources. In FIG. 4A, the debug network device 102a executes an instrumented control plane 108 (shown as 108a). Within the instrumented control plane 108a, the debug network device 102a executes a system image 402 and instrumented code 404. The instrumented code 404 within the instrumented system image 402 may be used to generate trace-logs and may include command-line functions to evaluate various subsets of modules of the system image or control plane applications. In some embodiments, the instrumented control plane 108a includes application code 406 that executes within the operating system of the network device and that may include instrumented code 408. In yet other embodiments, the instrumented control plane 108a includes debugging or profiling software 410 that is installed into its application space. In some embodiments, the debug network device 102a is implemented in a data center, public cloud, private cloud, or similar environment using general or custom-designed computing hardware and software. 
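Trace-log generation by instrumented code of the kind described above can be illustrated with a small wrapper around a control-plane routine. The decorator, the `install_route` routine, and the log format are hypothetical stand-ins, sketched only to show the enter/exit tracing idea:

```python
import functools

TRACE_LOG = []  # in a real system this would stream to a log file

def traced(fn):
    """Minimal stand-in for instrumented code: wrap a control-plane
    routine so entry, exit, and arguments are recorded to a trace-log
    for later analysis."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        TRACE_LOG.append(f"enter {fn.__name__} args={args}")
        result = fn(*args, **kwargs)
        TRACE_LOG.append(f"exit {fn.__name__} -> {result}")
        return result
    return wrapper

@traced
def install_route(prefix, next_hop):
    # Hypothetical control-plane operation being traced.
    return f"{prefix} via {next_hop}"

install_route("10.0.0.0/24", "192.0.2.1")
```

Because the wrapper preserves the routine's behavior, the same image can run instrumented on the debug device while remaining functionally equivalent to the production control plane.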
Each product line for a given network equipment manufacturer may be instrumented using these general and custom-designed resources.

Switch Platform. FIG. 4B shows the debug network device 102b implemented in an evaluation/validation platform. The evaluation/validation platform 102b may implement a debug switch that is functionally equivalent or similar to the target network device 104. The debug network device 102b may include an instrumented control plane 108. In some embodiments, the debug network device 102b may include an instrumented data plane 414. Generally, ASIC blocks and embedded microcontrollers under design may include additional I/O debugging pins, which are purposely not exposed or included in the production switches for security reasons but are now exposed or included for debugging. The debug network device 102b may be a custom development platform of the kind typically used during the development of network equipment and can be configured to enable these I/O debugging pins in the ASIC blocks and embedded microcontrollers of the debug network device 102b. The instrumented data plane 414, in some embodiments, includes the data plane of the debug network device 102b as instrumented by external test equipment. The evaluation platform 102b may include instrumentation 124 comprising debugging hardware, line-cards, test boards, hardware and/or software simulators, hardware accelerators, graphic processor units (GPUs), RTL simulators, and PCIe/AXI or various analyzers that can be installed into the debug network device 102b (see also FIG. 4C) to operate in conjunction with the instrumented control plane 108. The instrumentation 124 may be connected to a separate debugging terminal 416. In addition, instrumentation hardware and systems 418 (e.g., oscilloscopes, logic analyzers, EMI evaluating equipment, network test equipment, etc.), as external test equipment, may be used to evaluate the instrumented control plane 108 or the instrumented data plane 414.
The evaluation platform 102b may be used during hardware boot-up and have additional functionality to aid in debugging. In some embodiments, the evaluation platform 102b includes additional debugging pins in the circuit boards and modules of the switch. In some embodiments, for debugging of data plane issues, the exemplary method and system may be configured such that the target network device performs a selective traffic mirroring operation. For most use cases, a small number of packets may be duplicated to the data plane for debug. The debug network device can pass these packets through an instrumented data plane for detailed analysis. For example, an ASIC emulation model in software may be implemented as a data plane implementation. The model may be used to provide detailed logging and analysis. Current troubleshooting sessions are often limited to offline debug/analysis by collecting, e.g., over multiple iterations, ASIC states from a target network device (e.g., switch) and replaying them in a lab with the necessary packets. The instrumented control plane facilitates real-time analysis of traffic from a live system.

Custom Computing Server. FIG. 4C shows the debug network device 102 (shown as 102c) implemented in a custom computing server designed for a specific product line of a given network device and tightly integrated with hardware accelerators, FPGAs, hardware emulators, etc. In some embodiments, and as shown in FIG. 4C, the debug network device 102c includes instrumentation in the form of development tools such as software models 402 or simulation models 404. Data-plane simulation models 430 (e.g., of the operating system of the ASIC VTP) may be implemented in the instrumented control plane 108. In such examples, traffic can be steered (e.g., through a tunnel) and inputted to the network ports of the data-plane simulation model.
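The selective traffic mirroring described above can be sketched as follows. This is an illustrative model only, not the patent's implementation: the filter predicate, packet representation, and `forward_with_mirror` name are assumptions introduced here to show how a small matching subset of traffic can be duplicated toward an instrumented data plane while normal forwarding continues.

```python
# Illustrative sketch: forward all packets normally, but duplicate only
# packets matching a filter toward the instrumented data plane (e.g., a
# software ASIC emulation model) for detailed logging and analysis.

def forward_with_mirror(packets, match, mirror_port):
    """Forward every packet; duplicate matching ones to the debug plane."""
    forwarded = []
    for pkt in packets:
        forwarded.append(pkt)          # normal forwarding path
        if match(pkt):
            mirror_port.append(pkt)    # small subset duplicated for debug

    return forwarded

mirror: list = []
traffic = [{"dst": "10.0.0.1"}, {"dst": "10.9.9.9"}, {"dst": "10.0.0.2"}]
out = forward_with_mirror(traffic, lambda p: p["dst"].startswith("10.0."), mirror)
```

In this sketch only the two packets destined for the 10.0.0.0/24-like prefix reach the mirror list, consistent with the idea that "a small number of packets may be duplicated to the data plane for debug."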
This debugging operation offers potentially enhanced debugging of the data-plane traffic through elaborate event logs and trace messages from the RTL model. In addition, as described in relation to FIG. 4B and now shown in FIG. 4C, the debug network device 102c may also include various hardware development and debugging tools such as hardware accelerators 420, graphic processor units (GPUs) 422, RTL simulators 424 (e.g., register transfer level (RTL) description, Verilog, or HDL simulators), and PCIe analyzers 428. The instrumentation 124 (shown as 124b) may be connected to a separate debugging terminal 416. In addition, as described in relation to FIG. 4B and now shown in FIG. 4C, the instrumentation hardware and systems 418 (shown as 418a) may include oscilloscopes, logic analyzers, EMI evaluating equipment, network test equipment, etc., as external test equipment, that may be used to evaluate the instrumented control plane 108 (e.g., 108a) (or the instrumented data plane 414 as shown in FIG. 4B). The instrumentation hardware and systems may also include various hardware development and debugging tools such as hardware accelerators 420 (shown as 420a), graphic processor units (GPUs) 422 (shown as 422a), RTL simulators 424 (e.g., register transfer level (RTL) description, Verilog, or HDL simulators) (shown as 424a), and PCIe analyzers 428 (shown as 428a).

Control-Plane Data-Plane Transport Module

FIG. 5 shows an example method of operation of the control-plane data-plane transport modules 120, 122 (shown as "VPCI" 120, 122) that are used by the instrumented control plane to update, and/or receive updates from, the data plane 114 of a target network device 104 in accordance with various illustrative embodiments. In FIG. 5, the target network device 104 and the debug network device 102 each includes a control-plane-data-plane interface transport module 120, 122 that provides logical device-access operations such as bus transactions, or a logical equivalent thereof.
The control-plane-data-plane interface transport modules 120, 122 are each configured to transport bus transactions between the target network device 104 (specifically, the data plane 114) and the debug network device 102 (specifically, the instrumented control plane 108). The instrumented control plane 108 of the debug network device 102 applies control-plane updates received at the data plane 114 of the target network device 104 using the control-plane-data-plane transport modules 120, 122. Data plane updates determined at the instrumented control plane 108 are also pushed to the data plane 114 of the target network device 104 using the control-plane-data-plane interface transport modules 120, 122. As shown in FIG. 5, the debug network device 102 may or may not include its own data plane 501.

Example control-plane updates. Configuration of the data plane 114 of the target device 104 is initiated, in some embodiments, by the control plane 108 of the debug network device 102. Upon a control plane update packet being received at the target network device, the control-plane data-plane transport module (shown as VPCI) 122 is configured to implement read and write transactions. That is, it can read write-bus transactions 502 (e.g., control-plane updates) from the data plane 114 of the target network device 104 (intended for its control plane 110) and provide the write transaction 502 to the network interface 504, which transmits that transaction 502 as a message 506 over the communication link 116 (shown in FIG. 1 as 116a or 116b) to the debug network device 102. The debug network device 102 receives the message 506 and writes, via a corresponding control-plane-data-plane interface transport module 120 (shown as "VPCI" 120), bus transactions 508 to a bus interconnect 510, or a logical equivalent thereof, of the debug network device 102 to write 512 to its control plane 108. It is at this point that the control plane for the stack has been updated. An example control plane update is the "punt" packet of an OSPF update.
A bus interconnect (e.g., 510, 514) is a bus interface such as a PCI, PCIe (PCI-express) bus, AXI, SPI (system packet interface), PCI-X, PCI-express 16×, PCI-express 1×, PCIe 4.0, PCIe 5.0, PCIe 6.0, or the like. Similarly, the control-plane data-plane transport module 120 can provide data plane updates (e.g., as a result of a control plane update) to the data plane 114 of the target network device 104 by taking the update and sending it as a message 506 over the communication link 116 to the target network device 104. The target network device 104 receives the message 506 and writes, via a corresponding control-plane-data-plane interface transport module 122, to the bus interconnect 514, or a logical equivalent thereof, of the target network device 104 to write to the data plane 114.

Example data plane updates. Similarly, the control-plane data-plane interface transport module 120 of the debug network device 102 is configured to take write bus transactions 520, or equivalents thereof, from the instrumented control plane 108. The bus transaction 520, or its equivalent, is provided to the network interface 522 of the debug network device 102 and is transmitted as a message (similar to 506) over the communication link 116 to the target network device 104, which reads the message. The control-plane data-plane interface transport module 122 of the target network device 104 uses the message (similar to 506) to write bus transactions (similar to 502) to the bus interconnect 514 to write to the data plane 114. The messages 506 (for a control plane or data plane update) can be in any format or protocol. In some embodiments, the bus transaction is encapsulated as a payload in an encapsulated packet, which serves as the message. In some embodiments, multiple bus transactions may be encapsulated as the payload in a single encapsulated packet. The message 506, in some embodiments, includes a tunnel header 514, a packet header 516, and a packet payload 518.
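The encapsulation of a bus transaction as a tunneled message, with the tunnel-header / packet-header / payload layout described above, can be sketched as follows. This is a minimal illustrative model: the field widths, the `MSG_BUS_TXN` type code, and the function names are assumptions introduced here, not the patent's wire format.

```python
import struct

# Hypothetical sketch: wrap one data-plane write bus transaction
# (op, address, value) as the payload of a tunneled message, mirroring
# the tunnel header / packet header / packet payload structure.

TUNNEL_HDR = struct.Struct("!4sH")   # tunnel id, flags (illustrative)
PACKET_HDR = struct.Struct("!BBH")   # version, message type, payload length
BUS_TXN    = struct.Struct("!BIQ")   # op (0=read, 1=write), address, value

MSG_BUS_TXN = 0x01                   # assumed message-type code

def encapsulate_write(tunnel_id: bytes, address: int, value: int) -> bytes:
    """Encapsulate a single write bus transaction as a message."""
    payload = BUS_TXN.pack(1, address, value)
    return (TUNNEL_HDR.pack(tunnel_id, 0)
            + PACKET_HDR.pack(1, MSG_BUS_TXN, len(payload))
            + payload)

def decapsulate(msg: bytes):
    """Recover (op, address, value) so the peer can replay the transaction."""
    offset = TUNNEL_HDR.size
    _, msg_type, length = PACKET_HDR.unpack_from(msg, offset)
    assert msg_type == MSG_BUS_TXN
    payload = msg[offset + PACKET_HDR.size: offset + PACKET_HDR.size + length]
    return BUS_TXN.unpack(payload)

msg = encapsulate_write(b"dbg0", 0x1000, 0xDEAD)
op, addr, val = decapsulate(msg)
```

Multiple `BUS_TXN` records could be concatenated into one payload to model the multiple-transactions-per-packet embodiment mentioned above.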
In some embodiments, the message 506 is transmitted using existing stack-over operations, which may encapsulate the packet header and packet payload with an SVL header 520, after which the resulting packet is encapsulated with a tunnel header associated with the connection. The messages 506 can nevertheless be in any format or protocol. In some embodiments, the bus transaction is encapsulated as a payload in an encapsulated packet, which serves as the message. In some embodiments, multiple bus transactions may be encapsulated as the payload in a single encapsulated packet. In some embodiments, the control-plane-data-plane transport module 122 is implemented as an integrated component of the target network device 104. In other embodiments, the control-plane-data-plane interface transport module 122 is implemented as a core or logic circuit in an ASIC of the target network device 104. In yet other embodiments, the control-plane-data-plane interface transport module 122 is implemented in a core or logic circuit of an auxiliary card of the target network device. Further description of the control-plane-data-plane transport module is described in U.S. Patent Appl. No. 17/29559, filed Dec. 2, 2020, which is incorporated by reference herein in its entirety.

Example Debug Network Device Using Stacking Protocols, Stateful Switch-Over, and Virtual Transport Layer

FIG. 6 illustrates the usage of stacking protocols and stateful switchover operation to establish a debug network device 102 in accordance with an illustrative embodiment. Stacking operation and stateful switchover operation are complementary concepts and are not substitutable. Generally, stateful switchover operation does not require a stack (e.g., stateful switchover operation can be performed in any system with multiple control planes, for example, a modular system with dual supervisors), though it is used herein to establish the debug network device 102 as the active device to control the data plane of the target network device 104.
Stacking is generally a process by which the members of a stack form a single logical entity managed by one of those entities (called the "active"). One other stack member is the "standby," and, if there are more than two, the remaining are simply "members." Stacks are formed by a stacking protocol. Most switch equipment manufacturers have their own stacking protocol, any of which may be used with the method described herein. Examples of stacking mechanisms include stack cable (e.g., backside stacking) mechanisms or network-based stacking mechanisms such as stack-wise virtual link (SVL). In FIG. 6, the exemplary system comprises a remote server or cloud server 602 that is orchestrated (or a debugging machine is provided as described herein), and a secure tunnel 116a is then established between the standby remote/cloud server 102a and the active target network device 104 to form a stack. Once a stacking configuration is formed between a target network device and a debug network device, bulk synchronization operations (e.g., of the stacking protocol) may be initiated, and the synchronization continues until the configurations of the active target network device are synchronized with the standby debug network device. Incremental synchronization may then be performed for any subsequent updates. The virtual transport layer, e.g., the control-plane data-plane transport operation, takes over updates after the switchover operation occurs and is generally a separate operation from the synchronization of the stacking protocol. A switchover operation such as stateful switchover operation (SSO) is the mechanism by which the "standby" becomes the "active," either because of a failure of the active member or because it is operationally forced into that status.
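The bulk-then-incremental synchronization described above can be sketched as follows. This is an illustrative model only, under the assumption that control-plane state can be represented as a key/value table; the `ControlPlane` class and its method names are introduced here for illustration and are not part of any vendor's stacking protocol.

```python
# Illustrative sketch: bulk synchronization copies the active device's
# entire control-plane state to the standby; incremental synchronization
# then keeps the two control planes in lock-step until switchover.

class ControlPlane:
    def __init__(self):
        self.state = {}          # e.g., routing/forwarding entries (assumed shape)

    def bulk_sync_from(self, active: "ControlPlane"):
        """One-shot copy of the active control-plane state (bulk sync)."""
        self.state = dict(active.state)

    def apply_incremental(self, key, value):
        """Apply a single subsequent state change from the active device."""
        self.state[key] = value

active = ControlPlane()
active.state = {"ospf:10.0.0.0/24": "Gi1/0/1", "mac:aa:bb": "Gi1/0/2"}

standby = ControlPlane()
standby.bulk_sync_from(active)                 # stack formation: bulk sync

active.state["ospf:10.0.1.0/24"] = "Gi1/0/3"   # later update on the active
standby.apply_incremental("ospf:10.0.1.0/24", "Gi1/0/3")  # incremental sync
```

Once the two state tables match, the stack is in the condition the text describes as ready for a stateful switchover.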
Stateful switchover operation is generally used (as shown in FIG. 6, left side) to provide fault-resistance capabilities for an active/primary stackable switch/chassis by employing a redundant supervisor engine (shown as 606), on a same or different chassis, having similar or the same capabilities as the primary supervisor engine and hardware (shown as 604), to take over network operation of the primary supervisor engine (604) when the primary supervisor engine (604) fails or becomes unavailable. Here, the exemplary system and method use stateful switchover operations to put the control plane of the debug network device in active mode and in control of the data plane of the target network device while putting the control plane of the target network device in standby. Stateful switchover operation relies on redundant hardware (e.g., 606) in a standby network device to take over operation of the active network device (e.g., 604) to continue to forward network traffic with no loss of sessions when the control plane of the active network device becomes unavailable. The redundant hardware, as the debug network device 102a, is used to generate an instrumented control plane 108 to debug, optimize, profile, or recover a network device in a live network. Most switch equipment manufacturers have their own switchover operations. Examples of switchover operations include SSO operations or similar operations as used in high availability (HA) or ISSU technologies. In virtualized high availability operation, as in high availability operation, the network devices are joined by a configurable control link and data synchronization link. The control link is used to communicate the status of the network devices. The data synchronization link is used to transfer stateful information to synchronize the stateful database for the calls and media flows. Each pair of redundant interfaces may be configured with the same unique ID number.
The virtual transport layer, e.g., the control-plane-data-plane transport modules (e.g., 120 and 122, respectively) in each of the network devices (e.g., 102, 104), provides bus-transaction transport operations of control plane and data plane updates between the active target network device (e.g., 104) and the virtualized standby debug network device (e.g., 102a). When the control plane 108 of the debug network device 102 is in the active mode, the control-plane data-plane transport operation provides any control plane and data plane updates to the data plane 114 and the instrumented control plane 108. That is, the data-plane-control-plane transport modules 120, 122 implement a logical data-plane interface (e.g., for PCI (vPCI), AXI (vAXI), or other bus interconnects) that (i) provides, or can interface to, the device-access layer and (ii) provides communication between the data-plane drivers running on the instrumented control plane 108 of the debug network device 102 and the data plane 114 of the target network device 104. The device-access layer is the interface directly above hardware; it is the lowest layer in the ASIC/hardware driver. The data-plane drivers in the debug network device 102 (e.g., 102a) may be mapped to the underlying data-plane device (or the logical data-plane interface endpoint), and the control plane 108 of the debug network device 102 can view and access the entire memory map of the data-plane device (e.g., 114). The data-plane-control-plane transport modules 120, 122 may implement a tunnel (or a socket) using technologies such as GRE, VxLAN, or similar mechanisms. The data-plane-control-plane transport modules 120, 122 may encapsulate a given bus transaction to send through a given tunnel. Raw register/memory access operations are then sent and received over the data-plane-control-plane transport modules 120, 122. Further description of virtual transport layer operation is present in U.S. Patent Appl. No. 17/29559, filed Dec.
2, 2020, which is incorporated by reference herein in its entirety.

Example Debug Network Device Using a Virtualized Standby Switch

FIG. 7 shows an exemplary timing diagram 700 of a stateful switchover operation between a target network device 104 and a virtualized remote/cloud standby debug network device (shown as 102a) in accordance with an illustrative embodiment. Similar operations may be performed for other embodiments of the debug network device 102 as described herein. In FIG. 7, prior (shown as 706) to a debug/profile operation, the target network device 104 is shown (702, 702a) to receive (708a) data packets at a port 504, which are routed (708b) by a data plane comprising a forwarding engine 709 (shown comprising "ASIC/Switch Fabric" 709) to another port (still shown as 504) using data-plane-associated resources 711 (shown as "Routing/Forwarding Tables" 711 via operation 708c). Also, in FIG. 7, for a control plane packet with a control plane update (e.g., a simple control plane update), the target network device 104 is shown (704) to receive the control plane packet at a port 504, which is routed (710a) through the forwarding engine 709 and routed (710b) through a bus interconnect 514 (shown as a "data-plane interface" 514) to the control plane 110 (e.g., comprising a host CPU). In this example, the control plane 110 then updates (710c) a data-plane resource 711 by writing (710d) to the data plane interface 514. During a debug or profile operation, and as shown in FIG. 7, the operation is initiated at step 712 with a virtualized debug command being received (712) by a debug controller 713. In some embodiments, the debug controller 713 is an application executing on a processing core or logic circuit at the target network device 104. In other embodiments, the debug controller 713 is an application executing on a processing core or logic circuit of an external controller. In yet other embodiments, the debug controller 713 is a cloud-based application executing in a cloud infrastructure.
The debug controller 713, in some embodiments, directs (714) the instantiation of a virtualized standby debug network device 102a in a remote or cloud infrastructure (e.g., a remote or cloud server) (or another debug network device 102b, 102c) to provide a redundant and instrumented control plane 108 to the control plane 110. In some embodiments, the debug controller 713 directs the loading, at the debug network device 102a, of the system image (e.g., an instrumented version of the system image), control plane applications, or various applications executing on the target network device 104. In other embodiments, the debug controller 713 directs the control plane 108 to be instantiated with a pre-configured instrumented system image and/or instrumented application. In yet other embodiments, the debug controller 713 directs the creation of a control plane computing space into which various debugging or profiling software may be manually or subsequently installed by field engineers and/or TAC. In some embodiments, instances of virtualized standby switches are pre-instantiated in the remote or cloud infrastructure, and the debug controller 713 can then direct the assignment of a pre-instantiated virtualized standby switch to the active target network device 104. Referring still to FIG. 7, the active target network device 104 (as a physical switch) and the virtualized standby debug network device 102a (or another debug network device 102b, 102c) form a stack, e.g., using a stacking mechanism such as SVL or stacking cables. FIG. 7 shows the active target network device 104 and the virtualized standby debug network device 102a being directed to join in stacked mode with the active target network device 104 set in active mode (see 716a) and the virtualized standby debug network device 102a set in standby mode (716b).
The active target network device 104 then performs a bulk synchronization operation of its control-plane state, as well as subsequent incremental synchronization (718), to the instrumented control plane 108 of the virtualized standby debug network device 102a. During the initialization process (720), the control-plane-data-plane transport modules 120, 122 may be initialized (shown as 722a and 722b) in the respective active target network device 104 and virtualized standby debug network device 102a. In some embodiments, the debug controller 713 pushes the instructions for the control-plane-data-plane transport modules 120, 122, or their equivalent, to the active target network device 104 and the virtualized standby debug network device 102a, e.g., as a part of the initialization process. In other embodiments, the system images for the active target network device 104 and the virtualized standby debug network device 102a include the instructions for the control-plane-data-plane transport modules, which the debug controller 713 can then initialize. The initialization process (720) ends once both the control plane 110 of the active target network device 104 and the control plane 108 of the debug network device 102a are synchronized to have the same control-plane states and the control-plane-data-plane transport modules 120, 122 are instantiated. With the control-plane-data-plane transport modules 120, 122 transporting bus interconnect transactions between the physical and virtualized network devices (102a, 104), the control plane 110 of the active target network device 104 (and not the data plane) then switches, via a switchover operation (shown as 723a, 723b) directed by the debug controller 713, from the active mode to a standby mode, and the control plane 108 of the debug network device 102a switches from the standby mode to the active mode. With the control plane 110 of the active target network device 104 in the standby mode, it can then be disabled (shown as 724) or left running head-less.
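The role swap at the end of the initialization process can be sketched as a small guard-protected state change. This is an illustrative model only: the `Member` class, role strings, and the precondition check are assumptions introduced here to capture the idea that switchover proceeds only after the two control-plane states are identical, not a vendor SSO API.

```python
# Illustrative sketch: stateful switchover swaps the active/standby roles
# of the target and debug control planes, but only once their synchronized
# control-plane states match.

ACTIVE, STANDBY = "active", "standby"

class Member:
    def __init__(self, name, role, cp_state=None):
        self.name, self.role = name, role
        self.cp_state = cp_state or {}

def stateful_switchover(target: Member, debug: Member):
    """Swap roles only when both control planes hold identical state."""
    if target.cp_state != debug.cp_state:
        raise RuntimeError("switchover refused: control planes not in sync")
    target.role, debug.role = STANDBY, ACTIVE

# After bulk + incremental synchronization, both hold the same state.
target = Member("S1", ACTIVE, {"routes": 42})
debug  = Member("S2", STANDBY, {"routes": 42})
stateful_switchover(target, debug)
```

After the swap, the model matches the text above: the target's control plane is in standby (and could be disabled or left head-less) while the debug device's instrumented control plane is active.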
During this time, the instrumentation of the control plane 108 or of the now-active debug network device 102a may be executed to debug or profile 750 the control plane, data plane, and/or network operations of the target network device 104. Meanwhile, the data plane 114 of the target network device 104 continues its forwarding operations for packets received thereat, and any control-plane-associated updates (e.g., to the data plane tables and resources or the control plane) are made by the control plane 108 of the debug network device 102a by way of the control-plane-data-plane transport modules 120, 122. In some embodiments, while in the standby mode, the control plane 110 of the target network device 104 may be rebooted and/or upgraded to a new system image. In FIG. 7, after the switchover operations 723a, 723b, the data plane 114 of the target network device 104 continues to service data plane packets received from the network. As shown, upon a data packet arriving (726a) at a port 504, the forwarding engine 709 continues to route (726b) the packet to another port 504 using (726c) the data-plane-associated resources 711. And for control plane updates (730), the control plane 108 of the debug network device 102a initializes the process. In FIG. 7, an example is shown in which the target network device 104 receives (728a) a control plane packet at a port 504 (e.g., a "punt" packet for an OSPF update), which is routed (728b) through the forwarding engine 709 to the driver of a bus interconnect 514.
However, rather than the control plane 110 of the target network device 104 reading the bus interconnect 514 (shown as data-plane interface 514), the control-plane-data-plane transport module 122 (vPCI 122) of the target network device 104 reads (728c) the write transaction at the data plane interface 514 and transports (728d) the write transaction, as a message, through the network, or communication link, to the control-plane-data-plane transport module 120 of the control plane 108 of the debug network device 102a. The control-plane-data-plane interface transport module 120 (shown as vPCI 120) then writes (728e) a transaction to the bus interconnect comprising the data plane interface 510 of the control plane 108 of the debug network device 102a, which is then read (728f) by the control plane 108 to process (728g) the control plane update of the stack. These data and control-plane packets may be received from peer network devices as well as enterprise-level network controllers (e.g., Cisco Digital Network Architecture (DNA) controller, OpenStack controller, or the like). In instances where the control plane has a data plane update, the instrumented control plane 108 writes (730a) to the bus interconnect 510. The control-plane-data-plane interface transport module 120 reads (730b) the transaction and transports (730c) the transaction, as a message, through the network, to the control-plane-data-plane transport module 122 of the control plane 110 of the target network device 104. The control-plane-data-plane interface transport module 122 then writes (730d) the transaction to the bus interconnect 514 (as if written by the native control plane), which is written (730e) to the addressed data-plane resources. It can be observed that even though the control plane 110 of the target network device 104 is in standby mode, the data plane 114 continues to maintain the same active mode.
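The relay of a punted control-plane update through the two transport modules can be sketched as follows. This is an illustrative model only: queues and a list stand in for the data-plane interface, the communication link, and the debug device's bus interconnect, and the function names are assumptions introduced here to mirror the sequence of operations described above.

```python
from queue import Queue

# Hypothetical sketch of the vPCI relay: the target-side transport module
# reads a write transaction off the data-plane interface and ships it over
# the link; the debug-side module replays it onto the debug device's bus
# interconnect so the instrumented control plane can read and process it.

link = Queue()                     # stands in for the communication link/tunnel

def target_vpci_forward(data_plane_if: Queue):
    txn = data_plane_if.get()      # read the write transaction
    link.put(txn)                  # transport it as a message

def debug_vpci_deliver(bus_interconnect: list):
    txn = link.get()               # receive the message on the debug side
    bus_interconnect.append(txn)   # write it to the debug bus interconnect

dp_interface = Queue()
dp_interface.put({"punt": "ospf-update", "addr": 0x2000})  # punted update

debug_bus: list = []
target_vpci_forward(dp_interface)
debug_vpci_deliver(debug_bus)
```

The reverse direction (a data-plane update originating at the instrumented control plane) would be the mirror image: the debug-side module puts the transaction on the link and the target-side module writes it to the target's bus interconnect as if written by the native control plane.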
And, while the control plane 108 of the debug network device (e.g., 102a or 102c) is in active mode, the debug network device itself may not have a local data plane. The control plane 108 of the debug network device serves to temporarily maintain hitless or near-hitless control-plane operations and to provide a space in which to perform the debugging or profiling operations. In addition, the debugging and profiling 750 may be performed as many times as necessary to acquire the data log of interest or to recover the target network device 104, which may span a few hours, days, or months. The logged data may be used to prepare patches or OS switch upgrades. In some embodiments, the logged data may be used in the design of future network devices. The debugging and profiling 750 may include the monitoring of any aspects of the various hardware and software components of the debug network device 102 and the target network device 104, e.g., as they handle received data or control plane updates or any other system operations. In some embodiments, a second switchover operation may be performed (not shown) to restore the control plane 110 of the target network device 104 to active mode without any network interruption or disruption. This feature allows for the recovery of the target network device 104 without having to disrupt the network. The feature may be beneficial for minor updates or changes or to preserve continuous operation in real-time control operation, among other benefits.

Virtualized Debug Cloud Infrastructure

FIG. 8 shows a system 800 configured to perform a debug or profile operation in a cloud server using the virtualized stateful switchover operation in accordance with an illustrative embodiment.
As discussed above in relation to FIGS. 1-7, the debug or profile operation generally includes instantiating a debug network device 102 (e.g., a virtualized debug network switch 102a) on a cloud or a local/remote server (or a debugging server) that has connectivity to a physical network device (e.g., an active stackable or non-stackable switch). The target network device and the debug network device can then form a stack using a stacking protocol and then switch over using SSO operations, or like operations as provided in HA or ISSU technologies and the like, to set the control plane of the debug network device in control of the data plane of the target network device. In some embodiments, in addition to using switchover operations, the debug network device is further configured to execute both the control-plane software of the physical switch and the data-plane drivers, including, for example, the forwarding engine driver (FED), forwarding engine SDK (software development kit), and ASIC drivers. In FIG. 8, the control plane software/instructions, as well as switchover instructions and system images for the control plane of the debug network device, may be stored in and retrieved from a stackable switch image library (shown as 802) or an upgradable switch image library (e.g., for non-stackable switches). In some embodiments, the library 802 is stored in a remote or cloud server. An example of a remote or cloud image library system is the Cisco Software Image Management (SWIM) system. In other embodiments, the library is a computer-readable medium (e.g., DVD, CD, compact flash, or other persistent memory device). The images may be retrieved manually in some embodiments. In other embodiments, the images are retrieved by an instruction set executing at a debug controller, e.g., as described in relation to FIG. 7.
Method of Setting Up a Virtualized Debug Switch

FIG. 9 shows an exemplary sequence to configure an exemplary debug network device and to perform a debug or profile operation with that device in accordance with an illustrative embodiment. In FIG. 9, the process 900 is shown to include a debug network device 102 (shown as a virtualized debug switch "S2" 102a) being first instantiated (shown as "Create S2" 902) in a cloud, local, or remote machine to be used to debug or profile a target network device 104 (shown as target switch "S1" 104a). In other embodiments, other computing devices may be used to host the virtualized standby switch, including portable computing devices, a custom server, or an evaluation switch, as discussed herein. The physical target switch "S1" 104a is shown initially running (904) in standalone mode. In FIG. 9, the target switch 104 is shown to be executing switch image version "16.12.1". Upon the debugging operation being initialized, a virtual debug switch "S2" 102a is created. The creation of the virtual debug switch "S2" 102a includes the instantiation of a container or virtual machine in a cloud infrastructure. The container or VM includes an operating system, drivers, and control-plane application(s) corresponding to those executing on the target switch "S1" 104a. In some embodiments, the instantiation is directed by a debug controller, which may be executing on the physical switch "S1" or a remote computing device. After the virtualized switch "S2" 102a is instantiated, the debug controller, or the like, may direct the target switch "S1" 104a and the debug switch "S2" 102a to be joined (908) via a stacking operation into a stack in which the debug switch "S2" 102a is initially put in standby mode and the target switch "S1" 104a is put in active mode (shown as "S2 joins S1 in stack mode" 908). During the stack joining process, bulk synchronization (the start is shown as 906 and the end is shown as 914) is performed.
The virtualized standby debug switch “S2”102ais shown executing the same or compatible system image as the target switch “S1”104a, shown as switch image version “16.12.1”. The virtualized standby debug switch “S2”102ais further executing instrumentation in the system image or a control plane application. The bulk synchronization (906) synchronizes the control-plane states between the virtualized debug switch “S2”102aand the target switch “S1”104aso the control-plane states of the two switches “S1” and “S2”102a,104aare the same. Incremental synchronization may also be performed. Once the control-plane states are synchronized to the same states, the debug controller triggers (917) a switchover (SSO) operation, and the debug switch “S2”102ais directed to assume the active role while the target switch “S1”104aassumes the standby role. Once in the active role, the debug switch “S2”102aruns (922) as the control plane for the target switch “S1”104ausing the logical data-plane interface (e.g., vPCI), which may be initiated at this sequence or earlier as discussed herein. The control plane108of the debug switch “S2”102auses the logical data-plane interface to perform data-plane updates (923) to the data plane114of the target switch “S1”104a(shown as “Virtual Transport: DP updates”923). The debugging and/or profiling operation924is then performed on the control plane108of the debug switch “S2”102a. The data-plane114continues to operate in the slave mode, shown for the duration926, in which it is controlled by the control plane108of the debug switch “S2”102auntil the debugging or profiling operation (924) is complete. In the example shown inFIG.9, once the debugging is complete, the debugging controller directs a second switchover operation (928) and the control plane108of the debug switch “S2”102ais put into standby mode while the control plane110of the target switch “S1”104ais put into active mode (930). 
Target switch “S1” 104a, now having been fixed, can continue to run normally (934) while the virtualized debug switch “S2” 102a can be deleted (932). To configure the logical data-plane interface, in some embodiments, the data plane 114 of the target switch “S1” 104a may be programmed by the control plane 110 of the target switch “S1” 104a prior to that control plane 110 being designated to standby mode and under direction from the control plane 108 of the debug network device “S2” 102a. The programming ensures that the association of resources and their addresses is consistent between the control plane and data plane on the target switch “S1” 104a and the debug switch “S2” 102a. Without the programming operation, the data-plane state may have different table indexes or addresses in the data plane even though the control-plane states of the target and debug switches “S1” and “S2” 104a, 102a may be identical, because the order of operations at the control plane may not be preserved or performed in the same sequence. To optimize the programming of the data-plane device (e.g., 114) of the target switch “S1” 104a, the data-plane device (e.g., 114) may be programmed only for resources where such a change is expected or had been made. In yet another embodiment, the programming may involve generating an index translation table that translates addresses between i) old indexes associated with the data-plane resources of the target switch “S1” 104a and ii) new indexes associated with the data-plane resources of the debug switch “S2” 102a. The translation can improve network/switch availability, as the programming may be performed very quickly without having to write over data-plane resources of the target switch “S1” 104a. Indeed, once a mapping between old and new indexes is generated, ‘read’ and ‘write’ operations can go through the translation table, and the correct indexes/addresses can be accessed.
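The index-translation embodiment can be sketched as follows; the class shape and example indexes are assumptions for illustration:

```python
class IndexTranslationTable:
    """Translates between 'new' table indexes used by the debug switch's
    control plane and 'old' indexes already programmed into the target
    switch's data-plane hardware, so that existing data-plane resources
    need not be rewritten (names are illustrative, not from the patent)."""

    def __init__(self):
        self._new_to_old = {}

    def add_mapping(self, old_index, new_index):
        self._new_to_old[new_index] = old_index

    def read(self, data_plane, new_index):
        # A 'read' goes through the table to reach the correct hardware address.
        return data_plane[self._new_to_old[new_index]]

    def write(self, data_plane, new_index, value):
        # A 'write' likewise lands on the already-programmed index.
        data_plane[self._new_to_old[new_index]] = value

# Example: a route programmed at hardware index 7 that the debug control
# plane would have placed at index 3.
table = IndexTranslationTable()
table.add_mapping(old_index=7, new_index=3)
dp = {7: "route-A"}
table.write(dp, 3, "route-B")
```

Because only the small mapping is built, no bulk rewrite of the target's data-plane tables is needed, which is the availability benefit described above.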
Discussion and Additional Examples

The exemplary system and method have many practical usages as described herein. While the debugging operation is ongoing, the target network device can maintain comparable throughput while being serviced by the instrumented control plane, even though latency performance may vary. Network protocols typically have timeouts on the order of multiple seconds, and so the additional latency may not necessarily impact protocol operation. Route updates and MAC learning may take more time but, again, may have limited impact on data-plane operations. Indeed, the exemplary system and method provide for the on-demand creation of an instrumented control plane, e.g., in the cloud or a remote server or other platform, and the ability to form a stack with a non-instrumented production image on a physical target switch. This setup is equivalent to, and will act like, an HA system. In addition, stacking and SSO are generally used for high availability in switches. The exemplary system and method may use conventional or existing stacking and SSO in a debugging operation, e.g., for debugging and profiling of live systems. In addition, while debug and profiling are often intrusive and performance impacting, the exemplary system and method may be used by TAC/field support in customer networks, e.g., to evaluate common issues as well as difficult issues that are not so readily reproducible in labs, without any noticeable impact on performance or latency. In addition, stacking, SSO, and CNF (Cache & Flush) operations are used in conjunction with fast software upgrade (FSU/xFSU) operations to reduce outage for non-redundant systems. The exemplary system and method may employ these operations, in addition, to recover and restore problematic switches in live customer networks in a near-hitless fashion. Once a debug network device is created on demand, it may form a stack with the physical target network device.
SSO operation as described herein may be used to synchronize the states of the target network device to the debug network device. At this point, a switchover is performed, and the instrumented control plane of the debug network device is set to active mode. Subsequent control traffic intended for the control plane of the target network device can be redirected (not mirrored) to the debug network device, e.g., via a tunnel. Most use cases do not require mirroring of control-plane traffic, though mirroring is possible. This would be a similar operation to any HA system. Data-plane traffic does not require mirroring either. Because the data plane in the physical target switch remains functional, traffic forwarding continues to be performed through this hardware. In use cases where debugging or profiling of the data plane (e.g., NPU and ASIC logic) is desired, the debug network device can be configured to execute its data plane (e.g., through simulator or emulator models). For data-plane debugging of a target device, a subset of data-plane traffic may be mirrored to the cloud switch. Such a data-plane debug may require only a small set of packets and can provide very detailed functional traces and logs of a packet through the ASIC (e.g., using cycle-accurate simulator models such as RTL emulators). In addition, the exemplary method and system facilitate debugging in a passive mode, where the physical target network device is unmodified/untouched, and a debug machine, while still in stack configuration with the target network device, is running on separate hardware and in parallel. In this mode, control-plane traffic can be mirrored. The operation may involve processes that are HA-aware and in hot standby. In addition, on-demand creation of a standby switch in a remote/cloud network can be used for many other applications, e.g., to profile and troubleshoot issues in live customer switches by spawning a standby instrumented control plane without impacting performance.
It can also be used for quick troubleshooting sessions with the goal of saving cost. The operation can be performed where hitless operation is needed for both the control plane and the data plane, e.g., in certain real-time control applications where a real-time network must be maintained. In addition, the exemplary method and system can facilitate the restoration of faulty systems in customer networks (that would normally require a reboot) with near-hitless traffic disruption. In addition, the exemplary method and system can be automated and tied in with DNAC or other network management workflows. In addition, the exemplary method and system can be used to test a new image/release in customer networks prior to full release. Indeed, if the new image fails, the HA infrastructure provides protection and performs a hitless switchover to the control plane on the physical switch. This is a near-hitless ISSU with rollback, even for non-redundant systems. It should be understood that the various techniques and modules described herein, including the control-plane-data-plane interface transport module, may be implemented in connection with hardware components or software components or, where appropriate, with a combination of both. Illustrative types of hardware components that can be used include Field-programmable Gate Arrays (FPGAs), Application-specific Integrated Circuits (ASICs), Application-specific Standard Products (ASSPs), System-on-a-chip systems (SOCs), Complex Programmable Logic Devices (CPLDs), etc. The methods and apparatus of the presently disclosed subject matter, or certain aspects or portions thereof, may take the form of program code (i.e., instructions) embodied in tangible media, such as floppy diskettes, CD-ROMs, hard drives, or any other machine-readable storage medium where, when the program code is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the presently disclosed subject matter.
Embodiments of the network device may be implemented, in whole or in part, in virtualized network hardware in addition to physical hardware. Although exemplary implementations may refer to utilizing aspects of the presently disclosed subject matter in the context of one or more stand-alone computer systems, the subject matter is not so limited, but rather may be implemented in connection with any computing environment, such as a network or distributed computing environment. Still further, aspects of the presently disclosed subject matter may be implemented in or across a plurality of processing chips or devices, and storage may similarly be effected across a plurality of devices. While various embodiments of the present disclosure have been described above, it should be understood that they have been presented by way of example only, and not limitation. It will be apparent to persons skilled in the relevant art that various changes in form and detail can be made therein without departing from the spirit and scope of the present disclosure. Thus, the breadth and scope of the present disclosure should not be limited by any of the above-described exemplary embodiments but should be defined only in accordance with the following claims and their equivalents.
While each of the figures illustrates a particular embodiment for purposes of illustrating a clear example, other embodiments may omit, add to, reorder, and/or modify any of the elements shown in the figures.

DESCRIPTION OF THE EXAMPLE EMBODIMENT(S)

In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the example embodiment(s) of the present invention. It will be apparent, however, that the example embodiment(s) may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form in order to avoid unnecessarily obscuring the example embodiment(s).

1.0 GENERAL OVERVIEW
2.0 EXAMPLE COMPUTER SYSTEM IMPLEMENTATION
2.1 ARTIFACT REPOSITORY
2.2 REGIONAL CLUSTERS
2.3 CONFIGURATION FILE
2.4 AUTOMATION CONTROLLER
2.4.1 DETECTING CHANGES TO A CONFIGURATION FILE
2.4.2 DERIVATION OF COMMANDS AND PARAMETERS
2.4.3 PROPAGATING COMMANDS AND PARAMETERS
3.0 EXAMPLE PROCESS AND ALGORITHM
4.0 IMPLEMENTATION MECHANISMS—HARDWARE OVERVIEW
5.0 IMPLEMENTATION MECHANISMS—SOFTWARE OVERVIEW
6.0 OTHER ASPECTS OF DISCLOSURE

1.0 GENERAL OVERVIEW

A binary artifact repository comprises a geographically distributed datastore. An automation system implements infrastructure as code, in which markup language configuration files authoritatively and symbolically define permissions and credentials that are to be deployed for specified artifacts, projects or products across all local or remote repositories in local storage or in regional mirrors of the artifact repository system. Each configuration file does not need to define region-specific attributes, as the automation system can derive regional differences based on a more generic configuration.
Furthermore, configuration files do not need to explicitly define permissions or other settings in the same terms as used in the artifact repository; instead, the automation system transforms markup code in the configuration file into the specific command(s) and/or parameter value(s) that need to be written into the artifact repository to accomplish the functional result specified in the configuration file. The automation system performs checks on the configuration files, then executes inferential transformations prior to deploying the configuration on each cluster. Derivations are performed to determine what artifacts are visible in an internal repository as compared to an external repository. For example, if a new local repository is created in a particular regional cluster, then in response, the automation system will create a remote repository with the same name in other regional clusters that refers back to the local repository for configuration. Similarly, any change in a particular local repository causes the automation system to immediately transmit equivalent changes to all other corresponding repos in all other regional clusters. A single configuration file for the new local repository defines configuration for that repository that is to be used to derive all settings for corresponding repos in all other regional clusters. Operation of the automation system is triggered when a change, reflected for example in a Github pull request, is merged following approval. Embodiments manage creating, updating and deleting users, groups and permissions for any repo, as well as configuring external visibility of artifacts. For example, embodiments can create users and permissions, derive settings for regional mirrors, inject credentials into a CI system if needed, and establish visibility settings as needed. These operations can be executed serially or in parallel based on using a dependency graph. 
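The serial-or-parallel execution of these operations over a dependency graph can be sketched with Python's standard graphlib; the operation names and dependencies below are hypothetical, chosen only to illustrate the ordering:

```python
from graphlib import TopologicalSorter

# Hypothetical dependency graph for the create/update operations the
# automation system performs after an approved merge; each operation
# maps to the set of operations it depends on. graphlib (Python 3.9+)
# yields a valid serial order, and its ready()/done() API also supports
# running independent operations in parallel.
ops = {
    "create_user": set(),
    "create_group": set(),
    "attach_permissions": {"create_user", "create_group"},
    "derive_regional_mirrors": {"attach_permissions"},
    "inject_ci_credentials": {"create_user"},
    "set_external_visibility": {"derive_regional_mirrors"},
}

order = list(TopologicalSorter(ops).static_order())
# Every operation runs only after all of its dependencies.
pos = {op: i for i, op in enumerate(order)}
```

A scheduler built this way can dispatch, say, `create_user` and `create_group` concurrently while holding `attach_permissions` until both complete.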
Embodiments also receive requests from external deployments, proxy the requests, authenticate credentials, and moderate the requests so that visibility of artifacts is provided only to authorized external deployments based on permissions specified in the configuration files. Embodiments also are capable of defining multiple separate but associated YAML files that collectively provide a complete configuration, and the automation system will marshal and process them collectively; this facilitates more efficient data storage and management of very large configuration files. In an embodiment, a data processing method comprises detecting an approval of a change to an electronic configuration document that symbolically identifies one or more configurations of users, groups, and/or permissions relating to access to computer program artifacts that are stored in a first repository of a geographically distributed, replicated artifact repository system; the artifact repository system comprising one or more second repositories that are geographically remote with respect to the first repository and which replicate the first repository; in response to the detecting: obtaining the electronic configuration document and deriving, based on the electronic configuration document, a plurality of regional repository settings values for users, groups, and/or permissions relating to access to the computer program artifacts and for the one or more second repositories; and transmitting the one or more settings values to the one or more second repositories and causing injection of the one or more settings values into one or more repository configuration settings of the second repositories. Thus, an automated software system implements infrastructure as code, in which markup language configuration files authoritatively and symbolically define permissions and credentials that are to be deployed for specified artifacts, projects or products across all local or remote repositories in local storage
or in regional mirrors of the artifact repository system. Each configuration file does not need to define region-specific attributes, as the automation system can derive regional differences based on a more generic configuration. Furthermore, configuration files do not need to explicitly define permissions or other settings in the same terms as used in the artifact repository; instead, the automation system transforms markup code in the configuration file into the specific command(s) and/or parameter value(s) that need to be written into the artifact repository to accomplish the functional result specified in the configuration file. The automation system performs checks on the configuration files, then executes inferential transformations prior to deploying the configuration on each cluster.

2.0 EXAMPLE COMPUTER SYSTEM IMPLEMENTATION

FIG. 1 illustrates an example automation system in which the techniques described herein may be practiced, according to some embodiments. In the example of FIG. 1, an automation system 100 comprises a replicated artifact repository system that is programmed or configured to provide automated configuration and deployment of artifact repositories across clusters using one or more configuration file(s). Automation system 100 may be implemented across one or more physical or virtual computing devices, none of which is intended as a generic computer, since it is loaded with instructions in a new ordered combination as otherwise disclosed herein to implement the functions and algorithms of this disclosure. The example components of automation system 100 in FIG. 1 are implemented at least partially by hardware at one or more computing devices, such as one or more hardware processors executing stored program instructions stored in one or more memories for performing the functions that are described herein. Or, one or more virtual machine instances in a shared computing facility such as a cloud computing center may be used.
The functions described herein are intended to indicate operations that are performed using programming in a special-purpose computer or general-purpose computer, in various embodiments. Automation system 100 illustrates only one of many possible arrangements of components configured to execute the programming described herein. Other arrangements may include fewer or different components, and the division of work between the components may vary depending on the arrangement.

2.1 Artifact Repository

Automation system 100 includes a plurality of artifact repositories 130A, 130B, 140A, 140B, and/or 150. In different embodiments, a different number of repositories and/or different types of repositories may be included or excluded; thus, automation system 100 is only intended to illustrate the concepts of how such a system may be configured in one embodiment. An artifact repository is a datastore that may be used to manage, store, and/or retrieve software artifacts and metadata concerning those software artifacts. In an embodiment, the repository may store artifacts and metadata in a defined directory structure. In an embodiment, an artifact repository may include version control of the versions of software artifacts stored in it. A software artifact is any binary data used in a software development process. Examples of software artifacts may include, but are not limited to, executables, installers, JAR files, libraries, application binaries, archives, or any other similar binary data. A software artifact may be added to an artifact repository as part of a product release, as part of a scheduled product build, and/or manually by users with access to the artifact repository. A local repository, such as local repository 130A, is an example of a type of artifact repository. A local repository is a private or internal artifact repository that may act as a source of truth.
For example, local repository 130A may be used by a private enterprise for a private software development project. A local repository 130A may serve as a source of truth, as any modifications to the contents of the local repository 130A would be propagated to mirrors of the repository. Since a local repository is a private artifact repository, it may be necessary to configure user access permissions to the contents of the local repository and its mirrors so that only those users with appropriate permissions can access the contents of such a repository. An external repository, such as external repository 150, is an example of a type of artifact repository. An external repository is a public or third-party artifact repository that may act as a source of truth. For example, external repository 150 may be owned and/or operated by a third party and may provide open source or publicly available software libraries or packages. An external repository 150 may serve as a source of truth, as any modifications to the contents of the external repository 150 would be propagated to mirrors. A remote repository, such as remote repositories 140A, 140B, and/or 130B, is an example of a type of artifact repository. A remote repository is a replicated mirror of another artifact repository. A remote repository may be a replicated mirror of either a local repository or an external repository. For example, remote repository 130B is a replicated mirror of local repository 130A. However, remote repositories 140A and 140B are replicated mirrors of external repository 150.

2.2 Regional Clusters

Automation system 100 includes a plurality of regional clusters 110. A regional cluster 110 is a grouping of one or more artifact repositories that can be used to serve a particular geographic location or region. In the example of automation system 100, two regional clusters 110A and 110B are depicted; however, in other embodiments, a different number of regional clusters 110 may exist.
Contents of the repositories may be mirrored and replicated to other regional clusters. Likewise, as will be described, configurations of users, groups, and/or permissions for each regional cluster 110 may be implemented by an automation controller 160. Thus, if a user needs access to data stored in a particular repository, the data may be accessed in the mirror of the repository in the nearest regional cluster 110, thereby providing improvements to system performance and requests, rather than having to access that data in a regional cluster 110 that is geographically far from the user's physical location. To illustrate, for example, regional cluster 110A may be a cluster located in North America and regional cluster 110B may be located in Australia. Regional cluster 110A may include a local repository 130A and a remote repository 140A. Local repository 130A is an internal and/or private artifact repository. Remote repository 140A is a replicated mirror of external repository 150. Access to the repositories of regional cluster 110A may include configuration of users, groups, and/or permissions. The configuration of users, groups, and/or permissions may be defined, at least in part, in a symbolic configuration definition 120 associated with a particular repository. Further details regarding the contents of such a configuration definition 120 will be described herein. The contents of the repositories of regional cluster 110A, as well as the configurations of users, groups, and/or permissions, may be replicated to regional cluster 110B. Regional cluster 110B includes remote repository 130B, which is a replicated mirror of local repository 130A. Regional cluster 110B includes remote repository 140B, which is a replicated mirror of external repository 150. The configuration of regional cluster 110B, including, but not limited to, users, groups, and permissions, may be orchestrated by automation controller 160 based on the contents of configuration definition 120, as will be described herein.
Thus, a software developer located in Australia who requires an artifact in a repository can access that artifact from regional cluster 110B instead of regional cluster 110A, because the artifact has been mirrored to regional cluster 110B and the software developer will have the appropriate permissions to access it from regional cluster 110B. By replicating the contents of regional clusters 110 to different regions, automation system 100 ensures that systems in different geographic locations have nearby access to the contents of the repositories using appropriate permissions, thereby improving repository connectivity, lag, and network access.

2.3 Symbolic Configuration Definition

A configuration definition 120 is an electronic configuration document that may comprise a file or set of files symbolically specifying instructions, parameters, settings, and/or configurations of users, groups, and/or permissions relating to access to artifacts that are stored in one or more repositories of automation system 100. In one embodiment, a configuration definition 120 may be implemented in any markup language or data format syntax, such as Extensible Markup Language (XML), “YAML Ain't Markup Language” (YAML), or JavaScript Object Notation (JSON), and is stored in the form of digital data in a storage device or digital memory. In an embodiment, a configuration definition 120 may be associated with a particular repository; however, in another embodiment, a configuration definition 120 may be associated with a plurality of repositories. In the example of automation system 100, configuration definition 120 defines the instructions, parameters, settings, and/or configurations of users, groups, and/or permissions relating to access to artifacts in local repository 130A.
In other embodiments, electronic configuration documents may be functionally equivalent to the configuration definition 120 described herein but expressed in XML, HTML, conventional programming source code languages, or other human-readable symbolic languages or natural language. A user can provide custom details in a configuration definition 120 to customize the users, groups, and/or permissions relating to access to artifacts in a repository. The configuration definition 120 thereby authoritatively and symbolically defines permissions and credentials that are to be deployed for specified artifacts, projects or products across all local or remote repositories in local storage or in regional mirrors of the system. Each configuration definition 120 does not need to define region-specific attributes, as the automation controller 160 can derive regional differences based on a more generic configuration. Furthermore, a configuration definition 120 does not need to explicitly define permissions or other settings in the same terms as used in the artifact repository; instead, the configuration definition 120 may define such permissions in a markup code that may be interpreted and transformed by the automation controller 160. In one embodiment, to invoke execution of automation controller 160, a pull request is opened against a specified repository with proposed changes to a configuration file. For example, a file named “product-publish.yml” is created in a repository for a particular product. Within the file, a <defaults> block is created having the example form of TABLE 1.

TABLE 1
EXAMPLE defaults BLOCK

defaults:
  users:
    manage_password: true
    circle_projects: [ ]
    password: null
  groups: [readers, sandbox]
  permissions:
    principals: [ ]
    group_principals: [ ]

Next, the name of a publish user is specified; permissions will be attached to this user as principal. It is also possible to associate a set of CI projects with the user.
TABLE 2 shows an example:

TABLE 2
EXAMPLE USER DEFINITION

users:
  - name: product-publish
    email: [email protected]
    circle_projects:
      - “system/product”
      - “system/product-app”
      - “system/product-lib”

In an embodiment, a next section of the configuration file associates permissions with users. Permissions define which repositories and sub paths the user can publish to. Public locations typically are unique among users. Table 3 illustrates an example block of permissions for a configuration definition 120, according to one embodiment; however, the format, syntax, tags, or other features of such a configuration definition 120 may vary in different embodiments.

TABLE 3

name: Publish - Product
repositories: [internal-dist-release, internal-jar-release]
whitelist: “com/domain/product/**”
blacklist: “”
principals:
  - first_example_user: [d, w, n, r]
  - second_example_user: [w, r]
external-visibility:
  groups: [external-default]

In an embodiment, configuration definition 120 may identify one or more repositories to which a block of permissions defined in the configuration file applies. In the example of Table 3, the “repositories” tag indicates that the block of permissions applies to the two repositories “internal-dist-release” and “internal-jar-release”. In an embodiment, configuration definition 120 may define a whitelist and/or a blacklist that define the paths or subpaths of the repositories to which a block of permissions applies. In the example of Table 3, the path “com/domain/product/**” is defined as a whitelist path with the “whitelist” tag; therefore, the subsequently defined permissions apply to that particular path on the previously defined repositories “internal-dist-release” and “internal-jar-release”. Likewise, in the example of Table 3, the “blacklist” tag indicates that no particular paths are blacklisted for the previously defined repositories “internal-dist-release” and “internal-jar-release”.
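A minimal sketch of how a Table 3-style permission block might be evaluated follows. Two assumptions are made for illustration: the single-letter codes expand to delete/write/annotate/read, and the “**” wildcard is approximated with fnmatch-style matching (whose '*' crosses '/' boundaries), which simplifies real path-matching semantics:

```python
from fnmatch import fnmatch

# Hypothetical expansion of the single-letter permission codes used in
# the "principals" block (assumed mapping, for illustration).
PERMISSION_CODES = {"d": "delete", "w": "write", "n": "annotate", "r": "read"}

def permission_applies(path, whitelist, blacklist):
    """Decide whether a permission block covers a repository path.
    Blacklist entries win; an empty whitelist is treated as 'all paths'."""
    if any(fnmatch(path, pat) for pat in blacklist if pat):
        return False
    patterns = [pat for pat in whitelist if pat]
    return not patterns or any(fnmatch(path, pat) for pat in patterns)

def expand_principals(principals):
    """principals: list of one-entry {user: [codes]} maps, as in Table 3."""
    return {user: [PERMISSION_CODES[c] for c in codes]
            for entry in principals for user, codes in entry.items()}

covered = permission_applies("com/domain/product/app/1.0/app.jar",
                             ["com/domain/product/**"], [""])
outside = permission_applies("com/other/lib.jar",
                             ["com/domain/product/**"], [""])
perms = expand_principals([{"first_example_user": ["d", "w", "n", "r"]},
                           {"second_example_user": ["w", "r"]}])
```

With the Table 3 values, the product path is covered while an unrelated path is not, and each principal's codes expand to named permissions.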
In an embodiment, configuration definition 120 may define user-specific permissions for the repositories. For example, in Table 3, the “principals” tag defines a set of user-specific permissions for “first_example_user” and “second_example_user”. The permissions include “d”, which corresponds to delete permissions, “w”, which corresponds to write permissions, “n”, which corresponds to annotate permissions, and “r”, which corresponds to read permissions. This sample list of permissions is merely illustrative, and in other embodiments, additional permission types may be included. In an embodiment, configuration definition 120 may define group-specific external visibility settings for the repositories that allow groups of users to have read access to newly published or modified artifacts in the repository. In the example of Table 3, the “external-visibility” setting indicates that users that are part of the “external-default” group should have visibility into newly published or modified artifacts in the repositories. External visibility refers to the ability of users or groups of users who do not have explicit user-based permissions to view the contents of a repository.

2.4 Automation Controller

Automation system 100 includes an automation controller 160 that is programmed or configured to detect changes to one or more configuration definitions 120; derive, from the configuration definition 120, the specific command(s) and/or parameter(s) that need to be written into an artifact repository to achieve the functional result specified in the configuration definition 120; and deploy the derived configuration on each regional cluster 110. As line 102 indicates, the automation controller 160 may receive input in the form of a configuration definition 120 of regional cluster 110A. Automation controller 160 is programmed to transform the configuration definition into specific commands, parameters, or other configuration values in the form of output permissions and settings values 104.
As indicated by line 104, automation controller 160 is further programmed to transmit, install, or inject the settings values to any local repository, remote repository, or external repository as appropriate. Thus, automation controller 160 is programmed or configured to assist in ensuring that the configuration and deployment of repositories is correctly and accurately replicated to all regional clusters based on the configuration file, including the necessary configurations for users, groups, and/or permissions. Automation controller 160 provides various improvements to the replication of clusters of artifact repositories, including, but not limited to, ensuring the appropriate configuration of repositories in every regional cluster based on the details provided in one or more configuration definitions 120, such that each regional cluster appears the same from the perspective of users or groups of users accessing any given regional cluster. Further details on the automation controller 160 will be provided herein.

2.4.1 Detecting Changes to Configuration File

Automation controller 160 is programmed or configured to detect changes made to a configuration definition 120. In other embodiments, the contents of a configuration definition 120 may be implemented in a plurality of individual files; thus, the present techniques may be adapted to detect changes to any individual file. In one embodiment, automation controller 160 may detect any modification, update, or deletion that has been made to the contents of configuration definition 120. In another embodiment, a change to a configuration definition 120 may be detected only when a modification to the configuration definition 120 has been committed, such as via a GitHub pull request, to a repository in which the configuration definition 120 is stored (not depicted in FIG. 1).
In this example, the change may be detected only once the committed change to the configuration definition120has been merged and approved by an appropriate entity with permission to modify the configuration definition120, such as an administrator. In some embodiments, detection of a change to a configuration definition120may cause automation controller160to trigger operations to derive appropriate commands and/or parameter values for the configuration of each regional cluster110, and propagation of such commands and/or parameter values to other regional clusters, as will be described herein. 2.4.2 Derivation of Commands and Parameters Configuration definition120includes markup language that authoritatively and symbolically defines permissions and credentials that are to be deployed for specified artifacts, projects or products across all local or remote repositories in local storage or in regional mirrors of the artifact repository system. Upon detecting a change to a configuration definition120, automation controller160may be programmed or configured to ingest the contents of the configuration definition120and use the content of the configuration definition120to transform the markup code of the configuration definition120into specific command(s) and/or parameter value(s) that need to be written into the artifact repository to accomplish the functional result specified in the configuration file. The result of this operation is that the automation controller160will derive a set of commands and/or parameter value(s) for the configuration of each regional cluster, so that they conform to the functional result specified in configuration definition120. During this derivation process, automation controller160may be programmed or configured to perform various steps to derive the appropriate command(s) and parameter value(s) for configuration of the regional cluster(s).
For example, in one embodiment, automation controller160may be programmed or configured to check and/or validate the contents of the configuration definition120. If automation controller160detects a validation error, such as improper syntax or some other failure in parsing the configuration definition120, automation controller160may generate an error warning to indicate the validation error. Additionally, during the derivation process, automation controller160is programmed or configured to execute inferential transformations that derive, based on the configuration definition120, various settings for a regional cluster, including, but not limited to: which repositories should exist, whether a repository is a local repository, whether a repository is a remote repository of another local repository in another regional cluster, whether a repository is a remote repository of an external repository, the users and/or groups with access to each repository, the types of access permissions for each user and group, including, but not limited to, particular artifacts or repository paths that are visible or not visible to the users and groups, and any other configuration setting that is included in the configuration definition120. Based on the configuration specified in the configuration definition120, automation controller160can thus derive and determine the topology of permissions for an existing regional cluster and how that regional cluster should be mirrored to another regional cluster. For example, derivations are performed to determine what artifacts are visible in a local repository130A as compared to remote repository140A. If a new local repository130A is created in a particular regional cluster110A, then in response, the automation controller160will create a remote repository130B with the same name in other regional clusters, such as regional cluster110B, that refers back to the local repository130A for configuration.
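The mirroring derivation described above — creating a same-named remote repository in every other regional cluster that refers back to the originating local repository — could be sketched as follows; the command-dictionary format and cluster names are assumptions for illustration:

```python
def derive_mirror_commands(local_repo, home_cluster, all_clusters):
    """For a local repository created in one regional cluster, emit the
    commands that create same-named remote repositories in every other
    cluster, each pointing back at the original for configuration."""
    commands = []
    for cluster in all_clusters:
        if cluster == home_cluster:
            continue  # the home cluster already holds the local repository
        commands.append({
            "op": "create-remote-repo",
            "cluster": cluster,
            "name": local_repo,
            "upstream": f"{home_cluster}/{local_repo}",
        })
    return commands
```

For example, creating “libs-release” in a “us-east” cluster would yield one `create-remote-repo` command per remaining cluster, each with upstream `us-east/libs-release`.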
Similarly, any change in a particular local repository130A causes the automation system to immediately transmit equivalent changes to all other corresponding remote repositories in all other regional clusters. Thus, a single configuration file for the new local repository130A defines configuration for that repository that is to be used to derive all settings for corresponding repositories in all other regional clusters, such as remote repository130B in regional cluster110B. The settings include repository-specific settings, user settings, group settings, and any other similar settings as described above with reference to configuration definition120. The output of the derivation process is a set of commands and parameter values for the configuration of a separate regional cluster that conforms to the functional result defined in the configuration definition120. 2.4.3 Propagating Commands and Parameters Once the set of commands and parameter values has been derived by the automation controller160from the configuration definition120, automation controller160is programmed or configured to propagate and deploy these commands and parameter values to all other regional clusters so that they conform to the settings of the configuration definition120. For example, the commands and parameter values may include commands and parameter values for managing, creating, updating, and deleting users, groups and permissions for any repository, as well as configuring external visibility of artifacts. The commands and parameter values can, in some embodiments, create users and permissions, inject credentials into a CI system if needed, and establish visibility settings as needed. Automation controller160can execute these operations in other regional clusters, such as regional cluster110B, either serially or in parallel based on a dependency graph.
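The serial-or-parallel execution “based on a dependency graph” mentioned above can be illustrated with Python's standard `graphlib`: commands in the same ready batch have no unmet dependencies and could run in parallel, while successive batches must run serially. The command names here are hypothetical:

```python
from graphlib import TopologicalSorter

def propagation_order(deps):
    """deps maps each command id to the set of command ids it depends on.
    Returns batches of commands; each batch is independent internally
    (parallelizable), and batches execute one after another."""
    ts = TopologicalSorter(deps)
    ts.prepare()
    batches = []
    while ts.is_active():
        ready = sorted(ts.get_ready())  # all commands runnable right now
        batches.append(ready)
        ts.done(*ready)
    return batches
```

For instance, user creation must precede permission grants and visibility settings, but the latter two are mutually independent and land in the same batch.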
The result of the propagation of commands and parameters is that each regional cluster110B is configured using the configuration definition120and includes appropriate replicated mirrors, as well as permissions for users and groups, so that a software developer can seamlessly interact with a local regional cluster the same way that they would have been able to interact with any other regional cluster, as all appropriate repository contents and permission settings have been appropriately mirrored to all regional clusters. 3.0 EXAMPLE PROCESS AND ALGORITHM FIG.2illustrates a flow diagram of an example process for performing automated configuration and replication of repositories. For purposes of illustrating a clear example, process200ofFIG.2is described using automation system100, but other embodiments may use systems other thanFIG.1.FIG.2is intended to disclose algorithms or functional descriptions that may be used as a basis of writing computer programs to implement the functions that are described herein, and which cause a computer to operate in the new manner that is disclosed herein. Further,FIG.2is provided to communicate such an algorithm at the same level of detail that is normally used, by persons of skill in the art to which this disclosure is directed, to communicate among themselves about plans, designs, specifications and algorithms for other computer programs of a similar level of complexity. The steps of process200may be performed in any order, and are not limited to the order shown inFIG.2.
In general, process200provides for detecting an approval of a change to an electronic configuration document that symbolically identifies one or more configurations of users, groups, and/or permissions relating to access to computer program artifacts that are stored in a first repository of a geographically distributed, replicated artifact repository system, the artifact repository system comprising one or more second repositories that are geographically remote with respect to the first repository and which replicate the first repository; in response to the detecting: obtaining the electronic configuration document and deriving, based on the electronic configuration document, a plurality of regional repository settings values for users, groups, and/or permissions relating to access to the computer program artifacts and for the one or more second repositories; and transmitting the one or more settings values to the one or more second repositories and causing injection of the one or more settings values into one or more repository configuration settings of the second repositories. The process200may begin in step210. In step210, automation controller160is programmed or configured to detect changes to one or more configuration file(s)120. In an embodiment, automation controller160may detect any newly created configuration file, modified configuration file, and/or deletion of a configuration file. In one embodiment, automation controller160will only detect a change to one or more configuration files if the changes have been approved and/or committed in a repository, such as by a GitHub pull request that requires user approval. Once automation controller160detects a change to one or more configuration file(s)120, the process200may proceed to step220. In step220, automation controller160is programmed or configured to ingest the one or more configuration file(s). During this step, automation controller160may parse the configuration definition120.
In an embodiment, automation controller160may be programmed or configured to parse configuration settings from multiple separate, but associated configuration file(s)120that collectively provide a complete configuration. The automation controller160may marshal the separate configuration file(s)120and process them collectively, thereby facilitating more efficient data storage and management of very large configuration files. The process200may then proceed to step230. In step230, automation controller160may optionally be programmed or configured to validate the configuration file(s)120ingested in the previous step. Validation may include validating the syntax of the configuration file(s), validating the values of the configuration settings in the configuration file(s), and/or any other check or validation on the contents or structure of the configuration file(s). If automation controller160detects a validation error, such as improper syntax or some other failure in parsing the configuration definition120, automation controller160may generate an error warning to notify an administrator about the validation error. In an embodiment, detection of a validation error may cause the automation controller160to end process200, to allow time for the administrator to correct the cause of the validation error. The process200may then proceed to step240. In step240, automation controller160is programmed or configured to derive a set of commands and/or parameters for the configuration of one or more regional clusters based on the contents of the configuration file(s)120.
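Step230's validation could, for instance, check the structure of the merged configuration and the permission letters it grants, as sketched below; the specific rules are illustrative assumptions, not the patent's validation logic:

```python
VALID_PERMISSION_LETTERS = {"d", "w", "n", "r"}

def validate_config(config):
    """Return a list of validation error messages (empty means valid)."""
    errors = []
    if "repositories" not in config:
        errors.append("missing 'repositories' section")
    for user, letters in config.get("principals", {}).items():
        bad = [p for p in letters if p not in VALID_PERMISSION_LETTERS]
        if bad:
            errors.append(f"unknown permission(s) {bad} for user '{user}'")
    return errors
```

A non-empty result would correspond to the error warning that halts process200until an administrator corrects the configuration.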
During this derivation process, automation controller160is programmed or configured to execute inferential transformations that derive, based on the configuration definition120, various configuration commands and configuration parameters for a regional cluster, including, but not limited to: which repositories should exist, whether a repository is a local repository, whether a repository is a remote repository of another local repository in another regional cluster, whether a repository is a remote repository of an external repository, the users and/or groups with access to each repository, the types of access permissions for each user and group, including, but not limited to, particular artifacts or repository paths that are visible or not visible to the users and groups, and any other configuration setting that is included in the configuration definition120. Based on the configuration specified in the configuration definition120, automation controller160can thus automatically derive and determine the topology of permissions for an existing regional cluster and how that regional cluster should be mirrored to another regional cluster. The output of the derivation process is a set of commands and parameter values for the configuration of a separate regional cluster that conforms to the functional result defined in the configuration definition120. The process200may then proceed to step250. In step250, automation controller160is programmed or configured to propagate and/or apply the commands and/or parameter values derived in step240to one or more regional clusters in order to configure the one or more regional clusters based on the configuration file(s). The commands and parameter values can include commands and parameter values for managing, creating, updating, and deleting users, groups and permissions for any repository, as well as configuring external visibility of artifacts.
The commands and parameter values can, in some embodiments, be used by automation controller160to create users and permissions, inject credentials into a CI system if needed, and establish external visibility settings as needed. Automation controller160can execute these operations in other regional clusters, such as regional cluster110B, either serially or in parallel based on a dependency graph. The result of the propagation of commands and parameters is that each regional cluster110B is configured using the configuration definition120and includes appropriate replicated mirrors of repositories. The process200may then end. 4.0 IMPLEMENTATION MECHANISMS—HARDWARE OVERVIEW Referring now toFIG.3, it is a block diagram that illustrates a computing device300in which the example embodiment(s) of the present invention may be embodied. Computing device300and its components, including their connections, relationships, and functions, are meant to be exemplary only, and not meant to limit implementations of the example embodiment(s). Other computing devices suitable for implementing the example embodiment(s) may have different components, including components with different connections, relationships, and functions. Computing device300may include a bus302or other communication mechanism for addressing main memory306and for transferring data between and among the various components of device300. Computing device300may also include one or more hardware processors304coupled with bus302for processing information. A hardware processor304may be a general purpose microprocessor, a system on a chip (SoC), or other processor. Main memory306, such as a random access memory (RAM) or other dynamic storage device, also may be coupled to bus302for storing information and software instructions to be executed by processor(s)304.
Main memory306also may be used for storing temporary variables or other intermediate information during execution of software instructions to be executed by processor(s)304. Software instructions, when stored in storage media accessible to processor(s)304, render computing device300into a special-purpose computing device that is customized to perform the operations specified in the software instructions. The terms “software”, “software instructions”, “computer program”, “computer-executable instructions”, and “processor-executable instructions” are to be broadly construed to cover any machine-readable information, whether or not human-readable, for instructing a computing device to perform specific operations, and including, but not limited to, application software, desktop applications, scripts, binaries, operating systems, device drivers, boot loaders, shells, utilities, system software, JAVASCRIPT, web pages, web applications, plugins, embedded software, microcode, compilers, debuggers, interpreters, virtual machines, linkers, and text editors. Computing device300also may include read only memory (ROM)308or other static storage device coupled to bus302for storing static information and software instructions for processor(s)304. One or more mass storage devices310may be coupled to bus302for persistently storing information and software instructions on fixed or removable media, such as magnetic, optical, solid-state, magnetic-optical, flash memory, or any other available mass storage technology. The mass storage may be shared on a network, or it may be dedicated mass storage. Typically, at least one of the mass storage devices310(e.g., the main hard disk for the device) stores a body of program and data for directing operation of the computing device, including an operating system, user application programs, driver and other support files, as well as other data files of all sorts. 
Computing device300may be coupled via bus302to display312, such as a liquid crystal display (LCD) or other electronic visual display, for displaying information to a computer user. In some configurations, a touch sensitive surface incorporating touch detection technology (e.g., resistive, capacitive, etc.) may be overlaid on display312to form a touch sensitive display for communicating touch gesture (e.g., finger or stylus) input to processor(s)304. An input device314, including alphanumeric and other keys, may be coupled to bus302for communicating information and command selections to processor304. In addition to or instead of alphanumeric and other keys, input device314may include one or more physical buttons or switches such as, for example, a power (on/off) button, a “home” button, volume control buttons, or the like. Another type of user input device may be a cursor control316, such as a mouse, a trackball, or cursor direction keys for communicating direction information and command selections to processor304and for controlling cursor movement on display312. This input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allows the device to specify positions in a plane. While in some configurations, such as the configuration depicted inFIG.3, one or more of display312, input device314, and cursor control316are external components (i.e., peripheral devices) of computing device300, some or all of display312, input device314, and cursor control316are integrated as part of the form factor of computing device300in other configurations. Functions of the disclosed systems, methods, and modules may be performed by computing device300in response to processor(s)304executing one or more programs of software instructions contained in main memory306. Such software instructions may be read into main memory306from another storage medium, such as storage device(s)310. 
Execution of the software instructions contained in main memory306causes processor(s)304to perform the functions of the example embodiment(s). While functions and operations of the example embodiment(s) may be implemented entirely with software instructions, hard-wired or programmable circuitry of computing device300(e.g., an ASIC, an FPGA, or the like) may be used in other embodiments in place of or in combination with software instructions to perform the functions, according to the requirements of the particular implementation at hand. The term “storage media” as used herein refers to any non-transitory media that store data and/or software instructions that cause a computing device to operate in a specific fashion. Such storage media may comprise non-volatile media and/or volatile media. Non-volatile media includes, for example, non-volatile random access memory (NVRAM), flash memory, optical disks, magnetic disks, or solid-state drives, such as storage device310. Volatile media includes dynamic memory, such as main memory306. Common forms of storage media include, for example, a floppy disk, a flexible disk, hard disk, solid-state drive, magnetic tape, or any other magnetic data storage medium, a CD-ROM, any other optical data storage medium, any physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, NVRAM, flash memory, any other memory chip or cartridge. Storage media is distinct from but may be used in conjunction with transmission media. Transmission media participates in transferring information between storage media. For example, transmission media includes coaxial cables, copper wire and fiber optics, including the wires that comprise bus302. Transmission media can also take the form of acoustic or light waves, such as those generated during radio-wave and infra-red data communications. Various forms of media may be involved in carrying one or more sequences of one or more software instructions to processor(s)304for execution.
For example, the software instructions may initially be carried on a magnetic disk or solid-state drive of a remote computer. The remote computer can load the software instructions into its dynamic memory and send the software instructions over a telephone line using a modem. A modem local to computing device300can receive the data on the telephone line and use an infra-red transmitter to convert the data to an infra-red signal. An infra-red detector can receive the data carried in the infra-red signal and appropriate circuitry can place the data on bus302. Bus302carries the data to main memory306, from which processor(s)304retrieves and executes the software instructions. The software instructions received by main memory306may optionally be stored on storage device(s)310either before or after execution by processor(s)304. Computing device300also may include one or more communication interface(s)318coupled to bus302. A communication interface318provides a two-way data communication coupling to a wired or wireless network link320that is connected to a local network322(e.g., Ethernet network, Wireless Local Area Network, cellular phone network, Bluetooth wireless network, or the like). Communication interface318sends and receives electrical, electromagnetic, or optical signals that carry digital data streams representing various types of information. For example, communication interface318may be a wired network interface card, a wireless network interface card with an integrated radio antenna, or a modem (e.g., ISDN, DSL, or cable modem). Network link(s)320typically provide data communication through one or more networks to other data devices. For example, a network link320may provide a connection through a local network322to a host computer324or to data equipment operated by an Internet Service Provider (ISP)326. ISP326in turn provides data communication services through the world wide packet data communication network now commonly referred to as the “Internet”328. 
Local network(s)322and Internet328use electrical, electromagnetic or optical signals that carry digital data streams. The signals through the various networks and the signals on network link(s)320and through communication interface(s)318, which carry the digital data to and from computing device300, are example forms of transmission media. Computing device300can send messages and receive data, including program code, through the network(s), network link(s)320and communication interface(s)318. In the Internet example, a server330might transmit a requested code for an application program through Internet328, ISP326, local network(s)322and communication interface(s)318. The received code may be executed by processor304as it is received, and/or stored in storage device310, or other non-volatile storage for later execution. 5.0 IMPLEMENTATION MECHANISMS—SOFTWARE OVERVIEW FIG.4is a block diagram of a software system400that may be employed for controlling the operation of computing device300. Software system400and its components, including their connections, relationships, and functions, are meant to be exemplary only, and not meant to limit implementations of the example embodiment(s). Other software systems suitable for implementing the example embodiment(s) may have different components, including components with different connections, relationships, and functions. Software system400is provided for directing the operation of computing device300. Software system400, which may be stored in system memory (RAM)306and on fixed storage (e.g., hard disk or flash memory)310, includes a kernel or operating system (OS)410. The OS410manages low-level aspects of computer operation, including managing execution of processes, memory allocation, file input and output (I/O), and device I/O. One or more application programs, represented as402A,402B,402C . . .402N, may be “loaded” (e.g., transferred from fixed storage310into memory306) for execution by the system400.
The applications or other software intended for use on device300may also be stored as a set of downloadable computer-executable instructions, for example, for downloading and installation from an Internet location (e.g., a Web server, an app store, or other online service). Software system400includes a graphical user interface (GUI)415, for receiving user commands and data in a graphical (e.g., “point-and-click” or “touch gesture”) fashion. These inputs, in turn, may be acted upon by the system400in accordance with instructions from operating system410and/or application(s)402. The GUI415also serves to display the results of operation from the OS410and application(s)402, whereupon the user may supply additional inputs or terminate the session (e.g., log off). OS410can execute directly on the bare hardware420(e.g., processor(s)304) of device300. Alternatively, a hypervisor or virtual machine monitor (VMM)430may be interposed between the bare hardware420and the OS410. In this configuration, VMM430acts as a software “cushion” or virtualization layer between the OS410and the bare hardware420of the device300. VMM430instantiates and runs one or more virtual machine instances (“guest machines”). Each guest machine comprises a “guest” operating system, such as OS410, and one or more applications, such as application(s)402, designed to execute on the guest operating system. The VMM430presents the guest operating systems with a virtual operating platform and manages the execution of the guest operating systems. In some instances, the VMM430may allow a guest operating system to run as if it is running on the bare hardware420of device300directly. In these instances, the same version of the guest operating system configured to execute on the bare hardware420directly may also execute on VMM430without modification or reconfiguration. In other words, VMM430may provide full hardware and CPU virtualization to a guest operating system in some instances.
In other instances, a guest operating system may be specially designed or configured to execute on VMM430for efficiency. In these instances, the guest operating system is “aware” that it executes on a virtual machine monitor. In other words, VMM430may provide para-virtualization to a guest operating system in some instances. The above-described computer hardware and software is presented for purpose of illustrating the underlying computer components that may be employed for implementing the example embodiment(s). The example embodiment(s), however, are not necessarily limited to any particular computing environment or computing device configuration. Instead, the example embodiment(s) may be implemented in any type of system architecture or processing environment that one skilled in the art, in light of this disclosure, would understand as capable of supporting the features and functions of the example embodiment(s) presented herein. 6.0 OTHER ASPECTS OF DISCLOSURE Although some of the figures described in the foregoing specification include flow diagrams with steps that are shown in an order, the steps may be performed in any order, and are not limited to the order shown in those flowcharts. Additionally, some steps may be optional, may be performed multiple times, and/or may be performed by different components. All steps, operations and functions of a flow diagram that are described herein are intended to indicate operations that are performed using programming in a special-purpose computer or general-purpose computer, in various embodiments. In other words, each flow diagram in this disclosure, in combination with the related text herein, is a guide, plan or specification of all or part of an algorithm for programming a computer to execute the functions that are described. 
The level of skill in the field associated with this disclosure is known to be high, and therefore the flow diagrams and related text in this disclosure have been prepared to convey information at a level of sufficiency and detail that is normally expected in the field when skilled persons communicate among themselves with respect to programs, algorithms and their implementation. In the foregoing specification, the example embodiment(s) of the present invention have been described with reference to numerous specific details. However, the details may vary from implementation to implementation according to the requirements of the particular implementation at hand. The example embodiment(s) are, accordingly, to be regarded in an illustrative rather than a restrictive sense.
11863385

The accompanying drawings show simplified representations of devices or parts thereof, as involved in embodiments. Similar or functionally similar elements in the figures have been allocated the same numeral references, unless otherwise indicated. Computerized systems, methods, and computer program products embodying the present invention will now be described, by way of non-limiting examples. DETAILED DESCRIPTION OF EMBODIMENTS OF THE INVENTION The following description is structured as follows. General embodiments and high-level variants are described in section 1. Section 2 addresses more specific embodiments and section 3 concerns technical implementation details. Note, the present method and its variants are collectively referred to as the “present methods”. All references Sn refer to method steps of the flowcharts ofFIGS.2,4,5, and6, while numeral references pertain to devices, components, and concepts used in the present invention. Section 1. General embodiments and high-level variants In reference toFIGS.1A,1B, and2, a first aspect of the invention is now described, which concerns a method of running software inside containers. The method relies on a computerized system5such as depicted inFIG.1A. In this example, the system5is assumed to be in data communication with a cloud computing system3. A user1is assumed to be able to communicate with the system5via a user device2and the cloud3. In variants, the system5may simply form part of the cloud3. In all cases, the method is performed by computerized entities, which include entities10,20,30of the system5itself and may further involve other entities (e.g., from the cloud3and the user) interacting with the system5. The method may also be partly implemented at nodes of a cloud platform3. The system5includes general-purpose hardware10,30, such as processors of central processing units (CPUs), graphics processing units (GPUs), and other electronic circuits, which typically forms part of a server6.
Interestingly, the system5is further equipped with a composable disaggregated infrastructure15, which includes specialized, network-attached hardware components20(or NHCs for short). The NHCs20typically include hardware accelerators, such as field-programmable gate arrays (FPGAs) and application-specific integrated circuits (ASICs). The components20may notably be specifically designed or configured to execute given libraries, as discussed later in detail. Broadly speaking, the computerized system5is configured to dynamically allocate computerized resources, e.g., via the cloud3. Such resources can be decomposed into general resources and specialized resources. The general resources are enabled by the general-purpose hardware10,30, while the specialized resources are enabled by the NHCs20of the composable disaggregated infrastructure15. The general resources may notably include compute resources involving general-purpose processing units, memory resources, and storage resources, whereas the specialized resources involve specialized NHCs20, which are attached to the infrastructure15and can be reached via a network, i.e., thanks to a network protocol, using network interface means. The system5actually concerns another aspect of the invention, which is discussed later. According to the proposed method, certain tasks required for the containers9to execute at the system5are offloaded to the NHCs20of the composable disaggregated infrastructure15. Running S12software inside a given container requires executing S13, S14functions corresponding to this software and this container. Now, according to the proposed approach, some of these functions (say a first subset of these) are executed S13using the general resources enabled by the general-purpose hardware10,30, whereas other functions (i.e., a second subset of the required functions) are executed S14using the specialized resources, as enabled by the NHCs20. 
This is achieved by offloading S14the second subset of functions to respective components20of the NHCs20, in accordance S25, S26with specializations of the NHCs20. To that aim, the proposed method maintains a table30(i.e., a registry or a lookup table, called “specialization array” inFIGS.4-6), which captures the specializations of the NHCs20. The method needs to keep track of such specializations, in order to be able to suitably offload the execution of functions to the NHCs20. Note, the table30is not necessarily involved explicitly at runtime, for reasons that will become apparent later. Still, the specialized functions are offloaded S14(at runtime) in accordance with the specializations as captured in this table30. The proposed solution leverages a disaggregated computing system5involving a composable disaggregated infrastructure15, which can be flexibly reconfigured, whereby NHCs20can be added, reconfigured or, more generally, updated S235, to meet the needs of container users1, as in embodiments. Note, in practice, the NHCs20can be provisioned either independently of the provisioning of the rest of the infrastructure of the system5(as assumed inFIG.1A) or using the same infrastructure. Using NHCs20makes the execution of the containers9and the software executing inside the containers more efficient, because some of the functions required for this execution are offloaded to hardware that is specialized, i.e., specifically configured for executing such functions. Moreover, the proposed architecture allows the build time, response time, and memory footprint of the containers9to be substantially decreased, as some of the container workload can be directly offloaded to specialized hardware20, thereby bypassing the technical debt of usual servers6. The technical debt refers to the computational costs incurred by the various layers going from the hardware level up to the application level, including the operating system, the virtualization, the drivers, etc.
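For illustration, the specialization table30can be modeled as a simple registry mapping accelerated functions to the network endpoints of the NHCs20that implement them. The following Python sketch is purely illustrative: the function names, component identifiers, and addresses are hypothetical, not part of the described system.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class NhcEntry:
    component_id: str   # identifier of the NHC within the infrastructure (15)
    kind: str           # e.g., "FPGA" or "ASIC"
    host: str           # network address used by the interface logic
    port: int

# Hypothetical contents of the specialization table (30).
SPECIALIZATION_TABLE = {
    "opencv.resize": NhcEntry("nhc-fpga-0", "FPGA", "10.0.0.21", 9000),
    "numpy.matmul":  NhcEntry("nhc-asic-1", "ASIC", "10.0.0.22", 9001),
}

def lookup(function_name):
    """Return the NHC entry for a specialized function, or None when the
    function belongs to the first subset (general-purpose hardware)."""
    return SPECIALIZATION_TABLE.get(function_name)
```

A lookup that returns None corresponds to a function executed S13on the general-purpose hardware10; any other result drives the offloading S14.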
In addition, offloading software functions to the NHCs20makes it possible to shrink the container's image size, while increasing the number of containers9per bare-metal server6and decreasing the execution time of the containers (application acceleration). In an embodiment, the offloading can be done seamlessly, e.g., using network sockets. Moreover, the NHCs20can easily be integrated next to usual container platforms and simple procedures are proposed herein to build and deploy containers leveraging such NHCs20. The specialization table30may possibly form part of a “hardware repository”32(FIG.1B). The latter is similar in purpose to a software repository31(also known as a “container repository”) that is used to build container software. However, the hardware repository32is here meant to contain or provide access to all data (including programming code and configuration parameters) necessary to run the NHCs20. For instance, the hardware repository32may include bitfiles for FPGAs, as well as metadata. In addition, the repository32may store the specialization table30. The control data that are necessary to offload the execution of specialized functions can be included in (or indirectly implied by) the container image. In operation, inputs and outputs (I/O) use network interface and connection means to suitably reach the NHCs20. Note, the network interface and connection data needed to connect to the NHCs20can be stored in the hardware repository32too. The offloading operations S14can be managed statically or dynamically, depending on the network protocols, network interfaces, and connection means relied upon. For example, the method may use mechanisms involving pure network sockets, Remote Direct Memory Access (RDMA), Representational State Transfer (REST) APIs, Remote Procedure Calls (RPCs), stream processing/message brokers (e.g., Apache Kafka, Apache Flink, Apache Samza, Apache Spark, RabbitMQ), etc.
Various RPC implementations can be contemplated, such as the so-called gRPC, Protocol Buffers, Apache Thrift, Apache Avro, JSON-RPC, and XML-RPC. More generally, various protocols and interfaces exist, which allow connecting to the relevant NHCs20at runtime. A particularly practical approach is for the interface logic to rely on network sockets, which make it possible to seamlessly reach the NHCs20. Note, such interface logic does not explicitly involve the table30. However, it is designed and built in accordance with specializations as tracked in this table, so as to make it possible to reach the relevant NHCs. In variants, a mechanism similar to a domain name system can be used, to suitably reach the NHCs20. The control paths and data paths are typically managed by the runtime system on execution of the containers9. Note, the containers and software executing inside the containers9may possibly be orchestrated. As evoked above, a static addressing mechanism can be used to address the NHCs20at runtime, especially for what concerns the usual, highly repetitive tasks. However, a dynamic addressing mechanism can be advantageous when reconfigurations of the NHCs20are needed. This way, it will not be necessary to generate new container images that consistently reflect the latest NHC configurations. As said, a dynamic addressing mechanism can for instance be handled using a DNS-like mechanism or any dynamic address management protocol. As per the present approach, executing software inside the containers9causes certain functions to be performed on the conventional hardware (e.g., the servers' CPUs10) and the execution of other functions to be offloaded to the NHCs20. The functions performed on the conventional (server) hardware10are typically the most basic functions, while most specialized (typically work-intensive) tasks end up on the NHCs20. To that aim, containers “talk” to the NHCs20over a network. I.e., as noted earlier, inputs and outputs use network interface and connection means.
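The socket-based dispatch described above can be sketched as follows. This is a minimal illustration, not the described implementation: the transport is stubbed out by a plain callable standing in for a socket or RPC connection, and the JSON payload format is an assumption.

```python
import json

def offload(transport, function_name, args):
    """Serialize the call and send it over the transport, which stands in
    for a network socket (or RPC stub) connected to the relevant NHC."""
    request = json.dumps({"fn": function_name, "args": args})
    return transport(request)

def dispatch(specialized, transport, function_name, args, local_impl):
    """Route a function call: offload it (S14) if its name appears among
    the specialized functions, else execute it locally (S13)."""
    if function_name in specialized:
        return offload(transport, function_name, args)
    return local_impl(*args)

# A stand-in NHC that "accelerates" one toy task: doubling a vector.
def fake_nhc(request):
    call = json.loads(request)
    return [2 * x for x in call["args"][0]]
```

The same routing logic applies whether the transport is a pure socket, an RPC stub, or a message-broker producer; only the `transport` callable changes.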
The application software, however, is typically agnostic to such connection means. In general, the system5may include one or more servers6, where such servers are configured to provide general resources. Similarly, the system5may include one or more composable disaggregated infrastructures15. In the example ofFIG.1A, the system5includes a single server6and a single disaggregated infrastructure15, for simplicity. The computerized system5is further assumed to be reachable via a cloud computing system3, such that users1can request to deploy and run containers9as part of a cloud service. Note, the specialized functions may be directly offloaded S14to respective NHCs20via the cloud computing system3at runtime, so as to bypass the server(s)6. As noted earlier, the NHCs20may possibly have to be reconfigured (to specialize the NHCs in performing specific tasks) and/or new NHCs20may be added in the infrastructure15, as necessary to meet user needs. More generally, one or more of the NHCs20may have to be updated S235to change their specializations, seeFIG.5. Note, such updates can happen any time, i.e., before, during, or after execution of a container. The table30must be consistently maintained S236, i.e., updated S236according to the changed or added specializations of the devices20. In practice, NHCs20may have to be continually updated S235based on the functionalities desired for the containers, which evolve over time. Such functionalities are defined in container files40(such as the so-called Docker files) provided by users1willing to deploy their containers. A container file40is typically a text document that contains all the commands a user could call on the command line to assemble an image. As illustrated inFIGS.3,4, and5, when a user1wants to deploy a container, the user provides a container file40. The file40is accessed and parsed to identify S21those functions that are implied by the functionalities defined in the container file. 
Based on all the identified functions, it is then possible to identify those functions (i.e., the second subset of functions) to be offloaded to the NHCs20, according to the specializations captured in the table30. The identified functions are then mapped S25onto the computerized resources (be they the general or specialized resources). This way, associations are obtained, which reflect this mapping. In particular, the second subset of functions are mapped onto respective components20in accordance with respective specializations as captured in the table30. Eventually, an image of the container9is built according to the associations obtained, with a view to obtaining a corresponding executable, i.e., the container itself. Thus, software can be subsequently run S12inside this container, based on the image built. Note, the terminology “container” refers to an executable program, executed at runtime, while a “container image” is a set of files used at build time to obtain the executable container. When at rest, the container image consists of one or more files stored in some suitable location, e.g., in a file format used to package software components and dependencies of a containerized software package. Examples of such container image formats are the Docker container images (Docker), Appc, LXD, and Open Container Initiative (OCI). When a user types a command to start a container, the container engine unpacks the required files and metadata, then hands them off to the Linux kernel. In the present case, the container engine may advantageously pull all the required data from distinct repositories31,32, as discussed later in detail. It may not always be possible to directly identify all the required functions (step S21inFIG.4), because some functions are only implicit, i.e., they are implied by other functions.
Thus, one may distinguish direct functions (functions that are directly implied by the functionalities defined in the container file40) from indirect functions (functions that are only indirectly implied by the functionalities defined in the container file40). In such cases, one may advantageously rely on a process as now discussed in reference toFIG.5. That is, the identification performed at step S21may start with identifying S211the direct functions. This can simply be achieved by parsing the container file40. Next, indirect functions can be identified S212-S214based on the direct functions identified. To that aim, one may for instance simply rely on a lookup table, which aggregates typical dependencies, known from experience gained with previous cases. However, it is preferred to identify S212-S214indirect functions by building S212an initial image of the container and executing S214a corresponding container for testing purposes, with a view to identifying residual functions (i.e., does the initial container call unmapped functions?). Eventually, a final container image is built S212based on both the direct functions and the indirect functions accordingly identified. In other words, the system may dynamically identify residual functions by testing an initial image of the container. The same procedure can be repeated for each container to be deployed. More generally, the procedures described herein can be applied to every container to be deployed on the system5. Typically, the functionalities of a container imply the execution of software libraries. Thus, the system5may advantageously include NHCs20that are specifically configured to execute such libraries (and, in particular, to accelerate the execution of such libraries), starting with the most commonly used libraries, especially those that are the most work intensive.
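The distinction between direct and indirect functions can be illustrated with a toy parser. In this sketch, the container file contents and the dependency data are hypothetical; a real system would discover the indirect dependencies by actually executing a test container (S214), which is stood in for here by a static lookup table.

```python
# Hypothetical container file (40), in Docker-file style.
CONTAINER_FILE = """\
FROM python:3.11
RUN pip install opencv-python
CMD ["python", "app.py"]
"""

# What the test execution (S214) would reveal as transitive dependencies.
INDIRECT_DEPS = {"opencv-python": {"numpy"}}

def identify_direct(container_file):
    """Step S211: parse the container file for directly implied libraries."""
    libs = set()
    for line in container_file.splitlines():
        if line.startswith("RUN pip install"):
            libs.update(line.split()[3:])
    return libs

def identify_all(container_file):
    """Steps S212-S215: add the indirect (residual) dependencies revealed
    by the test run to the set of direct libraries."""
    direct = identify_direct(container_file)
    indirect = set()
    for lib in direct:
        indirect |= INDIRECT_DEPS.get(lib, set())
    return direct | indirect
```

Here the container file only names opencv-python directly, yet the full set also contains numpy, the residual dependency that the test execution would surface.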
I.e., the NHCs20can be designed to execute such libraries in a more efficient manner than the general-purpose hardware10of the system5, hence the benefit of offloading them to the NHCs20. As schematically illustrated inFIG.3, the functionalities of a container can be captured in a container file40. Such functionalities may notably be defined by a business logic and library dependencies. For example, as exemplified inFIG.3, a given container file may require the execution of Python libraries such as “OpenCV” and “NumPy”, within a given business logic. I.e., the functions identified at step S21typically depend on both a business logic and software libraries. In such cases, the second subset of functions (i.e., functions relating to such libraries) can be offloaded S14to respective NHCs20by mapping bindings of the corresponding library dependencies onto gates of respective NHCs20, typically the gates of ASICs and/or FPGAs. This makes it possible to pass the execution of certain libraries most efficiently to respective NHCs20at run time, according to the mapping decided earlier. I.e., it is possible to map some functions directly into FPGA or ASIC gates. Doing so is extremely efficient in practice and markedly improves over execution by general-purpose hardware10. Referring toFIG.6, when the system5receives S20a request to deploy a container9from a user1, the user1can advantageously be guided S20atowards the deployment of this container. As a result, steps S21and S23can be performed as part of a user-assisted process, with a view to eventually serving the deployment of user containers. Preferred user-guided processes are described in detail in section 2. The user-guided process leads the user to write data S2363to a software repository31.
I.e., software packages that are implemented using software containers are typically stored in a software repository31, which may include all components and dependencies required to run each particular software package in each software container. Software repositories are known per se. In addition, a further repository32(here called a “hardware repository”) may advantageously be used to keep track of data required by the NHCs to execute the specialized functions. That is, various NHC-related parameters (e.g., programming and/or configuration parameters of the NHCs) can be written S2362to the hardware repository32, in accordance with specializations of the NHCs20, while application-related data (including dependency data) are written S2363to the software repository31. Eventually, the container image is built S26in accordance with data stored in the hardware repository32and the software repository31. At runtime, the container engine8(i.e., the piece of software that runs the containers9) pulls data from the hardware repository32and from the software repository31to run the container9and the software inside it, as illustrated inFIG.1B. In addition to repositories31,32, container registries may possibly be involved too. A container registry is a service that stores and distributes container images and related artifacts. Docker Hub is an example of a public container registry, which serves as a general catalog of Docker container images. A container repository is a collection of container images or other artifacts (in a registry), which typically have the same name but different tags. For example, successive versions of a given image can be stored in a same repository. Typically, the container registry is used in the context of a domain name and a service that allows users to pull and push container image data. Another aspect of the invention is now described in reference toFIGS.1A and1B. 
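The two-repository build step can be sketched as follows. The repository contents (layer digests, bitfile names, endpoints) are hypothetical; the point is only that the image manifest combines software layers from the software repository31with the NHC control data from the hardware repository32.

```python
# Hypothetical repository contents.
SOFTWARE_REPO = {"app-layer": "sha256:aaa", "numpy": "sha256:bbb"}
HARDWARE_REPO = {"opencv.resize": {"bitfile": "resize.bit",
                                   "nhc": ("10.0.0.21", 9000)}}

def build_image(software_deps, offloaded_functions):
    """Step S26: assemble a toy image manifest combining software layers
    (from repository 31) with the control data needed at runtime to reach
    the NHCs implementing the offloaded functions (from repository 32)."""
    return {
        "layers": {d: SOFTWARE_REPO[d] for d in software_deps},
        "offload": {f: HARDWARE_REPO[f]["nhc"] for f in offloaded_functions},
    }
```

At runtime, the container engine8would read the "offload" section of such a manifest to know where to direct the specialized function calls.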
This other aspect concerns a computerized system5for running software inside containers9. Several features of the system5have already been discussed in reference to the present methods. Thus, the system5is only briefly described in the following. The system5comprises general-purpose hardware (e.g., CPUs/GPUs105, memory110, storage120, etc.), as well as a composable disaggregated infrastructure15equipped with several NHCs20, as previously discussed. The system5may possibly include several servers6(each comprising general-purpose hardware), as well as several disaggregated structures15, possibly on different sites. The servers6may possibly be delocalized over several computerized entities and may notably include several computerized units101such as shown inFIG.1C. The system5typically executes a system software at one or more entities of the system5; the execution of this system software results in configuring the system5to perform steps as described earlier in reference to the present methods. As a result, and consistently with the present methods, the system5is configured to dynamically allocate computerized resources, i.e., general resources enabled by the general-purpose hardware, as well as specialized resources enabled by the NHCs20. The system5is further configured to run software inside each container by executing corresponding functions. In operation, a first subset of the functions are executed using the general resources, whereas a second subset of the functions are executed using the specialized resources, by offloading the second subset of functions to respective NHCs20, in accordance with the specializations. To that aim, the system5maintains a table30capturing the specializations of the NHCs20. As noted earlier, the system5may possibly form part of a cloud computing system, contrary to the assumption made inFIG.1A, where the system is set in data communication with the cloud computing system3.
A preferred scenario is one in which users1deploy containers in the system5via user devices2(i.e., computers) and a cloud3. In particular, all required network connections might be passed directly through the cloud3to the NHCs20, hence bypassing the server(s)6. The NHCs20typically include hardware accelerators. The latter are advantageously configured specifically to accelerate the execution of certain libraries, e.g., those libraries that are most frequently required for the execution of user containers and corresponding software, starting with the most work-intensive libraries. As noted earlier, bindings of the library dependencies can advantageously be mapped onto gates of the NHCs20. For example, the hardware accelerators may include FPGAs and ASICs. In addition, the accelerators may include field-programmable analog arrays, complex programmable logic devices, data processing units (DPUs), digital signal processors, tensor processing units (TPUs), physics processing units, vision processing units, physical neural networks, secure cryptoprocessors, and systems-on-chip. In addition, the NHCs20may include components20that are configured as cryptographic accelerators, artificial intelligence accelerators, data compression accelerators, and quantum computing simulation accelerators. The computerized system5may comprise one or more servers6, where the servers6are equipped with the general-purpose hardware10that enables the general resources. The general-purpose hardware10may notably include or consist of computerized units101such as shown inFIG.1C. Additional aspects of such units101are discussed in section 3. Note, the servers6may possibly include accelerators30too, albeit not network-attached, as assumed inFIG.1A. The accelerators30may notably include GPUs, TPUs, DPUs, FPGAs, and ASICs.
For instance, certain deployment scenarios involve a container using general-purpose hardware10and PCIe-attached GPUs30at the server6, as well as network-attached FPGAs20at the infrastructure15. Next, according to another aspect, the invention can be embodied as a computer program product. The latter may notably embody a system software of a computerized system5such as described above, the aim being to be able to run software inside containers9. The computer program product comprises a computer readable storage medium having program instructions embodied therewith. The program instructions are executable by processing means of the computerized system5, causing the system software to perform steps according to the present methods. Additional aspects of such computer program products are discussed in section 3. The above embodiments have been succinctly described in reference to the accompanying drawings and may accommodate a number of variants. Several combinations of the above features may be contemplated. Examples are given in the next section.
Section 2. Particularly preferred embodiments
Section 2.1 Preferred architecture
A preferred architecture is shown inFIG.1A. Users1interact with the system5via a cloud3, using their computerized devices2. The system5includes one or more servers6, as well as one or more composable disaggregated structures15. Each server6relies on general-purpose hardware10(see alsoFIG.1B), as well as accelerators30(albeit not network-attached). Each server6runs one or more virtual machines (VMs, or host operating systems)7, themselves executing one or more container engines8, in order to execute containers9. In operation, the container engines pull data contained in the repositories31,32. Note, the software application container typically includes a base (read-only) image and a writable layer. The base image includes one or more template images, i.e., layers.
The writable layer includes a plurality of libraries and user code reflecting a business logic layer. The business logic depends on those libraries. The template images are stored in a public or private software container registry31. Moreover, a hardware repository32(similar to the software repository31) is used to store all data needed by the NHCs20, as illustrated inFIG.1B. Organizations and/or users can create a template image, starting from a base image and adding libraries, files, source code, application executables, etc.
Section 2.2 Preferred flows of operations
FIG.2shows a high-level flow of operations according to preferred embodiments. Upon receiving S10a user command to run a container, the container engine pulls S11data from the repositories31,32in accordance with the specializations needed for the software to execute inside this container and then hands off to the Linux kernel. This causes the software to run S12inside the container9and notably results in instructions to dispatch the functions required for the execution, using network interfaces and connection means (e.g., pure network sockets). This, in turn, causes a subset of functions to be executed S13via the general-purpose hardware10on the server6, while the execution of specialized functions is offloaded S14to the NHCs20. Data exchanges required by the execution of the functions are managed S15by the runtime system, which notably aggregates the data returned by the executed functions. FIG.4illustrates how deployment requests can be handled, in embodiments. Upon receiving S20a container deployment request, the system5identifies S21the required functions (including the specialized and indirect functions), by parsing and analyzing a container file40provided by the user. This may prompt the system5to update S23the NHCs20(e.g., add and configure new NHCs and/or reconfigure existing NHCs), as well as the specialization array30and the repositories31,32.
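The S12-S15 execution flow can be condensed into a toy loop: each required function either runs locally or goes through the NHC call path, and the results are aggregated by the runtime. The function list, the offload set, and the NHC callable are all illustrative assumptions.

```python
def run_container(functions, offload_set, nhc_call):
    """Toy version of steps S12-S15: execute each required function either
    locally (S13) or via the NHC stub (S14), then return the aggregated
    results (S15). `functions` is a list of (name, local_impl, args)."""
    results = {}
    for name, impl, args in functions:
        if name in offload_set:
            results[name] = nhc_call(name, args)
        else:
            results[name] = impl(*args)
    return results
```

The aggregation step mirrors the runtime system's role of collecting data returned by both the server-side and NHC-side executions.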
Next, the system5maps S25all the identified functions, as needed, i.e., to the server6and the NHCs20, in accordance with the specializations listed in the array30. Finally, the system5builds S26a container image by pulling data from the repositories31,32, in accordance with the mapping done. In more detail, and as illustrated inFIG.5, the system5identifies S21the required functions by parsing S210the container file40provided by the user1. This way, the system5first identifies S211the functions that are directly implied by the functionalities defined in the container file40. Next, an initial container image is built S212in accordance with the identified functions. A container corresponding to the initial image is then executed at step S214to check for indirect functions. All the functions identified (including any indirect functions) are logged at step S215. The specialization array30and the repositories31,32are then updated S23as follows. For each function identified, the system5searches S231for a corresponding library in the hardware repository32. If a corresponding library is found (S232: Yes), then the flow goes to step S25, see below. Else (S232: No), the system checks whether it is possible to specialize an NHC20at step S233. If so (S233: Yes), an NHC is accordingly configured (or updated) at step S235. This NHC configuration may require refactoring of the software involved in the container (i.e., implementing parts or all of the software functionalities in hardware, notably the functionalities that are not already implemented by an NHC). This can be realized by using vendor-specific solutions that implement the corresponding software library, e.g., implementations of some functions of the OpenCV library, either in a hardware description language (HDL), such as VHDL and Verilog, or an electronic netlist.
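The S231-S236 decision flow above reduces to a three-way resolution per function, which the following sketch illustrates. The repository format, the bitfile naming scheme, and the `can_specialize` predicate are hypothetical placeholders for the corresponding system components.

```python
def resolve(function, hardware_repo, can_specialize):
    """Steps S231-S234: reuse an existing hardware implementation if one
    is found in the hardware repository (S232: Yes); otherwise try to
    specialize an NHC (S233/S235) and record the new entry (S236);
    otherwise fall back to a pure-software solution (S234)."""
    if function in hardware_repo:                       # S232: Yes
        return ("hardware", hardware_repo[function])
    if can_specialize(function):                        # S233: Yes
        bitfile = function.replace(".", "_") + ".bit"   # hypothetical name
        hardware_repo[function] = bitfile               # update (S236)
        return ("hardware", bitfile)
    return ("software", None)                           # S234
```

Running this resolver over every identified function yields the mapping S25used to build the final container image.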
This can also be realized by leveraging high-level-synthesis techniques that enable the automatic refactoring of the high-level programming languages in which those libraries are written into HDLs, which can, in turn, be used to design, e.g., TPUs, DPUs, ASICs, and/or FPGA bitstreams. Else, a software solution is set up S234, based on the software repository31. The specialization array30, the hardware repository32, and the software repository31are accordingly updated at step S236, as discussed below in more detail. All functions are suitably mapped S25to the server6and the NHCs20, in accordance with the specialization array30, prior to building S26a final container image (i.e., the container image is refactored) by pulling all necessary data from the repositories31,32, in accordance with the mapping done. As shown inFIG.6, the system5may provide an interactive service to guide S20athe user1upon receiving S20a deployment request, with a view to building a suitable container image. Various user-guiding processes can be contemplated, as discussed below. The user-guided process notably causes steps S21-S23described earlier to be performed, which may cause an update S2361of the specialization table30. In addition, this process causes the user to write S2363application-related data to the software repository31, while programming and/or configuration parameters relating to the NHCs20are written S2362to the hardware repository32. Once the final container image has been obtained (step S26,FIGS.4,5), the container engine will, upon receiving a command S10to execute the container, pull S11all necessary data from the repositories31,32, in accordance with the specializations, and hand off to the Linux kernel to perform steps S11-S14. A possible user-guided process is the following. A cloud user1may want to rely on disaggregated container technology, because s/he expects an acceleration of the execution and/or a cheaper service.
A cloud vendor may want to rely on this technology to reduce the image sizes of the containers and decrease the container build time. So, the cloud vendor provides an interactive database, from which the user can select and configure the functions to be accelerated. The user1provides a container file40(e.g., a Docker file). Based on this input, the interactive database provides a first code snippet to replace the library import in the application, a second code snippet to update the Docker file, as well as a hardware container configuration (e.g., .xml). The user, as application expert, accordingly updates his/her application and container. Next, the user uploads her/his container and the hardware container configuration to the software and hardware container repositories31,32, respectively. Finally, the user deploys her/his container on the container platform and the container platform serves the user's container. Another user-guided flow is the following. The cloud vendor provides an interactive service to establish the functions that should be accelerated. The user provides a container file40(e.g., a Docker file). Based on this input, the interactive service statically profiles the container file to obtain information indicating the target libraries required by the container to be deployed. The interactive service asks the user to provide complete inputs for the container, so that a dynamic profiling can be done. The user accordingly provides inputs to the containerized application. Upon completion, the service can dynamically profile an initial version of the container at runtime to obtain information indicating the target libraries required by the container to be deployed. The service then locates the corresponding bitfiles in the hardware repository32and presents an analysis of the libraries found, as well as possible library replacements, to the user1. The user, as application expert, then decides whether the modified application is still correct.
If not, the user can manually correct this. Then, the user confirms the suggested modifications or uploads her/his container and the hardware container configuration to the software and hardware container repositories31,32, respectively. Note, the service may optionally modify the container. Finally, the user deploys her/his container on the container platform and the container platform serves the user's container.
Section 3. Technical implementation details
Section 3.1 Computerized units (FIG.1C)
Computerized systems and devices can be suitably designed for implementing embodiments of the present invention as described herein. In that respect, it can be appreciated that the methods described herein are largely non-interactive and automated. In exemplary embodiments, the methods described herein can be implemented either in an interactive, a partly-interactive, or a non-interactive system. The methods described herein can be implemented in software, hardware, or a combination thereof. In exemplary embodiments, the methods proposed herein are implemented in software, as an executable program, the latter executed by suitable digital processing devices. More generally, embodiments of the present invention can be implemented wherein virtual machines and/or general-purpose digital computers, such as personal computers, workstations, etc., are used, in addition to the NHCs20described earlier. For instance, each of the computerized systems2,3, and5shown inFIG.1Amay comprise one or more computerized units101(e.g., general- or specific-purpose computers), such as shown inFIG.1C. Each unit101may interact with other, typically similar units101, to perform steps according to the present methods. In exemplary embodiments, in terms of hardware architecture, as shown inFIG.1C, each unit101includes at least one processor105, and a memory110coupled to a memory controller115. Several processors (CPUs, and/or GPUs) may possibly be involved in each unit101.
To that aim, each CPU/GPU may be assigned a respective memory controller, as known per se. One or more input and/or output (I/O) devices145,150,155(or peripherals) are communicatively coupled via a local input/output controller135. The I/O controller135can be coupled to or include one or more buses and a system bus140, as known in the art. The I/O controller135may have additional elements, which are omitted for simplicity, such as controllers, buffers (caches), drivers, repeaters, and receivers, to enable communications. Further, the local interface may include address, control, and/or data connections to enable appropriate communications among the aforementioned components. The processors105are hardware devices for executing software, including instructions such as those coming as part of computerized tasks triggered by machine learning algorithms. The processors105can be any custom made or commercially available processor(s). In general, they may involve any type of semiconductor-based microprocessor (in the form of a microchip or chip set), or more generally any device for executing software instructions, including quantum processing devices. The memory110typically includes volatile memory elements (e.g., random-access memory), and may further include nonvolatile memory elements. Moreover, the memory110may incorporate electronic, magnetic, optical, and/or other types of storage media. Software in memory110may include one or more separate programs, each of which comprises executable instructions for implementing logical functions. In the example ofFIG.1C, instructions loaded in the memory110may include instructions arising from the execution of the computerized methods described herein in accordance with exemplary embodiments. The memory110may further load a suitable operating system (OS)111.
The OS 111 essentially controls the execution of other computer programs or instructions and provides scheduling, I/O control, file and data management, memory management, and communication control and related services. Possibly, a conventional keyboard and mouse can be coupled to the I/O controller 135. Other I/O devices 140-155 may be included. The computerized unit 101 can further include a display controller 125 coupled to a display 130. The computerized unit 101 may also include a network interface or transceiver 160 for coupling to a network (not shown), to enable, in turn, data communication to/from other, external components, e.g., other units 101. The network transmits and receives data between a given unit 101 and other devices 101. The network may possibly be implemented in a wireless fashion, e.g., using wireless protocols and technologies, such as Wi-Fi, WiMax, etc. The network may notably be a fixed wireless network, a wireless local area network (LAN), a wireless wide area network (WAN), a personal area network (PAN), a virtual private network (VPN), an intranet, or another suitable network system, and includes equipment for receiving and transmitting signals. Preferably, though, this network should allow very fast message passing between the units. The network can also be an IP-based network for communication between any given unit 101 and any external unit, via a broadband connection. In exemplary embodiments, the network can be a managed IP network administered by a service provider. Besides, the network can be a packet-switched network such as a LAN, a WAN, an Internet network, an Internet of Things network, etc.

Section 3.2 Computer program products

The present invention may be a method and/or a computer program product. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing processors to carry out aspects of the present invention.
The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire. Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. 
A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device. Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, configuration data for integrated circuitry, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++, Java, Go, Python, Ruby, Scala, Swift, or the like, and procedural programming languages, such as the “C” programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention. 
Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, systems, and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions. These computer readable program instructions may be provided to a processor of a general-purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or diagram block or blocks. The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus, or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks. 
The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the Figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions.

Section 3.3 Cloud Computing

It is to be understood that although this disclosure refers to embodiments involving cloud computing, implementation of the teachings recited herein is not limited to a cloud computing environment. Rather, embodiments of the present invention are capable of being implemented in conjunction with any other type of computing environment now known or later developed. Cloud computing is a model of service delivery for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, network bandwidth, servers, processing, memory, storage, applications, virtual machines, and services) that can be rapidly provisioned and released with minimal management effort or interaction with a provider of the service.
While the present invention has been described with reference to a limited number of embodiments, variants, and the accompanying drawings, it will be understood by those skilled in the art that various changes may be made, and equivalents may be substituted without departing from the scope of the present invention. In particular, a feature (device-like or method-like) recited in a given embodiment, variant or shown in a drawing may be combined with or replace another feature in another embodiment, variant, or drawing, without departing from the scope of the present invention. Various combinations of the features described in respect of any of the above embodiments or variants may accordingly be contemplated, that remain within the scope of the appended claims. In addition, many minor modifications may be made to adapt a particular situation or material to the teachings of the present invention without departing from its scope. Therefore, it is intended that the present invention not be limited to the particular embodiments disclosed, but that the present invention will include all embodiments falling within the scope of the appended claims. In addition, many other variants than explicitly touched above can be contemplated.
11863386 | DESCRIPTION OF EMBODIMENTS

Terms "first" and "second" mentioned below are merely intended for a purpose of description, and shall not be understood as an indication or implication of relative importance or an implicit indication of a quantity of indicated technical features. Therefore, a feature limited by "first" or "second" may explicitly or implicitly include one or more such features. In the descriptions of embodiments of this application, words such as "example" or "for example" are used to represent giving an example, an illustration, or a description. Any embodiment or design scheme described as an "example" or with "for example" in embodiments of this application should not be explained as being more preferred or having more advantages than another embodiment or design scheme. Rather, use of the words such as "example" or "for example" is intended to present a related concept in a specific manner.

An influx of electronic devices such as mobile phones and tablets poses great challenges to enterprise IT management. Currently, mobile device management (MDM) is implemented based on a client-server (C-S) deployment mode. In this deployment mode, a to-be-managed electronic device needs to be connected to a network. Consequently, management and device system upgrade cannot be implemented for an electronic device that cannot conveniently be connected to a network. In addition, an MDM service provider generally charges per device (namely, one to-be-managed electronic device) per month. As a result, an enterprise usually needs to pay a high fee when using a service provided by the MDM service provider to manage a large quantity of mobile devices. Embodiments of this application provide a mobile device management method, so that an MDM service can be deployed on an electronic device.
An enterprise can implement management and device system upgrade of a to-be-managed electronic device in a local area network or a near-field environment by using the electronic device on which the MDM service is deployed, without connecting the to-be-managed electronic device to a network. This resolves the problem that management and device system upgrade cannot be implemented for an electronic device that cannot conveniently be connected to a network. In addition, because the MDM service is deployed on the electronic device to implement device management and device system upgrade, no service needs to be purchased from an MDM service provider, which reduces device management costs. The following describes implementations of embodiments of this application in detail with reference to the accompanying drawings.

FIG. 1 is a schematic diagram of a composition of a mobile device management system according to an embodiment of this application. As shown in FIG. 1, the mobile device management system may include at least a first electronic device 101, at least one second electronic device 102, a first server 103, and a second server 104. The first electronic device 101 may serve as a master device and request, by accessing the first server 103, the second server 104 to deploy an MDM service for the first electronic device 101. The at least one second electronic device 102 is a to-be-managed device. After the first electronic device 101 successfully applies for deployment of the MDM service, when the at least one second electronic device 102 and the first electronic device 101 are in a same local area network or establish a wireless peer-to-peer (P2P) connection, the first electronic device 101 can provide the MDM service, which may include, for example, a management service and a system upgrade service, for the second electronic device 102, to implement management and device system upgrade of the at least one second electronic device 102.
In this embodiment, management implemented by the first electronic device 101 may include at least one of the following: device management, network management, security management, email management, content management, application management, and the like. The first server 103 may be a server disposed on the Internet, and provides a service interface that can be used to access an extranet for an electronic device on an enterprise intranet, for example, the first electronic device 101. The first server 103 may be provided by a device vendor, and is configured to provide a value-added service for a device. For example, a vendor that produces the second electronic device 102, or that produces both the first electronic device 101 and the second electronic device 102, provides the first server 103. In this embodiment, the first server 103 may be referred to as a device management portal (DM portal) or a device management service portal (DM service portal). The DM portal may be a cloud service purchased by an enterprise. The second server 104 may be a server that is deployed on the Internet and that is configured to provide a device management (DM) service. The second server 104 may generate a corresponding DM service application (APP) for the first electronic device 101 by interacting with the first server 103, and deliver the DM service application to the first electronic device 101 through the first server 103, to implement deployment of the MDM service on the first electronic device 101. In some embodiments, the at least one second electronic device 102 may be devices that are purchased by an enterprise in batches and that are used by employees of the enterprise.
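The deployment flow just described (the master device contacts the DM portal, the portal interacts with the DM server, and the generated DM service application is delivered back for installation) can be sketched as follows. All class and method names are illustrative assumptions for this sketch, not identifiers from this application:

```python
# Hypothetical sketch of the MDM deployment flow: first server = DM portal,
# second server = DM server, first electronic device = master device.

class DMServer:
    """Second server: generates a DM service application for a device."""
    def generate_dm_app(self, device_id: str) -> dict:
        return {"device_id": device_id, "service": "MDM",
                "features": ["management", "system_upgrade"]}

class DMPortal:
    """First server (DM portal): extranet-facing service interface."""
    def __init__(self, dm_server: DMServer):
        self.dm_server = dm_server

    def request_deployment(self, device_id: str) -> dict:
        # The portal forwards the request and relays the generated app back.
        return self.dm_server.generate_dm_app(device_id)

class MasterDevice:
    """First electronic device: applies for and installs the MDM service."""
    def __init__(self, device_id: str):
        self.device_id = device_id
        self.dm_app = None

    def deploy_mdm(self, portal: DMPortal) -> None:
        self.dm_app = portal.request_deployment(self.device_id)

master = MasterDevice("tablet-001")
master.deploy_mdm(DMPortal(DMServer()))
print(master.dm_app["features"])  # ['management', 'system_upgrade']
```

Once `dm_app` is installed, the master device can serve managed devices over the local area network without any of them contacting the MDM provider, which is the cost-saving point made above.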
For example, in this embodiment of this application, the first electronic device 101 and the second electronic device 102 each may be a mobile phone, a tablet, a desktop computer, a laptop, a handheld computer, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a wearable device (for example, a smartwatch), a cellular phone, a personal digital assistant (PDA), an augmented reality (AR)/virtual reality (VR) device, or the like. Specific forms of the first electronic device 101 and the second electronic device 102 are not specially limited in this embodiment of this application. In addition, in some embodiments, the first electronic device 101 and the second electronic device 102 may be electronic devices of a same type. For example, both the first electronic device 101 and the second electronic device 102 are mobile phones. In some other embodiments, the first electronic device 101 and the second electronic device 102 may be electronic devices of different types. For example, the first electronic device 101 is a tablet, and the second electronic device 102 is a mobile phone (as shown in FIG. 1).

FIG. 2 is a schematic diagram of a structure of an electronic device according to an embodiment of this application. The structure of the first electronic device 101 and/or the second electronic device 102 may be as shown in FIG. 2.
As shown in FIG. 2, the electronic device may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) port 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headset jack 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display 194, a subscriber identity module (SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, a barometric pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, an optical proximity sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like. It may be understood that the structure shown in this embodiment constitutes no specific limitation on the electronic device. In some other embodiments, the electronic device may include more or fewer components than those shown in the figure, some components may be combined, some components may be split, or there may be a different component layout. The components shown in the figure may be implemented by hardware, software, or a combination of software and hardware. The processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU). Different processing units may be independent components, or may be integrated into one or more processors.
The controller may be a nerve center and a command center of the electronic device. The controller may generate an operation control signal based on instruction operation code and a time sequence signal, to complete control of instruction fetching and instruction execution. A memory may further be disposed in the processor 110 and is configured to store instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may store instructions or data just used or cyclically used by the processor 110. If the processor 110 needs to use the instructions or the data again, the processor 110 may directly invoke the instructions or the data from the memory. This avoids repeated access and reduces the waiting time of the processor 110, thereby improving system efficiency. In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, a universal serial bus (USB) port, and/or the like. The charging management module 140 is configured to receive a charging input from a charger. The charger may be a wireless or wired charger. In some embodiments of wired charging, the charging management module 140 may receive a charging input from the wired charger through the USB port 130. In some embodiments of wireless charging, the charging management module 140 may receive a wireless charging input through a wireless charging coil of the electronic device. When charging the battery 142, the charging management module 140 may further supply power to the electronic device by using the power management module 141.
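The cache behavior described above (serving recently used data directly instead of re-fetching it from slower memory) can be illustrated in software with memoization. This is only an analogy for the hardware mechanism, not the mechanism itself; the function and counter are illustrative:

```python
# Software analogy for the processor cache: a second request for the same
# "address" is served from the cache, so the slow backing store is not hit again.
from functools import lru_cache

calls = {"count": 0}

@lru_cache(maxsize=128)
def fetch(address: int) -> int:
    calls["count"] += 1   # counts "slow" accesses to backing memory
    return address * 2    # stand-in for the stored data

fetch(42)   # miss: goes to backing memory
fetch(42)   # hit: served from the cache, no second slow access
print(calls["count"])  # 1
```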
The power management module 141 is configured to connect the battery 142 and the charging management module 140 to the processor 110. The power management module 141 receives an input of the battery 142 and/or the charging management module 140, and supplies power to the processor 110, the internal memory 121, an external memory, the display 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 may further be configured to monitor parameters such as a battery capacity, a battery cycle count, and a battery health status (electric leakage or impedance). In some other embodiments, the power management module 141 may alternatively be disposed in the processor 110. In some other embodiments, the power management module 141 and the charging management module 140 may alternatively be disposed in a same component. A wireless communication function of the electronic device may be implemented through the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, the baseband processor, and the like. The antenna 1 and the antenna 2 are configured to transmit and receive electromagnetic wave signals. Each antenna in the electronic device may be configured to cover one or more communication frequency bands. Different antennas may be multiplexed to improve antenna utilization. For example, the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In some other embodiments, an antenna may be used in combination with a tuning switch. The mobile communication module 150 may provide a solution, applied to the electronic device, for wireless communication such as 2G, 3G, 4G, and 5G. The mobile communication module 150 may include at least one filter, a switch, a power amplifier, a low noise amplifier (LNA), and the like.
The mobile communication module 150 may receive an electromagnetic wave through the antenna 1, perform processing such as filtering and amplification on the received electromagnetic wave, and transmit a processed electromagnetic wave to the modem processor for demodulation. The mobile communication module 150 may further amplify a signal modulated by the modem processor, and convert an amplified signal into an electromagnetic wave for radiation through the antenna 1. In some embodiments, at least some function modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some function modules of the mobile communication module 150 and at least some modules of the processor 110 may be disposed in a same component. For example, in some embodiments, with reference to FIG. 1, the first electronic device 101 may access the first server 103 by using the mobile communication module 150 included in the first electronic device 101, to request the second server 104 to deploy an MDM service for the first electronic device 101. The modem processor may include a modulator and a demodulator. The modulator is configured to modulate a to-be-sent low-frequency baseband signal into a medium- or high-frequency signal. The demodulator is configured to demodulate a received electromagnetic wave signal into a low-frequency baseband signal. Then, the demodulator transfers the low-frequency baseband signal obtained through demodulation to the baseband processor for processing. After being processed by the baseband processor, the low-frequency baseband signal is transmitted to the application processor. The application processor outputs a sound signal through an audio device (which is not limited to the speaker 170A, the receiver 170B, or the like), or displays an image or a video through the display 194. In some embodiments, the modem processor may be an independent component.
In some other embodiments, the modem processor may be independent of the processor 110, and disposed in a same component as the mobile communication module 150 or another function module. The wireless communication module 160 may provide a solution, applied to the electronic device, for wireless communication such as a wireless local area network (WLAN) (for example, a wireless fidelity (Wi-Fi) network), Bluetooth (BT), a global navigation satellite system (GNSS), frequency modulation (FM), a near field communication (NFC) technology, and an infrared (IR) technology. The wireless communication module 160 may be one or more components integrating at least one communication processor module. The wireless communication module 160 receives an electromagnetic wave through the antenna 2, performs frequency modulation and filtering processing on an electromagnetic wave signal, and sends a processed signal to the processor 110. The wireless communication module 160 may further receive a to-be-sent signal from the processor 110, perform frequency modulation and amplification on the signal, and convert a processed signal into an electromagnetic wave for radiation through the antenna 2. For example, in some embodiments, with reference to FIG. 1, the first electronic device 101 may establish a wireless P2P connection to the second electronic device 102 by using the wireless communication module 160 included in the first electronic device 101, or access a same local area network as the second electronic device 102. For another example, in some embodiments of this application, with reference to FIG. 1, the second electronic device 102 may establish a wireless P2P connection to the first electronic device 101 by using the wireless communication module 160 included in the second electronic device 102, or access a same local area network as the first electronic device 101.
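As a rough illustration of the "same local area network" condition above, one might compare the devices' IPv4 network prefixes before offering the MDM service. This is an assumed check, not the mechanism specified in this application; the prefix length and addresses are illustrative:

```python
# Sketch: decide whether two devices share a LAN by comparing their
# IPv4 networks for a given prefix length (here /24).
import ipaddress

def same_lan(ip_a: str, ip_b: str, prefix: int = 24) -> bool:
    net_a = ipaddress.ip_network(f"{ip_a}/{prefix}", strict=False)
    net_b = ipaddress.ip_network(f"{ip_b}/{prefix}", strict=False)
    return net_a == net_b

print(same_lan("192.168.1.10", "192.168.1.77"))  # True: same /24 network
print(same_lan("192.168.1.10", "10.0.0.5"))      # False: different networks
```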
In some embodiments, in the electronic device, the antenna 1 and the mobile communication module 150 are coupled, and the antenna 2 and the wireless communication module 160 are coupled, so that the electronic device can communicate with a network and another device by using a wireless communication technology. The wireless communication technology may include a global system for mobile communications (GSM), a general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, a GNSS, a WLAN, NFC, FM, an IR technology, and/or the like. The GNSS may include a global positioning system (GPS), a global navigation satellite system (GLONASS), a BeiDou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS), and/or a satellite based augmentation system (SBAS). The electronic device implements a display function by using the GPU, the display 194, the application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display 194 and the application processor. The GPU is configured to perform mathematical and geometric calculation, and render an image. The processor 110 may include one or more GPUs that execute program instructions to generate or change display information. The display 194 is configured to display an image, a video, and the like. The display 194 includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a mini-LED, a micro-LED, a micro-OLED, quantum dot light-emitting diodes (QLEDs), or the like. In some embodiments, the electronic device may include one or N displays 194, where N is a positive integer greater than 1.
The electronic device may implement a photographing function by using the ISP, the camera 193, the video codec, the GPU, the display 194, the application processor, and the like. The ISP is configured to process data fed back by the camera 193. For example, during shooting, a shutter is pressed, and light is transmitted to a photosensitive element of the camera through a lens. An optical signal is converted into an electrical signal, and the photosensitive element of the camera transmits the electrical signal to the ISP for processing, to convert the electrical signal into a visible image. The ISP may further perform algorithm optimization on noise, brightness, and complexion of the image. The ISP may further optimize parameters such as exposure and a color temperature of a shooting scenario. In some embodiments, the ISP may be disposed in the camera 193. The camera 193 is configured to capture a static image or a video. An optical image of an object is generated through the lens and projected onto the photosensitive element. The photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The photosensitive element converts an optical signal into an electrical signal, and then transmits the electrical signal to the ISP for converting the electrical signal into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard format, for example, RGB or YUV. In some embodiments, the electronic device may include one or N cameras 193, where N is a positive integer greater than 1. The digital signal processor is configured to process a digital signal, and may further process another digital signal in addition to the digital image signal. For example, when the electronic device selects a frequency, the digital signal processor is configured to perform Fourier transform on frequency energy.
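As a worked example of the standard-format conversion mentioned above, the full-range BT.601 RGB-to-YUV formula can be applied per pixel. The application does not name a specific YUV variant, so the choice of BT.601 full-range coefficients here is an assumption for illustration:

```python
# Per-pixel RGB -> YUV conversion using BT.601 full-range coefficients.
# Y carries luma; U and V carry chroma (both near 0 for gray/white pixels).

def rgb_to_yuv(r: float, g: float, b: float) -> tuple:
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.14713 * r - 0.28886 * g + 0.436 * b
    v = 0.615 * r - 0.51499 * g - 0.10001 * b
    return (y, u, v)

y, u, v = rgb_to_yuv(255, 255, 255)  # pure white
print(round(y))  # 255: full luma, with u and v near 0 (no chroma)
```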
The video codec is configured to compress or decompress a digital video. The electronic device may support one or more video codecs. In this way, the electronic device may play or record videos in a plurality of coding formats, for example, moving picture experts group (MPEG)-1, MPEG-2, MPEG-3, and MPEG-4. The NPU is a neural-network (NN) computing processor. By referring to a structure of a biological neural network, for example, a transfer mode between human brain neurons, the NPU quickly processes input information, and may further continuously perform self-learning. The NPU can implement applications such as intelligent cognition of the electronic device, for example, image recognition, facial recognition, voice recognition, and text understanding. The external memory interface 120 may be configured to connect to an external storage card, for example, a micro SD card, to extend a storage capability of the electronic device. The external storage card communicates with the processor 110 through the external memory interface 120, to implement a data storage function. For example, files such as music and videos are stored in the external storage card. The internal memory 121 may be configured to store computer-executable program code, where the executable program code includes instructions. The processor 110 runs the instructions stored in the internal memory 121, to implement various functional applications of the electronic device and data processing. The internal memory 121 may include a program storage area and a data storage area. The program storage area may store an operating system, an application required by at least one function (for example, a sound playing function or an image playing function), and the like. The data storage area may store data (for example, audio data or a phone book) created when the electronic device is used, and the like.
In addition, the internal memory121may include a high-speed random access memory, and may further include a non-volatile memory, for example, at least one magnetic disk storage component, a flash memory, or a universal flash storage (UFS). The electronic device may implement an audio function, for example, music playing or recording, by using the audio module170, the speaker170A, the receiver170B, the microphone170C, the headset jack170D, the application processor, and the like. The audio module170is configured to convert digital audio information into an analog audio signal for output, and is also configured to convert an analog audio input into a digital audio signal. The audio module170may further be configured to encode and decode an audio signal. In some embodiments, the audio module170may be disposed in the processor110, or some function modules of the audio module170are disposed in the processor110. The speaker170A, also referred to as a “loudspeaker”, is configured to convert an audio electrical signal into a sound signal. The electronic device may be used to listen to music or answer a hands-free call by using the speaker170A. The receiver170B, also referred to as an “earpiece”, is configured to convert an audio electrical signal into a sound signal. When a call is answered or voice information is received by using the electronic device, the receiver170B may be put close to a human ear to listen to a voice. The microphone170C, also referred to as a “mike” or a “mic”, is configured to convert a sound signal into an electrical signal. When making a call, sending voice information, or needing to trigger, by using a voice assistant, the electronic device to perform some functions, the user may make a sound near the microphone170C through the mouth of the user, to input a sound signal to the microphone170C. At least one microphone170C may be disposed in the electronic device. 
In some other embodiments, two microphones170C may be disposed in the electronic device, to implement a noise reduction function, in addition to collecting a sound signal. In some other embodiments, three, four, or more microphones170C may be alternatively disposed in the electronic device, to collect a sound signal, implement noise reduction, and identify a sound source, so as to implement a directional recording function and the like. The headset jack170D is configured to connect to a wired headset. The headset jack170D may be the USB port130, or may be a 3.5 mm open mobile terminal platform (OMTP) standard interface, or a cellular telecommunications industry association of the USA (CTIA) standard interface. The pressure sensor180A is configured to sense a pressure signal, and can convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor180A may be disposed on the display194. There are many types of pressure sensors180A, for example, a resistive pressure sensor, an inductive pressure sensor, and a capacitive pressure sensor. The capacitive pressure sensor may include at least two parallel plates made of conductive materials. When a force is applied to the pressure sensor180A, capacitance between electrodes changes. The electronic device determines strength of pressure based on a change of the capacitance. When a touch operation is performed on the display194, the electronic device detects strength of the touch operation by using the pressure sensor180A. The electronic device may further calculate a touch position based on a detection signal of the pressure sensor180A. In some embodiments, touch operations that are performed in a same touch position but have different touch operation intensity may correspond to different operation instructions. 
For example, when a touch operation whose touch operation intensity is less than a first pressure threshold is performed on an icon of a messaging application, an instruction for viewing an SMS message is executed. When a touch operation whose touch operation intensity is greater than or equal to the first pressure threshold is performed on the icon of the messaging application, an instruction for creating an SMS message is executed. The gyro sensor180B may be configured to determine a motion posture of the electronic device. In some embodiments, angular velocities of the electronic device around three axes (namely, axes x, y, and z) may be determined by using the gyro sensor180B. The gyro sensor180B may be configured to perform image stabilization during photographing. For example, when the shutter is pressed, the gyro sensor180B detects a jitter angle of the electronic device, calculates, based on the angle, a distance for which a lens module needs to compensate, and enables the lens to offset jitter of the electronic device through reverse motion, to implement image stabilization. The gyro sensor180B may be further used in a navigation scenario and a motion-sensing game scenario. The barometric pressure sensor180C is configured to measure barometric pressure. In some embodiments, the electronic device calculates an altitude by using the barometric pressure measured by the barometric pressure sensor180C, to assist in positioning and navigation. The magnetic sensor180D includes a Hall effect sensor. The electronic device may detect opening and closing of a flip cover by using the magnetic sensor180D. In some embodiments, when the electronic device is a flip phone, the electronic device may detect opening and closing of a flip cover based on the magnetic sensor180D. Further, a feature, for example, automatic unlocking upon opening of the flip cover, is set based on a detected opening or closing state of the flip cover. 
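The mapping from touch operation intensity to operation instructions in the messaging-application example above can be sketched as follows; the threshold value and instruction names are assumptions, since the patent states only that a first pressure threshold exists:

```python
FIRST_PRESSURE_THRESHOLD = 0.5  # normalized force; illustrative value

def dispatch_touch(app: str, intensity: float) -> str:
    """Map a touch on an application icon to an instruction by touch
    operation intensity, as in the messaging-application example."""
    if app == "messaging":
        if intensity < FIRST_PRESSURE_THRESHOLD:
            return "view_sms"     # below the first pressure threshold
        return "create_sms"       # at or above the first pressure threshold
    return "open_app"             # default behavior for other icons
```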
The acceleration sensor180E may detect magnitude of accelerations of the electronic device in various directions (usually on three axes), and may detect magnitude and a direction of gravity when the electronic device is stationary. The acceleration sensor180E may be further configured to identify a posture of the electronic device, and is used in an application, for example, switching between a landscape mode and a portrait mode or a pedometer. The distance sensor180F is configured to measure a distance. The electronic device may measure a distance through infrared or laser. In some embodiments, in a photographing scenario, the electronic device may measure a distance by using the distance sensor180F, to implement quick focusing. The optical proximity sensor180G may include a light-emitting diode (LED) and an optical detector, for example, a photodiode. The light-emitting diode may be an infrared light-emitting diode. The electronic device emits infrared light by using the light-emitting diode. The electronic device detects infrared reflected light from a nearby object by using the photodiode. When sufficient reflected light is detected, it may be determined that there is an object near the electronic device. When detecting insufficient reflected light, the electronic device may determine that there is no object near the electronic device. The electronic device may detect, by using the optical proximity sensor180G, that a user holds the electronic device close to an ear for a call, to automatically turn off a screen for power saving. The optical proximity sensor180G may also be used in smart cover mode or pocket mode to automatically perform screen unlocking or locking. The ambient light sensor180L is configured to sense ambient light brightness. The electronic device may adaptively adjust brightness of the display194based on the sensed ambient light brightness. 
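The optical proximity decision described above (sufficient reflected infrared light implies a nearby object, so the screen is turned off during a call) can be sketched as follows; the normalized reflectance threshold is an assumed value:

```python
def object_nearby(reflected_light: float, threshold: float = 0.3) -> bool:
    """Sufficient reflected IR light means an object is near the device.
    The threshold is illustrative; the patent does not give a value."""
    return reflected_light >= threshold

def should_turn_off_screen(in_call: bool, reflected_light: float) -> bool:
    """Turn the screen off for power saving when the device is held
    close to the ear during a call, as described above."""
    return in_call and object_nearby(reflected_light)
```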
The ambient light sensor180L may also be configured to automatically adjust white balance during shooting. The ambient light sensor180L may further cooperate with the optical proximity sensor180G to detect whether the electronic device is in a pocket, to avoid an unintentional touch. The fingerprint sensor180H is configured to collect a fingerprint. The electronic device may use a feature of the collected fingerprint to implement fingerprint-based unlocking, application lock access, fingerprint-based photographing, fingerprint-based call answering, and the like. The temperature sensor180J is configured to detect a temperature. In some embodiments, the electronic device executes a temperature processing policy by using the temperature detected by the temperature sensor180J. For example, when the temperature reported by the temperature sensor180J exceeds a threshold, the electronic device reduces performance of a processor near the temperature sensor180J, to reduce power consumption and implement heat protection. In some other embodiments, when the temperature is lower than another threshold, the electronic device heats up the battery142, to avoid an abnormal shutdown of the electronic device due to a low temperature. In some other embodiments, when the temperature is lower than still another threshold, the electronic device boosts an output voltage of the battery142, to avoid an abnormal shutdown caused by a low temperature. The touch sensor180K is also referred to as a “touch panel”. The touch sensor180K may be disposed on the display194, and the touch sensor180K and the display194constitute a touchscreen, which is also referred to as a “touch screen”. The touch sensor180K is configured to detect a touch operation performed on or near the touch sensor180K. The touch sensor may transfer the detected touch operation to the application processor to determine a type of a touch event. A visual output related to the touch operation may be provided through the display194. 
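The temperature processing policy executed with the temperature sensor180J can be sketched as below. The patent states only that distinct thresholds exist; the numeric values and action names are assumptions:

```python
def temperature_policy(temp_c: float) -> list:
    """Return the actions the device takes for a reported temperature."""
    HIGH, LOW, VERY_LOW = 45.0, 0.0, -10.0  # illustrative thresholds (degrees C)
    actions = []
    if temp_c > HIGH:
        # Reduce performance of a processor near the sensor for heat protection.
        actions.append("throttle_cpu")
    if temp_c < LOW:
        # Heat the battery to avoid an abnormal low-temperature shutdown.
        actions.append("heat_battery")
    if temp_c < VERY_LOW:
        # Boost the battery output voltage at still lower temperatures.
        actions.append("boost_battery_voltage")
    return actions
```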
In some other embodiments, the touch sensor180K may be alternatively disposed on a surface of the electronic device, and is located at a location different from that of the display194. The bone conduction sensor180M may obtain a vibration signal. In some embodiments, the bone conduction sensor180M may obtain a vibration signal of a vibration bone of a human vocal-cord part. The bone conduction sensor180M may be also in contact with a human pulse, and receive a blood pressure beating signal. In some embodiments, the bone conduction sensor180M may be alternatively disposed in a headset, to constitute a bone conduction headset. The audio module170may obtain a voice signal through parsing based on the vibration signal that is of the vibration bone of the vocal-cord part and that is obtained by the bone conduction sensor180M, to implement a voice function. The application processor may parse heart rate information based on the blood pressure beating signal obtained by the bone conduction sensor180M, to implement a heart rate detection function. The button190includes a power button, a volume button, and the like. The button190may be a mechanical button, or a touch button. The electronic device may receive a button input, and generate a button signal input related to user setting and function control of the electronic device. The motor191may generate a vibration prompt. The motor191may be used for an incoming call vibration prompt or a touch vibration feedback. For example, touch operations performed on different applications (for example, shooting and audio playing) may correspond to different vibration feedback effects. For touch operations performed on different areas of the display194, the motor191may also correspond to different vibration feedback effects. Different application scenarios (for example, a time reminder, information receiving, an alarm clock, and a game) may also correspond to different vibration feedback effects. 
A touch vibration feedback effect may be further customized. The indicator192may be an indicator light, and may be configured to indicate a charging status and a power change, or may be configured to indicate a message, a missed call, a notification, and the like. The SIM card interface195is configured to connect to a SIM card. The SIM card may be inserted into the SIM card interface195or removed from the SIM card interface195, to implement contact with or separation from the electronic device. The electronic device may support one or N SIM card interfaces, where N is a positive integer greater than 1. The SIM card interface195can support a nano-SIM card, a micro-SIM card, a SIM card, and the like. A plurality of cards may be inserted into a same SIM card interface195at the same time. The plurality of cards may be of a same type or different types. The SIM card interface195may also be compatible with different types of SIM cards. The SIM card interface195may also be compatible with an external storage card. The electronic device interacts with a network by using the SIM card, to implement functions such as a call and data communication. In some embodiments, the electronic device uses an eSIM, namely, an embedded SIM card. The eSIM card may be embedded into the electronic device, and cannot be separated from the electronic device. All methods in the following embodiments may be implemented on the electronic device having the foregoing hardware structure. FIG. 3A and FIG. 3B are a schematic flowchart of a mobile device management method according to an embodiment of this application. With reference to the mobile device management system shown in FIG. 1, as shown in FIG. 3A and FIG. 3B, the method may include the following steps. The mobile device management method provided in this embodiment of this application may be divided into two phases, for example, referred to as a first phase and a second phase. 
In the first phase, deployment of an MDM service is mainly implemented, and the following S301 to S309 may be included. In the second phase, device system upgrade and management of a to-be-managed device are mainly implemented, and the following S310 and S311 may be included. S301: A first electronic device sends a request message to a first server, where the request message is used to apply for deployment of the MDM service. The request message may carry authorized login account information and a to-be-managed device list. The to-be-managed device list may include an identifier of at least one second electronic device. The second electronic device may be a to-be-managed electronic device. The identifier may be an international mobile equipment identity (IMEI) of a to-be-managed electronic device, or may be another identifier of a to-be-managed electronic device, for example, a media access control (MAC) address. In some embodiments, the authorized login account information and the to-be-managed device list may be configured by a user (for example, an enterprise IT administrator) on the first electronic device. The first electronic device is a device configured to manage another mobile device, for example, may be referred to as a master device. After successfully applying for deployment of the MDM service, the first electronic device may be configured to provide the MDM service, for example, including a system upgrade service and a management service, for a device corresponding to an identifier included in the to-be-managed device list. For example, to facilitate work of enterprise employees, an enterprise may purchase a plurality of mobile devices in batches for the enterprise employees to use. To ensure information security when the enterprise employees use these mobile devices to access enterprise intranet resources, unified security management needs to be performed on these mobile devices. 
When the enterprise purchases these mobile devices, a device vendor (for example, a device producer or a device seller) may grant authorized login account information and a device information list bound to the authorized login account information to an IT administrator of the enterprise. The device information list bound to the authorized login account information includes identifiers of the mobile devices purchased by the enterprise in batches. A device that successfully applies for the MDM service by using the authorized login account information can perform management and device system upgrade only on a device corresponding to an identifier included in the device information list bound to the authorized login account information. After obtaining the authorized login account information and the device information list bound to the authorized login account information, the IT administrator of the enterprise may obtain the to-be-managed device list based on the device information list. The to-be-managed device list may include all the identifiers in the device information list, or may include some identifiers in the device information list. In other words, the IT administrator can choose to manage some or all of the mobile devices that are purchased in batches. The IT administrator may configure the authorized login account information and the to-be-managed device list on the first electronic device. The first electronic device may be one (for example, any one or a specified one) of the mobile devices purchased by the enterprise in batches, or the first electronic device may not be one of the mobile devices purchased in batches. This is not specifically limited in this embodiment. Then, the first electronic device may send, to the first server, a request message that carries the authorized login account information and the to-be-managed device list, to apply for deployment of the MDM service. 
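A minimal sketch of the S301 request message follows. The field names are assumptions, since this embodiment specifies only that the message carries authorized login account information and a to-be-managed device list:

```python
def build_mdm_request(account: str, token: str, device_ids: list) -> dict:
    """Build the S301 request message: authorized login account
    information plus a to-be-managed device list (e.g., IMEIs).

    Field names are illustrative, not specified by the embodiment.
    """
    return {
        "action": "deploy_mdm_service",
        "auth": {"account": account, "token": token},
        "managed_devices": list(device_ids),
    }
```

The first electronic device would send a structure like this to the first server to apply for deployment of the MDM service.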
For example, the first electronic device is a tablet, and the second electronic device is a mobile phone. An enterprise purchases 1,000 Huawei phones for enterprise employees to use. In addition, during the purchase, a device vendor grants authorized login account information and a device information list (where the device information list includes IMEIs of the 1,000 Huawei phones) bound to the authorized login account information to an IT administrator of the enterprise. The IT administrator configures the authorized login account information and a to-be-managed device list on the tablet. For example, the to-be-managed device list includes the IMEIs of the 1,000 Huawei phones. Then, the tablet may send, to the first server, a request message that carries the IMEIs of the 1,000 Huawei phones and the authorized login account information, to request to deploy an MDM service on the tablet, so as to perform management and device system upgrade on the 1,000 Huawei phones. S302: The first server performs account verification on the authorized login account information from the first electronic device. After receiving the request message from the first electronic device, the first server may perform account verification on the authorized login account information carried in the request message. In some other embodiments, the first server may alternatively delegate another server, for example, a server (for example, which may be referred to as an account verification server) that is configured to perform account verification and that is disposed independently of the first server, to perform account verification on the authorized login account information in the request message from the first electronic device. After completing the account verification, the server may return a verification result to the first server. 
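The account verification of S302 can be sketched as a comparison against pre-stored valid account information. Storing a credential hash rather than the credential itself is an implementation assumption, not something the embodiment prescribes:

```python
import hashlib

# Pre-stored valid authorized login accounts (account -> credential hash).
VALID_ACCOUNTS = {
    "it-admin": hashlib.sha256(b"secret").hexdigest(),
}

def verify_account(account: str, credential: str) -> bool:
    """The first server (or a delegated account verification server)
    checks the authorized login account information in the request
    against the stored valid account information."""
    stored = VALID_ACCOUNTS.get(account)
    supplied = hashlib.sha256(credential.encode()).hexdigest()
    return stored is not None and stored == supplied
```

If the check succeeds, the requesting first electronic device is treated as legal and deployment proceeds to S303; otherwise the application is rejected.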
For example, a server that performs account verification, for example, the first server or the account verification server, may pre-store valid authorized login account information that can be used to apply for deployment of the MDM service, and can implement the account verification performed on the authorized login account information from the first electronic device based on the stored valid authorized login account information, to verify validity of the first electronic device that applies for deployment of the MDM service. If the authorized login account information from the first electronic device is the same as the valid authorized login account information stored in the server, the account verification succeeds, and it may be determined that the first electronic device that applies for deployment of the MDM service is legal. If the authorized login account information from the first electronic device is different from the valid authorized login account information stored in the server, the account verification fails, and it may be determined that the first electronic device that applies for deployment of the MDM service is illegal. S303: After the account verification succeeds, the first server accesses a second server based on the to-be-managed device list, to obtain a service policy for at least one second electronic device. The service policy may include one or more of the following policies: a management policy, a configuration policy, and an upgrade policy. The management policy may include at least one of the following: a device management policy, a network management policy, a security management policy, an email management policy, a content management policy, an application management policy, and the like. The configuration policy may include a desktop wallpaper setting policy, a startup animation setting policy, a ringtone setting policy, and the like. 
After the account verification performed on the authorized login account information from the first electronic device succeeds, the first server may access the second server based on the to-be-managed device list from the first electronic device, to obtain the service policy for the at least one second electronic device in the to-be-managed device list. For example, the first server may obtain a device model of each second electronic device based on the identifier (for example, an IMEI) of the at least one second electronic device included in the to-be-managed device list, to obtain a model set of a to-be-managed device. The model set of the to-be-managed device includes at least one device model. The first server may access the second server based on the model set of the to-be-managed device, to obtain a service policy based on each device model, that is, obtain the service policy for the at least one second electronic device in the to-be-managed device list. For example, with reference to the example in S301, after account verification performed by the first server on the authorized login account information from the tablet succeeds, the first server may obtain a model of each of the 1,000 Huawei phones based on the IMEIs of the 1,000 Huawei phones included in the to-be-managed device list, to obtain a model set of to-be-managed devices. For example, the 1,000 Huawei phones include four device models: HUAWEI Mate 20 Pro, HUAWEI Mate 20, HUAWEI Mate 10, and HUAWEI nova 4. In this case, the model set of the to-be-managed devices includes four device models: HUAWEI Mate 20 Pro, HUAWEI Mate 20, HUAWEI Mate 10, and HUAWEI nova 4. The first server may send the model set of the to-be-managed devices to the second server. After receiving the model set of the to-be-managed devices, the second server may send, to the first server, a service policy corresponding to each device model. 
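The S303 flow of deriving a model set from the to-be-managed device list and fetching one service policy per device model can be sketched as follows. The two lookup mappings stand in for queries the first and second servers would actually perform:

```python
def collect_service_policies(device_list, imei_to_model, policy_server):
    """Derive the model set from the to-be-managed device list, then
    fetch one service policy per device model from the second server.

    `imei_to_model` and `policy_server` are stand-in mappings for the
    real server-side lookups.
    """
    # Deduplicate: 1,000 devices may collapse to a handful of models.
    model_set = {imei_to_model[imei] for imei in device_list}
    return {model: policy_server[model] for model in sorted(model_set)}
```

For instance, three devices spanning two models yield exactly two policy entries, one per model, matching the four-model example above.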
For example, the service policies sent by the second server include a service policy corresponding to HUAWEI Mate 20 Pro, a service policy corresponding to HUAWEI Mate 20, a service policy corresponding to HUAWEI Mate 10, and a service policy corresponding to HUAWEI nova 4. After receiving the service policy sent by the second server, the first server may obtain a service policy corresponding to each device model, that is, obtain service policies for the 1,000 Huawei phones in the to-be-managed device list. It should be noted that, when accessing the second server, the first server may further carry the authorized login account information from the first electronic device, so that the second server learns of access validity of the first server. S304: The first server sends the service policy for the at least one second electronic device to the first electronic device. After obtaining the service policy for the at least one second electronic device, the first server may send the obtained service policy to the first electronic device, so that the first electronic device displays a corresponding interface (for example, referred to as a setting interface) according to the received service policy for an IT administrator to view and perform related setting, to implement device management and function configuration. For example, still with reference to the example in S303, after obtaining the service policy for the 1,000 Huawei phones in the to-be-managed device list, the first server may send the obtained service policy to the tablet. After receiving the related service policy, the tablet may display a corresponding interface according to the service policy. The IT administrator can perform corresponding management and function configuration on the 1,000 devices on an interface displayed on the tablet. 
For example, in a service policy for a device whose device model is HUAWEI Mate 20, an upgrade policy is that a version A may be upgraded to a version B, a management policy includes a device management policy, a network management policy, a security management policy, an email management policy, a content management policy, and an application management policy, and a configuration policy includes setting a desktop wallpaper, a startup animation, and a ringtone. The security management policy includes management of some functions after the device whose device model is HUAWEI Mate 20 is upgraded from the version A to the version B, for example, management of whether to disable factory settings restoration, whether to disable developer options, whether to disable location services, reading locations of managed devices, whether to disable system upgrade, whether to disable sleep menus, and whether to disable fingerprint unlocking. As shown in FIG. 4, after the IT administrator enters an interface401of an enterprise office configuration console, if a device1in the 1,000 Huawei phones is selected, for example, 402 shown in FIG. 4, the tablet may display related settings403for the device1according to a service policy corresponding to a device model (HUAWEI Mate 20) of the device1, for example, including a device management setting item, a network management setting item, a security management setting item404, an email management setting item, a content management setting item, and an application management setting item. The IT administrator selects a corresponding setting item in the related settings403, to implement corresponding management of the device1. For example, the IT administrator wants to manage whether a location function can be used after the device1is upgraded from the version A to the version B. The IT administrator may perform an operation on the security management setting item404in the related settings403. 
In response to the operation, as shown in FIG. 5, the tablet may display a security management setting interface501of the device1. The security management setting interface501includes functions that can be managed after the device1is upgraded from the version A to the version B, and the functions include: whether to disable factory settings restoration, whether to disable developer options, whether to disable location services, reading locations of managed devices, whether to disable system upgrade, whether to disable sleep menus, and whether to disable fingerprint unlocking. Switch buttons for disabling these functions may be in a disabled state by default. To be specific, after the device1is upgraded from the version A to the version B, a corresponding function can be used by default, for example, the location service can be used. If the IT administrator wants to disable this function, for example, the location service, the IT administrator may perform an operation on a button503corresponding to disabling a location service. In response to the operation, management of disabling the location service function after the device1is upgraded from the version A to the version B can be implemented. The tablet may further display another related setting for the device1according to the service policy corresponding to the device model (HUAWEI Mate 20) of the device1. For example, the tablet displays corresponding configuration interfaces according to the configuration policy, for example, a desktop wallpaper setting interface, a startup animation setting interface, and a ringtone setting interface. In this way, the IT administrator can upload corresponding resources such as a desktop wallpaper, a startup animation, and a ringtone on the corresponding configuration interfaces, to set a desktop wallpaper, a startup animation, a ringtone, and the like for the device1. 
For devices of different device models, resources, such as a desktop wallpaper, a startup animation, and a ringtone, that are set by the IT administrator may be the same or different. It should be noted that the foregoing example is described by using an example in which the IT administrator separately performs corresponding management and function configuration on the devices purchased in batches. In some other embodiments, the IT administrator may alternatively perform corresponding management and function configuration at the same time on a plurality of devices in the devices purchased in batches. For example, after performing related setting (for example, for a setting interface, refer to FIG. 4 and FIG. 5), the IT administrator may select a device model to which the setting is applicable. As shown in FIG. 6, the IT administrator may select, on a shown interface601, the device model to which the setting is applicable, for example, HUAWEI Mate 20 Pro. In this way, corresponding management and function configuration can be performed on devices of these device models at the same time. For another example, after performing related setting (for example, for a setting interface, refer to FIG. 4 and FIG. 5), the IT administrator may select devices to which the setting is applicable. In this way, corresponding management and function configuration can be performed on the selected devices at the same time. In addition, the first electronic device can perform OTA management on all the managed second electronic devices, and may set a corresponding system upgrade policy (or referred to as an upgrade policy) for all the devices, or for a device of a specific model, or for one or more specific second electronic devices based on a requirement of an enterprise, a specific service, or a specific post. 
Specifically, for example, a system upgrade policy for a device of a device model received by the first electronic device includes: upgrading from a version A to a version B, upgrading from the version A to a version C (where the version C is a version obtained after the version B is updated), and upgrading from the version A to a version D (where the version D is a version obtained after the version C is updated). A related interface may be displayed for the IT administrators to perform management on a system upgrade version, for example, whether to allow the device of the device model to perform system upgrade, and for another example, a version to which the device of the device model is allowed to be upgraded. For example, on the interface, the IT administrator may select that the device of the device model may be upgraded from the version A to the version B. For another example, on the interface, the IT administrator may select that system version upgrade is not allowed on the device of the device model. The first electronic device may generate corresponding configuration information based on a setting of the IT administrator, and send the configuration information to the first server. The first server may return a corresponding DM service APP to the first electronic device based on the configuration information. In the DM service APP, only system upgrade information corresponding to a corresponding second electronic device is provided for the corresponding second electronic device. For example, the IT administrator sets that devices of some models do not need to be upgraded, and only a security patch needs to be installed. In this case, after the first electronic device sends the corresponding configuration information to the first server, in the DM service APP returned by the first server, only security patches corresponding to the devices of these models are sent. 
In this way, the first electronic device can flexibly manage system upgrade of all the second electronic devices managed by the first electronic device, to avoid an all-or-nothing situation in which the second electronic device is either upgraded to the latest version or not upgraded at all, and to reduce the risk that the second electronic device cannot be returned to an appropriate version after being accidentally upgraded to a latest version that turns out to be inappropriate. After the IT administrator completes corresponding management and function configuration of the at least one second electronic device in the to-be-managed device list, for example, the 1,000 devices in the foregoing example, the first electronic device may send, to the first server, related configuration information obtained after the IT administrator performs corresponding management and function configuration. S305: The first server receives configuration information from the first electronic device. S306: The first server sends the configuration information to the second server. The configuration information includes a related configuration parameter generated after management and function configuration for the at least one second electronic device. For example, with reference to the example in S304, the configuration information includes a setting parameter that is used to indicate that the device 1 whose device model is HUAWEI Mate 20 may be upgraded from the version A to the version B, and after the device 1 is upgraded from the version A to the version B, the location service function is disabled. The configuration information may further include the desktop wallpaper, the startup animation, and the ringtone that are set for the device 1. After receiving the configuration information from the first electronic device, the first server may send the configuration information to the second server.
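The configuration information of S305/S306 might be structured roughly as below for device 1 in the running example. Every field name here is an assumption for illustration; the patent does not specify a wire format.

```python
# A minimal sketch of the configuration information sent from the first
# electronic device to the first server (and on to the second server).
import json

config_info = {
    "account": "enterprise-admin",           # authorized login account (assumed field)
    "devices": [
        {
            "identifier": "imei-device-1",
            "model": "HUAWEI Mate 20",
            "upgrade": {"from": "A", "to": "B"},
            "post_upgrade": {"location_service": "disabled"},
            "resources": {
                "wallpaper": "corp-wallpaper.png",
                "startup_animation": "corp-boot.mp4",
                "ringtone": "corp-ring.ogg",
            },
        }
    ],
}

# The first server would forward this structure to the second server,
# which generates the DM service APP from it.
payload = json.dumps(config_info)
print(json.loads(payload)["devices"][0]["model"])  # → HUAWEI Mate 20
```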
In some other embodiments, after the first server receives the configuration information, a background worker may review resources, such as a desktop wallpaper, a startup animation, and a ringtone, that are included in the configuration information, to check whether these resources comply with policies and regulations. After the review succeeds, the first server sends the configuration information to the second server. S307: The second server generates a DM service APP based on the configuration information. S308: The second server sends the DM service APP to the first server. After receiving the configuration information from the first server, the second server may generate, based on the configuration information, a DM service APP corresponding to the authorized login account information of the first electronic device, and send the generated DM service APP to the first server. For example, with reference to the example in S306, the DM service APP includes a data resource, for example, an upgrade package, and for another example, the desktop wallpaper, the startup animation, and the ringtone that are set for the device 1. The DM service APP further includes a configuration for the at least one second electronic device, for example, disabling the location service function for the device 1. In some embodiments, after receiving the DM service APP corresponding to the authorized login account information of the first electronic device, the first server may sign the DM service APP by using a preconfigured private key of the first server. In this way, the DM service APP can be prevented from being tampered with. For sensitive data in the DM service APP, for example, the upgrade package, the first server may further encrypt the sensitive data by using an encryption key derived from a public key of the first electronic device. In this way, it can be ensured that the sensitive data can be successfully decrypted and used only on the first electronic device.
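The sign-then-encrypt step can be sketched as below. This is illustrative only: the patent describes signing with the first server's private key and encrypting with a key derived from the first electronic device's public key (e.g. via a key agreement), whereas this stand-in uses stdlib HMAC for the signature and a SHA-256 keystream for the symmetric encryption; a real system would use asymmetric signatures and a properly derived key.

```python
# Illustrative stand-in for the server-side protection of the DM service APP.
import hashlib
import hmac

SERVER_KEY = b"server-signing-key"     # stand-in for the server private key
DERIVED_KEY = b"device-derived-key"    # stand-in for the key derived from the
                                       # first electronic device's public key

def sign(blob: bytes) -> bytes:
    return hmac.new(SERVER_KEY, blob, hashlib.sha256).digest()

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # Expand the key into a keystream and XOR it with the data (symmetric,
    # so the same call also decrypts).
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

upgrade_package = b"sensitive upgrade package bytes"
encrypted = keystream_xor(DERIVED_KEY, upgrade_package)
signature = sign(encrypted)

# XOR with the same keystream is its own inverse, so decryption succeeds:
print(keystream_xor(DERIVED_KEY, encrypted) == upgrade_package)  # → True
```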
S309: The first electronic device obtains the DM service APP from the first server and installs the DM service APP. After the first server obtains the DM service APP corresponding to the authorized login account information of the first electronic device, the first server may deliver the DM service APP to the first electronic device, so that the first electronic device obtains the corresponding DM service APP and installs the DM service APP. In some embodiments, if the first server performs signature and encryption processing on the DM service APP, after obtaining the DM service APP, the first electronic device may verify the signature of the DM service APP by using a preset public key of the first server, and may further decrypt the sensitive data in the DM service APP by using a private key of the first electronic device, to obtain the decrypted DM service APP, and then the first electronic device installs the DM service APP. After the DM service APP is installed on the first electronic device, the MDM service is deployed on the first electronic device. Then, the first electronic device may provide the MDM service for the at least one second electronic device (where a DM client APP is preset on the second electronic device, and is configured to communicate with the first electronic device) in the to-be-managed device list, for example, including the management service and the system upgrade service, to implement management and device system upgrade of the second electronic device. For example, the following S310 and S311 are included. In this process, neither the first electronic device nor the second electronic device needs to be connected to the Internet. S310: The second electronic device and the first electronic device access a same local area network, or the second electronic device establishes a wireless P2P connection to the first electronic device.
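The device-side check of S309 (verify before install, reject on mismatch) can be sketched as follows. Names are assumptions, and HMAC again stands in for real public-key signature verification.

```python
# Sketch of the first electronic device's verification step: verify the
# DM service APP signature with the preset server key before installing,
# and refuse installation if verification fails.
import hashlib
import hmac

PRESET_SERVER_KEY = b"server-signing-key"  # stand-in for the preset public key

def verify_and_install(app_bytes: bytes, signature: bytes) -> str:
    expected = hmac.new(PRESET_SERVER_KEY, app_bytes, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, signature):
        return "rejected: signature mismatch"
    # ... decrypt sensitive data with the device private key, then install ...
    return "installed"

app = b"dm-service-app"
good_sig = hmac.new(PRESET_SERVER_KEY, app, hashlib.sha256).digest()
print(verify_and_install(app, good_sig))        # → installed
print(verify_and_install(app, b"\x00" * 32))    # → rejected: signature mismatch
```

Using `hmac.compare_digest` rather than `==` avoids a timing side channel when comparing signatures.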
S311: The first electronic device provides the MDM service for the second electronic device, to implement management and device system upgrade of the second electronic device. After the second electronic device and the first electronic device access the same local area network, or the second electronic device establishes the wireless P2P connection (for example, a Wi-Fi direct connection, a Bluetooth connection, or an NFC connection) to the first electronic device, the first electronic device and the second electronic device may perform mutual authentication, for example, the authentication may be completed based on a hardware attestation key (Attestation Key). After the mutual authentication succeeds, the first electronic device may provide the MDM service for the second electronic device according to an MDM protocol, to implement management and device system upgrade of the second electronic device. For example, after the mutual authentication between the first electronic device and the second electronic device succeeds, the second electronic device may send a service request to the first electronic device. The service request may include the identifier of the second electronic device. After receiving the service request, the first electronic device may send, to the second electronic device based on the identifier in the service request, resources such as the upgrade package of the device, a set desktop wallpaper, startup animation, and ringtone, and the configuration for the device. After receiving corresponding data, the second electronic device may perform system upgrade, and perform related setting based on the configuration. For example, with reference to the example in S308, after sending the IMEI of the device 1 to the tablet, the device 1 may receive, from the tablet, resources such as a corresponding upgrade package, the set desktop wallpaper, startup animation, and ringtone, and a configuration for the device 1.
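The identifier-keyed service request of S311 can be sketched as a simple lookup on the first electronic device. The data, field names, and function name are illustrative assumptions.

```python
# Hypothetical sketch: after mutual authentication, the second electronic
# device sends a service request carrying its identifier, and the first
# electronic device answers with the resources and configuration prepared
# for that identifier.

prepared = {
    "imei-device-1": {
        "upgrade_package": "pkg-A-to-B.bin",
        "wallpaper": "corp-wallpaper.png",
        "config": {"location_service": "disabled"},
    }
}

def handle_service_request(request):
    device_id = request["identifier"]
    payload = prepared.get(device_id)
    if payload is None:
        return {"status": "unknown-device"}
    return {"status": "ok", **payload}

reply = handle_service_request({"identifier": "imei-device-1"})
print(reply["status"], reply["upgrade_package"])  # → ok pkg-A-to-B.bin
```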
The device 1 may upgrade its system from the version A to the version B by using the received data, and after the system is upgraded to the version B, the location service function of the device 1 is disabled. In addition, the device 1 further performs corresponding setting based on the received desktop wallpaper, startup animation, ringtone, and the like. In this way, the tablet implements management and device system upgrade of the device 1. In some other embodiments, after the mutual authentication between the first electronic device and the second electronic device succeeds, the first electronic device may alternatively actively push a service to the second electronic device. For example, after the mutual authentication between the first electronic device and the second electronic device succeeds, the first electronic device actively sends resources such as the configuration for the device, the corresponding upgrade package, and a set desktop wallpaper, startup animation, and ringtone to the second electronic device. In some other embodiments, after the mutual authentication between the first electronic device and the second electronic device succeeds, the first electronic device may alternatively push a service to the second electronic device when determining that the second electronic device needs to update a service, for example, needs to update a device system. The foregoing example is described by using an example of configuring to allow the second electronic device to perform version upgrade. In some other embodiments, it may alternatively be configured that a second electronic device is not allowed to perform system version upgrade. In this embodiment, if a user of the second electronic device chooses to upgrade a system version of the device, the request is rejected. When receiving such an operation, the second electronic device may further display prompt information to inform the user that system upgrade is forbidden on the device.
According to the mobile device management method provided in this embodiment of this application, an MDM service is deployed on an electronic device, so that an enterprise can implement management and device system upgrade of a to-be-managed electronic device in a local area network or a near field environment by using the electronic device on which the MDM service is deployed, without connecting the to-be-managed electronic device to a network. This resolves the problem that management and device system upgrade cannot be implemented for an electronic device that cannot conveniently be connected to a network. In addition, the to-be-managed electronic devices do not need to be separately connected to the Internet to download related data, which saves traffic and reduces service costs. In addition, the MDM service is deployed on the electronic device to implement device management and device system upgrade, without purchasing a service provided by an MDM service provider, which reduces device management costs. After the MDM service is deployed on the electronic device, because the electronic device and the to-be-managed electronic device do not need to be connected to the Internet, an enterprise that cannot use a public network to perform system upgrade (OTA upgrade) can implement device system upgrade and other management by using the solution provided in this embodiment. Some other embodiments of this application further provide an electronic device (for example, the first electronic device in the foregoing embodiments), configured to implement the method described in the foregoing method embodiments. The electronic device may include a processor and a memory. The processor is coupled to the memory. The memory is configured to store computer program code. The computer program code includes computer instructions.
When the computer instructions are executed by the electronic device, the electronic device is enabled to perform a corresponding step in the foregoing embodiments. Some other embodiments of this application further provide a server (for example, the first server or the second server in the foregoing embodiments), configured to implement the method described in the foregoing method embodiments. The server may include a processor and a memory. The processor is coupled to the memory. The memory is configured to store computer program code. The computer program code includes computer instructions. When the computer instructions are executed by the server, the server is enabled to perform a corresponding step in the foregoing embodiments. Some other embodiments of this application further provide a computer-readable storage medium. The computer-readable storage medium may include computer software instructions. When the computer software instructions are run on an electronic device (for example, the first electronic device in the foregoing embodiments), the electronic device is enabled to perform a corresponding step in the foregoing embodiments. Some other embodiments of this application further provide a computer-readable storage medium. The computer-readable storage medium may include computer software instructions. When the computer software instructions are run on a server (for example, the first server or the second server in the foregoing embodiments), the server is enabled to perform a corresponding step in the foregoing embodiments. Some other embodiments of this application further provide a computer program product. When the computer program product runs on a computer, the computer is enabled to perform a corresponding step performed by the first electronic device, the first server, or the second server in the foregoing embodiments. 
Some other embodiments of this application further provide an apparatus, configured to implement the method described in the foregoing method embodiments. The apparatus has a function of implementing behavior of the first electronic device in the foregoing embodiments. The function may be implemented by hardware, or may be implemented by hardware executing corresponding software. The hardware or the software includes one or more modules corresponding to the function, for example, a sending unit or module, a receiving unit or module, a wireless connection unit or module, a service providing unit or module, a display unit or module, an input unit or module, and a verification unit or module. Some other embodiments of this application further provide an apparatus, configured to implement the method described in the foregoing method embodiments. The apparatus has a function of implementing behavior of the first server in the foregoing embodiments. The function may be implemented by hardware, or may be implemented by hardware executing corresponding software. The hardware or the software includes one or more modules corresponding to the function, for example, a sending unit or module, a receiving unit or module, a verification unit or module, an obtaining unit or module, and a signature encryption unit or module. The foregoing descriptions about the implementations allow a person skilled in the art to clearly understand that, for convenient and brief description, division into the foregoing function modules is merely used as an example for description. During actual application, the foregoing functions can be allocated to different function modules for implementation based on a requirement. In other words, an inner structure of an apparatus is divided into different function modules to implement all or some of the functions described above. 
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the described apparatus embodiments are merely examples. For example, division into the modules or units is merely logical function division, and may be other division in an actual implementation. For example, a plurality of units or components may be combined or may be integrated into another apparatus, or some features may be ignored or not performed. In addition, the displayed or discussed mutual couplings or direct couplings or communication connections may be implemented through some interfaces. The indirect couplings or communication connections between the apparatuses or units may be implemented in an electronic, mechanical, or another form. The units described as separate parts may or may not be physically separate, and parts displayed as units may be one or more physical units, may be located in one place, or may be distributed at different places. Some or all of the units may be selected based on actual requirements to achieve the objectives of the solutions of embodiments. In addition, functional units in embodiments of this application may be integrated into one processing unit, each of the units may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in a form of hardware, or may be implemented in a form of a software functional unit. When the integrated unit is implemented in a form of a software functional unit and sold or used as an independent product, the integrated unit may be stored in a readable storage medium. Based on such an understanding, technical solutions in embodiments of this application may be implemented in a form of a software product. 
The software product is stored in a storage medium and includes several instructions for instructing a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to perform all or some of the steps of the methods in embodiments of this application. The foregoing storage medium includes any medium that can store program code, for example, a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc. The foregoing descriptions are merely specific implementations of this application, but are not intended to limit the protection scope of this application. Any variation or replacement within the technical scope disclosed in this application may fall within the protection scope of this application.
DETAILED DESCRIPTION The technical solutions in the embodiments of the present disclosure will be clearly and completely described below in conjunction with the drawings in the embodiments of the present disclosure. It is apparent that the described embodiments are some rather than all of the embodiments of the present disclosure. All other embodiments obtained by those skilled in the art on the basis of the embodiments in the present disclosure without creative work shall fall within the scope of protection of the present disclosure. It is to be understood that the technical solutions of the embodiments of the present disclosure may be applied to various communication systems, for example, an LTE system, an LTE Frequency Division Duplex (FDD) system, LTE Time Division Duplex (TDD), a system adopting a hybrid duplex mode, a Universal Mobile Telecommunication System (UMTS), and a future 5th-Generation (5G) communication system. It is to be understood that, in the embodiments of the present disclosure, terminal equipment may also be referred to as user equipment, a Mobile Station (MS), a mobile terminal or the like. The user equipment may communicate with one or more core networks through a Radio Access Network (RAN). For example, the user equipment may be a mobile phone (also called a cell phone), a computer with a mobile terminal or the like. For example, the user equipment may be a portable, pocket, handheld, in-computer or vehicle-mounted mobile device, and terminal equipment in a future 5G network or terminal equipment in a future evolved Public Land Mobile Network (PLMN). It is also to be understood that, in the embodiments of the present disclosure, network equipment may be equipment configured to communicate with the user equipment.
The network equipment may be a Base Transceiver Station (BTS) in a GSM or CDMA, or a NodeB (NB) in a WCDMA system, or an Evolutional Node B (eNB or eNodeB) in an LTE system, or the network equipment may be a relay station, an access point, vehicle-mounted equipment, wearable equipment, network-side equipment in the future 5G network or network equipment in the future evolved PLMN and the like. The embodiments of the disclosure provide at least the following aspects. In a first aspect, there is provided a method for regulating communication parameters, which may include that: a first equipment establishes a communication with a second equipment according to preset configurations of communication parameters; the first equipment regulates the configurations of one or more of the communication parameters according to a network state and/or service state in a communication process; and the first equipment sends communication parameter regulation indication information to the second equipment, the communication parameter regulation indication information indicating a result of regulation performed by the first equipment on the configurations of the one or more of the communication parameters. In a second aspect, there is provided a method for regulating communication parameters, which may include that: a second equipment establishes a communication with a first equipment according to preset configurations of communication parameters; and the second equipment receives communication parameter regulation indication information sent by the first equipment, the communication parameter regulation indication information indicating a result of regulation performed by the first equipment on the configurations of one or more of the communication parameters according to a network state and/or service state in a communication process. 
In a third aspect, there is provided an equipment for regulating communication parameters, which may include: a processing module, configured to establish a communication with a second equipment according to preset configurations of communication parameters, wherein the processing module is further configured to regulate the configurations of one or more of the communication parameters according to a network state and/or service state in a communication process; and a transceiver module, configured to send communication parameter regulation indication information to the second equipment, the communication parameter regulation indication information indicating a result of regulation performed by the equipment on the configurations of the one or more of the communication parameters. In a fourth aspect, there is provided an equipment for regulating communication parameters, which may include: a processing module, configured to establish a communication with first equipment according to preset configurations of communication parameters; and a transceiver module, configured to receive communication parameter regulation indication information sent by the first equipment, the communication parameter regulation indication information indicating a result of regulation performed by the first equipment on the configurations of one or more of the communication parameters according to a network state and/or service state in a communication process. FIG. 1 is a schematic flowchart illustrating a method for regulating communication parameters according to an embodiment of the present disclosure. The method may be executed by network equipment or terminal equipment. As illustrated in FIG. 1, the method 100 includes the following operations. In S110, a first equipment establishes a communication with a second equipment according to preset configurations of communication parameters.
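The S110-S130 flow can be sketched as below. All names, parameter fields, and values are assumptions for illustration; the indication of S130 carries only the parameters whose configurations were actually regulated.

```python
# A minimal sketch of the flow: establish with preset configurations (S110),
# regulate from the observed network/service state (S120), then build the
# communication parameter regulation indication information (S130).

PRESET = {"subcarrier_spacing_khz": 15, "cp": "normal"}

def regulate(configs, network_state):
    """S120: regulate one or more configurations from the observed state."""
    new = dict(configs)
    if network_state.get("high_doppler"):
        new["subcarrier_spacing_khz"] = configs["subcarrier_spacing_khz"] * 2
    return new

def build_indication(old, new):
    """S130: indicate only the regulated configurations to the peer."""
    return {k: v for k, v in new.items() if old.get(k) != v}

regulated = regulate(PRESET, {"high_doppler": True})
indication = build_indication(PRESET, regulated)
print(indication)  # → {'subcarrier_spacing_khz': 30}
```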
In S120, the first equipment regulates the configurations of one or more of the communication parameters according to a network state and/or service state in a communication process. In S130, the first equipment sends communication parameter regulation indication information to the second equipment, the communication parameter regulation indication information indicating a result of regulation performed by the first equipment on the configurations of the one or more of the communication parameters. In such a manner, according to the method for regulating the communication parameters in the embodiment of the present disclosure, network equipment, or terminal equipments serving as two parties communicating with each other may dynamically regulate the configurations of the communication parameters according to the network state and/or service state in the communication process, so that performance and applicability of a wireless communication system are improved. It is to be understood that, in the embodiment of the present disclosure, the first equipment may be a network equipment, and the second equipment is a terminal equipment, or the first equipment is a terminal equipment and the second equipment is another terminal equipment. Optionally, as illustrated in FIG. 2, the method may further include the following operations. In S140, the first equipment receives processing capability indication information sent by the second equipment. The processing capability indication information may indicate that the second equipment is capable of communicating with the first equipment by using the same parameter with different configurations. That is, the first equipment may regulate the communication parameters according to the network state and/or service state in the communication process only when determining that the second equipment is capable of communicating with the first equipment by using the same parameter with different configurations.
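The gating described in S140 amounts to a capability check before any regulation is attempted. A minimal sketch, with the report field name invented for illustration:

```python
# Sketch of the S140 capability check: the first equipment regulates
# configurations only after the second equipment has indicated that it can
# communicate using the same parameter with different configurations.

def may_regulate(capability_report):
    return bool(capability_report.get("same_param_multi_config"))

print(may_regulate({"same_param_multi_config": True}))   # → True
print(may_regulate({}))                                  # → False
```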
In other words, in the embodiment of the present disclosure, in the same subcarrier/cell, different communication parameters may be used in time-frequency resource blocks allocated for different users, and therefore transmitters and receivers of the first equipment and the second equipment are required to have capabilities in processing multiple communication parameters simultaneously, and the equipments may be classified into different types according to capabilities of the equipments in simultaneously processing different parameters. Furthermore, the second equipment may report, in an attach process after random access, a capability in communicating with the first equipment by using the same parameter with different configurations. For example, the second equipment may send an attach request message to the first equipment. The attach request message including the processing capability indication information. The second equipment may report a capability in a random access process. For example, the second equipment may include capability data in a random access message. The second equipment may further report the capability to the first equipment according to equipment capability query information sent by the first equipment after the first equipment transmits the equipment capability query information. However, the present disclosure is not limited thereto. Optionally, in S110, the communication parameters are multiple access manners for communication and/or basic physical layer parameters corresponding to the multiple access manners. For example, the multiple access manners for communication may be Orthogonal Frequency Division Multiplexing Access (OFDMA)/Single-carrier Frequency-Division Multiple Access (SC-FDMA) and derivative multiple access manners or other multiple access manners probably to be used in a future communication system.
Basic physical layer parameters corresponding to the OFDMA/SC-FDMA and derivative multiple access manners may include at least one of a subcarrier spacing, an OFDM symbol length, a Cyclic Prefix (CP) length, a sampling frequency, a reference signal density and pattern configured for purposes of channel estimation, demodulation and the like, a sequence construction of a reference signal and a resource window granularity. That is, the network equipment, or the terminal equipments serving as the two parties communicating with each other may regulate the configurations of one or more of the parameters according to the network state and/or service state of a communication network where the network equipment and the terminal equipments are located. For example, the subcarrier spacing, the OFDM symbol length, the CP length, the sampling frequency, the reference signal density and pattern may be increased or decreased according to a practical requirement, wherein the reference signal density and pattern are configured for the purposes of channel estimation, demodulation and the like. For example, a relative motion between the two parties communicating with each other may cause a Doppler frequency shift. The Doppler frequency shift is greater if the relative motion is faster. For ensuring correct demodulation, it is necessary to increase the subcarrier spacing and simultaneously increase the reference signal density. A change in a channel environment may cause phenomena such as delay spread, angular spread, propagation loss, penetration loss and the like. In addition, when a propagation environment is more complicated, when there are more obstacles, and when sizes of the obstacles are larger, transmission delay spread is greater. In such case, the CP length needs to be increased.
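The two rules in the paragraph above (larger Doppler shift calls for a wider subcarrier spacing and a denser reference signal; greater delay spread calls for a longer cyclic prefix) can be written as a small rule-based regulator. The thresholds and field names are invented for illustration.

```python
# Illustrative rule-based regulation of basic physical layer parameters.

def regulate_phy(configs, doppler_hz, delay_spread_us):
    new = dict(configs)
    if doppler_hz > 300:                       # assumed threshold
        # Faster relative motion: widen the subcarrier spacing and
        # densify the reference signal to keep demodulation correct.
        new["subcarrier_spacing_khz"] = configs["subcarrier_spacing_khz"] * 2
        new["ref_signal_density"] = configs["ref_signal_density"] * 2
    if delay_spread_us > 4.7:                  # assumed threshold
        # Greater delay spread: lengthen the cyclic prefix.
        new["cp"] = "extended"
    return new

base = {"subcarrier_spacing_khz": 15, "ref_signal_density": 1, "cp": "normal"}
print(regulate_phy(base, doppler_hz=500, delay_spread_us=6.0))
# → {'subcarrier_spacing_khz': 30, 'ref_signal_density': 2, 'cp': 'extended'}
```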
The network equipment or the terminal equipments serving as the two parties communicating with each other may regulate the sequence construction of the reference signal according to the practical requirement. For example, the network equipment or the terminal equipments serving as the two parties communicating with each other may regulate a sequence function for generating the reference signal from a quasi-orthogonal sequence to a pseudo-random code, a Zadoff-Chu sequence or the like. The network equipment or the terminal equipments serving as the two parties communicating with each other may also regulate the resource window granularity (i.e. minimum sizes of a resource window on a frequency domain and a time domain) according to the practical requirement. Optionally, in S110, the preset configurations of the communication parameters may be default configurations. In other words, the communication parameters may be default parameters. For Device-to-Device (D2D) communication, the terminal equipment may broadcast the default parameters through a discovery channel. Different frequency bands and different geographical regions may have different default parameters. Moreover, optionally, in S110, the preset configurations of the communication parameters may be configurations predetermined by the two parties communicating with each other in the random access process. Specifically, the network equipment or the terminal equipments serving as the two parties communicating with each other may determine the preset configurations of the communication parameters according to at least one of: condition of a wireless channel transmission between the first equipment and the second equipment, communication capabilities of the first equipment and the second equipment, and a service type for which the second equipment initiates random access.
The first equipment sends the preset configurations of the communication parameters to the second equipment after determining the preset configurations of the communication parameters. Optionally, the first equipment may also obtain the condition of the wireless channel transmission between the first equipment and the second equipment from random access information initiated by other equipment instead of the second equipment before the communication is established with the second equipment. Alternatively, the first equipment may further acquire the condition of the wireless channel transmission between the first equipment and the second equipment from data and signaling sent by the equipment which has communicated with the first equipment, or from channel state indicator information fed back by the equipment. The communication capabilities of the first equipment and the second equipment include, but are not limited to, the number of transmitting and receiving antennas, transmitted power, receiving sensitivity, and a bandwidth used for communication. Specifically, the network equipment or the terminal equipments serving as the two parties communicating with each other may determine configurations of the communication parameters when the terminal equipment accesses the communication network for the first time, according to information such as the channel condition obtained by random access sequence estimation, a determined distance between the two parties communicating with each other and a type of a service for which the terminal equipment initiates random access. For example, in the random access process, the network equipment may decide to allocate a frequency band for a certain piece of terminal equipment to use according to a service requirement or other information. The frequency band may or may not be the frequency band on which the terminal equipment initiates random access.
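The determination of preset configurations during random access might look like the sketch below, combining the channel condition, the capabilities common to both equipments, and the service type. All values, thresholds, and field names are invented assumptions.

```python
# Illustrative selection of preset configurations during random access.

def choose_preset(channel_quality, common_bandwidth_mhz, service_type):
    spacing = 15
    if channel_quality == "poor":
        spacing = 30                 # wider spacing tolerates more frequency error
    if service_type == "low-latency":
        spacing = max(spacing, 60)   # shorter OFDM symbols reduce latency
    # Both sides must support the bandwidth actually used for communication.
    bandwidth = min(common_bandwidth_mhz, 20)
    return {"subcarrier_spacing_khz": spacing, "bandwidth_mhz": bandwidth}

print(choose_preset("poor", 40, "low-latency"))
# → {'subcarrier_spacing_khz': 60, 'bandwidth_mhz': 20}
```

The first equipment would then send the chosen preset to the second equipment, for example in a random access response message as described below.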
Optionally, the first equipment may send the preset configurations of the communication parameters to the second equipment by including them in a random access response message, or by sending another message containing the preset configurations to the second equipment. Optionally, in S120, the network state may include at least one of: a channel environment between the first equipment and the second equipment, a load and interference of a communication network where the first equipment and the second equipment are located, a requirement of an application on a data rate, and a requirement of the application on energy consumption. Specifically, when the network state changes in the communication process, the network equipment or the terminal equipments serving as the two parties communicating with each other may regulate the configurations of the communication parameters in real time. The network state may change due to a change of the channel environment between the terminal equipment and the network equipment (for example, a base station in service and a neighbor base station) or between the terminal equipments serving as the two parties communicating with each other, for example, due to a channel change caused by the frequency band used in the communication process, the antennas, or mobility, or due to a channel environment change caused by mobility of the terminal equipments serving as the two parties communicating with each other. The network state may also change due to changes in the load and interference of the network, or due to changes in the requirement of an application in the network on the data rate and/or on the energy consumption. 
However, the present disclosure is not limited thereto. In the embodiment of the present disclosure, optionally, the network equipment or the terminal equipments serving as the two parties communicating with each other may perform measurement by itself to obtain the network state and/or the service state, or may receive state information reported by the terminal equipment or the other terminal equipment serving as one of the two parties communicating with each other and acquire the network state and/or the service state according to the state information. Optionally, S120 may specifically be implemented as follows: parameter regulation request information for requesting regulation on the configurations of one or more of the communication parameters is received from the second equipment, and the one or more of the communication parameters are regulated according to the parameter regulation request information. That is, when a service is initiated by the second equipment, or the service changes, or a wireless signal environment changes, the second equipment may apply to the first equipment for parameter regulation. Optionally, the second equipment may send request information to the first equipment to request communication parameter regulation, and the first equipment measures the network state by itself after receiving the request information. For example, the first equipment may judge whether the network state has changed or not according to the quality of received data sent by the second equipment. Alternatively, the first equipment may judge whether the network state and/or the service state has/have changed or not by receiving a state report indicating the network state and/or the service state from terminal equipment of the same type as the second equipment in the communication network, and determine the specific communication parameters to be regulated. 
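The first equipment's own measurement path described above, judging from received-data quality whether the network state has changed and which parameter to regulate, can be sketched as follows. The threshold, the doubling rule, and the field name are illustrative assumptions.

```python
# Hedged sketch: after receiving a regulation request, the first equipment
# judges from the measured quality of received data whether regulation is
# warranted. Threshold and regulation rule are assumptions, not from the text.

def decide_regulation(current_config, measured_snr_db, snr_floor_db=8):
    """Return the parameters to regulate, or an empty dict when none are."""
    if measured_snr_db >= snr_floor_db:
        return {}  # quality acceptable: network state treated as unchanged
    # Poor quality: widen the subcarrier spacing to improve robustness.
    widened = current_config["subcarrier_spacing_khz"] * 2
    return {"subcarrier_spacing_khz": widened}
```

A non-empty return value stands in for the regulation that the first equipment would then notify to the second equipment.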
The first equipment may notify the regulation result to the second equipment after completing regulation. Furthermore, a handshaking mechanism may be adopted for parameter regulation. That is, the first equipment is required to send Acknowledgement (ACK) information indicating an ACK of successful reception of the parameter regulation request information (for example, a reply may be given with an ACK frame) after receiving the parameter regulation request information sent by the second equipment. If the second equipment fails to receive from the first equipment a response to the parameter regulation request information within a predetermined time period, the second equipment may resend the parameter regulation request information to the first equipment, or the second equipment may continue communicating with the first equipment by adopting the configurations of the communication parameters which were used before applying for parameter regulation. The first equipment may receive and transmit information by using both regulated and unregulated parameters. Optionally, S120 is specifically implemented as follows: state information, which may indicate the network state and/or the service state, is received from the second equipment, and the configurations of one or more of the communication parameters are regulated according to the state information. Specifically, the second equipment may report the network state and/or the service state, for example, a channel quality change (a measurement result of the abovementioned Doppler frequency shift, transmission loss, delay spread and the like, or a channel quality indicator quantified from the measurement result), to the first equipment. The first equipment actively regulates the communication parameters and notifies the regulation result to the second equipment after receiving the network state and/or service state reported by the second equipment. 
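The handshaking behavior described above (resend on missed ACK, otherwise fall back to the unregulated configuration) can be sketched as follows. The transport callable and message shape are stand-in assumptions.

```python
# Hedged sketch of the second equipment's side of the handshake: resend the
# parameter regulation request when no response arrives in time; after all
# attempts fail, keep using the pre-regulation configuration.

def request_regulation(send_and_wait, request, max_attempts=3):
    """Return the configuration to use after the handshake.

    send_and_wait(request) is an assumed transport stand-in: it returns the
    regulation result on success, or None when no ACK/response is received
    within the predetermined time period.
    """
    for _ in range(max_attempts):
        result = send_and_wait(request)
        if result is not None:
            return result  # first equipment acknowledged and regulated
    # No response: fall back to the configurations used before applying.
    return request["current_config"]
```

The fallback branch corresponds to the second equipment communicating with the first equipment using the configurations that were in effect before the regulation request.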
The second equipment may periodically or aperiodically report the network state and/or the service state to the first equipment, which is not limited in the present disclosure. Furthermore, the second equipment may report the network state and/or service state measured by itself to the first equipment and, at the same time, send the parameter regulation request information applying for communication parameter regulation to the first equipment. After receiving the parameter regulation request information, the first equipment regulates the communication parameters according to the network state and/or service state reported by the second equipment, and notifies the regulation result to the second equipment. Optionally, the first equipment may send the parameter regulation indication information to the second equipment through a physical layer control channel. For example, there may be multiple Radio Resource Control (RRC) connections between the two parties communicating with each other, and different RRC connections may have different communication parameters. Regulation of the communication parameters of each connection (carrier or base station) may be notified to the second equipment through a common physical layer control channel (i.e., a physical layer control channel shared by multiple connections) or through an independent physical layer control channel for that connection. The physical layer control channel may be a new physical downlink control channel. Alternatively, the first equipment may notify the parameter regulation result to the second equipment through a channel such as a paging channel or a broadcast channel. However, the present disclosure is not limited thereto. Moreover, the first equipment may send only the regulation result of the regulated communication parameters to the second equipment, or may send all of the regulated and unregulated communication parameters to the second equipment. 
Furthermore, the regulation result sent by the first equipment may be represented in the form of an absolute value or in the form of a relative value. For example, it is assumed that the subcarrier spacing before regulation is 15 kHz. To weaken the influence of the Doppler frequency shift on correct demodulation of a signal, it is necessary to increase the subcarrier spacing. In such a case, the first equipment may directly notify the second equipment that the regulated subcarrier spacing is 20 kHz, or notify the second equipment that the regulated subcarrier spacing is 5 kHz larger than the unregulated subcarrier spacing. No limits are made in the present disclosure. In such a manner, according to the method for regulating the communication parameters in the embodiment of the present disclosure, the network equipment or the terminal equipments serving as the two parties communicating with each other may dynamically regulate the communication parameters according to the network state and/or service state in the communication process, so that the performance and applicability of the wireless communication system are improved. The method for regulating the communication parameters according to the embodiment of the present disclosure is described above in detail on the first equipment side with reference to FIG. 1 and FIG. 2. The method for regulating the communication parameters according to another embodiment of the present disclosure will be described below in detail on the second equipment side in combination with FIG. 3 and FIG. 4. It is to be understood that the interaction between the first equipment and the second equipment, the related characteristics and functions and the like described on the first equipment side correspond to the descriptions on the second equipment side, and for simplicity, such descriptions are properly omitted. FIG. 3 illustrates a method for regulating communication parameters according to another embodiment of the present disclosure. 
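Before turning to the second equipment side, the absolute-versus-relative representation of the regulation result described above can be illustrated with the 15 kHz subcarrier-spacing example; both encodings recover the same regulated value. The notification field names are assumptions.

```python
# Hedged sketch: decode a regulation result signaled either as an absolute
# regulated value or as a signed offset relative to the unregulated value.
# The {"mode", "value_khz"} message shape is an illustrative assumption.

def apply_regulation(current_khz, notification):
    """Return the regulated subcarrier spacing in kHz."""
    if notification["mode"] == "absolute":
        return notification["value_khz"]       # e.g. "the regulated spacing is 20 kHz"
    # relative: e.g. "5 kHz larger than the unregulated subcarrier spacing"
    return current_khz + notification["value_khz"]
```

Applied to the example in the text, both `{"mode": "absolute", "value_khz": 20}` and `{"mode": "relative", "value_khz": 5}` yield a regulated spacing of 20 kHz from an unregulated 15 kHz.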
The method may be executed by terminal equipment. As illustrated in FIG. 3, the method 200 includes the following operations. In S210, a second equipment establishes a communication with a first equipment according to preset configurations of communication parameters. In S220, the second equipment receives communication parameter regulation indication information sent by the first equipment. The communication parameter regulation indication information may indicate a result of regulation performed by the first equipment on the configurations of one or more of the communication parameters according to a network state and/or service state in a communication process. In such a manner, according to the method for regulating the communication parameters in the embodiment of the present disclosure, the network equipment or terminal equipments serving as two parties communicating with each other may dynamically regulate the communication parameters according to the network state and/or service state, so that performance and applicability of a wireless communication system are improved. Optionally, as illustrated in FIG. 4, the method may further include the following operations. In S230, the second equipment sends processing capability indication information to the first equipment. The processing capability indication information may indicate that the second equipment is capable of communicating with the first equipment by using the same parameter with different configurations. Optionally, S230 is specifically implemented as follows: an attach request message is sent to the first equipment, and the attach request message may include the processing capability indication information. In the embodiment of the present disclosure, optionally, the second equipment receives the preset configurations of the communication parameters from the first equipment. 
Here, the preset configurations of the communication parameters are determined by the first equipment according to at least one of: a condition of wireless channel transmission between the first equipment and the second equipment, communication capabilities of the first equipment and the second equipment, and a service type for which the second equipment initiates random access. In the embodiment of the present disclosure, optionally, the second equipment may send parameter regulation request information to the first equipment for requesting regulation on the configurations of one or more of the communication parameters, so as to cause the first equipment to regulate the one or more of the communication parameters according to the parameter regulation request information. In the embodiment of the present disclosure, optionally, after the second equipment sends the parameter regulation request information to the first equipment, the second equipment may receive ACK information, which may indicate an ACK of successful reception of the parameter regulation request information, from the first equipment. In the embodiment of the present disclosure, optionally, the second equipment may send state information, which indicates the network state and/or the service state, to the first equipment to enable the first equipment to regulate the configurations of one or more of the communication parameters according to the state information. Optionally, S220 is specifically implemented as follows: the parameter regulation indication information sent by the first equipment through a physical layer control channel is received. In the embodiment of the present disclosure, optionally, the communication parameters may be multiple access manners for communication and/or basic physical layer parameters corresponding to the multiple access manners. 
In the embodiment of the present disclosure, optionally, the network state includes at least one of: a channel environment between the first equipment and the second equipment, a load and interference of a communication network where the first equipment and the second equipment are located, a requirement of an application on a data rate, and a requirement of the application on energy consumption. In the embodiment of the present disclosure, optionally, the first equipment is network equipment and the second equipment is terminal equipment, or the first equipment is terminal equipment and the second equipment is another terminal equipment. In such a manner, according to the method for regulating the communication parameters in the embodiment of the present disclosure, the network equipment or the terminal equipments serving as the two parties communicating with each other may dynamically regulate the configurations of the communication parameters according to the network state and/or service state, so that the performance and applicability of the wireless communication system are improved. FIG. 5 is a schematic block diagram illustrating an equipment for regulating communication parameters according to an embodiment of the present disclosure. As illustrated in FIG. 5, the equipment 10 includes a processing module 11 and a transceiver module 12. The processing module 11 is configured to establish a communication with second equipment according to preset configurations of communication parameters, and is further configured to regulate the configurations of one or more of the communication parameters according to a network state and/or service state in a communication process. The transceiver module 12 is configured to send communication parameter regulation indication information to the second equipment. 
The communication parameter regulation indication information may indicate a result of regulation performed by the processing module on the configurations of one or more of the communication parameters. In such a manner, the equipment for regulating the communication parameters in the embodiment of the present disclosure may dynamically regulate the configurations of the communication parameters according to the network state and/or service state, so that performance and applicability of a wireless communication system are improved. In the embodiment of the present disclosure, optionally, the transceiver module 12 is further configured to receive processing capability indication information sent by the second equipment. The processing capability indication information may indicate that the second equipment is capable of communicating with the first equipment by using the same parameter with different configurations. In the embodiment of the present disclosure, optionally, the transceiver module 12 is specifically configured to receive an attach request message sent by the second equipment, and the attach request message includes the processing capability indication information. In the embodiment of the present disclosure, optionally, the processing module 11 is further configured to determine the preset configurations of the communication parameters according to at least one of: a condition of wireless channel transmission between the equipment and the second equipment, communication capabilities of the equipment and the second equipment, and a service type for which the second equipment initiates random access. Here, the transceiver module 12 is further configured to send the preset configurations of the communication parameters, determined by the processing module 11, to the second equipment. 
In the embodiment of the present disclosure, optionally, the transceiver module 12 is further configured to receive parameter regulation request information from the second equipment for requesting regulation on the configurations of one or more of the communication parameters. Here, the processing module 11 is further configured to regulate the one or more of the communication parameters according to the parameter regulation request information received by the transceiver module 12. In the embodiment of the present disclosure, optionally, the transceiver module 12 is further configured to send ACK information, which indicates an ACK of successful reception of the parameter regulation request information, to the second equipment. In the embodiment of the present disclosure, optionally, the transceiver module 12 is further configured to receive state information, which indicates the network state and/or the service state, from the second equipment. Here, the processing module 11 is further configured to regulate the configurations of one or more of the communication parameters according to the state information received by the transceiver module 12. In the embodiment of the present disclosure, optionally, the transceiver module 12 is specifically configured to send the parameter regulation indication information to the second equipment through a physical layer control channel. In the embodiment of the present disclosure, optionally, the communication parameters are multiple access manners for communication and/or basic physical layer parameters corresponding to the multiple access manners. In the embodiment of the present disclosure, optionally, the network state includes at least one of: a channel environment between the equipment and the second equipment, a load and interference of a communication network where the equipment and the second equipment are located, a requirement of an application on a data rate, and a requirement of the application on energy consumption. 
In the embodiment of the present disclosure, optionally, the equipment is network equipment and the second equipment is terminal equipment, or the equipment is terminal equipment and the second equipment is another terminal equipment. In such a manner, the equipment for regulating the communication parameters in the embodiment of the present disclosure may dynamically regulate the configurations of the communication parameters according to the network state and/or service state, so that the performance and applicability of the wireless communication system are improved. It is to be understood that the equipment 10 according to the embodiment of the present disclosure may correspondingly execute the method 100 for regulating the communication parameters in the embodiment of the present disclosure, and that the abovementioned and other operations and/or functions of the various modules in the equipment 10 are intended to implement the corresponding flows of the methods in FIG. 1 and FIG. 2 respectively; for simplicity, they will not be elaborated herein. It is to be noted that, in the embodiment of the present disclosure, the processing module 11 may be implemented by a processor, and the transceiver module 12 may be implemented by a receiver and a transmitter. As illustrated in FIG. 6, an equipment 100 may include a processor 101, a receiver 102, a transmitter 103 and a memory 104. Here, the memory 104 may be configured to store codes executed by the processor 101 and the like. The various components in the equipment 100 are coupled together through a bus system 105. Here, the bus system 105 includes a data bus, and further includes a power bus, a control bus and a state signal bus. 
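The split between the processing module (which regulates configurations) and the transceiver module (which sends the regulation indication) of the equipment 10 described above can be sketched structurally as follows. This is a hedged illustration only; the class, method names, and message shape are assumptions.

```python
# Structural sketch of equipment 10: a processing side that establishes and
# regulates configurations, and a transceiver side (a stand-in callable) that
# sends the communication parameter regulation indication information.

class RegulatingEquipment:
    def __init__(self, send):
        self.send = send        # transceiver module stand-in (e.g. a radio TX)
        self.config = {}

    def establish(self, preset_config):
        """Establish communication using the preset configurations."""
        self.config = dict(preset_config)

    def regulate(self, changes):
        """Processing module: apply the regulation, then indicate the result.

        Only the regulated parameters are sent, matching the option of sending
        just the regulation result rather than the full configuration set.
        """
        self.config.update(changes)
        self.send({"regulation_result": changes})
        return self.config
```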
It is to be understood that the equipment 100 according to the embodiment of the present disclosure may correspond to the equipment 10 in the embodiment of the present disclosure and to a corresponding execution main body in the method according to the embodiment of the present disclosure, and that the abovementioned and other operations and/or functions of the various modules in the equipment 100 are intended to implement the corresponding flows of each method in FIG. 1 and FIG. 2 respectively; for simplicity, they will not be elaborated herein. FIG. 7 is a schematic block diagram illustrating an equipment for regulating communication parameters according to another embodiment of the present disclosure. As illustrated in FIG. 7, the equipment 20 includes a processing module 21 and a transceiver module 22. The processing module 21 is configured to establish a communication with first equipment according to preset configurations of communication parameters. The transceiver module 22 is configured to receive communication parameter regulation indication information sent by the first equipment. The communication parameter regulation indication information may indicate a result of regulation performed by the first equipment on the configurations of one or more of the communication parameters according to a network state and/or service state in a communication process. In such a manner, the equipment for regulating the communication parameters in the embodiment of the present disclosure may receive the configurations of the communication parameters dynamically regulated, according to the network state and/or the service state, by the network equipment and/or terminal equipment communicating with the equipment, so that performance and applicability of a wireless communication system are improved. In an embodiment of the present disclosure, optionally, the transceiver module 22 is further configured to send processing capability indication information to the first equipment. 
The processing capability indication information may indicate that the equipment is capable of communicating with the first equipment by using the same parameter with different configurations. In an embodiment of the present disclosure, optionally, the transceiver module 22 is specifically configured to send an attach request message to the first equipment, and the attach request message includes the processing capability indication information. In an embodiment of the present disclosure, optionally, the transceiver module 22 is further configured to receive the preset configurations of the communication parameters from the first equipment. Here, the preset configurations of the communication parameters are determined by the first equipment according to at least one of: a condition of wireless channel transmission between the first equipment and the equipment, communication capabilities of the first equipment and the equipment, and a service type for which the equipment initiates random access. In an embodiment of the present disclosure, optionally, the transceiver module 22 is further configured to send parameter regulation request information to the first equipment for requesting regulation on the configurations of one or more of the communication parameters, so as to cause the first equipment to regulate the one or more of the communication parameters according to the parameter regulation request information. In an embodiment of the present disclosure, optionally, the transceiver module 22 is further configured to receive ACK information, which indicates an ACK of successful reception of the parameter regulation request information, from the first equipment. 
In an embodiment of the present disclosure, optionally, the transceiver module 22 is further configured to send state information, which indicates the network state and/or the service state, to the first equipment, so as to enable the first equipment to regulate the configurations of one or more of the communication parameters according to the state information. In an embodiment of the present disclosure, optionally, the transceiver module 22 is specifically configured to receive the parameter regulation indication information sent by the first equipment through a physical layer control channel. In an embodiment of the present disclosure, optionally, the communication parameters are multiple access manners for communication and/or basic physical layer parameters corresponding to the multiple access manners. In an embodiment of the present disclosure, optionally, the network state includes at least one of: a channel environment between the first equipment and the equipment, a load and interference of a communication network where the first equipment and the equipment are located, a requirement of an application on a data rate, and a requirement of the application on energy consumption. In an embodiment of the present disclosure, optionally, the first equipment is network equipment and the equipment is terminal equipment, or the first equipment is terminal equipment and the equipment is another terminal equipment. It is to be understood that the equipment 20 according to the embodiment of the present disclosure may correspondingly execute the method 200 for regulating the communication parameters in the embodiment of the present disclosure, and that the abovementioned and other operations and/or functions of the various modules in the equipment 20 are intended to implement the corresponding flows of each method in FIG. 3 and FIG. 4 respectively; for simplicity, they will not be elaborated herein. 
It is to be noted that, in the embodiment of the present disclosure, the processing module 21 may be implemented by a processor, and the transceiver module 22 may be implemented by a receiver and a transmitter. As illustrated in FIG. 8, an equipment 200 may include a processor 201, a receiver 202, a transmitter 203 and a memory 204. Here, the memory 204 may be configured to store codes executed by the processor 201 and the like. The various components in the equipment 200 are coupled together through a bus system 205. Here, the bus system 205 includes a data bus, and further includes a power bus, a control bus and a state signal bus. It is to be understood that the equipment 200 according to the embodiment of the present disclosure may correspond to the equipment 20 in the embodiment of the present disclosure and to a corresponding execution main body in the method according to the embodiment of the present disclosure, and that the abovementioned and other operations and/or functions of the various modules in the equipment 200 are intended to implement the corresponding flows of the methods in FIG. 3 and FIG. 4 respectively; for simplicity, they will not be elaborated herein. Those skilled in the art may realize that the units and algorithm steps of the various examples described in conjunction with the embodiments disclosed in the present disclosure may be implemented by electronic hardware, computer software or a combination of the two. Whether these functions are executed in a hardware or software manner depends on the specific applications and design constraints of the technical solutions. Those skilled in the art may realize the described functions for each specific application by virtue of different methods, but such realization shall fall within the scope of the present disclosure. 
Those skilled in the art may clearly appreciate that, for convenience and brevity of description, the specific working processes of the system, device and units described above may refer to the corresponding processes in the method embodiments and will not be elaborated herein. In some embodiments provided by the present disclosure, it is to be understood that the disclosed system, device and method may be implemented in other manners. For example, the device embodiment described above is only schematic. For example, division of the units is only logic function division, and other division manners may be adopted during practical implementation; for example, multiple units or components may be combined or integrated into another system, or some characteristics may be omitted or not executed. In addition, coupling or direct coupling or communication connection between the various components as illustrated or discussed may be indirect coupling or communication connection, implemented through some interfaces, of the device or the units, and may be electrical, mechanical or in other forms. The units described as separate parts may or may not be physically separated, and parts illustrated as units may or may not be physical units; namely, they may be located in the same place or may be distributed to multiple network units. Part or all of the units may be selected to achieve the purpose of the solutions of the embodiments according to a practical requirement. In addition, the various function units in the various embodiments of the present disclosure may be integrated into a processing unit, or the various units may exist independently, or two or more units may be integrated into one unit. When being implemented in the form of a software function unit and sold or used as an independent product, the function may also be stored in a computer-readable storage medium. 
Based on such an understanding, the technical solutions of the present disclosure substantially, or the parts thereof making contributions to the conventional art, or part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes a plurality of instructions configured to enable a computer device (which may be a personal computer, a server, network equipment or the like) to execute all or part of the steps of the method in each embodiment of the present disclosure. The abovementioned storage medium includes various media capable of storing program codes, such as a U disk, a mobile hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk. The above is only the specific implementation mode of the present disclosure and is not intended to limit the scope of protection of the present disclosure. Any variations or replacements apparent to those skilled in the art within the technical scope disclosed by the present disclosure shall fall within the scope of protection of the present disclosure. Therefore, the scope of protection of the present disclosure is defined by the scope of protection of the claims.
Corresponding reference characters indicate corresponding components throughout the several figures of the drawings. Elements in the several figures are illustrated for simplicity and clarity and have not necessarily been drawn to scale. For example, the dimensions of some of the elements in the figures might be emphasized relative to other elements to facilitate understanding of the various presently disclosed embodiments. In addition, common but well-understood elements that are useful or necessary in a commercially feasible embodiment are often not depicted in order to facilitate a less obstructed view of these various embodiments of the present disclosure. DETAILED DESCRIPTION In response to the issues described above, devices and methods are discussed herein that allow for managing networks with an energy-aware configuration. By generating and implementing energy-aware configurations, various embodiments can choose paths for traffic, and modify the configuration of devices within the network, in ways that have a better environmental impact. This is in contrast to traditional methods that attempt to steer traffic and configurations over the best-performing paths and devices in order to reduce metrics such as jitter, delay, latency, drop, etc. In various embodiments, however, other metrics such as power source type or energy efficiency can also be considered to generate network configurations that can be more environmentally sustainable. As described in more detail below, the energy-aware configurations can be generated based on an element energy coefficient (EEC) or a feature energy coefficient (FEC). These two coefficients can be generated to provide data sufficient to generate a more energy-aware configuration. In many embodiments, an element energy coefficient can be generated for each element within a network device, for a portion of elements, or for a grouping of elements as needed. 
Each element can provide its current state and energy usage data to a device that can generate an element energy coefficient. In similar embodiments, each device and/or element in a network device can be configured with various attributes or capabilities that may be sustainability-related. These capabilities can include operating at a lower power level or shutting down one or more elements or components when not needed or in use. A feature energy coefficient can be generated for each element, device, or portion/combination thereof that evaluates each combination of capabilities and their resultant energy usage. Once an EEC or FEC is generated, one or both can be utilized to generate an energy-aware configuration for the network. Other data may be considered, such as link aggregation data and other loop avoidance data, such as through spanning tree protocols, etc. Additional load balancing and configuration data can also be suitable as input for generating an energy-aware configuration. The generation of the energy-aware configuration can occur on a single device, such as the device that also generates the EEC and/or FEC, but the configuration may instead be generated by a separate and/or specialized device in communication with the network. Additionally, as those skilled in the art will recognize, these energy-aware configurations can be generated for various levels of topology, such as L2 and L3 topologies. As a result, topology data and other data relating to the network and the generation of the energy-aware configuration can span multiple levels of topologies. Upon generation, the energy-aware configuration can be broadcast or otherwise sent to the network to implement the determined modifications. For example, data traffic paths that may have previously been selected based only on the overall costs of transporting bits or the available bandwidth may instead be moved to a different router or network switch that is being powered by a sustainable power source.
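As a rough illustration of how per-element state and energy reports might be rolled up into an element energy coefficient, consider the following Python sketch. The report fields, the daily-energy weighting, and the use of a carbon-intensity factor are assumptions for illustration only; the disclosure does not prescribe a particular formula.

```python
from dataclasses import dataclass

@dataclass
class ElementReport:
    name: str                # e.g., "line-card-0" or "port-1/1" (hypothetical)
    state: str               # "active", "sleep", or "off"
    watts: float             # current power draw reported by the element
    carbon_intensity: float  # kg CO2e per kWh of the supplying power source

def element_energy_coefficient(reports):
    """Roll per-element reports up into a single coefficient.

    Lower values indicate a more sustainable element or element group;
    sleeping or powered-off elements contribute nothing.
    """
    return sum(
        r.watts * 24 / 1000 * r.carbon_intensity  # daily kWh x kg CO2e/kWh
        for r in reports
        if r.state == "active"
    )
```

A coefficient computed this way could be regenerated on a daily or yearly scale, or whenever an element's state changes, mirroring the time scales discussed below.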
Additionally, it may be determined that various paths within the network are not necessary based on the current traffic load and/or set of inputs. Therefore, one or more elements, such as a line card, port, etc., may enter a sleep mode, or shut down for a predetermined interval of time and/or in response to an event. Additionally, other elements may be utilized to generate or support the generation of an energy-aware configuration. By way of non-limiting example, a path computation element can be provided a plurality of inputs, including sustainability-related metrics, that can result in an output that can either be an energy-aware configuration, or be input into a process that generates an energy-aware configuration. In this way, one or more machine learning methods can be utilized in the generation of an energy-aware configuration. Aspects of the present disclosure may be embodied as an apparatus, system, method, or computer program product. Accordingly, aspects of the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, or the like) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “function,” “module,” “apparatus,” or “system.” Furthermore, aspects of the present disclosure may take the form of a computer program product embodied in one or more non-transitory computer-readable storage media storing computer-readable and/or executable program code. Many of the functional units described in this specification have been labeled as functions, in order to more particularly emphasize their implementation independence. For example, a function may be implemented as a hardware circuit comprising custom VLSI circuits or gate arrays, off-the-shelf semiconductors such as logic chips, transistors, or other discrete components.
A function may also be implemented in programmable hardware devices such as via field programmable gate arrays, programmable array logic, programmable logic devices, or the like. Functions may also be implemented at least partially in software for execution by various types of processors. An identified function of executable code may, for instance, comprise one or more physical or logical blocks of computer instructions that may, for instance, be organized as an object, procedure, or function. Nevertheless, the executables of an identified function need not be physically located together but may comprise disparate instructions stored in different locations which, when joined logically together, comprise the function and achieve the stated purpose for the function. Indeed, a function of executable code may include a single instruction, or many instructions, and may even be distributed over several different code segments, among different programs, across several storage devices, or the like. Where a function or portions of a function are implemented in software, the software portions may be stored on one or more computer-readable and/or executable storage media. Any combination of one or more computer-readable storage media may be utilized. A computer-readable storage medium may include, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing, but would not include propagating signals. In the context of this document, a computer readable and/or executable storage medium may be any tangible and/or non-transitory medium that may contain or store a program for use by or in connection with an instruction execution system, apparatus, processor, or device. 
Computer program code for carrying out operations for aspects of the present disclosure may be written in any combination of one or more programming languages, including an object-oriented programming language such as Python, Java, Smalltalk, C++, C#, Objective C, or the like, conventional procedural programming languages, such as the “C” programming language, scripting programming languages, and/or other similar programming languages. The program code may execute partly or entirely on one or more of a user's computer and/or on a remote computer or server over a data network or the like. A component, as used herein, comprises a tangible, physical, non-transitory device. For example, a component may be implemented as a hardware logic circuit comprising custom VLSI circuits, gate arrays, or other integrated circuits; off-the-shelf semiconductors such as logic chips, transistors, or other discrete devices; and/or other mechanical or electrical devices. A component may also be implemented in programmable hardware devices such as field programmable gate arrays, programmable array logic, programmable logic devices, or the like. A component may comprise one or more silicon integrated circuit devices (e.g., chips, die, die planes, packages) or other discrete electrical devices, in electrical communication with one or more other components through electrical lines of a printed circuit board (PCB) or the like. Each of the functions and/or modules described herein, in certain embodiments, may alternatively be embodied by or implemented as a component. A circuit, as used herein, comprises a set of one or more electrical and/or electronic components providing one or more pathways for electrical current. In certain embodiments, a circuit may include a return pathway for electrical current, so that the circuit is a closed loop. 
In another embodiment, however, a set of components that does not include a return pathway for electrical current may be referred to as a circuit (e.g., an open loop). For example, an integrated circuit may be referred to as a circuit regardless of whether the integrated circuit is coupled to ground (as a return pathway for electrical current) or not. In various embodiments, a circuit may include a portion of an integrated circuit, an integrated circuit, a set of integrated circuits, a set of non-integrated electrical and/or electrical components with or without integrated circuit devices, or the like. In one embodiment, a circuit may include custom VLSI circuits, gate arrays, logic circuits, or other integrated circuits; off-the-shelf semiconductors such as logic chips, transistors, or other discrete devices; and/or other mechanical or electrical devices. A circuit may also be implemented as a synthesized circuit in a programmable hardware device such as field programmable gate array, programmable array logic, programmable logic device, or the like (e.g., as firmware, a netlist, or the like). A circuit may comprise one or more silicon integrated circuit devices (e.g., chips, die, die planes, packages) or other discrete electrical devices, in electrical communication with one or more other components through electrical lines of a printed circuit board (PCB) or the like. Each of the functions and/or modules described herein, in certain embodiments, may be embodied by or implemented as a circuit. Reference throughout this specification to “one embodiment,” “an embodiment,” or similar language means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. 
Thus, appearances of the phrases “in one embodiment,” “in an embodiment,” and similar language throughout this specification may, but do not necessarily, all refer to the same embodiment, but mean “one or more but not all embodiments” unless expressly specified otherwise. The terms “including,” “comprising,” “having,” and variations thereof mean “including but not limited to”, unless expressly specified otherwise. An enumerated listing of items does not imply that any or all of the items are mutually exclusive and/or mutually inclusive, unless expressly specified otherwise. The terms “a,” “an,” and “the” also refer to “one or more” unless expressly specified otherwise. Further, as used herein, reference to reading, writing, storing, buffering, and/or transferring data can include the entirety of the data, a portion of the data, a set of the data, and/or a subset of the data. Likewise, reference to reading, writing, storing, buffering, and/or transferring non-host data can include the entirety of the non-host data, a portion of the non-host data, a set of the non-host data, and/or a subset of the non-host data. Lastly, the terms “or” and “and/or” as used herein are to be interpreted as inclusive or meaning any one or any combination. Therefore, “A, B or C” or “A, B and/or C” mean “any of the following: A; B; C; A and B; A and C; B and C; A, B and C.” An exception to this definition will occur only when a combination of elements, functions, steps, or acts are in some way inherently mutually exclusive. Aspects of the present disclosure are described below with reference to schematic flowchart diagrams and/or schematic block diagrams of methods, apparatuses, systems, and computer program products according to embodiments of the disclosure. 
It will be understood that each block of the schematic flowchart diagrams and/or schematic block diagrams, and combinations of blocks in the schematic flowchart diagrams and/or schematic block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a computer or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor or other programmable data processing apparatus, create means for implementing the functions and/or acts specified in the schematic flowchart diagrams and/or schematic block diagrams block or blocks. It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. Other steps and methods may be conceived that are equivalent in function, logic, or effect to one or more blocks, or portions thereof, of the illustrated figures. Although various arrow types and line types may be employed in the flowchart and/or block diagrams, they are understood not to limit the scope of the corresponding embodiments. For instance, an arrow may indicate a waiting or monitoring period of unspecified duration between enumerated steps of the depicted embodiment. In the following detailed description, reference is made to the accompanying drawings, which form a part thereof. The foregoing summary is illustrative only and is not intended to be in any way limiting. In addition to the illustrative aspects, embodiments, and features described above, further aspects, embodiments, and features will become apparent by reference to the drawings and the following detailed description. 
The description of elements in each figure may refer to elements of preceding figures. Like numbers may refer to like elements in the figures, including alternate embodiments of like elements. Referring to FIG. 1, a conceptual diagram of a network suitable for energy-aware traffic forwarding and loop avoidance in accordance with an embodiment of the disclosure is shown. The network 100 can include a plurality of devices, e.g., routers 110, 130, 140, and 150, which can be in communication with each other and/or a remote server, such as a cloud-based server 120. The network 100 depicted in FIG. 1 is shown as a simplified, conceptual network. Those skilled in the art will understand that a network 100 can include a large variety of devices and be arranged in a virtually limitless number of combinations based on the desired application and available deployment environment. Additionally, it is recognized that the terms “power” and “energy” are often used interchangeably in many colloquial settings but have distinct differences. Specifically, energy is accepted as the capacity of a system or device to do work (such as in kilowatt-hours (kWh)), while power is the rate at which energy is transferred (often in watts (W)). Power represents how fast energy is being used or produced. With this in mind, it should be understood that various elements of the present disclosure may utilize common terms like “power lines,” “power grids,” “power source,” “power consumption,” and “power plant” when describing energy delivery and utilization, even though those skilled in the art will recognize that those elements are delivering or processing energy (specifically electricity) at a certain rate of power. References to these terms are utilized herein specifically to increase the ease of reading. Traditionally, devices operating within a network 100 have not considered various aspects of operation that can relate to the overall sustainability of the network.
For example, devices in communication networks have often used grid-supplied energy as a primary power source. This grid-supplied energy can regularly provide energy that has been generated by a negative environmental impacts-heavy power source such as a coal-powered power plant. However, modern power grids often have more diverse and cleaner energy sources for the energy they provide. Some devices can still be powered by power sources that utilize fossil fuels, such as the router R4 140 as depicted in FIG. 1. Alternatively, some devices can operate by using renewable sources of energy, such as the router R3 150, which is conceptually depicted as being powered by solar power. Those skilled in the art will recognize that the generation of electricity within the various power plants often creates some pollution or, more generally, one or more negative environmental impacts, which can often come in the form of emissions. However, these negative environmental impacts can come in a variety of forms including, but not limited to, land use, ozone depletion, ozone formation inhibition, acidification, eutrophication (freshwater, marine, and terrestrial), abiotic resource depletion (minerals, metals, and fossil fuels), toxicity, water use, negative soil quality change, ionizing radiation, hazardous waste creation, etc. As such, these negative environmental impact measurements can be measured with specific units to quantify these changes. Various aspects of energy use can be associated with one or more of these negative environmental impacts and classified as one or more sustainability-related attributes. In the embodiment depicted in FIG. 1, the operation of a coal-powered power plant will create a sizeable amount of negative environmental impacts in the form of carbon emissions and the like.
Contrast that with a solar array, which may not create emissions when generating electricity but may have negative environmental impacts, such as carbon emission generation, associated with the production and/or disposal of the solar array. Various methods of measuring these negative environmental impacts may be utilized. One measurement may be to examine the waste products created by the power generated (such as nuclear waste vs. solar array e-waste, etc.). Another measurement of negative environmental impacts that can be utilized when comparing power sources is to determine the amount of greenhouse or carbon emissions released per unit of electricity generated. Specifically, various embodiments described herein may utilize the CO2e kg/kWh metric, which measures the kilograms of carbon dioxide equivalent gases released into the environment per kilowatt-hour of electricity produced. Therefore, when discussing a negative environmental impacts-heavy power source compared to a clean(er) power source, the clean power source can, for example, have a better CO2e kg/kWh rating compared to the negative environmental impacts-heavy power source. Utilizing a cleaner power source thus provides for a more sustainable network operation. In order to maximize the overall sustainability of a network, it may be desirable to increase the use of cleaner power sources with a lower overall negative environmental impact as opposed to power sources with a higher overall negative environmental impact when operating the network. Thus, there can be a need to be aware of the source of energy provided at each device along the route of data travel. Additionally, other factors such as the attributes unique to each device can be factored in, along with the current and/or expected traffic, etc. Once known, an optimal path for the data may need to be calculated. As discussed in more detail below, this path algorithm can be utilized to better optimize the locations selected within a network for data travel.
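Under this metric, comparing two power sources reduces to comparing their carbon-intensity ratings. The values in the sketch below are rough, publicly cited lifecycle estimates used purely for illustration; they are not figures from this disclosure.

```python
# Approximate lifecycle carbon intensities in kg CO2e per kWh
# (illustrative values only, not from the disclosure).
CARBON_INTENSITY = {
    "coal": 0.82,
    "natural_gas": 0.49,
    "solar": 0.05,  # includes production/disposal emissions of the array
    "wind": 0.01,
}

def cleaner_source(a: str, b: str) -> str:
    """Return whichever power source has the better CO2e kg/kWh rating."""
    return a if CARBON_INTENSITY[a] <= CARBON_INTENSITY[b] else b
```

A rating table of this kind could supply the per-source weighting used when preferring one device's power source over another's along a data path.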
Other methods may be utilized to increase sustainability in network operations. In many embodiments, the network devices themselves may have one or more features or other capabilities that can allow for a more efficient operation. For example, a network router may be operated in a lower power mode or be powered off entirely for a specific period of time or until an event occurs. Additional embodiments may utilize various other power-saving capabilities that can be turned on or off remotely or in response to an event or predetermined threshold being exceeded. Often, these operations can be performed in scenarios where network performance will not be affected, or is affected such that no loss in user experience occurs. By utilizing less power during operation, a higher level of sustainability can be achieved. Together, the type of power source providing electricity to a network device, along with the various sustainability-related capabilities of the router, can be understood as the sustainability-related attributes of that network device. During operation, one or more devices within the network may seek and collect the sustainability-related attributes of various network devices, which can provide insight into both the type of power source providing power to each device and the various capabilities of the network device that may be activated to provide more efficient operation. Additionally, when generating various scores, metrics, or other evaluations of the network devices within a network 100, the sustainability-related attributes can vary based on a variety of factors such as the time of day, current network traffic, expected network traffic, and historical usage patterns. For example, a network router may receive energy from a solar power source during the day but receive energy from a coal-powered power plant at night. In these instances, an averaged score may be used, or a unique score may be generated at the time of operation.
In another example, network traffic may be such that removing one or more network devices from the optimal sustainable data paths may negatively affect user experiences, such as when a sporting event occurs. As such, scores may be generated at numerous times depending on the desired application. Often, the act of measurement may itself negatively affect sustainability, such that the proper number of measurements for a given outcome may need to be determined. Although a specific embodiment for a network 100 is described above with respect to FIG. 1, any of a variety of systems and/or processes may be utilized in accordance with embodiments of the disclosure. For example, the network could be broken into a plurality of partitions, wherein each partition could have specific needs, service level agreements, etc. that can alter sustainability-optimization. The elements depicted in FIG. 1 may also be interchangeable with other elements of FIGS. 2-10 as required to realize a particularly desired embodiment. Augmented protocols to carry out these described processes are described below. Referring to FIG. 2, a conceptual diagram of an element energy coefficient (EEC) 250 in accordance with an embodiment of the disclosure is shown.
In many embodiments, the element energy coefficient 250 can be a multi-layer network metric generated by collecting and analyzing L2 and L3 topologies along with sustainability-related attributes. By collecting this data, the EEC 250 can be configured to be utilized for the generation of an energy-aware configuration. The collected data can be combined and consolidated in order to focus on a selection of various energy-efficient, or energy-aware, priorities. Generally, an EEC 250 can be generated based on the energy source and/or power consumption of each device within the network. The embodiment depicted in FIG. 2 shows that the generated EEC 250 has been computed for the path through three routers R1 210, R2 220, and R3 230. Subsequently, an additional EEC can be generated for the path from R1 210 to R4 240. In various embodiments, these generated EECs can be compared against each other when generating an energy-aware configuration. The EEC 250 for each device can be generated based on a variety of factors. For example, the embodiment depicted in FIG. 2 shows that energy usage data is available on both a daily and yearly scale. As those skilled in the art will recognize, the EEC 250 can be generated based on a number of factors depending on the availability of the data and the application desired. It is contemplated that other data inputs and scales of time can be utilized. For example, the EEC can also consider data related to the type of power source, such as the different power sources depicted in FIG. 1. In additional embodiments, the EEC 250 can be generated for each individual network device, or for elements within those devices such as, but not limited to, line cards, ports, links, potential link aggregation groups, etc. As discussed in more detail below, various embodiments can utilize this data within a separate logic, sometimes in coordination with other input data, to generate an energy-aware configuration for one or more devices within the network.
In some embodiments, the EEC 250 can be generated continuously and passed to a receiving logic. However, in further embodiments, the EEC 250 can be generated after a predetermined period of time, or in response to an event such as, but not limited to, a request from an external device. Referring to FIG. 3, a conceptual diagram of a feature energy coefficient 320 in accordance with an embodiment of the disclosure is shown. In a number of embodiments, the feature energy coefficient (FEC) 320 can be generated based on the energy consumption for different feature combinations. Many network devices can be equipped with a variety of different features or capabilities that can be turned on or off. Often, these network devices can be equipped with a plurality of different sustainability-related capabilities. By way of non-limiting example, a router may be equipped with the capability to turn a particular line card or port on or off. Alternatively, it may also enter a lower power or “sleep” mode. Other sustainability-related capabilities may be present and can be turned on or off in a variety of combinations. Thus, an FEC 320 can be generated for each possible combination. However, in certain embodiments, the FEC 320 may be limited to being generated for a limited set of feature combinations, such as those that are most likely to yield positive results, or those that are typically used by neighboring network devices. In the embodiment depicted in FIG. 3, a router 310 has a plurality of features that can be turned off and on. Specifically, there are two features, feature 1 and feature 2, for which an energy consumption chart is calculated, which can be representative of an estimated power consumption over a given time. In additional embodiments, each feature can be a sustainability-related capability that, when activated, provides increased operational efficiency, a reduced-power mode, and/or a shut-down mode, among others.
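A minimal sketch of enumerating feature combinations to estimate a feature energy coefficient might look as follows. The multiplicative per-feature savings model and all names here are assumptions for illustration, not the disclosure's prescribed computation.

```python
from itertools import combinations

def feature_energy_coefficients(base_watts, feature_savings):
    """Estimate power draw for each on/off combination of features.

    feature_savings maps a feature name to the fraction of base power
    saved when that capability (e.g., a sleep mode) is enabled; savings
    are assumed to compose multiplicatively.
    """
    names = list(feature_savings)
    fec = {}
    for r in range(len(names) + 1):
        for combo in combinations(names, r):
            draw = base_watts
            for feature in combo:
                draw *= 1 - feature_savings[feature]
            fec[combo] = round(draw, 3)
    return fec
```

In practice, only a subset of combinations (for example, those most likely to yield positive results, or those used by neighboring devices) might be evaluated, as noted above, since the number of combinations grows exponentially with the number of features.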
The FEC 320 can be configured to calculate the energy results of engaging a variety of combinations of the available features or capabilities of the given device, such as the router 310. In some embodiments, the FEC can be calculated for all possible combinations of features. However, in additional embodiments, only a certain number of combinations may be calculated. In still more embodiments, one or more machine learning methods may be utilized to converge on potential combinations to calculate, wherein the machine learning processes can be trained on an administrator-provided training set and/or historical data. Referring to FIG. 4, a conceptual block diagram of a network device 400 suitable for energy-aware traffic forwarding and loop avoidance in accordance with an embodiment of the disclosure is shown. The device 400 can include a processor (not shown) and a memory (not shown) communicatively coupled to the processor, which can execute a plurality of logics including an energy-aware topology logic 450 and an orchestrator logic 460. Typically, a number of elements are present, which can include, but are not limited to, communication ports (not shown) which can be configured to connect to various networks 410. In a number of embodiments, a network 410 will comprise a plurality of devices such as switches, routers, etc. The network 410 can generate or otherwise comprise an augmented multi-layer topology 420. In various embodiments, this augmented topology can include various types of data including sustainability-related attributes/capabilities, energy source data, and/or power consumption data. Based on that augmented multi-layer topology data 420, one or more EECs 430 and FECs 440 can be generated and provided to the device 400. Specifically, the EECs 430 and FECs 440 can be passed to an energy-aware topology logic 450. During operation of the network 410, various data may be collected, captured, or otherwise provided to the device 400.
In the embodiment depicted in FIG. 4, the device 400 can process link aggregation data and/or multichassis link aggregation group control plane data (shown as LAG/MC-LAG control plane 470). The embodiment of the device 400 is also shown processing loop avoidance data 480, and load balancing status and configuration data 490. Loop avoidance data can be associated with one or more spanning tree protocols (STP, RSTP, MSTP, etc.). Load balancing status and configuration data can indicate what type of load balancing is used (ECMP, etc.) and the current status of one or more devices in the network 410 in relation to that configuration. In many embodiments, the energy-aware topology logic 450 can receive all of the available data to generate an energy-aware configuration that can be sent or otherwise provided to the network 410 to modify one or more aspects of that network 410. The energy-aware configuration can be configured to operate the network 410 in a manner that can take sustainability-related metrics into account. In some embodiments, this can include modifying the paths of traffic flow such that network devices that are powered with a sustainable power source (solar, wind, geothermal, etc.) are preferred compared to network devices powered by more polluting sources of power (coal, gas, etc.). In more embodiments, the energy-aware configurations can activate or deactivate certain sustainability-related attributes/capabilities within devices of the network 410. By way of non-limiting example, certain devices may be powered down, or put in a lower power “sleep” mode. In certain embodiments, the device 400 may include an orchestrator logic 460 which can direct various operations related to the energy-aware configuration. For example, the orchestrator logic 460 can collect data needed to generate the energy-aware configuration.
In further embodiments, the orchestrator logic 460 can pass, broadcast, or otherwise direct the generated energy-aware configuration to the network 410 for implementation. As those skilled in the art will recognize, the orchestrator logic 460 may be executed within a controller (not shown) for the device 400. Although a specific embodiment for a device 400 suitable for energy-aware configuration generation is described above with respect to FIG. 4, any of a variety of systems and/or processes may be utilized in accordance with embodiments of the disclosure. For example, the device 400 may be configured with one or more controllers that direct various elements, logics, or other components of the device 400 to perform various operations. The elements depicted in FIG. 4 may also be interchangeable with other elements of FIGS. 1-3 and 5-10 as required to realize a particularly desired embodiment. An alternative method of generating an energy-aware configuration via a path computation element is shown below. Referring to FIG. 5, a conceptual illustration of a path computation element 510 suitable for energy-aware traffic forwarding and loop avoidance in accordance with an embodiment of the disclosure is shown. In some embodiments, an element can determine one or more paths suitable for energy-aware traffic forwarding, load balancing, etc. These path computation elements 510 can be utilized in conjunction with, or in place of, the generation of EECs or FECs described above. In the embodiment depicted in FIG. 5, the path computation element 510 can generate an energy-aware configuration 520 based on a number of inputs 530-570. In certain embodiments, the output of the path computation element 510 may be utilized as another input to an energy-aware topology logic which can then generate an energy-aware configuration. The inputs that can be utilized may include, but are not limited to, topology data 530, EECs 540, FECs 550, traffic load data 560, and/or power source type data 570.
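One way such a path computation element could combine these inputs is a shortest-path search weighted by a per-node energy cost. The graph shape, the cost blend (an element energy coefficient scaled by the carbon intensity of the node's power source), and every name below are illustrative assumptions rather than the disclosure's prescribed algorithm.

```python
import heapq

def energy_aware_path(graph, eec, source_type, intensity, start, goal):
    """Return (path, cost) for the lowest-carbon path from start to goal.

    graph: node -> iterable of neighboring nodes
    eec: node -> element energy coefficient (e.g., kWh/day)
    source_type: node -> power source name
    intensity: power source name -> kg CO2e per kWh
    """
    def node_cost(node):
        return eec[node] * intensity[source_type[node]]

    frontier = [(node_cost(start), start, [start])]
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, cost
        if node in visited:
            continue
        visited.add(node)
        for neighbor in graph[node]:
            if neighbor not in visited:
                heapq.heappush(
                    frontier, (cost + node_cost(neighbor), neighbor, path + [neighbor])
                )
    return None, float("inf")
```

Under this weighting, a solar-powered router is preferred over a coal-powered one even when both offer comparable bandwidth, mirroring the FIG. 1 example.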
In various embodiments, the topology data 530 can be related to understanding the L2, L3, or other topologies of a given network, or a portion of the network. In additional embodiments, EECs 540 and FECs 550 may be similar to what is described above with reference to FIGS. 2-3. In further embodiments, the traffic load data 560 can be data related to the amount of traffic that is traversing the network devices within a given network. In more embodiments, the traffic load data 560 can be associated with the input and/or output traffic of the device associated with the path computation element 510. Finally, the power source type data 570 can be configured to indicate what type of power source is supplying the power being provided to one or more network devices within the network. Referring to FIG. 6, a flowchart depicting a process 600 for managing a network with sustainability-related energy usage measurements in accordance with an embodiment of the disclosure is shown. In a number of embodiments, the process 600 can collect topology data (block 610). As discussed above, the topology data can be collected in a variety of ways and may involve data related to multiple levels of the topology. The topology data may be associated with an entire network or a portion/partition of a network. In more embodiments, the process 600 can receive an element energy coefficient (EEC) (block 620). As discussed above, the EEC may include data related to energy consumption and/or power source type. The EEC can be calculated for one or more elements within a device. In some embodiments, the EEC may be generated for all elements within a device, and each device within the network may have elements that require the generation of an EEC. However, in certain embodiments, the EEC may only be generated for a portion of the elements.
This portion of elements may be selected for EEC generation based on, in part, historical data, a pre-determined selection, devices that satisfy one or more predetermined thresholds, and/or elements selected via one or more machine learning methods, etc. In various embodiments, the process 600 can generate an energy-aware configuration (block 630). The energy-aware configuration can be generated based at least on the received EEC data. As described in more detail above, the energy-aware configuration can be configured to modify one or more of the network devices within a network. The generated energy-aware configuration can be subsequently passed to at least one network device within the network (block 640). In most embodiments, the modifications can allow for a more sustainable operation of the network. This can often mean that less power is utilized to operate the network and/or cleaner sources of power are utilized. In still more embodiments, the generation of the EEC can be coupled with other types of data to create an even more robust energy-aware configuration. Although a specific embodiment for a process 600 to manage a network with EEC data is described above with respect to FIG. 6, any of a variety of systems and/or processes may be utilized in accordance with embodiments of the disclosure. For example, the process 600 may be initiated in response to a request for generation or after a predetermined time interval occurs. The aspects described in FIG. 6 may also be interchangeable with other elements of FIGS. 1-5, and 7-10 as required to realize a particularly desired embodiment. Managing a network based on FEC data is described below. Referring to FIG. 7, a flowchart depicting a process 700 for managing a network with sustainability-related capability measurements in accordance with an embodiment of the disclosure is shown. In a number of embodiments, the process 700 can collect topology data (block 710).
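The flow of process 600 — collect topology data, receive per-element EECs, generate a configuration, and pass it onward — might be sketched as follows. The data shapes, the threshold, and the "sleep"/"active" modes are assumptions made purely for illustration:

```python
# Minimal sketch of process 600: given topology data and per-element
# energy coefficients (EECs), emit an energy-aware configuration that
# marks high-coefficient elements for a low-power mode.

def generate_energy_aware_config(topology, eecs, threshold=5.0):
    """Return an assumed per-element mode map; higher EEC -> 'sleep'."""
    return {
        element: ("sleep" if coeff > threshold else "active")
        for element, coeff in eecs.items()
        if element in topology  # only configure known elements
    }

topology = {"port1", "port2", "asic0"}          # collected (block 610)
eecs = {"port1": 7.2, "port2": 1.1, "asic0": 4.0}  # received (block 620)
config = generate_energy_aware_config(topology, eecs)  # block 630
# config would then be passed to at least one network device (block 640)
```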
As discussed above, the topology data can be collected in a variety of ways and may involve data related to multiple levels of the topology. The topology data may be associated with an entire network or a portion/partition of a network. In further embodiments, the process 700 can receive sustainability-related attributes of one or more network devices (block 720). As discussed previously, each device within a network can have a plurality of features, attributes, and/or capabilities that can be turned on or off based on the desired application. Often, these capabilities can be remotely activated or deactivated, such as through received configuration data. In more embodiments, one or more of the plurality of capabilities can be sustainability related. In still further embodiments, the capabilities can be broadcast or otherwise transmitted to other devices on the network via a bitmap or other messaging means. In additional embodiments, the process 700 can receive a feature energy coefficient (FEC) (block 730). As detailed above within the discussion of FIG. 3, FECs can indicate the potential power usage associated with a variety of feature combinations on a per-element or per-device basis. In many embodiments, the FEC is generated based on the received sustainability-related attributes of the device and/or elements under analysis. The process 700 can subsequently generate an energy-aware configuration (block 740). The energy-aware configuration can be configured to remotely activate and/or deactivate a number of capabilities, attributes, and/or features within at least one network device or element within the network. When the energy-aware configuration has been generated, the process 700 can pass that configuration to at least one network device (block 750).
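As a rough illustration of how an FEC could enumerate the potential power usage of feature combinations on a per-element basis, the following sketch builds a lookup table over all combinations of optional features. The feature names and per-feature wattages are invented for the example and are not part of the disclosure:

```python
from itertools import combinations

# Sketch of a feature energy coefficient (FEC) table: estimated power
# draw for each combination of optional features on one element.
FEATURE_WATTS = {"macsec": 3.0, "telemetry": 1.5, "poe": 12.0}  # assumed

def fec_table(base_watts=20.0):
    """Map each feature combination (as a frozenset) to estimated watts."""
    feats = list(FEATURE_WATTS)
    table = {}
    for r in range(len(feats) + 1):
        for combo in combinations(feats, r):
            table[frozenset(combo)] = base_watts + sum(
                FEATURE_WATTS[f] for f in combo
            )
    return table

table = fec_table()
```

A configuration generator could then consult such a table to decide which features to deactivate on an element.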
As those skilled in the art will appreciate, the energy-aware configuration can be deployed and implemented by many devices within a network, such as those that have one or more capabilities or attributes modified by the energy-aware configuration. Although a specific embodiment for a process 700 to manage a network with FEC data is described above with respect to FIG. 7, any of a variety of systems and/or processes may be utilized in accordance with embodiments of the disclosure. For example, the process 700 may be initiated in response to a request for generation or after a predetermined time interval occurs. Additionally, the FEC may, in certain embodiments, be generated in the same device that generates the energy-aware configuration, thus negating the need to receive the FEC data. The aspects described in FIG. 7 may also be interchangeable with other elements of FIGS. 1-6, and 8-10 as required to realize a particularly desired embodiment. Managing a network based on both EEC and FEC data is described below. Referring to FIG. 8, a flowchart depicting a process 800 for managing an energy-aware network based on a plurality of sustainability-related data in accordance with an embodiment of the disclosure is shown. In a number of embodiments, the process 800 can collect topology data (block 810). As discussed above, the topology data can be collected in a variety of ways and may involve data related to multiple levels of the topology. The topology data may be associated with an entire network or a portion/partition of a network. In further embodiments, the process 800 can receive sustainability-related attributes of one or more network devices (block 820). As discussed previously, each device within a network can have a plurality of features, attributes, and/or capabilities that can be turned on or off based on the desired application. Often, these capabilities can be remotely activated or deactivated, such as through received configuration data.
In more embodiments, one or more of the plurality of capabilities can be sustainability related. In still further embodiments, the capabilities can be broadcast or otherwise transmitted to other devices on the network via a bitmap or other messaging means. In still more embodiments, the process 800 may receive service level objective (SLO) data (block 830). An SLO can often be related to a service-level agreement (SLA) between a service provider and a customer. As discussed above, an SLO can be an agreed upon method of measuring the performance of the service provider such that disputes between the service provider and the customer can be avoided. The SLO may, in certain embodiments, comprise one or more quality of service (QoS) measurements. In yet additional embodiments, the process 800 can receive load balancing status and configuration data (block 840). Often, load balancing can be accomplished through one or more methods, such as, but not limited to, ECMP as described above. Details about this type of load balancing, or the type of load balancing being utilized within one or more network devices, can be compiled into load balancing status and configuration data. In this way, the process 800 can acquire visibility into the current state of the load balancing. In even more embodiments, the process 800 can receive loop avoidance information data (block 850). As those skilled in the art will recognize, various methods can be utilized to avoid and mitigate looping within a network. Data related to those methods can be transmitted out to various network devices, such as a network device that can generate an energy-aware configuration. In many embodiments, the loop avoidance data can include, but is not limited to, the current protocols utilized (STP and variants, etc.), as well as the current spine and leaf switching architectures, etc. In this way, the device generating the energy-aware configuration can configure it to maintain a loop avoidance architecture.
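The capability bitmap mentioned above could, for example, assign each sustainability-related capability a bit position so that support can be advertised compactly. The capability names and bit ordering below are assumptions for illustration:

```python
# Sketch of broadcasting sustainability-related capabilities as a bitmap.
# Bit positions follow an assumed ordering, not one defined by the text.
CAPABILITIES = ["sleep_mode", "eee", "port_power_down", "fan_throttle"]

def encode_capabilities(supported):
    """Pack a set of supported capability names into an integer bitmap."""
    bits = 0
    for i, name in enumerate(CAPABILITIES):
        if name in supported:
            bits |= 1 << i
    return bits

def decode_capabilities(bits):
    """Unpack a bitmap back into the set of capability names."""
    return {name for i, name in enumerate(CAPABILITIES) if bits & (1 << i)}

bitmap = encode_capabilities({"sleep_mode", "fan_throttle"})
```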
In still additional embodiments, the process 800 can generate an EEC (block 860). As discussed above, the EEC may include data related to energy consumption and/or power source type. An EEC may be generated for each element within a device. However, in certain embodiments, the EEC may only be generated for a portion of the elements. This portion of elements may be selected for EEC generation based on, in part, historical data, a pre-determined selection, devices that satisfy one or more predetermined thresholds, and/or elements selected via one or more machine learning methods, etc. In some embodiments, the generation of the EEC can be based on all received data available to the process 800 as discussed above. In yet further embodiments, the process 800 can receive an FEC (block 870). As detailed above within the discussion of FIG. 3, FECs can indicate the potential power usage associated with a variety of feature combinations on a per-element or per-device basis. In many embodiments, the FEC is generated based on the received sustainability-related attributes of the device and/or elements under analysis. In various embodiments, the process 800 can generate an energy-aware configuration based on the previously received data and the generated EECs and FECs (block 880). Again, the energy-aware configuration can be configured to modify one or more of the network devices within a network. Typically, the energy-aware configuration is generated based on one or more sustainability goals such as, but not limited to, reducing power, selecting traffic paths based on the power source types that are powering the network devices, etc. Upon generation, the process 800 can pass the energy-aware configuration to at least one network device within the network (block 890). Upon enactment of the energy-aware configuration, the network can begin functioning in a more sustainable mode of operation.
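One way process 800 could reconcile FEC data with SLO data is to pick, for an element, the lowest-power feature combination whose estimated performance still meets the SLO. The option tuples, wattages, and throughput metric below are illustrative assumptions, not part of the disclosure:

```python
# Sketch: choose the feature set with the lowest estimated power (FEC)
# that still satisfies an SLO-derived minimum throughput.

def pick_feature_set(fec_options, slo_min_throughput):
    """fec_options: list of (features, watts, throughput_estimate) tuples.

    Returns the feasible option with the lowest wattage, or None if no
    option satisfies the SLO constraint.
    """
    feasible = [o for o in fec_options if o[2] >= slo_min_throughput]
    if not feasible:
        return None
    return min(feasible, key=lambda o: o[1])

options = [({"eee"}, 18.0, 40.0),          # energy-efficient Ethernet on
           (set(), 25.0, 100.0),           # all power-saving features off
           ({"eee", "sleep"}, 12.0, 10.0)] # deepest saving, slowest
choice = pick_feature_set(options, slo_min_throughput=30.0)
```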
Although a specific embodiment for a process 800 to manage a network by generating an energy-aware configuration with various received and generated data is described above with respect to FIG. 8, any of a variety of systems and/or processes may be utilized in accordance with embodiments of the disclosure. For example, the process 800 may receive an EEC or FEC that is externally generated by a different network device. In this way, the energy-aware configuration can still be generated in the same manner and with the same data. The aspects described in FIG. 8 may also be interchangeable with other elements of FIGS. 1-7, and 9-10 as required to realize a particularly desired embodiment. Managing a network device that can be modified via an energy-aware configuration is described below. Referring to FIG. 9, a flowchart depicting a process 900 for operating a network device within an energy-aware network in accordance with an embodiment of the disclosure is shown. In a number of embodiments, the process 900 can provide network device data (block 910). The network device data can include, but is not limited to, topology data, loop avoidance data, data related to the current configuration of the device, the number and types of elements within the device, the number of attributes/features/capabilities available, the current configuration of those capabilities, which capabilities are sustainability-related capabilities, and power consumption data. It is contemplated that any data that can be utilized to generate a more efficient energy-aware configuration can be gathered and provided to another device within the network. The process 900 can subsequently receive an energy-aware configuration (block 920). In additional embodiments, the energy-aware configuration can be similar to the configurations described in the discussions of FIGS. 6-8 as well as other potential embodiments. Upon reception, the process 900 can parse the energy-aware configuration (block 930).
Parsing may be required within embodiments that have an energy-aware configuration that includes modifications to multiple types of devices or elements, or that may include various rules or heuristics of when to implement the configuration. When the energy-aware configuration has been parsed, the process 900 can subsequently modify one or more elements within a device based on the energy-aware configuration (block 940). As discussed above, the modification can include a variety of actions. However, typically an energy-aware configuration will modify one or more sustainability-related capabilities of the device, reduce or stop using power, wait to use power until a more sustainable power source is available, pass traffic to more sustainable devices within the network, etc. In more embodiments, the process 900 can update the network data (block 950), often in response to the modifications based on the received energy-aware configuration. In certain embodiments, the network data can be stored as a particular set of data within the device, which can be updated in response to events such as upon modification of various elements or capabilities of the device. In optional embodiments, the process 900 can receive a request to provide updated network device data (block 960). For example, the device that generates the energy-aware configuration can poll various devices to determine if the network is in a sufficient state. In further embodiments, a request may be received for the current network device data after a predetermined time interval, or in response to a specific event. Eventually, the process 900 can provide the updated network device data (block 970). In various embodiments, this process 900 can repeat. Although a specific embodiment for a process 900 to manage a device suitable for receiving energy-aware configurations is described above with respect to FIG. 9, any of a variety of systems and/or processes may be utilized in accordance with embodiments of the disclosure.
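The device-side steps of process 900 — parse the received energy-aware configuration, modify the named elements, and update the locally stored network data — might look like the following sketch. The configuration and state formats here are assumptions for illustration only:

```python
# Sketch of process 900 on a device: apply a received energy-aware
# configuration and record the result in the device's network data.

def apply_energy_aware_config(device_state, config):
    """Apply per-element mode changes (block 940) and update the stored
    network data (block 950). Unknown elements are ignored."""
    for element, mode in config.get("elements", {}).items():
        if element in device_state["elements"]:
            device_state["elements"][element]["mode"] = mode
    device_state["config_version"] = config.get("version", 0)
    return device_state

state = {"elements": {"port1": {"mode": "active"},
                      "port2": {"mode": "active"}},
         "config_version": 0}
state = apply_energy_aware_config(
    state, {"version": 3, "elements": {"port2": "sleep"}})
# `state` can now be provided back when updated device data is requested
```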
For example, the process 900 may generate an EEC or FEC internally as part of the network device data. The aspects described in FIG. 9 may also be interchangeable with other elements of FIGS. 1-8, and 10 as required to realize a particularly desired embodiment. Utilizing a path computation element to generate data for an energy-aware configuration is described below. Referring to FIG. 10, a flowchart depicting a process 1000 for utilizing a path computation element to generate an energy-aware configuration in accordance with an embodiment of the disclosure is shown. As disclosed above in the discussion of FIG. 5, a path computation element can be utilized in certain embodiments to generate data associated with an energy-aware configuration. Specifically, in the embodiment depicted in FIG. 10, the path computation element is utilized to generate a path selection. However, as described above in the discussion of FIG. 5, a path computation element can also be utilized to generate an energy-aware configuration directly in certain embodiments. The process 1000 can collect topology data (block 1010). As discussed above, the topology data can be collected in a variety of ways and may involve data related to multiple levels of the topology. The topology data may be associated with an entire network or a portion/partition of a network. In many embodiments, the process 1000 can receive EEC and FEC data (block 1020). The EEC and FEC data can be received from an external device; however, in certain embodiments, they may also be generated within the same device as the path computation element. In additional embodiments, the process 1000 can receive current traffic data (block 1030). Traffic data may comprise, but is not limited to, data related to the traffic input and output of a plurality of devices, or any data related to projected traffic. Additionally, the process 1000 may receive power source data (block 1040).
As described above, the power source data can indicate what type of power source is being used to power the devices within the network. In further embodiments, the process 1000 can determine, via a path computation element, an energy-aware path selection based on the collected and received data (block 1050). In this way, the path computation element can determine paths based on energy-aware data such as EECs and FECs. When the energy-aware path selection is determined, the process 1000 can generate an energy-aware configuration based on that determined energy-aware path selection (block 1060). Although a specific embodiment for a process 1000 to utilize a path computation element as part of energy-aware configuration generation is described above with respect to FIG. 10, any of a variety of systems and/or processes may be utilized in accordance with embodiments of the disclosure. For example, the process 1000 may be implemented on each device within the network, on a portion of the devices, or in a single device that also generates the energy-aware configuration. The aspects described in FIG. 10 may also be interchangeable with other elements of FIGS. 1-9 as required to realize a particularly desired embodiment. Although the present disclosure has been described in certain specific aspects, many additional modifications and variations would be apparent to those skilled in the art. In particular, any of the various processes described above can be performed in alternative sequences and/or in parallel (on the same or on different computing devices) in order to achieve similar results in a manner that is more appropriate to the requirements of a specific application. It is therefore to be understood that the present disclosure can be practiced otherwise than as specifically described without departing from the scope and spirit of the present disclosure. Thus, embodiments of the present disclosure should be considered in all respects as illustrative and not restrictive.
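The energy-aware path selection of block 1050 could, for instance, be implemented as a shortest-path search in which each hop is weighted by the next device's energy coefficient, so lower-energy devices are preferred. The graph, the coefficient values, and the use of Dijkstra's algorithm are illustrative assumptions rather than the disclosed method:

```python
import heapq

# Sketch of an energy-aware path computation: Dijkstra's algorithm over
# a directed graph where traversing a node costs its (assumed) EEC.

def energy_aware_path(graph, eec, src, dst):
    """Return (path, total_energy_cost) from src to dst, preferring
    nodes with lower energy coefficients."""
    heap = [(0.0, src, [src])]
    seen = set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == dst:
            return path, cost
        if node in seen:
            continue
        seen.add(node)
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                heapq.heappush(heap, (cost + eec[nxt], nxt, path + [nxt]))
    return None, float("inf")

graph = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
eec = {"a": 0.0, "b": 9.0, "c": 1.0, "d": 2.0}   # assumed coefficients
path, cost = energy_aware_path(graph, eec, "a", "d")
```

The resulting path selection could then feed block 1060's configuration generation.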
It will be evident to the person skilled in the art to freely combine several or all of the embodiments discussed here as deemed suitable for a specific application of the disclosure. Throughout this disclosure, terms like “advantageous”, “exemplary” or “example” indicate elements or dimensions which are particularly suitable (but not essential) to the disclosure or an embodiment thereof and may be modified wherever deemed suitable by the skilled person, except where expressly required. Accordingly, the scope of the disclosure should be determined not by the embodiments illustrated, but by the appended claims and their equivalents. Any reference to an element being made in the singular is not intended to mean “one and only one” unless explicitly so stated, but rather “one or more.” All structural and functional equivalents to the elements of the above-described preferred embodiment and additional embodiments as regarded by those of ordinary skill in the art are hereby expressly incorporated by reference and are intended to be encompassed by the present claims. Moreover, no requirement exists for a system or method to address each and every problem sought to be resolved by the present disclosure, for solutions to such problems to be encompassed by the present claims. Furthermore, no element, component, or method step in the present disclosure is intended to be dedicated to the public regardless of whether the element, component, or method step is explicitly recited in the claims. Various changes and modifications in form, material, workpiece, and fabrication material detail that can be made without departing from the spirit and scope of the present disclosure, as set forth in the appended claims, as might be apparent to those of ordinary skill in the art, are also encompassed by the present disclosure.
11863389

When practical, like labels are used to refer to the same or similar items in the drawings.

DETAILED DESCRIPTION

The development of a software application may often be divorced from the subsequent deployment, testing, and maintenance of the software application. For instance, a software application may be developed in one environment by a team of software engineers before being deployed to another environment where the software application is tested and/or maintained by a separate team of information technology (IT) professionals. The absence of communication and collaboration between the software developers and the information technology professionals may result in the development and delivery of a software application that is difficult to deploy, test, and/or maintain. Deploying the software application may require configuring an enterprise's information technology infrastructure to host the software application including, for example, by provisioning, modifying, and/or de-provisioning one or more hardware resources, software resources, network resources, and/or the like. It should be appreciated that the enterprise's information technology infrastructure may include private resources owned and operated by the enterprise for exclusive use by the enterprise. Alternatively and/or additionally, the enterprise's information technology infrastructure may include public resources owned and operated by a third party provider including, for example, an infrastructure-as-a-service (IaaS) provider, a platform-as-a-service (PaaS) provider, a software-as-a-service (SaaS) provider, and/or the like. Accordingly, the configuration of the enterprise's information technology infrastructure may include provisioning, modifying, and/or de-provisioning private resources and/or public resources to support the operations of the software application.
Moreover, the enterprise's information technology infrastructure may require continuous monitoring and/or updates in order to ensure that the performance of the software application meets a threshold metric such as, for example, a service level objective (SLO) and/or the like. In some example embodiments, an information technology (IT) infrastructure controller may be configured to provide lifecycle management for the information technology infrastructure of an enterprise. As noted, the information technology infrastructure of the enterprise may be configured to host a software application and/or ensure that the performance of the software application meets a threshold metric (e.g., a service level objective (SLO) and/or the like). For example, the enterprise's information technology infrastructure may be configured by at least provisioning, modifying, and/or de-provisioning one or more resources (e.g., hardware resources, software resources, network resources, and/or the like) within the information technology infrastructure in order to accommodate the deployment, testing, and/or maintenance of the software application. Accordingly, the information technology infrastructure controller may manage the provisioning, modification, and/or de-provisioning of the one or more resources engendered by the deployment, testing, and/or maintenance of the software application. FIG. 1A depicts a system diagram illustrating an information technology (IT) infrastructure management system 100, in accordance with some example embodiments. Referring to FIG. 1A, the information technology infrastructure management system 100 may include an information technology infrastructure controller 110, a first client 120a, a second client 120b, and a version controller 140.
Furthermore, the information technology infrastructure management system 100 may include one or more information technology infrastructures including, for example, a first information technology infrastructure 130a, a second information technology infrastructure 130b, and/or the like. As FIG. 1A shows, the information technology infrastructure controller 110, the first client 120a, the second client 120b, the first information technology infrastructure 130a, the second information technology infrastructure 130b, and/or the version controller 140 may be communicatively coupled via a network 150. The network 150 may be any wired and/or wireless network including, for example, a local area network (LAN), a wide area network (WAN), a public land mobile network (PLMN), the Internet, and/or the like. Referring again to FIG. 1A, each of the first information technology infrastructure 130a and the second information technology infrastructure 130b may include a plurality of resources from one or more different providers including, for example, physical equipment, virtual machines, and/or the like. To further illustrate, FIG. 1A shows the first information technology infrastructure 130a as including, for example, hardware resources 135a, software resources 135b, network resources 135c, and/or the like. Moreover, FIG. 1A shows that the first information technology infrastructure 130a may include resources from multiple providers including, for example, a first provider 150a, a second provider 150b, and/or the like. For example, at least one of the first provider 150a and the second provider 150b may be a private provider such that at least a portion of the hardware resources 135a, the software resources 135b, and/or the network resources 135c are private resources owned and operated by an enterprise for exclusive use by the enterprise.
Alternatively and/or additionally, at least one of the first provider 150a and/or the second provider 150b may be a third party provider including, for example, an infrastructure-as-a-service (IaaS) provider, a platform-as-a-service (PaaS) provider, a software-as-a-service (SaaS) provider, and/or the like. As such, at least a portion of the hardware resources 135a, the software resources 135b, and/or the network resources 135c may be public resources shared amongst multiple enterprises. In some example embodiments, the information technology infrastructure controller 110 may be configured to provide lifecycle management for one or more information technology infrastructures including, for example, the first information technology infrastructure 130a, the second information technology infrastructure 130b, and/or the like. For example, the information technology infrastructure controller 110 may provide lifecycle management for the first information technology infrastructure 130a by at least managing the provisioning, modifying, and/or de-provisioning of one or more of the hardware resources 135a, the software resources 135b, and the network resources 135c. The provisioning, modifying, and/or de-provisioning of one or more of the hardware resources 135a, the software resources 135b, and the network resources 135c may be engendered by the deployment, testing, and/or maintenance of a software application. In some example embodiments, the information technology infrastructure controller 110 may provision, modify, and/or de-provision one or more resources in the first information technology infrastructure 130a and/or the second information technology infrastructure 130b as part of configuring the first information technology infrastructure 130a and/or the second information technology infrastructure 130b to host the software application and/or to ensure that the performance of the software application meets a threshold metric (e.g., a service level objective (SLO) and/or the like).
However, it should be appreciated that the first information technology infrastructure 130a and/or the second information technology infrastructure 130b may be configured and/or reconfigured to achieve any information technology objective including, for example, support for multi-tier software applications, self-service clusters, software demonstrations, disposable environments (e.g., production environments, staging environments, and/or the like), software defined networking, resource schedulers, multi-cloud deployment, and/or the like. In some example embodiments, at least a portion of the first information technology infrastructure 130a and/or the second information technology infrastructure 130b may be configured using infrastructure as code (IaC). That is, instead of and/or in addition to physical hardware configuration, the first information technology infrastructure 130a and/or the second information technology infrastructure 130b may be configured via software using, for example, one or more configuration files specifying the configurations to apply to the first information technology infrastructure 130a and/or the second information technology infrastructure 130b as well as one or more corresponding variables. For instance, in order to support the deployment, testing, and/or maintenance of a software application at the first information technology infrastructure 130a, the first information technology infrastructure 130a may be configured based on a first configuration file 125a and/or a second configuration file 125b created respectively, for example, by a first user 145a at the first client 120a and a second user 145b at the second client 120b. As shown in FIG. 1A, the first user 145a at the first client 120a and the second user 145b at the second client 120b may be associated with a same organization, for example, an organization 155.
However, it should be appreciated that the first user 145a at the first client 120a and the second user 145b at the second client 120b may be associated with different organizations. The first configuration file 125a and the second configuration file 125b may each include a programming code-based representation of the hardware resources 135a, the software resources 135b, and/or the network resources 135c in the information technology infrastructure 130. For example, the first configuration file 125a and/or the second configuration file 125b may be rendered in a configuration language (e.g., HashiCorp Configuration Language (HCL) provided by HashiCorp, San Francisco, CA) and/or a data interchange language (e.g., JavaScript Object Notation (JSON)) that is human readable and editable as well as machine readable. Moreover, the first configuration file 125a and/or the second configuration file 125b may specify one or more configurations to apply to the first information technology infrastructure 130a including, for example, the provisioning, modification, and/or de-provisioning of the hardware resources 135a, the software resources 135b, and/or the network resources 135c. To further illustrate, Table 1 below depicts the syntax of a configuration language such as, for example, HashiCorp Configuration Language (HCL).

TABLE 1

    # An AMI
    variable "ami" {
      description = "the AMI to use"
    }

    /* A multiline comment. */
    resource "aws_instance" "web" {
      ami = "${var.ami}"
      count = 2
      source_dest_check = false

      connection {
        user = "root"
      }
    }

Table 2 below depicts the syntax of a data interchange language such as, for example, JavaScript Object Notation (JSON).
TABLE 2

    {
      "variable": {
        "ami": {
          "description": "the AMI to use"
        }
      },
      "resource": {
        "aws_instance": {
          "web": {
            "ami": "${var.ami}",
            "count": 2,
            "source_dest_check": false,
            "connection": {
              "user": "root"
            }
          }
        }
      }
    }

In some example embodiments, the information technology infrastructure controller 110 may be configured to generate, based at least on the first configuration file 125a and/or the second configuration file 125b, an execution plan for applying, to the information technology infrastructure 130, the one or more configurations specified in the first configuration file 125a and/or the second configuration file 125b. For example, the first configuration file 125a and/or the second configuration file 125b may be sent to the version controller 140 before being transferred to the information technology infrastructure controller 110. The version controller 140 may be configured to manage and/or reconcile different versions of the first configuration file 125a and/or the second configuration file 125b. It should be appreciated that the version controller 140 may be any version control system, revision control system, and/or source control system capable of tracking and managing changes made to a configuration file by one or more users. For instance, the version controller 140 may be GitHub, GitHub Enterprise, GitLab, GitLab EE and CE, Bitbucket Cloud, Bitbucket Server, and/or the like. Alternatively and/or additionally, the version controller 140 may be a private and/or proprietary version control system implemented for exclusive use by an enterprise. FIG. 1B depicts a block diagram illustrating the information technology infrastructure controller 110, in accordance with some example embodiments. Referring to FIGS. 1A-B, the information technology infrastructure controller 110 may include a plan engine 160, a validation engine 170, and a state controller 180.
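Generating an execution plan from a configuration file can be thought of as diffing the desired resources against the currently provisioned state to yield provision, modify, and de-provision actions. The resource shapes and action names below are assumptions made for illustration, not the controller's actual format:

```python
# Sketch of execution-plan generation: compare desired resources (from a
# configuration file) with currently provisioned resources.

def execution_plan(desired, current):
    """Return the actions needed to move `current` state to `desired`."""
    plan = {"provision": [], "modify": [], "deprovision": []}
    for name, spec in desired.items():
        if name not in current:
            plan["provision"].append(name)      # new resource
        elif current[name] != spec:
            plan["modify"].append(name)         # spec changed
    plan["deprovision"] = [n for n in current if n not in desired]
    return plan

desired = {"web": {"count": 2}, "db": {"count": 1}}
current = {"web": {"count": 1}, "cache": {"count": 1}}
plan = execution_plan(desired, current)
```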
As shown inFIG.1B, in some example embodiments, the information technology infrastructure controller110may be configured to generate an execution plan190for applying, to the first information technology infrastructure130a, one or more configurations specified, for example, in the first configuration file125aand/or the second configuration file125b. Referring again toFIG.1B, the plan engine160may include one or more workspaces including, for example, a first workspace165a, a second workspace165b, and a third workspace165c. Each of the first workspace165a, the second workspace165b, and the third workspace165cmay be configured to maintain the configurations for at least a portion of the first information technology infrastructure130a. Alternatively, the first workspace165a, the second workspace165b, and/or the third workspace165cmay be configured to maintain configurations for different information technology infrastructures, each of which is associated with a different organization. For instance, the first workspace165amay maintain the configurations for at least a portion of the first information technology infrastructure130aassociated with one organization while the second workspace165bmay maintain the configurations for at least a portion of the second information technology infrastructure130bassociated with a different organization. When the first configuration file125aand/or the second configuration file125bare pushed and/or pulled from the version controller140, the plan engine160may merge the first configuration file125aand/or the second configuration file125binto the first workspace165a, the second workspace165b, and/or the third workspace165c. In some example embodiments, the first workspace165a, the second workspace165b, and the third workspace165cmay each maintain a different iteration of configurations for at least a portion of the first information technology infrastructure130a.
For example, the first workspace165a, the second workspace165b, and the third workspace165cmay each maintain the configurations that are applied to the first information technology infrastructure130ain order to configure the first information technology infrastructure130ato support a production environment, a staging environment, and a development environment for a software application. Accordingly, the first workspace165amay maintain the configurations associated with a production environment, the second workspace165bmay maintain the configurations associated with a staging environment, and the third workspace165cmay maintain the configurations associated with a development environment. Alternatively and/or additionally, each of the first workspace165a, the second workspace165b, and the third workspace165cmay be associated with the configurations for a specific portion of the first information technology infrastructure130a. For example, the first workspace165amay maintain the configurations for the hardware resources135aof the first information technology infrastructure130a, the second workspace165bmay maintain the configurations for the software resources135bof the first information technology infrastructure130a, and the third workspace165cmay maintain the configurations for the network resources135cof the first information technology infrastructure130a. In some example embodiments, the first workspace165a, the second workspace165b, and the third workspace165cmay each be associated with a different set of variables. Each set of variables may correspond to a different iteration of configurations for the first information technology infrastructure130a(e.g., production environment, staging environment, development environment, and/or the like).
Alternatively and/or additionally, each set of variables may correspond to the configurations for a different portion of the first information technology infrastructure130a(e.g., the hardware resources135a, the software resources135b, the network resources135c, and/or the like). At least some of these variables may be set and/or modified by the merging of the first configuration file125aand/or the second configuration file125binto the first workspace165a, the second workspace165b, and the third workspace165c. The first workspace165a, the second workspace165b, and the third workspace165cmay be associated with one or more organizations including, for example, the organization155. However, as noted, the first workspace165a, the second workspace165b, and the third workspace165cmay be associated with multiple organizations, each having a distinct information technology infrastructure. Moreover, the first workspace165a, the second workspace165b, and the third workspace165cmay each be associated with a team of one or more users from the organization155. For example, the first workspace165amay be associated with a first team of users that includes the first user145aat the first client120awhile the second workspace165bmay be associated with a second team of users that includes the second user145bat the second client120b. Each team of users may be accorded exclusive access to the corresponding workspace. Moreover, different users within a team of users may be afforded different access privileges with respect to a corresponding workspace. For example, the first user145amay be provided read access, write access, and/or administrative access to the first workspace165awhile the second user145bmay be provided read access, write access, and/or administrative access to the second workspace165b.
However, the first user145amay be prevented from accessing the second workspace165bif the first user145ais not a member of the second team of users having exclusive access to the second workspace165b. Likewise, the second user145bmay be prevented from accessing the first workspace165aif the second user145bis not a member of the first team of users having exclusive access to the first workspace165a. In some example embodiments, the first user145amay access the first workspace165aby at least merging the first configuration file125ainto the first workspace165a. For example, the information technology infrastructure controller110may register, at the version controller140, a webhook. The webhook may be a hypertext transfer protocol (HTTP) callback configured to post, to the information technology infrastructure controller110, a notification when the first user145acommits the first configuration file125aat the version controller140. Meanwhile, the information technology infrastructure controller110may respond to the notification by at least pulling the first configuration file125afrom the version controller140and merging the first configuration file125ainto the first workspace165a. As noted, merging the first configuration file125ainto the first workspace165amay set and/or modify at least some of the variables associated with the first workspace165a. Moreover, by merging the first configuration file125ainto the first workspace165a, the first user145amay modify the configurations specified for at least a portion of the first information technology infrastructure130a. For instance, merging the first configuration file125ainto the first workspace165amay modify the configurations specified for the hardware resources135aof the first information technology infrastructure130ain order to provide a production environment for a software application.
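The commit-notification flow described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the controller's actual implementation; the names `Workspace`, `handle_commit_notification`, and `pull_configuration` are hypothetical, as is the shape of the notification payload.

```python
# Minimal sketch of the webhook flow: the version controller posts a
# notification when a configuration file is committed, and the
# infrastructure controller responds by pulling the file and merging
# its variables into the associated workspace. All names are
# illustrative, not part of any real API.

class Workspace:
    def __init__(self, name):
        self.name = name
        self.variables = {}

    def merge(self, configuration):
        # Merging a configuration file sets and/or modifies the
        # variables associated with the workspace.
        self.variables.update(configuration)


def handle_commit_notification(notification, workspaces, pull_configuration):
    """React to a webhook callback by pulling and merging the committed file."""
    configuration = pull_configuration(notification["commit_id"])
    workspace = workspaces[notification["workspace"]]
    workspace.merge(configuration)
    return workspace


workspaces = {"production": Workspace("production")}
# Stand-in for pulling the committed file from the version controller.
fake_pull = lambda commit_id: {"ami": "ami-2757f631", "count": 2}
ws = handle_commit_notification(
    {"commit_id": "abc123", "workspace": "production"},
    workspaces,
    fake_pull,
)
print(ws.variables["count"])  # 2
```

In a real deployment the pull step would fetch the committed file over the version controller's API rather than a stand-in callable.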
According to some example embodiments, two or more of the first workspace165a, the second workspace165b, and/or the third workspace165cmay be linked such that updating a variable in one workspace may trigger an update to the same variable at the linked workspaces. Alternatively and/or additionally, the second user145bmay access the second workspace165bby at least merging the second configuration file125binto the second workspace165b. The information technology infrastructure controller110may pull, from the version controller140, the second configuration file125bin response to a notification from the webhook at the version controller140. Merging the second configuration file125binto the second workspace165bmay modify the configurations specified for at least a portion of the first information technology infrastructure130aby at least setting and/or modifying at least some of the variables associated with the second workspace165b. For example, merging the second configuration file125binto the second workspace165bmay modify the configurations specified for to the software resources135bof the first information technology infrastructure130ain order to provide a staging environment for a software application. The information technology infrastructure controller110may generate, based at least on the configurations associated with the first workspace165a, the second workspace165b, and/or the third workspace165c, the execution plan190. The execution plan190may include one or more operations to provision, modify, and/or de-provision resources at the first information technology infrastructure130ain order to apply, to the first information technology infrastructure130a, the configurations associated with the first workspace165a, the second workspace165b, and/or the third workspace165c. 
In some example embodiments, the information technology infrastructure controller110may generate the execution plan190by at least consolidating the configurations associated with the first workspace165a, the second workspace165b, and the third workspace165c. That is, the execution plan190may be generated to achieve a combination of the different iterations of the configurations for the first information technology infrastructure130aand/or the configurations for different portions of the first information technology infrastructure130a. Alternatively and/or additionally, the information technology infrastructure controller110may generate the execution plan190based on some but not all of the configurations associated with the first workspace165a, the second workspace165b, and/or the third workspace165c. For example, the execution plan190may be generated to achieve only some iterations of the configurations for the first information technology infrastructure130aand/or the configurations for only a portion of the first information technology infrastructure130a. In some example embodiments, the first workspace165a, the second workspace165b, and/or the third workspace165cmay be marked for automatic destruction. For example, the first workspace165a, the second workspace165b, and/or the third workspace165cmay persist for a period of time (e.g., 24 hours), after which the information technology infrastructure controller110may be configured to automatically destroy the first workspace165a, the second workspace165b, and/or the third workspace165c. The first workspace165a, the second workspace165b, and/or the third workspace165cmay be persisted for a limited period of time in order to configure the first information technology infrastructure130ato provide a temporary environment or disposable environment (e.g., a demo environment). 
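The automatic-destruction behavior described above can be sketched as a workspace carrying a time-to-live. This is an illustrative model only; the class name, the injectable clock, and the sweep function are assumptions, and the 24-hour period is just the example figure from the text.

```python
# Sketch of workspaces marked for automatic destruction after a
# time-to-live, e.g. to back a temporary or disposable environment
# such as a demo environment. Names are illustrative.

import time

class EphemeralWorkspace:
    def __init__(self, name, ttl_seconds=24 * 3600, clock=time.time):
        self.name = name
        self.clock = clock
        self.expires_at = clock() + ttl_seconds

    def expired(self):
        return self.clock() >= self.expires_at


def destroy_expired(workspaces):
    """Return the workspaces that survive an automatic-destruction sweep."""
    return [w for w in workspaces if not w.expired()]


# Simulated clock so the example is deterministic.
now = [0]
clock = lambda: now[0]
demo = EphemeralWorkspace("demo", ttl_seconds=10, clock=clock)
staging = EphemeralWorkspace("staging", ttl_seconds=100, clock=clock)
now[0] = 50  # advance time past the demo workspace's TTL
survivors = destroy_expired([demo, staging])
print([w.name for w in survivors])  # ['staging']
```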
The information technology infrastructure controller110may generate the execution plan190by at least creating a corresponding dependency graph (e.g., a directed acyclic graph (DAG) and/or the like) having a plurality of nodes, at least some of which being interconnected by one or more directed edges.FIG.2depicts an example of a dependency graph200, in accordance with some example embodiments. To apply the configurations associated with the execution plan190to the first information technology infrastructure130a, the information technology infrastructure controller110may traverse the corresponding dependency graph. For instance, the information technology infrastructure controller110may perform a depth-first traversal of the dependency graph in order to determine the resources that the execution plan190indicates as requiring provisioning, modification, and/or de-provisioning. The information technology infrastructure controller110may further identify, based on the dependency graph, independent resources that may be provisioned, modified, and/or de-provisioned in parallel. It should be appreciated that the information technology infrastructure controller110may be configured to maximize parallelization when applying, to the first information technology infrastructure130a, the configurations associated with the execution plan190. Table 3 below depicts examples of nodes that may be present in the dependency graph corresponding to the execution plan190.

TABLE 3

Resource Node: Representative of a single resource such as, for example, a hardware resource, a software resource, a network resource, and/or the like.

Provider Node: Representative of a provider of one or more resources including, for example, hardware resources, software resources, network resources, and/or the like. Each provider node may include the time required to fully configure a corresponding provider to provide the corresponding resources.

Resource Meta Node: Representative of a group of resources including, for example, one or more hardware resources, software resources, network resources, and/or the like.

Data Node: Representative of data needing to be fetched, retrieved, and/or generated for purposes of configuring other resources and/or providers.

The information technology infrastructure controller110may generate the dependency graph by at least adding, to the dependency graph, one or more resource nodes corresponding to individual resources including, for example, one or more hardware resources135a, software resources135b, network resources135c, and/or the like. The one or more resource nodes may be mapped to the corresponding provider nodes, for example, to identify the first provider150aand/or the second provider150bas being the provider of the resources associated with each of the resource nodes. Moreover, the information technology infrastructure controller110may generate the dependency graph by at least inserting one or more edges to interconnect, for example, the resource nodes and the provider nodes. An edge interconnecting a resource node to a provider node may identify the provider associated with the provider node as being a provider of the resource associated with the resource node. Meanwhile, an edge interconnecting two resource nodes may indicate a dependency between the resources associated with the two resource nodes. To represent resources that require de-provisioning, the dependency graph may include one or more “orphan” resource nodes, which may be disconnected from the provider nodes and other resource nodes in the dependency graph.
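The dependency-graph traversal described above can be sketched by grouping nodes into topological "levels": every node in a level depends only on nodes in earlier levels, so the members of a level are independent resources that could be handled in parallel. This is an illustrative model, not the controller's actual data structure; the node names are hypothetical.

```python
# Sketch of a dependency graph for an execution plan. A directed edge
# u -> v means the resource v depends on u. Grouping nodes into
# topological "levels" identifies independent resources that could be
# provisioned, modified, and/or de-provisioned in parallel.

from collections import defaultdict

def parallel_levels(nodes, edges):
    """Return lists of nodes whose members have no mutual dependencies."""
    indegree = {n: 0 for n in nodes}
    children = defaultdict(list)
    for u, v in edges:  # v depends on u
        children[u].append(v)
        indegree[v] += 1
    levels, ready = [], sorted(n for n in nodes if indegree[n] == 0)
    while ready:
        levels.append(ready)
        next_ready = []
        for n in ready:
            for child in children[n]:
                indegree[child] -= 1
                if indegree[child] == 0:
                    next_ready.append(child)
        ready = sorted(next_ready)
    return levels


nodes = ["provider.aws", "aws_vpc.main", "aws_subnet.a", "aws_instance.web"]
edges = [
    ("provider.aws", "aws_vpc.main"),
    ("aws_vpc.main", "aws_subnet.a"),
    ("aws_vpc.main", "aws_instance.web"),
]
# The subnet and the instance only depend on the VPC, so they land in
# the same level and could be handled in parallel.
print(parallel_levels(nodes, edges))
```

An "orphan" resource node marked for de-provisioning would simply appear in the first level, since nothing depends on it being created first.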
Alternatively and/or additionally, in order to represent the modification of an existing resource within the first information technology infrastructure130a, the information technology infrastructure controller110may generate the dependency graph by at least splitting the corresponding resource node into a first resource node and a second resource node. The first resource node may correspond to the existing resource, which may be de-provisioned when the configurations specified in the execution plan190are applied to the first information technology infrastructure130a. Meanwhile, the second resource node may correspond to the modified resource, which may be provisioned when the configurations specified in the execution plan190are applied to the first information technology infrastructure130a. Referring again toFIG.1B, the validation engine170may be configured to validate the execution plan190before the information technology infrastructure controller110applies the corresponding configurations to the information technology infrastructure130. In some example embodiments, the validation engine170may be configured to perform a multitier validation of the execution plan190in order to determine whether the configurations associated with the execution plan190satisfy one or more requirements including, for example, valid configurations, proper permissions, cost compliance, and/or the like. For instance, the validation engine170may perform a first tier of validation by at least determining the structural validity of the configurations associated with the execution plan190including, for example, the syntactic validity and/or semantic validity of the configurations associated with the execution plan190. 
If the configurations associated with the execution plan190successfully pass the first tier of validation, the validation engine170may perform a second tier of validation by at least determining whether the configurations comply with one or more policies including, for example, a first policy175a, a second policy175b, and/or the like. The first policy175aand/or the second policy175bmay impose limitations on the resources allocated by the configurations associated with the execution plan190. Upon determining that the configurations associated with the execution plan190comply with the one or more policies, the validation engine170may perform a third tier of validation by at least determining whether the configurations associated with the execution plan190meet one or more cost quotas including, for example, a first quota175c, a second quota175d, and/or the like. The first quota175cand/or the second quota175dmay impose target values and/or limits on the projected costs of the configurations associated with the execution plan190. In some example embodiments, a programming code-based representation of the first policy175a, the second policy175b, the first quota175c, and/or the second quota175dmay be used to provide the first policy175a, the second policy175b, the first quota175c, and/or the second quota175dto the validation engine170. Furthermore, the first policy175a, the second policy175b, the first quota175c, and/or the second quota175dmay be input by the first user145aat the first client120aand/or the second user145bat the second client120b. Alternatively and/or additionally, the first policy175a, the second policy175b, the first quota175c, and/or the second quota175dmay be retrieved from a repository such as, for example, the version controller140and/or the like.
In some example embodiments, the first policy175a, the second policy175b, the first quota175c, and/or the second quota175dmay be custom configured, for example, by the first user145aand/or the second user145bbased at least on the first user145aand/or the second user145bhaving the necessary access privileges (e.g., administrative access and/or the like) for setting and/or modifying a policy at the validation engine170. Moreover, the first policy175a, the second policy175b, the first quota175c, and/or the second quota175dmay be custom configured to have limited applicability. For example, each of the first workspace165a, the second workspace165b, and the third workspace165cmay be associated with attributes including, for example, environment, application type, region, cloud, and/or the like. Whether a policy or a cost quota is applicable to each of the first workspace165a, the second workspace165b, and/or the third workspace165cmay be determined based on the corresponding attributes. That is, the validation engine170may identify the policies and/or cost quotas that are applicable to a workspace by at least filtering a broader set of policies and/or cost quotas based on the attributes of the workspace. Accordingly, the first policy175aand/or the first quota175cmay be configured to apply only to configurations associated with a staging environment while the second policy175band/or the second quota175dmay be configured to apply only to configurations associated with a production environment. Alternatively and/or additionally, the first policy175aand/or the first quota175cmay be configured to apply only to configurations associated with one portion of the first information technology infrastructure130a(e.g., the hardware resources135a) while the second policy175band/or the second quota175dmay be configured to apply only to configurations associated with a different portion of the first information technology infrastructure130a(e.g., the network resources135c). 
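The attribute-based filtering described above can be sketched as follows. The selector layout and attribute names (environment, region, cloud) follow the text, but the record format is an assumption for illustration.

```python
# Sketch of filtering a broader set of policies and cost quotas down to
# the ones applicable to a workspace, by matching on workspace
# attributes such as environment, application type, region, or cloud.
# The policy records and attribute names are hypothetical.

def applicable(requirements, workspace_attributes):
    """Keep requirements whose selectors all match the workspace."""
    matched = []
    for req in requirements:
        selector = req.get("applies_to", {})
        if all(workspace_attributes.get(k) == v for k, v in selector.items()):
            matched.append(req["name"])
    return matched


requirements = [
    {"name": "staging-only-policy", "applies_to": {"environment": "staging"}},
    {"name": "production-quota", "applies_to": {"environment": "production"}},
    {"name": "us-east-policy", "applies_to": {"region": "us-east-1"}},
    {"name": "global-policy", "applies_to": {}},  # empty selector: applies everywhere
]
workspace = {"environment": "production", "region": "us-east-1", "cloud": "aws"}
print(applicable(requirements, workspace))
# ['production-quota', 'us-east-policy', 'global-policy']
```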
In some example embodiments, the execution plan190may be validated against requirements that are classified as advisory, mandatory, and/or semi-mandatory. For example, the first policy175a, the second policy175b, the first quota175c, and/or the second quota175dmay be classified as advisory, mandatory, and/or semi-mandatory. Applying a requirement that is classified as advisory may merely trigger a notification (e.g., an informative output displayed at the first client120aand/or the second client120b) indicative, for example, of the configurations associated with the execution plan190as failing to comply with the requirement. By contrast, applying a requirement that is classified as mandatory and/or semi-mandatory may prevent the configurations associated with the execution plan190from being applied at the first information technology infrastructure130ain the event the configurations fail to satisfy the requirement. Moreover, while advisory requirements and semi-mandatory requirements may be overridden, a mandatory requirement must be satisfied before the configurations associated with the execution plan190may be applied at the first information technology infrastructure130a. In some example embodiments, the validation engine170may invoke an externally configured service in order to verify whether the execution plan190satisfies one or more externally configured policies and/or quotas. For example, the first policy175a, the second policy175b, the first quota175c, and/or the second quota175dmay be configured externally by a webhook mechanism. The result of the external validation (e.g., a pass and/or fail status) may be returned to the validation engine170via an application programming interface (API). The one or more externally configured policies and/or quotas may also be classified as advisory, mandatory, and/or semi-mandatory.
Accordingly, failure of an external policy and/or quota classified as mandatory and/or semi-mandatory may prevent the execution plan190from being applied at the first information technology infrastructure130a. Contrastingly, failure of an external policy and/or quota classified as advisory may instead trigger a notification (e.g., an informative output displayed at the first client120aand/or the second client120b) indicative, for example, of the configurations associated with the execution plan190as being non-compliant. The information technology infrastructure controller110may apply, to the information technology infrastructure130, the configurations associated with the first workspace165a, the second workspace165b, and/or the third workspace165cby at least performing the operations included in the execution plan190, for example, to provision, modify, and/or de-provision one or more resources at the first information technology infrastructure130a. According to some example embodiments, the information technology infrastructure controller110may be configured to implement the execution plan190based at least on the execution plan190having been successfully validated by the validation engine170. The validation engine170may be configured to provide an indication of the execution plan190as having been successfully or unsuccessfully validated by the validation engine170. Alternatively and/or additionally, the validation engine170may provide an indication of the execution plan190as having passed or failed each of the first policy175a, the second policy175b, the first quota175c, the second quota175d, and/or the like. As noted, one or more of the first policy175a, the second policy175b, the first quota175c, and/or the second quota175dmay be classified as advisory and/or semi-mandatory. These policies and/or quotas may be overridden and/or excluded from the validation of the execution plan190.
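The enforcement semantics above can be sketched as a small decision function: a failed advisory requirement only produces a notification, a failed semi-mandatory requirement blocks the plan unless overridden, and a failed mandatory requirement always blocks the plan. The record layout and function names are assumptions for illustration.

```python
# Sketch of enforcing validation requirements by enforcement level.
# Each result is (name, level, passed); overrides names the
# semi-mandatory requirements a privileged user has waived.

def evaluate(results, overrides=()):
    """Decide whether a plan may be applied given per-requirement results."""
    notifications, blocked = [], False
    for name, level, passed in results:
        if passed:
            continue
        notifications.append(f"{name} failed ({level})")
        if level == "mandatory":
            blocked = True  # can never be overridden
        elif level == "semi-mandatory" and name not in overrides:
            blocked = True
        # advisory failures only notify
    return (not blocked), notifications


results = [
    ("policy-a", "advisory", False),
    ("quota-b", "semi-mandatory", False),
    ("policy-c", "mandatory", True),
]
ok, notes = evaluate(results, overrides={"quota-b"})
print(ok)  # True: advisory failure only notifies, and the
           # semi-mandatory failure was overridden
print(notes)
```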
Alternatively, one or more of the first policy175a, the second policy175b, the first quota175c, and/or the second quota175dmay be classified as mandatory. Mandatory policies and/or quotas may not be overridden and/or excluded from the validation of the execution plan190. Instead, the configurations associated with the execution plan190may be required to satisfy all mandatory policies and/or quotas before the configurations may be applied at the first information technology infrastructure130a. In some example embodiments, instead of and/or in addition to the information technology infrastructure controller110ingesting, from the version controller140, the first configuration file125aand/or the second configuration file125bbefore merging the first configuration file125aand/or the second configuration file125binto the first workspace165a, the second workspace165b, and/or the third workspace165cto generate the execution plan190, the first user145aat the first client120aand/or the second user145bat the second client120bmay upload the execution plan190directly to the information technology infrastructure controller110, for example, via an application programming interface (API). Furthermore, the first user145aat the first client120aand/or the second user145bat the second client120bmay remotely execute the execution plan190, for example, to provision, modify, and/or de-provision resources in the first information technology infrastructure130a. In some example embodiments, the state controller180may be configured to track the changes that are applied to the configurations of the first information technology infrastructure130a. For example, the state controller180may generate and store a state file prior to implementing an execution plan such as, for example, the execution plan190. 
The state file may capture a current state at the first information technology infrastructure130a, including one or more existing configurations at the first information technology infrastructure130a, prior to the application of the configurations associated with the execution plan190. The information technology infrastructure controller110may determine, based on one or more state files generated and stored by the state controller180, a previous state of the first information technology infrastructure130aincluding, for example, one or more previous configurations at the first information technology infrastructure130a. Alternatively and/or additionally, the information technology infrastructure controller110may restore, based at least on the one or more state files generated and stored by the state controller180, the first information technology infrastructure130ato a previous state. For instance, asFIG.1Bshows, the state controller180may generate and store a plurality of state files including, for example, a first state file185a, a second state file185b, and/or the like. The first state file185aand the second state file185bmay capture successive states of the first information technology infrastructure130a. For example, the first state file185amay capture the configurations at the first information technology infrastructure130aat a first time t1prior to the implementation of a first execution plan while the second state file185bmay capture the configurations at the first information technology infrastructure130aat a second time t2prior to the implementation of a second execution plan.
The information technology infrastructure controller110may generate, based at least on the first state file185aand the second state file185b, a delta file or a difference file showing the difference between the configurations at the first information technology infrastructure130aat the first time t1and the configurations at the first information technology infrastructure130aat the second time t2. Moreover, the information technology infrastructure controller110may restore, based at least on the first state file185a, the first information technology infrastructure130ato a state at the first time t1. Alternatively and/or additionally, the information technology infrastructure controller110may restore, based at least on the second state file185b, the first information technology infrastructure130ato a state at the second time t2. It should be appreciated that by restoring the first information technology infrastructure130ato an earlier state, the information technology infrastructure controller110may reverse subsequent changes to the configurations of the first information technology infrastructure130a. Table 4 below depicts an example of a state file. As Table 4 shows, the state controller180may generate, prior to implementing an execution plan, a state file to capture a current state of the first information technology infrastructure130a, including one or more existing configurations at the first information technology infrastructure130a.

TABLE 4

aws_instance.example:
  id = i-32cf65a8
  ami = ami-2757f631
  availability_zone = us-east-1a
  instance_state = running
  instance_type = t2.micro
  private_ip = 172.31.30.244
  public_dns = ec2-52-90-212-55.compute-1.amazonaws.com
  public_ip = 52.90.212.55
  subnet_id = subnet-1497024d
  vpc_security_group_ids.# = 1
  vpc_security_group_ids.3348721628 = sg-67672003

Referring again toFIG.1B, the state controller180may also maintain a run log187tracking, for example, various runs of one or more execution plans including, for example, the execution plan190.
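Deriving a delta file from two successive state files can be sketched as a key-by-key comparison. The attribute names echo Table 4, but the diff format and function name are illustrative assumptions, not the controller's actual file layout.

```python
# Sketch of deriving a delta between two successive state files, each
# capturing the configuration of the infrastructure at a point in
# time. Restoring a previous state would amount to re-applying the
# attributes recorded in the earlier file.

def delta(state_t1, state_t2):
    """Describe what changed between the state at t1 and the state at t2."""
    changes = {}
    for key in sorted(set(state_t1) | set(state_t2)):
        before, after = state_t1.get(key), state_t2.get(key)
        if before != after:
            changes[key] = (before, after)
    return changes


state_t1 = {"id": "i-32cf65a8", "instance_type": "t2.micro",
            "instance_state": "running"}
state_t2 = {"id": "i-32cf65a8", "instance_type": "t2.large",
            "instance_state": "running"}
print(delta(state_t1, state_t2))
# {'instance_type': ('t2.micro', 't2.large')}
```

Attributes present in only one state would show up with `None` on the other side, covering provisioned and de-provisioned resources as well as modified ones.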
As used herein, “running” the execution plan190may include generating the execution plan190, validating the execution plan190, applying the configurations associated with the execution plan190, canceling the execution plan190, discarding the execution plan190, and/or the like. Accordingly, each run of an execution plan may be associated with a run status including, for example, planning, planned, error, confirmed, applying, applied, canceled, discarded, pending, policy checking, policy checked, policy override, and/or the like. The run log187may be configured to track the runs of one or more execution plans including, for example, by storing a corresponding run status for each of the runs. In some example embodiments, the state controller180may maintain state files and run logs for each individual workspace. For example, the first state file185a, the second state file185b, and the run log187may be associated with the first workspace165awhile the state controller180may maintain additional state files and run logs for the other workspaces including, for example, the second workspace165b, the third workspace165c, and/or the like. However, it should be appreciated that the first state file185a, the second state file185b, and the run log187may be associated with the first information technology infrastructure130aas a whole instead of any individual workspace associated with the first information technology infrastructure130a. FIG.1Cdepicts a block diagram illustrating a module registry115, in accordance with some example embodiments. Referring toFIGS.1A-C, the module registry115may include a plurality of infrastructure modules including, for example, a first module116a, a second module116b, and/or the like.
The first module116aand the second module116bmay each include the configurations that may be applied to an information technology infrastructure (e.g., the first information technology infrastructure130a, the second information technology infrastructure130b, and/or the like) to achieve, at least partially, an information technology objective such as, for example, support for a software application, a multi-tier software application, self-service clusters, software demonstrations, disposable environments (e.g., production environments, staging environments, and/or the like), software defined networking, resource schedulers, multi-cloud deployment, and/or the like. Referring again toFIG.1C, the first user145amay create the first module116aand/or the second module116bwhile creating the first configuration file125aat the first client120a. The first user145amay publish the first module116aand/or the second module116bsuch that the second user145bmay add, to the second configuration file125b, the first module116aand/or the second module116bwhile the second user145bis creating the second configuration file125bat the second client120b. Adding the first module116aand/or the second module116bto the second configuration file125bmay incorporate, into the second configuration file125b, the configurations included in the first module116aand/or the second module116b. For example, the first module116aand/or the second module116bmay include the provisioning, modification, and/or de-provisioning of one or more of the hardware resources135a, the software resources135b, and/or the network resources135cat the first information technology infrastructure130ato support the deployment, testing, and/or maintenance of a software application.
Accordingly, adding the first module116aand/or the second module116bto the second configuration file125bmay incorporate, into the second configuration file125b, the provisioning, modification, and/or de-provisioning of the same resources, for example, at a different information technology infrastructure such as the second information technology infrastructure130b. In some example embodiments, the first module116aand/or the second module116bmay be published directly to the module registry115by adding, to the module registry115, a version of the first module116aand/or the second module116b. Alternatively and/or additionally, the first module116aand/or the second module116bmay be published via the version controller140. Publishing the first module116aand/or the second module116bvia the version controller140may include registering, at the version controller140, a webhook (e.g., a hypertext transfer protocol (HTTP) callback) configured to post, to the information technology infrastructure controller110, a notification whenever a different version of the first module116aand/or the second module116bis committed to the version controller140. Accordingly, instead of storing static versions of the first module116aand/or the second module116b, the information technology infrastructure controller110may update the module registry115whenever, for example, the first user145acreates another version of the first module116aand/or the second module116b. In doing so, the second user145bmay have access to multiple versions of the first module116aand/or the second module116bincluding, for example, the most recent versions of the first module116aand/or the second module116b, when creating the second configuration file125b.
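The webhook-driven registry update described above can be sketched as follows. This is a hedged illustration only: `ModuleRegistry`, `on_version_committed`, and the payload shape are hypothetical names, not part of the described system.

```python
class ModuleRegistry:
    """Stores every published version of each infrastructure module."""

    def __init__(self):
        self._versions = {}  # module name -> list of version strings, oldest first

    def publish(self, name: str, version: str) -> None:
        self._versions.setdefault(name, []).append(version)

    def versions(self, name: str):
        return list(self._versions.get(name, []))

    def latest(self, name: str) -> str:
        return self._versions[name][-1]


def on_version_committed(registry: ModuleRegistry, payload: dict) -> None:
    """Webhook callback: a new module version was committed to the version controller,
    so the registry is updated instead of holding a static copy."""
    registry.publish(payload["module"], payload["version"])


registry = ModuleRegistry()
on_version_committed(registry, {"module": "consul", "version": "0.0.5"})
on_version_committed(registry, {"module": "consul", "version": "0.0.6"})
# Older versions remain available alongside the most recent one.
```

The key design point mirrored here is that the registry accumulates versions rather than overwriting them, so a consumer may pin any published version.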
In some example embodiments, the module registry115may be associated with the organization155such that only users from the organization155(e.g., the first user145aat the first client120aand/or the second user145bat the second client120b) may have access to the module registry115, for example, to publish modules, consume modules, and/or the like. For example, the first user145amay publish the first module116aand/or the second module116bto the module registry115and the second user145bmay consume the first module116aand/or the second module116bfrom the module registry115based at least on the first user145aand the second user145bbeing associated with the organization155. A user who is not associated with the organization155may be prevented from accessing the module registry115. That is, a user who is not associated with the organization155may neither publish nor consume an infrastructure module from the module registry115. Table 5 below depicts programming code for an example of a module named “consul.” This module may be sourced from a public registry, a private registry, and/or a version control system. In some example embodiments, the module may be associated with a version constraint to ensure that a specific version of the module is fetched from the public registry, private registry, and/or version control system. The module may require additional configuration such as, for example, the quantity of servers. These additional configurations may be optional in some instances and mandatory in others.

TABLE 5

module "consul" {
  source  = "hashicorp/consul/aws"
  version = "~> 0.0.5"
  servers = 3
}

resource "aws_instance" "client" {
  ami               = "ami-408c7f28"
  instance_type     = "t1.micro"
  availability_zone = "${module.consul.server_availability_zone}"
}

FIG.3Adepicts a flowchart illustrating a process300for managing the information technology infrastructure130, in accordance with some example embodiments.
Referring toFIGS.1A-C,2, and3A, the process300may be performed by the information technology infrastructure controller110to manage an information technology infrastructure such as, for example, the first information technology infrastructure130a. For example, the management of the first information technology infrastructure130amay include the provisioning, modification, and/or de-provisioning of one or more of the hardware resources135a, the software resources135b, and/or the network resources135cto achieve an information technology objective such as, for example, support for a software application, a multi-tier software application, self-service clusters, software demonstrations, disposable environments (e.g., production environments, staging environments, and/or the like), software defined networking, resource schedulers, multi-cloud deployment, and/or the like. Nevertheless, it should be appreciated that the information technology infrastructure controller110may also perform the process300to manage other information technology infrastructures including, for example, the second information technology infrastructure130band/or the like. The information technology infrastructure controller110may generate a first workspace configured to maintain a first set of configurations for the first information technology infrastructure130a(302). Furthermore, the information technology infrastructure controller110may generate a second workspace configured to maintain a second set of configurations for the first information technology infrastructure130a(304). In some example embodiments, the information technology infrastructure controller110, for example, the plan engine160, may generate the first workspace165a, the second workspace165b, and/or the third workspace165c. 
The first workspace165a, the second workspace165b, and the third workspace165cmay each maintain a different iteration of configurations for at least a portion of the first information technology infrastructure130a. For example, the first workspace165a, the second workspace165b, and the third workspace165cmay each maintain the configurations that are applied to the first information technology infrastructure130ain order to configure the first information technology infrastructure130ato support a production environment, a staging environment, and a development environment for a software application. Alternatively and/or additionally, each of the first workspace165a, the second workspace165b, and the third workspace165cmay be associated with the configurations for a specific portion of the first information technology infrastructure130a. For instance, the first workspace165amay maintain the configurations for the hardware resources135aof the first information technology infrastructure130a, the second workspace165bmay maintain the configurations for the software resources135bof the first information technology infrastructure130a, and the third workspace165cmay maintain the configurations for the network resources135cof the first information technology infrastructure130a. As noted, different workspaces may also be generated to maintain configurations for different information technology infrastructures. For example, the first workspace165amay be associated with configurations for at least a portion of the first information technology infrastructure130awhile the second workspace165bmay be associated with configurations for at least a portion of the second information technology infrastructure130b. The information technology infrastructure controller110may merge, into the first workspace and/or the second workspace, a configuration file specifying one or more configurations to apply to the first information technology infrastructure130a(306).
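The merge semantics described above, where merging a configuration file sets and/or modifies a workspace's variables, can be sketched as follows. `Workspace` and the variable names are hypothetical illustrations under the assumption that a configuration file reduces to a mapping of variables to values.

```python
class Workspace:
    """Maintains one iteration of configurations for (a portion of) an infrastructure."""

    def __init__(self, name: str):
        self.name = name
        self.variables = {}

    def merge(self, configuration_file: dict) -> None:
        """Merging a configuration file sets and/or modifies the workspace's variables;
        variables the file does not mention are left unchanged."""
        self.variables.update(configuration_file)


# Separate workspaces may hold separate iterations of configurations,
# e.g. one per environment.
production = Workspace("app-production")
staging = Workspace("app-staging")

production.merge({"instance_type": "t1.micro", "servers": 3})
production.merge({"servers": 5})  # a later merge modifies only the variables it specifies
```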
In some example embodiments, the information technology infrastructure controller110may register, at the version controller140, a webhook (e.g., a hypertext transfer protocol (HTTP) callback) configured to post, to the information technology infrastructure controller110, a notification when the first configuration file125aand/or the second configuration file125bare committed at the version controller140. The information technology infrastructure controller110may respond to the notification from the webhook at the version controller140by at least pulling the first configuration file125aand/or the second configuration file125bfrom the version controller140. Furthermore, the information technology infrastructure controller110may merge the first configuration file125ainto the first workspace165aand the second configuration file125binto the second workspace165b. As noted, merging the first configuration file125ainto the first workspace165amay set and/or modify at least some of the variables associated with the first workspace165a, for example, to modify the configurations specified for the hardware resources135aof the first information technology infrastructure130a. Meanwhile, merging the second configuration file125binto the second workspace165bmay set and/or modify at least some of the variables associated with the second workspace165b, for example, to modify the configurations specified for the software resources135bof the first information technology infrastructure130a. The information technology infrastructure controller110may generate, based at least on the first workspace and/or the second workspace, an execution plan that includes one or more operations to apply, to the first information technology infrastructure130a, the one or more configurations specified in the configuration file (308).
In some example embodiments, the information technology infrastructure controller110may generate the execution plan190by at least consolidating the configurations associated with the first workspace165a, the second workspace165b, and the third workspace165c. Alternatively and/or additionally, the information technology infrastructure controller110may generate the execution plan190based on some but not all of the configurations associated with the first workspace165a, the second workspace165b, and/or the third workspace165c. The information technology infrastructure controller110may apply, based at least on the execution plan, the one or more configurations including by at least provisioning, modifying, and/or de-provisioning one or more resources at the first information technology infrastructure130a(310). In some example embodiments, to apply the configurations associated with the execution plan190to the first information technology infrastructure130a, the information technology infrastructure controller110may generate and traverse a corresponding dependency graph. For example, the information technology infrastructure controller110may generate the dependency graph200, which may include a plurality of resource nodes and provider nodes, at least some of which are interconnected by one or more directed edges. The information technology infrastructure controller110may traverse the dependency graph200to at least identify independent resources that may be provisioned, modified, and/or de-provisioned in parallel. As noted, the information technology infrastructure controller110may be configured to maximize parallelization when applying, to the first information technology infrastructure130a, the configurations associated with the execution plan190. FIG.3Bdepicts a flowchart illustrating a process320for running an execution plan, in accordance with some example embodiments.
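The dependency-graph traversal that identifies independent resources for parallel application, described for process300above, can be sketched as follows. The function `parallel_batches` and the example node names are hypothetical; the sketch assumes the graph is given as a mapping from each node to the nodes it depends on.

```python
def parallel_batches(edges: dict):
    """Group nodes of a dependency graph into batches that can be applied in parallel.

    ``edges`` maps each node to the set of nodes it depends on. Every node within
    a batch is independent of the others, so a controller may provision, modify,
    or de-provision them concurrently, maximizing parallelization.
    """
    remaining = {node: set(deps) for node, deps in edges.items()}
    batches = []
    while remaining:
        # Nodes whose dependencies are all satisfied may run in parallel.
        ready = [node for node, deps in remaining.items() if not deps]
        if not ready:
            raise ValueError("cycle detected in dependency graph")
        batches.append(sorted(ready))
        for node in ready:
            del remaining[node]
        for deps in remaining.values():
            deps.difference_update(ready)
    return batches


# Two provider nodes with no dependencies form the first parallel batch;
# the resource nodes that depend on them follow in later batches.
graph = {
    "provider.aws": set(),
    "provider.dns": set(),
    "aws_instance.client": {"provider.aws"},
    "dns_record.client": {"provider.dns", "aws_instance.client"},
}
batches = parallel_batches(graph)
```

This is essentially a level-by-level topological sort: each batch contains only nodes whose incoming edges have already been satisfied.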
Referring toFIGS.1A-C,2, and3B, the process320may be performed by the information technology infrastructure controller110, for example, to perform a multitier validation of the execution plan190before the configurations associated with the execution plan190are applied to the first information technology infrastructure130a. As noted, the execution plan190may include one or more operations, which may be applied to the first information technology infrastructure130ain order to realize one or more configurations for achieving an information technology objective such as, for example, support for a software application, a multi-tier software application, self-service clusters, software demonstrations, disposable environments (e.g., production environments, staging environments, and/or the like), software defined networking, resource schedulers, multi-cloud deployment, and/or the like. The multitier validation of the execution plan190may include determining whether the configurations associated with the execution plan190satisfy one or more requirements including, for example, advisory, mandatory, and/or semi-mandatory requirements. The information technology infrastructure controller110may perform a first tier validation of the execution plan190by at least determining a structural validity of one or more configurations associated with the execution plan190(322). In some example embodiments, the information technology infrastructure controller110, for example, the validation engine170, may determine whether the configurations associated with the execution plan190are free from syntactic errors (e.g., typographical errors, syntax errors, formatting errors, and/or the like) and/or semantic errors that would prevent the configurations from being processed. 
For example, the information technology infrastructure controller110may detect a syntactic error if the configurations associated with the execution plan190request a negative quantity of resources and/or if the quantity of resources is defined using a string value instead of a numeric value. Alternatively and/or additionally, the information technology infrastructure controller110may detect a semantic error if a mismatch in dependent resources is present in the configurations associated with the execution plan190. The information technology infrastructure controller110may determine that the configurations associated with the execution plan190are structurally valid (323-Y). Accordingly, the information technology infrastructure controller110may perform a second tier validation of the execution plan190by at least determining whether the one or more configurations associated with the execution plan190comply with at least one policy (324). For example, in some example embodiments, the information technology infrastructure controller110, for example, the validation engine170, may further validate the execution plan190by at least determining whether the first information technology infrastructure130awould satisfy the requirements imposed by the first policy175aand/or the second policy175bif the configurations associated with the execution plan190are applied to the first information technology infrastructure130a. The first policy175aand/or the second policy175bmay each impose one or more limitations on the resources allocated for the first information technology infrastructure130a. For instance, the first policy175amay impose a maximum and/or a minimum on a quantity of a resource allocated for the first information technology infrastructure130a. Meanwhile, the second policy175bmay specify that an X instance type may only be built during a Y period in a Z region of the first information technology infrastructure130a.
The information technology infrastructure controller110may determine that the configurations associated with the execution plan190comply with the at least one policy (325-Y). As such, the information technology infrastructure controller110may perform a third tier validation of the execution plan190by at least determining whether the one or more configurations of the execution plan190meet at least one cost quota (326). In some example embodiments, the information technology infrastructure controller110, for example, the validation engine170, may further validate the execution plan190by at least determining whether the first information technology infrastructure130awould satisfy the requirements imposed by the first quota175cand/or the second quota175dif the configurations associated with the execution plan190are applied to the first information technology infrastructure130a. The first quota175cand/or the second quota175dmay impose limitations on the projected costs of the configurations associated with the execution plan190. Accordingly, the information technology infrastructure controller110may determine whether the first information technology infrastructure130awould exceed these limitations on projected costs if the configurations associated with the execution plan190are applied to the first information technology infrastructure130a. The information technology infrastructure controller110may determine that the configurations associated with the execution plan190meet the at least one cost quota (327-Y). As such, the information technology infrastructure controller110may apply, to the first information technology infrastructure130a, the one or more configurations associated with the execution plan190(328). For example, the information technology infrastructure controller110may implement the execution plan190based at least on the configurations associated with the execution plan190having successfully passed the multitier validation.
Implementing the execution plan190may include applying, to the first information technology infrastructure130a, the configurations associated with the execution plan190. For example, applying, to the first information technology infrastructure130a, the configurations associated with the execution plan190may include provisioning, modifying, and/or de-provisioning one or more of the hardware resources135a, software resources135b, and/or network resources135cassociated with the information technology infrastructure130. As noted, the information technology infrastructure controller110may perform a multitier validation of the execution plan190. The configurations associated with the execution plan190may be applied at the first information technology infrastructure130aif the configurations associated with the execution plan190successfully pass the multitier validation including, for example, by being structurally valid, complying with at least one policy, and meeting at least one cost quota. By contrast, the information technology infrastructure controller110may also determine that the execution plan190fails to pass at least a portion of the multitier validation. For example, the information technology infrastructure controller110may determine that the configurations associated with the execution plan190are not structurally valid (323-N). Alternatively, the information technology infrastructure controller110may determine that the configurations associated with the execution plan190do not comply with at least one policy (325-N). The information technology infrastructure controller110may also determine that the configurations associated with the execution plan190do not meet at least one cost quota (327-N).
In the event the information technology infrastructure controller110determines that the execution plan190fails to pass any portion of the multitier validation, the information technology infrastructure controller110may determine if the failed requirement is mandatory (329). In some example embodiments, as part of the multitier validation, the execution plan190may be validated against requirements classified as advisory, mandatory, and/or semi-mandatory. For example, the structural validity of the execution plan190may be classified as a mandatory requirement. By contrast, the policy compliance of the execution plan190may be classified as a semi-mandatory requirement whereas the cost quota compliance of the execution plan190may be classified as an advisory requirement. As noted, while advisory requirements and semi-mandatory requirements may be overridden, a mandatory requirement must be satisfied before the configurations associated with the execution plan190may be applied at the first information technology infrastructure130a. Accordingly, if the information technology infrastructure controller110determines that the execution plan190failed a mandatory requirement (329-Y), the information technology infrastructure controller110may prevent the one or more configurations associated with the execution plan190from being applied to the first information technology infrastructure130a(330). In some example embodiments, the information technology infrastructure controller110may provide an indication of the execution plan190as having been successfully or unsuccessfully validated by the validation engine170. Alternatively and/or additionally, the information technology infrastructure controller110may provide an indication of the execution plan190as being structurally invalid and/or having passed or failed each of the first policy175a, the second policy175b, the first quota175c, and/or the second quota175d. 
These indications may include any form of notification including, for example, an email, a Slack message, a webhook, and/or the like. Alternatively, if the information technology infrastructure controller110determines that the execution plan190failed a non-mandatory requirement (329-N), the information technology infrastructure controller110may determine whether the requirement is overridden (331). If the information technology infrastructure controller110determines that the execution plan190failed a non-mandatory requirement that is overridden (331-Y), the information technology infrastructure controller110may apply, to the first information technology infrastructure130a, the one or more configurations associated with the execution plan190. By contrast, if the information technology infrastructure controller110determines that the execution plan190failed a non-mandatory requirement that is not overridden (331-N), the information technology infrastructure controller110may prevent the one or more configurations associated with the execution plan190from being applied at the first information technology infrastructure130a(330). FIG.3Cdepicts a flowchart illustrating a process350for configuring the information technology infrastructure130, in accordance with some example embodiments. Referring toFIGS.1A-C,2, and3C, the process350may be performed by the information technology infrastructure controller110to enable the generation of the first configuration file125aand/or the second configuration file125b. As noted, the first configuration file125aand/or the second configuration file125bmay be merged into the first workspace165a, the second workspace165b, and/or the third workspace165c. Meanwhile, the execution plan190may be generated based at least on the configurations associated with the first workspace165a, the second workspace165b, and/or the third workspace165c.
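The multitier validation and override logic of process320described above can be sketched as follows, assuming the classification given earlier: structural validity is mandatory, policy compliance is semi-mandatory, and cost-quota compliance is advisory. `validate_execution_plan` and the plan fields are hypothetical illustrations, not the controller's actual implementation.

```python
def validate_execution_plan(plan: dict, overrides: set = frozenset()) -> bool:
    """Return True if the plan's configurations may be applied.

    A mandatory failure always blocks the apply and cannot be overridden;
    a semi-mandatory or advisory failure blocks only when not overridden.
    """
    tiers = [
        ("structural", plan["structurally_valid"], "mandatory"),      # first tier
        ("policy", plan["policy_compliant"], "semi-mandatory"),       # second tier
        ("cost_quota", plan["within_cost_quota"], "advisory"),        # third tier
    ]
    for name, passed, requirement in tiers:
        if passed:
            continue
        if requirement == "mandatory":
            return False  # cannot be overridden
        if name not in overrides:
            return False  # non-mandatory failure that was not overridden
    return True


plan = {"structurally_valid": True, "policy_compliant": False, "within_cost_quota": True}
blocked = validate_execution_plan(plan)                        # policy failure, no override
allowed = validate_execution_plan(plan, overrides={"policy"})  # semi-mandatory failure overridden
```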
The information technology infrastructure controller110may receive, from the first user145aat the first client120a, a first indication to publish an infrastructure module (352). For example, while creating the first configuration file125aat the first client120a, the first user145amay create the first module116aand/or the second module116b. The first module116aand the second module116bmay each include the configurations that may be applied to the information technology infrastructure130to achieve, at least partially, an information technology objective such as, for example, support for a software application, a multi-tier software application, self-service clusters, software demonstrations, disposable environments (e.g., production environments, staging environments, and/or the like), software defined networking, resource schedulers, multi-cloud deployment, and/or the like. By publishing the first module116aand/or the second module116b, the first user145amay render the first module116aand/or the second module116bavailable for use by other users including, for example, the second user145bat the second client120b. The information technology infrastructure controller110may respond to the first indication by at least storing, in the module registry115, a first version of the infrastructure module pulled from the version controller140(354). Furthermore, the information technology infrastructure controller110may respond to the first indication by at least registering, at the version controller140, a webhook configured to post, to the information technology infrastructure controller110, a notification when a second version of the infrastructure module is committed to the version controller140(356). In some example embodiments, the first module116aand/or the second module116bmay be published via the version controller140.
Publishing the first module116aand/or the second module116bvia the version controller140may include registering, at the version controller140, a webhook (e.g., a hypertext transfer protocol (HTTP) callback) configured to post, to the information technology infrastructure controller110, a notification whenever a different version of the first module116aand/or the second module116bis committed to the version controller140. As such, the information technology infrastructure controller110may be able to update the module registry115whenever, for example, the first user145acreates another version of the first module116aand/or the second module116b. The update to the module registry115may include, for example, incrementing the version number associated with the first module116aand/or the second module116b. Moreover, the second user145bmay have access to multiple versions of the first module116aand/or the second module116bincluding, for example, the most recent versions of the first module116aand/or the second module116b. It should be appreciated that the module registry115may be associated with the organization155such that only users from the organization155may have access to the module registry115, for example, to publish modules, consume modules, and/or the like. As such, the first user145amay be able to publish the first module116aand/or the second module116bonly if the first user145ais associated with the organization155. Alternatively and/or additionally, access to the module registry115may be role and/or permission based such that the first user145amay publish the first module116aand/or the second module116bto the module registry115only if the first user145ais associated with the appropriate role and/or permissions. The information technology infrastructure controller110may receive, from the second user145bat the second client120b, a second indication selecting the first version and/or the second version of the infrastructure module (358).
The information technology infrastructure controller110may respond to the second indication by at least sending, to the second client120b, the first version and/or the second version of the infrastructure module for insertion into a configuration file being created at the second client120b(360). For example, while creating the second configuration file125bat the second client120b, the second user145bmay select to add, to the second configuration file125b, the first module116aand/or the second module116b. The second user145bmay select to add the first module116aand/or the second module116binstead of and/or in addition to creating the corresponding configurations. The first module116aand/or the second module116bmay be added to the second configuration file125bin order to achieve, at least partially, an information technology objective such as, for example, support for a software application, a multi-tier software application, self-service clusters, software demonstrations, disposable environments (e.g., production environments, staging environments, and/or the like), software defined networking, resource schedulers, multi-cloud deployment, and/or the like. As noted, the module registry115may be associated with the organization155such that only users from the organization155may have access to the module registry115, for example, to publish modules, consume modules, and/or the like. As such, the second user145bmay access the module registry115to consume the first module116aand/or the second module116bonly if the second user145bis associated with the organization155. Alternatively and/or additionally, access to the module registry115may be role and/or permission based such that the second user145bmay access the module registry115to consume the first module116aand/or the second module116bonly if the second user145bis associated with the appropriate role and/or permissions.
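The organization- and role-scoped access control described above can be sketched as follows. This is a minimal illustration under stated assumptions: `AccessControlledRegistry`, the user dictionaries, and the per-module role sets are hypothetical, and real systems would authenticate users rather than trust a dictionary.

```python
class AccessControlledRegistry:
    """Module registry scoped to one organization, with per-module consumer roles."""

    def __init__(self, organization: str):
        self.organization = organization
        self._modules = {}  # module name -> set of roles allowed to consume it

    def publish(self, user: dict, name: str, consumer_roles: set) -> None:
        # Only users associated with the organization may publish.
        if user["organization"] != self.organization:
            raise PermissionError("only users in the organization may publish")
        self._modules[name] = set(consumer_roles)

    def consume(self, user: dict, name: str) -> str:
        # Only users associated with the organization, holding an allowed role,
        # may consume the module.
        if user["organization"] != self.organization:
            raise PermissionError("only users in the organization may consume")
        if user["role"] not in self._modules[name]:
            raise PermissionError("user's role may not consume this module")
        return name


registry155 = AccessControlledRegistry("org-155")
publisher = {"organization": "org-155", "role": "admin"}
consumer = {"organization": "org-155", "role": "developer"}
registry155.publish(publisher, "consul", consumer_roles={"admin", "developer"})
```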
The role and/or permissions associated with the second user145bmay further determine whether the second user145bis able to consume certain modules from the module registry115. For example, the role and/or permissions associated with the second user145bmay allow the second user145bto consume the first module116abut not the second module116b. The information technology infrastructure controller110may provide, to the first client120a, a third indication of the first version and/or the second version of the infrastructure module being selected for insertion into the configuration file (362). In some example embodiments, the information technology infrastructure controller110may generate and/or update a user interface to display, at the first client120a, an indication that the first module116aand/or the second module116bhave been selected for insertion into the second configuration file125b. For example, the indication may identify the second user145bat the second client120bas having selected the first module116aand/or the second module116b. Moreover, if the second configuration file125bis merged into the first workspace165a, the indication from the information technology infrastructure controller110may further identify the first workspace165aas having the first module116aand/or the second module116b. FIGS.4A-Ndepict examples of user interfaces for creating and configuring a workspace, in accordance with some example embodiments. Referring toFIGS.1A-Cand4A-N, the first user145aat the first client120aand/or the second user145bat the second client120bmay interact with the user interfaces shown inFIGS.4A-Nto create and/or configure the first workspace165a, the second workspace165b, and/or the third workspace165c. Referring toFIG.4A, a new workspace may be created by clicking on a tab2270.
Alternatively and/or additionally, a workspace may also be imported by clicking on a second tab2420shown inFIG.4B, which may trigger the migration of legacy environments to a new organization while preserving their existing state and settings. A workspace may also be created using a configuration designer3010shown inFIGS.4H-J. Referring again toFIG.4B, a workspace name may be entered in a field2440. The workspace name may be unique and selected by combining one or more distinguishing attributes including, for example, the resources being managed, the environment in which the resources run, the region into which the resources are provisioned, and/or the like. The user interface shown inFIG.4Bmay include a selection of sources2460, which may include, for example, the version controller140. Meanwhile, as shown inFIGS.4B-C, the user interface may further provide a selection of repositories2520at, for example, the version controller140from which the information technology infrastructure controller110may pull the first configuration file125aand/or the second configuration file125b. Referring toFIG.4D, the first user145aat the first client120aand/or the second user145bat the second client120bmay select a directory2620in which the information technology infrastructure controller110may run the execution plan190. The directory2620may be specified as a relative path from a root of the repository2510and set to a subdirectory matching a particular environment (e.g., production, staging, development, and/or the like) if multiple environments exist within the same repository2510. A version control branch2640of the repository for the workspace may also be selected, which may refer to a production branch, a staging branch, and/or a development branch.
Furthermore, the first user145aat the first client120aand/or the second user145bat the second client120bmay indicate, by selecting a box2660, whether to recursively clone all of the submodules within the repository2510when fetching a configuration. Referring toFIGS.4E-G, a workspace may include different types of variables. For example, the workspace may include variables defining the parameters for a configuration and environment variables affecting a behavior of the information technology infrastructure controller110. Alternatively and/or additionally, the workspace may include shell environment variables used, for example, by the first provider150aand/or the second provider150b, for credentials and/or other data. If a required variable is missing, an execution plan in the workspace may fail and a corresponding run log may be updated accordingly. Variables in the workspace may be identified in any manner including, for example, by reviewing programming code and/or documentation. Variables in a workspace may be edited via the user interface shown inFIGS.4E-Gand/or via an application programming interface (API). Variables may also be uploaded via, for example, the first configuration file125aand/or the second configuration file125b. For large quantities of complex variables, a command line interface (CLI) tool may be used to update the variables in the workspace using a local variables file. FIGS.4E-Gfurther depict how the variables of the workspace may be edited using a first button2710and/or a second button2740. In particular,FIGS.4F-Gdepict examples of the user interface when a first variable2810and a second variable2910are in an editing mode. New variables may also be added by completing the field2890and/or the field2990before clicking the button2830and/or the button2930. Variables may also be removed by clicking on the button2820and/or the button2920.
Where the field2860and/or the field2960contain sensitive values (e.g., passwords, keys, and/or the like), these values may be securely stored by checking the box2840and/or the box2940. It should be appreciated that marking a variable as sensitive may limit how the first user145aat the first client120aand/or the second user145bat the second client120bmay interact with the variable. For example, no user including, for example, the user who created and/or modified the variable, may view and/or modify the value of the variable, whether displayed in a user interface and/or retrieved via an application programming interface (API). Instead, modifying a sensitive variable may require deleting the existing variable and creating a new variable. The values of at least some variables may be encrypted prior to being stored, for example, as part of the workspace. FIG.4Hdepicts the configuration designer3010used to outline a configuration for a new workspace, which may include selecting, from the module registry115, the first module116a, the second module116b, and/or the like. The variables of the selected module may be listed as a fillable hypertext markup language (HTML) form, with a helper interface for finding interpolatable values. Once completed, the configuration designer3010may return the first configuration file125aand/or the second configuration file125b, which may subsequently be merged into the workspace. To select and/or add, for example, to the second configuration file125b, the first module116aand/or the second module116b, the second user145bat the second client120bmay navigate to a list of modules using the button3085. The “Select Modules” page3000may display a filterable and/or searchable list3030of at least a portion of the modules available from the module registry115. Any quantity of modules from the filterable and/or searchable list3030may be added by clicking the button3020. List3040may display a list of the selected modules.
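The sensitive-variable handling described above — write-only values that cannot be viewed or edited and must be deleted and recreated to change — might be modeled as follows. This is an illustrative Python sketch, not the actual implementation; base64 stands in for real encryption, and all class and method names are assumptions:

```python
import base64

class VariableStore:
    """Toy model of sensitive-variable handling: a value marked sensitive
    is stored in encoded form, can never be read back through get(), and
    cannot be overwritten in place - it must be deleted and recreated."""

    def __init__(self):
        self._vars = {}  # name -> (stored_value, sensitive_flag)

    def set(self, name, value, sensitive=False):
        # Modifying an existing sensitive variable in place is refused.
        if name in self._vars and self._vars[name][1]:
            raise PermissionError(
                "delete and recreate to change a sensitive variable")
        stored = base64.b64encode(value.encode()).decode() if sensitive else value
        self._vars[name] = (stored, sensitive)

    def get(self, name):
        stored, sensitive = self._vars[name]
        # Sensitive values are never returned, even to their creator.
        return "<sensitive - write only>" if sensitive else stored

    def delete(self, name):
        del self._vars[name]
```

In this sketch, even the user who created the variable receives only a placeholder when reading it, matching the restriction described above.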
By default, selecting a module may add the most recent version of the module, for example, to the second configuration file125b. A different version of the module may be selected by clicking on the module's version number3050in the list3040. The “Set Variables” page3015shown inFIG.4Jmay be accessed by clicking the button3070. The “Set Variables” page3015may display a variables list3090for the module3082selected from the list3080. The variables list3090of the module3082may be viewed by clicking the button3084. Each variable may be labeled as required or optional. Once a value is set for all of a module's required variables, the button3084may change to a “configured” button. When all modules are configured,FIG.4Ishows that the finished configuration may be viewed by clicking the button3086. As shown inFIG.4J, one user may delegate the setting and/or modifying of a variable in a module to another user by selecting a “deferred” checkbox, which ties the value of the variable to a new top-level variable having no default value. Anyone creating a workspace using the module may have an opportunity to provide a value for the delegated variable. Once complete, the first configuration file125aand/or the second configuration file125bmay be viewed by clicking the button3086. The corresponding code may be copied into a text editor, saved as a main.tf file in a new directory, and committed to the version controller140to enable subsequent merging into a corresponding workspace. Additional changes to the first configuration file125aand/or the second configuration file125bmay be made without selecting and adding existing modules from the module registry115. Referring toFIG.4K, only team420with administrative access for a workspace may make changes to settings of the workspace. The information technology infrastructure controller110may be configured to automatically apply the execution plan190by selecting the auto apply option.
When the auto apply option is selected, the information technology infrastructure controller110may automatically apply, to the information technology infrastructure130, the configurations associated with the execution plan190, for example, when the execution plan190is successfully validated. By contrast, if the manual apply option is selected, the first user145aat the first client120aand/or the second user145bat the second client120bmay be required to provide a confirmation before the configurations associated with the execution plan190are applied to the information technology infrastructure130. Referring toFIG.4L, if a key is required for the repository linked to the workspace, then a unique identifier associated with the key may be selected in the field3470. Clicking the button3410may update the key, which may be modified by the first user145aand/or the second user145bif the first user145aand/or the second user145bhave administrative access. The information technology infrastructure controller110may use the key for cloning modules used during one or more runs of the execution plan190. As used here, the key may refer to, for example, a secure shell (SSH) key. A workspace may be locked and/or unlocked by the first user145aand/or the second user145bif the first user145aand/or the second user145bhas write access or administrative access. Locking a workspace may prevent users with write access from manually queuing runs, prevent automatic runs due to the first configuration file125aand/or the second configuration file125bbeing committed to the version controller140, prevent creation of runs via an application programming interface (API), prevent creation of runs using a command line interface (CLI) tool, and/or the like. To enable runs, the workspace may be unlocked via the toggle button3430. In some example embodiments, having administrative access may enable the first user145aand/or the second user145bto delete a workspace. 
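The workspace locking behavior described above — a locked workspace rejecting new runs from every source (manual queueing, version-control commits, the API, and the CLI) until unlocked — can be sketched in a few lines of Python. The class and method names are illustrative assumptions, not the actual controller interface:

```python
class Workspace:
    """Sketch of workspace locking: while locked, a workspace refuses to
    queue runs regardless of their source; unlocking re-enables runs."""

    def __init__(self, name):
        self.name = name
        self.locked = False
        self.queue = []

    def lock(self):
        self.locked = True

    def unlock(self):
        self.locked = False

    def queue_run(self, source):
        # Every run source (UI, VCS, API, CLI) is gated by the same lock.
        if self.locked:
            raise RuntimeError(
                f"workspace {self.name} is locked; run from {source} rejected")
        self.queue.append(source)
```

A user with write or administrative access would toggle `lock()`/`unlock()`, corresponding to the toggle button3430described above.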
Before deleting the workspace, the first user145aand/or the second user145bmay set the environment variable “CONFIRM_DESTROY” to “1” for the workspace and queue a destroy plan. Queueing a destroy plan may destroy the resources in the information technology infrastructure130managed by the workspace. It should be appreciated that resources must be destroyed before deleting a corresponding workspace. Otherwise, if the resources are not destroyed before deleting the workspace, these resources may become unmanaged and may require destruction by the first provider150aand/or the second provider150b. In some example embodiments, the information technology infrastructure controller110may store one or more keys (e.g., secure shell (SSH) keys) such that the keys may be used when cloning modules from a server that requires credentials such as, for example, the version controller140. The information technology infrastructure controller110may manage the keys used to clone modules at the organization level and may allow multiple keys to be associated with, for example, the organization155. Keys may be added or deleted via organizational settings. Once a key is uploaded, the text of the key may be hidden from the first user145aat the first client120aand/or the second user145bat the second client120b. The first user145aand/or the second user145bmay set up an organizational key using, for example, the user interface shown inFIG.4M. For example, as shown inFIG.4M, to add a key (e.g., a secure shell (SSH) key) to the information technology infrastructure controller110, the first user145aand/or the second user145bmay obtain a key pair that the information technology infrastructure controller110may use to download one or more modules (e.g., the first module116a, the second module116b, and/or the like) during the running, for example, of the execution plan190.
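The destroy-before-delete rule described above — requiring the CONFIRM_DESTROY environment variable and leaving no managed resources behind — can be expressed as a simple guard. This Python sketch uses a plain dictionary and hypothetical field names purely for illustration:

```python
def delete_workspace(workspace):
    """Guard reflecting the rule above: CONFIRM_DESTROY must be "1" and
    all managed resources must already be destroyed before the workspace
    itself may be deleted; otherwise the resources would become
    unmanaged. Field names are illustrative assumptions."""
    if workspace.get("env", {}).get("CONFIRM_DESTROY") != "1":
        raise RuntimeError("set CONFIRM_DESTROY=1 and queue a destroy plan first")
    if workspace.get("managed_resources"):
        raise RuntimeError("destroy managed resources before deleting the workspace")
    workspace["deleted"] = True
    return workspace
```

Attempting to delete a workspace that still manages resources, or one whose CONFIRM_DESTROY variable is unset, fails in this sketch just as the text describes.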
A key pair may be created using the following command: ssh-keygen -t rsa -f “/Users/<NAME>/ssh/service_tfe” -C “service_terraform_enterprise” The command above may create a service_tfe file with a private key as well as a service_tfe.pub file with the public key. The public key may be distributed, for example, to the version controller140. Meanwhile, as shown inFIG.4M, a unique identifier for the private key1040may be entered in the field3570and the text of the private key1040may be entered in the field3520before the private key1040may be added by clicking the “Add Private SSH Key” button. Upon adding the private key1040, the key may appear in the list of keys3740, which may list the private key1040using the unique identifier1062of the private key1040. While the information technology infrastructure controller110may retain the text of the private key1040, the text of the private key1040may remain hidden from the first user145aand/or the second user145b. Furthermore,FIGS.4N-M show that to delete a key (e.g., a secure shell (SSH) key), the first user145aand/or the second user145bmay replace the key in the field3470in the workspace settings of the workspaces that use the key with another key. The first user145aand/or the second user145bmay further click the “Destroy” button next to the key's unique identifier in the list of keys3740. As noted, the first user145aand/or the second user145bmay have access (e.g., read access, write access, administrative access, and/or the like) to a workspace by being associated with a team having access to the workspace. As shown inFIG.4M, a team may be accorded access to the workspace by being added to the workspace and by setting the access privileges associated with the team. Each workspace may be associated with at least one team (e.g., the team3260) that has full access to the workspace including, for example, read access, write access, administrative access, and/or the like.
Removing a team from a workspace may remove the team's access to the workspace. Referring now toFIG.5A, all users in the organization155(e.g., the first user145a, the second user145b, and/or the like) may view the module registry115, which may be associated exclusively with the organization196. Alternatively, one or more of the modules in the module registry115may only be visible to some users or groups of users within the organization155but remain hidden from other users or groups of users within the organization155. In some example embodiments, a workspace associated with the organization155may only be permitted to use modules associated with the organization155. For a module to be available to users from more than one organization, the same module may be added to the module registry of each organization. As shown inFIG.5A, a list of available modules for an organization may be accessed by clicking the button4020in the main navigation bar. The modules page4000may list the available modules for an organization. The drop-down4010may be used to filter the list to show modules for one or more selected providers. The field4050may be used to enter a search for modules by keyword. The details associated with a module may be viewed by clicking the button4060. The dropdown4070may be used to switch between different versions of the same module. Selecting the tabs4110may provide detailed documentation for each version of a module. Any module from an organization's module registry may be added to a configuration file such as, for example, the first configuration file125a, the second configuration file125b, and/or the like. Table 6 below depicts the syntax for referencing modules in source attributes. TABLE 6<TFE HOSTNAME>/<TFE ORGANIZATION>/<MODULE NAME>/<PROVIDER> As noted, the module registry115may allow a corresponding organization to publish configuration modules for consumption by users across an organization155.
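The source-attribute syntax in Table 6 can be demonstrated with a short helper that assembles the four path components. The hostname, organization, module, and provider values below are hypothetical examples:

```python
def module_source(hostname, organization, module_name, provider):
    """Builds a registry source attribute following the Table 6 syntax:
    <TFE HOSTNAME>/<TFE ORGANIZATION>/<MODULE NAME>/<PROVIDER>."""
    return "/".join([hostname, organization, module_name, provider])

# Illustrative values only - not taken from the figures.
src = module_source("tfe.example.com", "example-org", "network", "aws")
# src == "tfe.example.com/example-org/network/aws"
```

A configuration file would reference the module by placing this string in its source attribute.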
The privilege to publish, for example, the first module116aand/or the second module116bto the module registry115may be limited to certain users and/or certain teams within the organization155. Once the first module116aand/or the second module116bis published to the module registry115, the version controller140may be configured to manage the release of new versions of the first module116aand/or the second module116b. The first module116aand/or the second module116bmay be published to the module registry115of the organization155by at least providing the name of a corresponding repository to the information technology infrastructure controller110. The module registry115may use the name of the repository to determine a name and a provider for each of the first module116aand/or the second module116b. The module registry115may further use the repository's tags to identify one or more available versions of the first module116aand/or the second module116b. Furthermore, the module registry may format documentation for each version of the first module116aand/or the second module116bbased on the corresponding README and/or configurations in the repository. A new version of the first module116aand/or the second module116bmay be released by pushing a new tag to its repository. The module registry115may be updated automatically, for example, to include new versions of the first module116aand/or the second module116b. Consumers of a module do not need access to its source repository; the module registry115may handle downloads and may further use application programming interface (API) tokens associated with the information technology infrastructure controller110to control access to the module registry115and/or any infrastructure modules at the module registry115. Modules can be shared by multiple organizations by sharing the underlying VCS repository. Each organization155is granted access to the module's repository, and the module is then added to each organization's module registry.
When tags are pushed to publish new module versions of the modules, all organizations' registries will update appropriately. In some example embodiments, a module repository may reside at the version controller140while the information technology infrastructure controller110may have access (e.g., administrative access) to that repository. Since the module registry115may rely on a webhook to import new versions of the first module116aand/or the second module116bfrom the version controller140, the information technology infrastructure controller110may be required to have sufficient access privileges to create the webhook. The first module116aand/or the second module116bmay be required to conform to a standard structure to enable the module registry115to perform inspection, generate documentation, track resource usage, and/or the like. As shown inFIG.5A, the first module116aand/or the second module116bmay be published by clicking the button4030on the modules page4000, selecting the version controller140from the list of VCS providers4230, entering the name of the repository, and clicking on the button4210. A new version of the first module116aand/or the second module116bmay be added by pushing a new version tag to a corresponding repository at the version controller140. Pushing the new version tag (e.g., v1.0.4 and 0.9.2) may cause the module registry115to automatically import the new version of the first module116aand/or the second module116b. The module registry115may be configured to import new versions of the first module116aand/or the second module116bautomatically, for example, when new versions of the first module116aand/or the second module116bare detected at the VCS providers4230. Alternatively and/or additionally, the module registry115may interact with the VCS provider4230periodically to determine whether new versions of the first module116aand/or the second module116bhave been added to the VCS provider4230.
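The tag-driven import described above — a pushed version tag such as v1.0.4 or 0.9.2 causing the registry to import a new module version automatically — might be sketched as a small webhook handler. The class, tag format, and method names are illustrative assumptions:

```python
import re

class ModuleRegistry:
    """Toy webhook handler: a pushed version tag (e.g., "v1.0.4" or
    "0.9.2") is imported as a new module version; non-version tags
    (e.g., branch names) are ignored."""

    TAG = re.compile(r"^v?(\d+)\.(\d+)\.(\d+)$")

    def __init__(self):
        self.versions = {}  # module name -> sorted list of version tuples

    def on_tag_pushed(self, module, tag):
        match = self.TAG.match(tag)
        if not match:
            return False  # not a version tag; nothing to import
        self.versions.setdefault(module, []).append(tuple(map(int, match.groups())))
        self.versions[module].sort()
        return True

    def latest(self, module):
        return ".".join(map(str, self.versions[module][-1]))
```

In this sketch a periodic poll of the VCS provider could call `on_tag_pushed` for each newly discovered tag with the same effect as the webhook.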
Referring toFIG.5B, the first module116a, the second module116b, and/or any version thereof may be deleted by navigating to the module's details page. Each module's details page may include the button4120, which may be used to delete a version of the module and/or the module in its entirety. For example, a single version of a module may be deleted by selecting the version of the module to be deleted and then clicking on the button4120. FIGS.6A-Cdepict examples of user interfaces for interacting with runs within a workspace. Referring toFIG.6A, all runs of the execution plan190may be performed within at least one workspace such as, for example, the first workspace165a, the second workspace165b, and/or the third workspace165c. The first workspace165a, the second workspace165b, and/or the third workspace165cmay provide the state526, the variables3750, and the variables3760required for the run. The first workspace165a, the second workspace165b, and/or the third workspace165cmay further specify the sources810and820of the configuration. Each workspace may include a button3755to start a run, a link3765to the full list of runs, and a link3775to the most recent active run or the last completed run. It should be appreciated that the most recent active run may not be the most recently initiated run because pending runs may remain inactive until the completion of a current run. Runs may be processed one at a time in order and only one active run may be permitted for each workspace. Whenever a new run is initiated, the run may be added to an end of a run queue. When a run is in progress, the new run may be held in abeyance until the current run is complete. The runs page may display the run name3720, the identity of the user who initiated the run3710, and the source of the run start3730(e.g., version controller140, the information technology infrastructure controller110, and/or the like).
The run page may also display the name of the branch3770, the code commit for the run3760, and the status of the run3750. As noted, a run may be started via the information technology infrastructure controller110or the version controller140. Alternatively, a run may also be created via a command line interface (CLI) and initiated via a user interface. When a run is initiated, the information technology infrastructure controller110may lock the run, for example, to the first configuration file125a. Any subsequent changes, for example, from the second configuration file125b, may apply to future runs but not the runs that are already in progress (e.g., pending, planning, or awaiting to be applied to the information technology infrastructure130). The information technology infrastructure controller110may be configured to initiate a run for the execution plan190automatically. Whenever a new commit is detected at the version controller140, the information technology infrastructure controller110may respond by queuing a corresponding plan. The first user145aand/or the second user145bmay also queue a plan, for example, after editing one or more variables associated with the first workspace165a, the second workspace165b, and/or the third workspace165c. Each run of a plan may pass through several stages of action including, for example, pending, planning, policy checking, applying, complete, and/or the like. The information technology infrastructure controller110may be configured to provide an indication of the status of each run. For example, in the list of workspaces, each workspace may be shown with the status of a current run and/or the most recently completed run. FIG.6Bdepicts an individual run page configured to display the progress and outcomes of each stage of a run. 
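The run handling described above — one active run per workspace, new runs queued in order and held until the current run completes, and each run passing through stages such as pending, planning, policy checking, applying, and complete — can be sketched as a small state machine. The class and method names are illustrative assumptions, not the actual controller interface:

```python
from collections import deque

STAGES = ["pending", "planning", "policy checking", "applying", "complete"]

class WorkspaceRuns:
    """Sketch of per-workspace run handling: runs are processed one at a
    time in initiation order, and the active run advances through the
    stages of action listed above."""

    def __init__(self):
        self.pending = deque()
        self.active = None  # (run_name, stage_index)

    def initiate(self, run_name):
        # New runs join the end of the queue; only one run may be active.
        self.pending.append(run_name)
        self._promote()

    def advance(self):
        run_name, stage = self.active
        if stage + 1 == len(STAGES) - 1:
            # Run reached "complete"; the next queued run becomes active.
            self.active = None
            self._promote()
            return (run_name, STAGES[-1])
        self.active = (run_name, stage + 1)
        return (run_name, STAGES[stage + 1])

    def _promote(self):
        if self.active is None and self.pending:
            self.active = (self.pending.popleft(), 0)

    def status(self):
        return None if self.active is None else (self.active[0], STAGES[self.active[1]])
```

The `status()` method mirrors the per-workspace status indication described above: it reports the current run and its stage, while queued runs remain pending.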
The run page may show a current status of the run3820, the code commit associated with the run3810, the manner in which the run was initiated3865, when the run was initiated3885, the user3840initiating the run, a timeline of events related to the run, the output3930from the plan, and/or the like. Where a user has sufficient access privileges (e.g., write access) to a workspace, the run page may provide controls for interacting with a run while the run is in progress. For example, the run may be cancelled while the run is in progress or the execution plan190may be discarded before the execution plan190is applied during the run. One or more of the first policy175a, the second policy175b, the first quota175c, and/or the second quota175dmay also be overridden and thus excluded from the validation of the execution plan190. A user with sufficient access privileges (e.g., write access or administrative access) to the workspace may temporarily suspend the queuing of runs by at least locking the workspace. New runs may remain in a pending state until the workspace is unlocked. Current and historical state data for a workspace may be viewed from a “states” tab. Each state in the list may be associated with a run and/or a commit to the version controller140. Each state in the list may be further associated with a link to a raw state file and a delta file storing one or more differences between a current state and a previous state. A given workspace may access state data for workspaces within the same organization (e.g., the first workspace165a, the second workspace165b, and/or the third workspace165cassociated with the organization155). In some example embodiments, outputs from other workspaces may be accessed remotely, for example, by being added as a data source in the first configuration file125aand/or the second configuration file125b.
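The delta file described above stores the differences between a current state and a previous state. The comparison it implies can be sketched in Python; representing state snapshots as flat dictionaries is an illustrative simplification:

```python
def state_delta(previous, current):
    """Computes the differences between two state snapshots, in the
    spirit of the delta file described above: every key whose value
    changed (including keys added or removed) is recorded with its old
    and new values."""
    delta = {}
    for key in set(previous) | set(current):
        if previous.get(key) != current.get(key):
            delta[key] = {"old": previous.get(key), "new": current.get(key)}
    return delta
```

An unchanged resource produces no delta entry, so the delta file stays small relative to the raw state files it links between.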
In some example embodiments, the information technology infrastructure controller110may generate a token (e.g., an application programming interface (API) token) that is unique to each run. The token may be exported to the shell environment. Moreover, the token may be used to read and/or write state data for the workspace associated with the run as well as to read state data from any other workspace in the same organization155. However, a token may become invalid after a corresponding run is complete. FIG.7depicts a block diagram illustrating a computing system700consistent with implementations of the current subject matter. Referring toFIGS.1A-Cand7, the computing system700can be used to implement the information technology infrastructure controller110and/or any components therein. As shown inFIG.7, the computing system700can include a processor710, a memory720, a storage device730, and input/output device740. The processor710, the memory720, the storage device730, and the input/output device740can be interconnected via a system bus750. The processor710is capable of processing instructions for execution within the computing system700. Such executed instructions can implement one or more components of, for example, the information technology infrastructure controller110. In some implementations of the current subject matter, the processor710can be a single-threaded processor. Alternately, the processor710can be a multi-threaded processor. The processor710is capable of processing instructions stored in the memory720and/or on the storage device730to display graphical information for a user interface provided via the input/output device740. The memory720is a computer readable medium such as volatile or non-volatile memory that stores information within the computing system700. The memory720can store data structures representing configuration object databases, for example. The storage device730is capable of providing persistent storage for the computing system700.
The storage device730can be a floppy disk device, a hard disk device, an optical disk device, a tape device, a solid state device, and/or any other suitable persistent storage means. The input/output device740provides input/output operations for the computing system700. In some implementations of the current subject matter, the input/output device740includes a keyboard and/or pointing device. In various implementations, the input/output device740includes a display unit for displaying graphical user interfaces. According to some implementations of the current subject matter, the input/output device740can provide input/output operations for a network device. For example, the input/output device740can include Ethernet ports or other networking ports to communicate with one or more wired and/or wireless networks (e.g., a local area network (LAN), a wide area network (WAN), the Internet). In some implementations of the current subject matter, the computing system700can be used to execute various interactive computer software applications that can be used for organization, analysis and/or storage of data in various (e.g., tabular) formats (e.g., Microsoft Excel®, and/or any other type of software). Alternatively, the computing system700can be used to execute any type of software applications. These applications can be used to perform various functionalities, e.g., planning functionalities (e.g., generating, managing, editing of spreadsheet documents, word processing documents, and/or any other objects, etc.), computing functionalities, communications functionalities, etc. The applications can include various add-in functionalities or can be standalone computing products and/or functionalities. Upon activation within the applications, the functionalities can be used to generate the user interface provided via the input/output device740. The user interface can be generated and presented to a user by the computing system700(e.g., on a computer screen monitor, etc.).
One or more aspects or features of the subject matter described herein can be realized in digital electronic circuitry, integrated circuitry, specially designed ASICs, field programmable gate arrays (FPGAs), computer hardware, firmware, software, and/or combinations thereof. These various aspects or features can include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which can be special or general purpose, coupled to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device. The programmable system or computing system can include users and servers. A user and server are generally remote from each other and typically interact through a communication network. The relationship of user and server arises by virtue of computer programs running on the respective computers and having a user-server relationship to each other. These computer programs, which can also be referred to as programs, software, software applications, applications, components, or code, include machine instructions for a programmable processor, and can be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the term “machine-readable medium” refers to any computer program product, apparatus and/or device, such as for example magnetic discs, optical disks, memory, and Programmable Logic Devices (PLDs), used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term “machine-readable signal” refers to any signal used to provide machine instructions and/or data to a programmable processor.
The machine-readable medium can store such machine instructions non-transitorily, such as for example as would a non-transient solid-state memory or a magnetic hard drive or any equivalent storage medium. The machine-readable medium can alternatively or additionally store such machine instructions in a transient manner, such as for example, as would a processor cache or other random access memory associated with one or more physical processor cores. To provide for interaction with a user, one or more aspects or features of the subject matter described herein can be implemented on a computer having a display device, such as for example a cathode ray tube (CRT) or a liquid crystal display (LCD) or a light emitting diode (LED) monitor for displaying information to the user and a keyboard and a pointing device, such as for example a mouse or a trackball, by which the user can provide input to the computer. Other kinds of devices can be used to provide for interaction with a user as well. For example, feedback provided to the user can be any form of sensory feedback, such as for example visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input. Other possible input devices include touch screens or other touch-sensitive devices such as single or multi-point resistive or capacitive track pads, voice recognition hardware and software, optical scanners, optical pointers, digital image capture devices and associated interpretation software, and the like. The subject matter described herein can be embodied in systems, apparatus, methods, and/or articles depending on the desired configuration. The implementations set forth in the foregoing description do not represent all implementations consistent with the subject matter described herein. Instead, they are merely some examples consistent with aspects related to the described subject matter. 
Although a few variations have been described in detail above, other modifications or additions are possible. In particular, further features and/or variations can be provided in addition to those set forth herein. For example, the implementations described above can be directed to various combinations and subcombinations of the disclosed features and/or combinations and subcombinations of several further features disclosed above. In addition, the logic flows depicted in the accompanying figures and/or described herein do not necessarily require the particular order shown, or sequential order, to achieve desirable results. For example, the logic flows can include different and/or additional operations than shown without departing from the scope of the present disclosure. One or more operations of the logic flows can be repeated and/or omitted without departing from the scope of the present disclosure. Other implementations can be within the scope of the following claims. | 109,142 |
11863390 | DETAILED DESCRIPTION In at least one embodiment, a number of computing resources, such as processors (e.g., central processing units (CPUs) and graphics processing units (GPUs)), are connected using other resources, such that different instructions for a task can be performed by different computing resources, as illustrated by configuration100ofFIG.1. In at least one embodiment, a first set of processors102,104,106,108can be connected to a second set of processors126,128,130,132by a set of network switches110,112,114,116,118,120,122,124. In at least one embodiment, these switches may connect each processor with every other processor, or any subset of processors (or other interconnected nodes, such as network interface cards (NICs)) with another subset of processors, according to a relevant network or system architecture. In at least one embodiment, there may be a complex task to be performed that requires use of multiple processors, compute nodes, or components, such as to distribute or offload a portion of this processing. In at least one embodiment, such a task may relate to, without limitation, training of a neural network, inferencing using a neural network, graphics rendering, medical image segmentation, content synthesis, or computer vision. In at least one embodiment, processor102might send data, instructions, or communications to processor128using at least one of a first level of switches110,112,114,116and at least one of a second set of switches118,120,122,124. In at least one embodiment, such sending may relate to an offloading of a portion of a processing task, such as a CPU sending instructions to be performed by a GPU. In at least one embodiment, these resources may be allocated to different tenants (e.g., users, entities, customers, or applications) at different times, or shared among various tenants.
In at least one embodiment, a tenant may need or request to execute a job or perform a task in a confidential or secure compute environment, where one or more specific processors are to send information over specific switches to be received by one or more other specific processors (or nodes, etc.). In at least one embodiment, such an operation may correspond to a reduction operation, where a number of processors (e.g., 10 processors) send data where operations are to be performed that result in a single set of data being received at a single destination processor (or smaller set of processors). In at least one embodiment, this may involve a designated path140from these sending processors102,104,106,108to a recipient or target processor128. In at least one embodiment, this path can thus correspond to a reduction tree, where a number of nodes involved at each level (or at least certain levels) is decreased, or reduced. In at least one embodiment, this path may be selected to include specific switches112,116,122that help to ensure that any data sent along this path is not accessible by another tenant or entity, and that confidentiality and security are maintained. In at least one embodiment, this may involve being able to attest to a correct reduction tree being implemented before performing a task or sending data using nodes according to that reduction tree. In at least one embodiment, it can be advantageous to provide such attestation ability without having to expose and update network configuration information (e.g., tree topology) to individual tenants, or requiring those tenants to have experience and capacity needed to perform such attestation. In at least one embodiment, a path such as a reduction tree200for a task may be selected or allocated as illustrated inFIG.2A.
In at least one embodiment, an approach can be taken to ensure that each relevant switch112,116,122has accurate participant data such that data or communications are correctly routed between sending processors102,104,106,108and a recipient processor128for this respective tree. In at least one embodiment, a test can be run before performance of a task associated with a tree, such as reduction tree200, to verify that reduction tree200includes correct participants, or participating nodes. In at least one embodiment, participating nodes on a tree can run a test to verify that correct participants are included, and only those correct participants. In at least one embodiment, such an approach does not require knowledge of tree topology, or attesting tree configuration in every single tree. In at least one embodiment, such a reduction tree200can be used for shared processing among processors (or processing units or cores) of one or more types. In at least one embodiment, part of this processing is offloaded onto switches112,116,122that can perform operations such as reductions. In at least one embodiment, reductions are mathematical operations that are performed as data is passed through a network. In at least one embodiment, operations will be performed on this data at one or more levels202,204(or other logical groupings) of switches. In at least one embodiment, a reduction performed at each level of switches can be defined or determined, at least in part, by an operation to be performed, such as an addition or subtraction, as well as this reduction tree itself, which is a list of switches to be participating in this operation (e.g., switches112,116,122as illustrated inFIG.2A, from a set of possible switches such as those illustrated inFIG.1). 
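The in-network reduction described above can be sketched in a few lines. This is an illustrative model only, not the patented implementation: a tree is encoded as a list of switch levels, and each "switch" applies a single mathematical operation (here, addition) to the inputs it receives, mirroring switches 112/116 at the first level and switch 122 at the second.

```python
# Hypothetical sketch of a two-level reduction tree; names and the tree
# encoding are assumptions made for illustration.
from functools import reduce

def switch_reduce(inputs, op):
    """A 'switch' combines all of its inputs with one mathematical operation."""
    return reduce(op, inputs)

def run_reduction_tree(tree, leaf_data, op):
    """tree: list of levels; each level is a list of tuples of indices into
    the previous level's outputs (or into leaf_data for the first level)."""
    outputs = leaf_data
    for level in tree:
        outputs = [switch_reduce([outputs[i] for i in group], op) for group in level]
    return outputs

# Processors 102,104,106,108 each contribute a value; switch 112 reduces the
# first pair, switch 116 the second, and switch 122 reduces those results.
tree = [[(0, 1), (2, 3)],   # level one: switches 112 and 116
        [(0, 1)]]           # level two: switch 122
result = run_reduction_tree(tree, [1, 2, 3, 4], lambda a, b: a + b)
# result == [10], the single reduced value delivered to the target processor
```

The operation passed in (addition here) corresponds to the operation that, together with the list of participating switches, defines the reduction.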
In at least one embodiment, each switch112,116at a first switch level202may perform an operation on data from at least two processors, such as switch112operating on data from processors102,104and switch116operating on data from processors106,108. In at least one embodiment, each of these switches112,116will then output a result of a respective operation, which in this example are then provided as input to switch122at a next level of switches204in reduction tree200. In at least one embodiment, there may be other numbers of switches in a given level, other numbers of levels, other numbers of processors providing data to switches, and other tree configurations as well. In at least one embodiment, a result of an operation performed by switch122can then be provided as input to target destination processor128. In at least one embodiment, if such a reduction tree is to be used to support confidential compute, then it can be necessary to ensure that proper nodes are included in this reduction tree to prevent inadvertently exposing data from one tenant to another through use of an improper path or incorrect node(s). In at least one embodiment, inclusion of improper nodes may also enable one tenant to be able to affect an integrity of data of another tenant, even if that data is encrypted. In at least one embodiment, a tenant may also want to be able to verify that an operation was performed as requested, using appropriate resources, including data from correct processors or sources. In at least one embodiment, a test can be performed before any operation is performed using a specific path or tree of resources. In at least one embodiment, such a test can verify a configuration of switches through an application that is performing an operation, such as a reduction. 
In at least one embodiment, a successful test can prove, with sufficient strength or confidence, that these switches are configured correctly and that correct operations are being performed among members of this path or tree. In at least one embodiment, each member to at least one tree or path can be given a unique identifier or identity. In at least one embodiment, reference numbers for these nodes will be used for simplicity, but it should be understood that unique identifiers may be much more complex in order to improve security, such as may correspond to 128- or 256-bit alphanumeric strings that may be generated randomly, pseudo-randomly, or according to a determined process, where each identity is guaranteed to be unique among these members. In at least one embodiment, any type of secret may be distributed amongst these members, where a secret is only known to a member to which that secret is assigned, as well as any entity or component that is to validate or authenticate that secret. In at least one embodiment, secrets for member nodes to a reduction tree can be known by an initiator of this test. In at least one embodiment, operations can be performed by individual nodes during this test, whereby these secrets are placed into a result of these operations, and placed in a specific order, which then allows a recipient of this result to verify not only which members processed data for this test, but also a relative order in which these members processed this data. In at least one embodiment, an example test flow of secrets230is illustrated inFIG.2B. In at least one embodiment, switch112in a first layer202of switches will receive data from processor102and processor104. In at least one embodiment, if these values correspond to secrets for these members, then switch112can concatenate these values to a value, such as “102:104” which indicates that this switch received and processed data from these two processors. 
In at least one embodiment, switch116will similarly receive data from processors106and108, and can concatenate these values into a string “106:108”. In at least one embodiment, this data can be concatenated in different ways, and may include data received from these members as well. In at least one embodiment, data from switches112and116will then be received by switch122in second layer of switches204. In at least one embodiment, switch122can concatenate received strings from these switches into “102:104:112:106:108:116”, which indicates that data from processors102and104was received by switch112, and data from processors106and108was received by switch116. In at least one embodiment, switch122can then pass this data along to target processor128at a final level232with a final string value that also concatenates its own secret into this string, arriving finally at “102:104:112:106:108:116:122” which indicates that switch122was last to concatenate data and provide this data to processor128. In at least one embodiment, it can be seen that such concatenation of secret values (e.g., member identifiers) can be used to not only determine which nodes or members participated in this test, and are thus included in this reduction tree or path, but also enables an initiator of this test to determine an ordering of this concatenation, which helps to rebuild this reduction tree and determine that these members or nodes are connected in a correct order or arrangement. In at least one embodiment, other mathematical operations can be performed as well, as may relate to AND, OR, addition, multiplication, hashes, or other such operations. In at least one embodiment, such variations can produce results in other forms as well, such as (without limitation) a single numerical value, list of secrets, or hash of concatenated secrets. 
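The secret-concatenation flow of FIG. 2B can be sketched directly, using the reference numbers as stand-in secrets (real secrets would be long unique strings). Each switch joins what it receives and appends its own secret before forwarding:

```python
# Illustrative sketch only; the helper name and ':' delimiter are assumptions.
def concat_at_switch(received, switch_secret):
    """A switch concatenates received values, then appends its own secret."""
    return ":".join(received + [switch_secret])

# Level one: switch 112 handles processors 102/104, switch 116 handles 106/108.
s112 = concat_at_switch(["102", "104"], "112")   # "102:104:112"
s116 = concat_at_switch(["106", "108"], "116")   # "106:108:116"

# Level two: switch 122 combines both branch results and appends its secret,
# then forwards the final string to target processor 128.
final = concat_at_switch([s112, s116], "122")
# final == "102:104:112:106:108:116:122"
```

Because each member appends its secret only after processing its inputs, the final string encodes both the set of participants and the order in which they handled the data.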
In at least one embodiment, these values can also be authenticated and encrypted with digital keys at these various levels, in order to further protect this data from unintended access or exposure. In at least one embodiment, members for such a test can include any type of end node in such an environment, as may include a CPU, GPU, or NIC. In at least one embodiment, an initiator of this test can configure an operation where this initiator receives these responses. In at least one embodiment, each end node to this test will initiate a message, and these messages will be reduced (e.g., concatenated, merged, or combined in some way) at one or more levels of switches or other intermediate nodes. In at least one embodiment, each message sent by a member or node will include a unique message that includes its own secret, such that a result will include an ordered list or string of secrets, or at least one value from which that order of secrets can be determined. In at least one embodiment, a test may not include information about intermediate nodes, but may simply care that information from correct initiating nodes was used, and that this information was processed in a correct order. In at least one embodiment, for a flow260ofFIG.2C, this may instead result in a final string of “102:104:106:108” which shows a correct combination and ordering of end nodes, but without information about any intermediate switches. In at least one embodiment, this may correspond to a situation where a test initiator does not care which intermediate switches were used, as long as a correct flow of reductions was performed on a correct set of end node data. In at least one embodiment, such an approach may be utilized over an approach that verifies which switches are utilized as an initiator may be completely (or at least substantially) indifferent as to a topology of a system, and may not care about which switches are used as long as operations are performed in a correct order. 
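One way to realize the hashed and authenticated variants mentioned above is to chain a keyed hash at each hop instead of forwarding raw secrets. This sketch is an assumption for illustration (HMAC-SHA256 with hypothetical per-member keys), not the patented mechanism: the final digest fixes both membership and ordering, and the initiator, who knows all member keys, can recompute it for comparison without secrets ever traveling in the clear.

```python
import hashlib
import hmac

def chain_hmac(running, member_key):
    """Fold one member into the running result with a keyed hash."""
    return hmac.new(member_key, running, hashlib.sha256).digest()

# Hypothetical per-member keys, known only to each member and the initiator.
members = ["102", "104", "112", "106", "108", "116", "122"]
keys = {m: m.encode() * 4 for m in members}

digest = b""
for member in members:          # same ordering as the reduction tree
    digest = chain_hmac(digest, keys[member])

# The initiator recomputes the identical chain from its copy of the keys; a
# match attests to both the participant set and their ordering.
```

Reordering any two members, or substituting a key, changes every subsequent link in the chain, so a mismatch localizes to "wrong participants or wrong order" without revealing which secret differed.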
In at least one embodiment, such an approach may also be beneficial as this topology may then be modified, such as to replace or re-allocate switches, without a need to modify or update an initiator. In at least one embodiment, a system300in which various approaches described herein can be implemented is illustrated inFIG.3. In at least one embodiment, system300may correspond to, without limitation, a computer or server, a rack of servers, or a distributed set of servers. In at least one embodiment, a set of computing resources, such as a NIC304, set of CPUs306, and set of GPUs308,310,312can be connected by a set of switches314or similar networking or communication components. In at least one embodiment, various tasks for one or more users may utilize respective subsets of these computing resources, which can be connected by appropriate selections and subsets of switches314. In at least one embodiment, to ensure that data and communications are secure in a multi-tenant environment, a specific selection of compute nodes and switches can be selected as part of a secure or confidential communication path, which in many instances may involve a tree structure or other multi-path structure. In at least one embodiment, such a structure may represent these compute resources as end nodes, with selected switches314as intermediate nodes. In at least one embodiment, a specific tree structure may be selected for a given task for a specific user or entity, and this tree structure may be stored to accessible memory316in, or in communication with, this system300. In at least one embodiment, this tree structure may be located in other locations as well, such as CPU memory in at least one CPU306or received to a NIC304. 
In at least one embodiment, it may be desirable to verify that this tree structure is implemented properly before performing this task, such as to verify that switches are configured properly to implement this tree structure using correct nodes, and only correct nodes, in a proper ordering or configuration. In at least one embodiment, an attestation module302, component, system, service, or process may be utilized to attest to, or verify, such a configuration before a corresponding task is performed. In at least one embodiment, a user or application may contact attestation module302to perform a test, or a compute node receiving an instruction might contact attestation module302before (or as part of) performing a task. In at least one embodiment, this can enable this test to be run on a same tree that this application is to run on, in order to verify this configuration. In at least one embodiment, an attestation process may be built into one or more of these compute nodes. In at least one embodiment, attestation module302can receive a request and can check for a required or corresponding network configuration (e.g., tree structure) to use for a given task. In at least one embodiment, attestation module302can determine compute resources corresponding to end nodes of this configuration, and can send instructions for those resources to initiate a test corresponding to this configuration. In at least one embodiment, individual nodes (e.g., compute resources and/or switches) can store respective secrets, such as an alphanumeric string that is unique to a given node at least among this set of nodes. 
In at least one embodiment, each initiating node can send its unique secret to a switch314indicated by this configuration, which can then send these secrets, along with its unique secret in at least one embodiment, to another switch314or recipient node, and repeat until all nodes of this configuration or tree have been visited and their unique secrets sent in a specific ordering or arrangement to at least one recipient node. In at least one embodiment, this recipient node may provide this data string (or list, arrangement, or grouping) to attestation module302, which can compare this received data string to an expected data string. In at least one embodiment, an expected data string may include unique secrets for each node to a tree concatenated together (or otherwise operated on mathematically) in a specific order, which generates an expected data string, list, or other arrangement of unique secrets. In at least one embodiment, if this received data string exactly matches this expected data string generated from unique secrets of this configuration or structure, then attestation module302can attest to a correctness of this configuration, and this task can then be performed using this configuration. In at least one embodiment, if this received data string is not equal to this expected data string then an appropriate action can be taken, such as to deny performance of this task, attempt to reconfigure this network then retest, or provide notification of this incorrect configuration. In at least one embodiment, a configuration can be provided for each task, or may be stored for use by specific tasks or for specific users, applications, systems, services, processes, or entities. In at least one embodiment, an entire reduction operation can be based, at least in part, upon trusted hardware, while a tree configuration may be completely (or at least substantially) software controlled. 
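The final comparison performed by attestation module 302 reduces to matching the reported string against the expected one built from the initiator's copies of the secrets in tree order. A minimal sketch, with illustrative names and response values; a constant-time comparison is used here since the strings are built from secrets:

```python
import hmac

def attest(received: str, expected: str) -> str:
    """Compare the recipient's reported string to the expected string."""
    if hmac.compare_digest(received, expected):
        return "verified"   # task may proceed using this configuration
    return "denied"         # e.g., deny the task, reconfigure and retest, or notify

# Expected string: each node's secret concatenated in the configured order.
expected = ":".join(["102", "104", "112", "106", "108", "116", "122"])

attest("102:104:112:106:108:116:122", expected)   # "verified"
attest("102:104:112:106:108:120:122", expected)   # "denied": wrong switch
```

An exact match attests to the configuration; any difference, whether a missing node, an extra node, or a changed ordering, produces a mismatch and triggers the alternative handling described above.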
In at least one embodiment, this trusted hardware can ensure that it is locked during a lifetime of a given reduction tree. In at least one embodiment, a process400for verifying a network configuration can be performed as illustrated inFIG.4. In at least one embodiment, a task may be performed in a network of computing resources using only a specific tree structure, or selection and configuration of these resources. In at least one embodiment, one or more sending nodes, intermediate switches (or other nodes), and one or more receiving nodes can be determined402to correctly implement this tree structure, or other network configuration. In at least one embodiment, it can be ensured404that each of these nodes has a unique and secure secret, such as a unique alphanumeric identifier. In at least one embodiment, before performing this task or operation, a test can be initiated406to validate this tree structure. In at least one embodiment, each sending node can be caused408to send at least its node-specific secret to a corresponding switch indicated by this tree structure. In at least one embodiment, each switch receiving two or more secrets can then be caused410to concatenate those secrets, or perform a mathematical operation with respect to those secrets. In at least one embodiment, a determination can be made412as to whether there are other switches in this network configuration that are to receive data for this task or operation, and if so switches having received and concatenated secrets (as may include their own unique secrets) can be caused414to send these concatenated secrets to one or more switches in a next level of this tree structure. In at least one embodiment, this process can continue as long as there are additional switches, or levels of switches, to receive data in this tree structure. 
In at least one embodiment, each level of switches will receive a secret string including at least one secret value, or multiple concatenated secret values, which can then be passed on according to this tree structure, as well as being concatenated with any relevant or received secrets by that switch. In at least one embodiment, once data has passed through all relevant switches (overall or along at least one branch of this tree structure), a string, set, or arrangement of concatenated secrets can be provided416to a receiving node of this network structure. In at least one embodiment, if there are multiple branches or recipients then each recipient can receive a respective set of concatenated secrets, which may be similar or different depending upon paths or branches leading to those end nodes. In at least one embodiment, an initiator of this test can analyze418this final concatenated string, or each concatenated string if there are multiple end nodes, to verify an expected inclusion or ordering of secrets. In at least one embodiment, this can involve comparing a received data string of concatenated secrets against an expected data string for this particular network configuration or structure, and determining whether there are any differences that would indicate an improper configuration. In at least one embodiment, such a string comparison does not require any knowledge of these nodes or their possible configurations, but can be performed by receiving or obtaining only this expected string for comparison with a received string resulting from a network test. In at least one embodiment, if these strings match then this configuration can be verified and this operation performed. In at least one embodiment, if these strings differ or do not match, then a different action can be taken as discussed elsewhere herein. In at least one embodiment, a process500for verifying proper configuration of a network of computing nodes can be performed as illustrated inFIG.5. 
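Process 400 as a whole (send secrets, concatenate level by level, deliver, compare) can be condensed into one loop. The tree encoding and names here are assumptions made for illustration, not the patented format:

```python
# Hedged sketch of process 400: run the test level by level through the
# configured switches, then check the recipient's concatenated string.
def run_test(levels, sender_secrets):
    """levels: one list per switch level; each entry pairs a switch's secret
    with the indices of the previous level's outputs that it receives."""
    outputs = list(sender_secrets)
    for level in levels:
        outputs = [":".join([outputs[i] for i in idxs] + [sw])
                   for sw, idxs in level]
    return outputs  # one concatenated string per receiving branch

levels = [[("112", (0, 1)), ("116", (2, 3))],  # first switch level 202
          [("122", (0, 1))]]                   # second switch level 204
received = run_test(levels, ["102", "104", "106", "108"])
# received == ["102:104:112:106:108:116:122"]; comparing this against the
# expected string verifies the configuration before the task is performed.
```

With multiple receiving branches, `run_test` returns one string per branch, matching the case where each recipient receives a respective set of concatenated secrets.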
In at least one embodiment, a network of computing nodes can be determined502that is to be used to perform a task. In at least one embodiment, this can correspond to a subset of computing nodes available in a multi-tenant setting, where data or communications are to flow through these nodes in a specific order or along a specific structure. In at least one embodiment, a network test can be performed504to obtain one or more data strings generated by this network, such as to generate an ordered and concatenated set of unique secrets provided by these nodes. In at least one embodiment, it can be verified506that this network of computing nodes is properly configured for this task based, at least in part, on one or more expected data strings being generated by this network of computing nodes. In at least one embodiment, this can include verifying that one or more received data strings produced by this test correspond to, or match, one or more expected strings for this network of computing nodes. In at least one embodiment, aspects of network operation and configuration, as well as performance of applications or operations that utilize such networks, can be performed on a single device or system, as discussed with respect to a system ofFIG.1,2A or3, or may be distributed in various locations on various different devices. In at least one embodiment, a client device602can perform one or more tasks for a session according to a content application604executing on client device602and data stored locally on that client device as illustrated inFIG.6. In at least one embodiment, a content application624executing on content server620may initiate a session associated with at least client device602, as may utilize a session manager and user data stored in a user database634, and can cause content632to be processed or generated for a content application624through use of a content manager626, which may utilize one or more neural networks trained using a training module628or process. 
In at least one embodiment, these networks can perform complex tasks such as, without limitation, neural network-based inferencing, graphics generation, computer vision, or 3D medical image segmentation. In at least one embodiment, a process module628may attempt to perform a task, under instruction of content application624, which requires a specific set and configuration of network resources. In at least one embodiment, before this process628has this task performed using a specific set of nodes632in a specific structure or configuration, an attestation module630can perform a test herein to cause a data string (or other result) to be generated that corresponds to a set of secrets for these nodes, which can then be compared to an expected data string to verify correct configuration of these nodes632according to an expected network configuration or structure. In at least one embodiment, results of any of these components (e.g., attestation of configuration or a result of a process performed using this configuration once verified) can be transmitted to client device602using an appropriate transmission manager622to send by download, streaming, or another such transmission channel. In at least one embodiment, client device602receiving this content can provide this content to a corresponding application604, which may also or alternatively include a content manager610for providing at least some of this content for presentation via client device602, such as image or video content through a display606and audio content through at least one audio playback device608, such as one or more speakers or speaker arrays. In at least one embodiment, a process module612or attestation module614on client device602may also be used to assist in, or perform, such tasks. 
In at least one embodiment, where any or all of this functionality executes can depend, at least in part, upon where training tasks are to occur, where inferencing tasks are to occur, and where any sensitive data may live or be limited in distribution. In at least one embodiment, data or content that is transmitted across network640may be compressed before transmission, with a receiving entity or system then attempting to decompress this data or content. In at least one embodiment, at least some of this content may already be stored on, rendered on, or accessible to client device602such that transmission over network640is not required for at least that portion of content, such as where that content may have been previously downloaded or stored locally on a hard drive or optical disk. In at least one embodiment, a transmission mechanism such as data streaming can be used to transfer this content from server620, or content database634, to client device602. In at least one embodiment, at least a portion of this content can be obtained or streamed from another source, such as a third party service660that may also include an application662for performing any such tasks. In at least one embodiment, portions of this functionality can be performed using multiple computing devices, or multiple processors within one or more computing devices, such as may include a combination of CPUs and GPUs. In at least one embodiment, locations where at least some of this functionality is performed may be configurable, or may depend upon factors such as a type of client device602or availability of a network connection with appropriate bandwidth, among other such factors. In at least one embodiment, one or more neural networks can be used for performing or assisting in this functionality, where those neural networks (or at least network parameters for those networks) can be provided by content server620or third party system660. 
In at least one embodiment, generated content or data can also be provided, or made available, to other client devices650that may perform similar calculations or training tasks, such as for download or streaming from a data source storing a copy of that content. Inference and Training Logic FIG.7Aillustrates inference and/or training logic715used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic715are provided below in conjunction withFIGS.7A and/or7B. In at least one embodiment, inference and/or training logic715may include, without limitation, code and/or data storage701to store forward and/or output weight and/or input/output data, and/or other parameters to configure neurons or layers of a neural network trained and/or used for inferencing in aspects of one or more embodiments. In at least one embodiment, training logic715may include, or be coupled to code and/or data storage701to store graph code or other software to control timing and/or order in which weight and/or other parameter information is to be loaded to configure logic, including integer and/or floating point units (collectively, arithmetic logic units (ALUs)). In at least one embodiment, code, such as graph code, loads weight or other parameter information into processor ALUs based on an architecture of a neural network to which such code corresponds. In at least one embodiment, code and/or data storage701stores weight parameters and/or input/output data of each layer of a neural network trained or used in conjunction with one or more embodiments during forward propagation of input/output data and/or weight parameters during training and/or inferencing using aspects of one or more embodiments. In at least one embodiment, any portion of code and/or data storage701may be included with other on-chip or off-chip data storage, including a processor's L1, L2, or L3 cache or system memory. 
In at least one embodiment, any portion of code and/or data storage701may be internal or external to one or more processors or other hardware logic devices or circuits. In at least one embodiment, code and/or data storage701may be cache memory, dynamic randomly addressable memory (“DRAM”), static randomly addressable memory (“SRAM”), non-volatile memory (e.g., flash memory), or other storage. In at least one embodiment, a choice of whether code and/or data storage701is internal or external to a processor, for example, or comprising DRAM, SRAM, flash or some other storage type may depend on available storage on-chip versus off-chip, latency requirements of training and/or inferencing functions being performed, batch size of data used in inferencing and/or training of a neural network, or some combination of these factors. In at least one embodiment, inference and/or training logic715may include, without limitation, a code and/or data storage705to store backward and/or output weight and/or input/output data corresponding to neurons or layers of a neural network trained and/or used for inferencing in aspects of one or more embodiments. In at least one embodiment, code and/or data storage705stores weight parameters and/or input/output data of each layer of a neural network trained or used in conjunction with one or more embodiments during backward propagation of input/output data and/or weight parameters during training and/or inferencing using aspects of one or more embodiments. In at least one embodiment, training logic715may include, or be coupled to code and/or data storage705to store graph code or other software to control timing and/or order in which weight and/or other parameter information is to be loaded to configure logic, including integer and/or floating point units (collectively, arithmetic logic units (ALUs)). 
In at least one embodiment, code, such as graph code, causes a loading of weight or other parameter information into processor ALUs based on an architecture of a neural network to which such code corresponds. In at least one embodiment, any portion of code and/or data storage705may be included with other on-chip or off-chip data storage, including a processor's L1, L2, or L3 cache or system memory. In at least one embodiment, any portion of code and/or data storage705may be internal or external to one or more processors or other hardware logic devices or circuits. In at least one embodiment, code and/or data storage705may be cache memory, DRAM, SRAM, non-volatile memory (e.g., flash memory), or other storage. In at least one embodiment, a choice of whether code and/or data storage705is internal or external to a processor, for example, or comprising DRAM, SRAM, flash memory or some other storage type may depend on available storage on-chip versus off-chip, latency requirements of training and/or inferencing functions being performed, batch size of data used in inferencing and/or training of a neural network, or some combination of these factors. In at least one embodiment, code and/or data storage701and code and/or data storage705may be separate storage structures. In at least one embodiment, code and/or data storage701and code and/or data storage705may be a combined storage structure. In at least one embodiment, code and/or data storage701and code and/or data storage705may be partially combined and partially separate. In at least one embodiment, any portion of code and/or data storage701and code and/or data storage705may be included with other on-chip or off-chip data storage, including a processor's L1, L2, or L3 cache or system memory. 
In at least one embodiment, inference and/or training logic715may include, without limitation, one or more arithmetic logic unit(s) (“ALU(s)”)710, including integer and/or floating point units, to perform logical and/or mathematical operations based, at least in part, on or indicated by training and/or inference code (e.g., graph code), a result of which may produce activations (e.g., output values from layers or neurons within a neural network) stored in an activation storage720that are functions of input/output and/or weight parameter data stored in code and/or data storage701and/or code and/or data storage705. In at least one embodiment, activations stored in activation storage720are generated according to linear algebraic and/or matrix-based mathematics performed by ALU(s)710in response to performing instructions or other code, wherein weight values stored in code and/or data storage705and/or data storage701are used as operands along with other values, such as bias values, gradient information, momentum values, or other parameters or hyperparameters, any or all of which may be stored in code and/or data storage705or code and/or data storage701or another storage on or off-chip. In at least one embodiment, ALU(s)710are included within one or more processors or other hardware logic devices or circuits, whereas in another embodiment, ALU(s)710may be external to a processor or other hardware logic device or circuit that uses them (e.g., a co-processor). In at least one embodiment, ALUs710may be included within a processor's execution units or otherwise within a bank of ALUs accessible by a processor's execution units either within same processor or distributed between different processors of different types (e.g., central processing units, graphics processing units, fixed function units, etc.). 
In at least one embodiment, code and/or data storage701, code and/or data storage705, and activation storage720may share a processor or other hardware logic device or circuit, whereas in another embodiment, they may be in different processors or other hardware logic devices or circuits, or some combination of same and different processors or other hardware logic devices or circuits. In at least one embodiment, any portion of activation storage720may be included with other on-chip or off-chip data storage, including a processor's L1, L2, or L3 cache or system memory. Furthermore, inferencing and/or training code may be stored with other code accessible to a processor or other hardware logic or circuit and fetched and/or processed using a processor's fetch, decode, scheduling, execution, retirement and/or other logical circuits. In at least one embodiment, activation storage720may be cache memory, DRAM, SRAM, non-volatile memory (e.g., flash memory), or other storage. In at least one embodiment, activation storage720may be completely or partially within or external to one or more processors or other logical circuits. In at least one embodiment, a choice of whether activation storage720is internal or external to a processor, for example, or comprising DRAM, SRAM, flash memory or some other storage type may depend on available storage on-chip versus off-chip, latency requirements of training and/or inferencing functions being performed, batch size of data used in inferencing and/or training of a neural network, or some combination of these factors. In at least one embodiment, inference and/or training logic715illustrated inFIG.7Amay be used in conjunction with an application-specific integrated circuit (“ASIC”), such as a TensorFlow® Processing Unit from Google, an inference processing unit (IPU) from Graphcore™, or a Nervana® (e.g., “Lake Crest”) processor from Intel Corp. 
In at least one embodiment, inference and/or training logic 715 illustrated in FIG. 7A may be used in conjunction with central processing unit (“CPU”) hardware, graphics processing unit (“GPU”) hardware or other hardware, such as field programmable gate arrays (“FPGAs”). 

FIG. 7B illustrates inference and/or training logic 715, according to at least one embodiment. In at least one embodiment, inference and/or training logic 715 may include, without limitation, hardware logic in which computational resources are dedicated or otherwise exclusively used in conjunction with weight values or other information corresponding to one or more layers of neurons within a neural network. In at least one embodiment, inference and/or training logic 715 illustrated in FIG. 7B may be used in conjunction with an application-specific integrated circuit (ASIC), such as a TensorFlow® Processing Unit from Google, an inference processing unit (IPU) from Graphcore™, or a Nervana® (e.g., “Lake Crest”) processor from Intel Corp. In at least one embodiment, inference and/or training logic 715 illustrated in FIG. 7B may be used in conjunction with central processing unit (CPU) hardware, graphics processing unit (GPU) hardware or other hardware, such as field programmable gate arrays (FPGAs). In at least one embodiment, inference and/or training logic 715 includes, without limitation, code and/or data storage 701 and code and/or data storage 705, which may be used to store code (e.g., graph code), weight values and/or other information, including bias values, gradient information, momentum values, and/or other parameter or hyperparameter information. In at least one embodiment illustrated in FIG. 7B, each of code and/or data storage 701 and code and/or data storage 705 is associated with a dedicated computational resource, such as computational hardware 702 and computational hardware 706, respectively.
In at least one embodiment, each of computational hardware 702 and computational hardware 706 comprises one or more ALUs that perform mathematical functions, such as linear algebraic functions, only on information stored in code and/or data storage 701 and code and/or data storage 705, respectively, a result of which is stored in activation storage 720. In at least one embodiment, each of code and/or data storage 701 and 705 and corresponding computational hardware 702 and 706, respectively, correspond to different layers of a neural network, such that the resulting activation from one storage/computational pair 701/702 of code and/or data storage 701 and computational hardware 702 is provided as an input to a next storage/computational pair 705/706 of code and/or data storage 705 and computational hardware 706, in order to mirror a conceptual organization of a neural network. In at least one embodiment, each of storage/computational pairs 701/702 and 705/706 may correspond to more than one neural network layer. In at least one embodiment, additional storage/computation pairs (not shown) subsequent to or in parallel with storage/computation pairs 701/702 and 705/706 may be included in inference and/or training logic 715.

Neural Network Training and Deployment

FIG. 8 illustrates training and deployment of a deep neural network, according to at least one embodiment. In at least one embodiment, untrained neural network 806 is trained using a training dataset 802. In at least one embodiment, training framework 804 is a PyTorch framework, whereas in other embodiments, training framework 804 is a TensorFlow, Boost, Caffe, Microsoft Cognitive Toolkit/CNTK, MXNet, Chainer, Keras, Deeplearning4j, or other training framework. In at least one embodiment, training framework 804 trains an untrained neural network 806 and enables it to be trained using processing resources described herein to generate a trained neural network 808.
In at least one embodiment, weights may be chosen randomly or by pre-training using a deep belief network. In at least one embodiment, training may be performed in either a supervised, partially supervised, or unsupervised manner. In at least one embodiment, untrained neural network 806 is trained using supervised learning, wherein training dataset 802 includes an input paired with a desired output for that input, or where training dataset 802 includes input having a known output and an output of neural network 806 is manually graded. In at least one embodiment, untrained neural network 806 is trained in a supervised manner and processes inputs from training dataset 802 and compares resulting outputs against a set of expected or desired outputs. In at least one embodiment, errors are then propagated back through untrained neural network 806. In at least one embodiment, training framework 804 adjusts weights that control untrained neural network 806. In at least one embodiment, training framework 804 includes tools to monitor how well untrained neural network 806 is converging towards a model, such as trained neural network 808, suitable for generating correct answers, such as in result 814, based on input data such as a new dataset 812. In at least one embodiment, training framework 804 trains untrained neural network 806 repeatedly while adjusting weights to refine an output of untrained neural network 806 using a loss function and an adjustment algorithm, such as stochastic gradient descent. In at least one embodiment, training framework 804 trains untrained neural network 806 until untrained neural network 806 achieves a desired accuracy. In at least one embodiment, trained neural network 808 can then be deployed to implement any number of machine learning operations. In at least one embodiment, untrained neural network 806 is trained using unsupervised learning, wherein untrained neural network 806 attempts to train itself using unlabeled data.
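The supervised loop described above — compare outputs against desired outputs, propagate errors back, and adjust weights with a loss function and gradient descent until a desired accuracy is reached — can be sketched on a toy one-parameter linear model. This is a minimal sketch of the general technique, not an internal of any particular training framework; the learning rate and step count are assumptions.

```python
import numpy as np

np.random.seed(0)

# Toy supervised task: labeled pairs for the target function y = 2x + 1.
X = np.linspace(-1.0, 1.0, 32).reshape(-1, 1)
y = 2.0 * X + 1.0

w, b = 0.0, 0.0          # randomly/zero-initialized weights
learning_rate = 0.1
for step in range(500):
    pred = X * w + b                      # forward pass
    error = pred - y                      # compare against desired outputs
    loss = float(np.mean(error ** 2))     # loss function (mean squared error)
    # Propagate errors back and adjust weights (gradient descent update).
    w -= learning_rate * float(np.mean(2.0 * error * X))
    b -= learning_rate * float(np.mean(2.0 * error))
```

After training, `w` and `b` converge toward the generating parameters (2 and 1), mirroring the framework's adjust-until-accurate behavior on a trivially small scale.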
In at least one embodiment, for unsupervised learning, training dataset 802 will include input data without any associated output data or “ground truth” data. In at least one embodiment, untrained neural network 806 can learn groupings within training dataset 802 and can determine how individual inputs are related to training dataset 802. In at least one embodiment, unsupervised training can be used to generate a self-organizing map in trained neural network 808 capable of performing operations useful in reducing dimensionality of new dataset 812. In at least one embodiment, unsupervised training can also be used to perform anomaly detection, which allows identification of data points in new dataset 812 that deviate from normal patterns of new dataset 812. In at least one embodiment, semi-supervised learning may be used, which is a technique in which training dataset 802 includes a mix of labeled and unlabeled data. In at least one embodiment, training framework 804 may be used to perform incremental learning, such as through transfer learning techniques. In at least one embodiment, incremental learning enables trained neural network 808 to adapt to new dataset 812 without forgetting knowledge instilled within trained neural network 808 during initial training. In at least one embodiment, training framework 804 is a framework processed in connection with a software development toolkit such as an OpenVINO (Open Visual Inference and Neural network Optimization) toolkit. In at least one embodiment, an OpenVINO toolkit is a toolkit such as those developed by Intel Corporation of Santa Clara, CA. In at least one embodiment, OpenVINO is a toolkit for facilitating development of applications, specifically neural network applications, for various tasks and operations, such as human vision emulation, speech recognition, natural language processing, recommendation systems, and/or variations thereof.
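As one concrete illustration of the unsupervised anomaly detection mentioned above, a model can learn "normal" statistics from unlabeled training data and then flag points in a new dataset that deviate from those patterns. The z-score rule and the threshold below are assumptions chosen for this sketch, not a description of any particular toolkit.

```python
import numpy as np

np.random.seed(1)

# Unlabeled training data: inputs only, no associated outputs or ground truth.
training_dataset = np.random.normal(loc=0.0, scale=1.0, size=(200, 2))

# Learn what "normal" looks like from the data itself (mean and spread).
center = training_dataset.mean(axis=0)
spread = training_dataset.std(axis=0)

def is_anomaly(point, threshold=4.0):
    """Flag a point whose deviation from the learned normal pattern
    exceeds the (assumed) z-score threshold in any dimension."""
    z = np.abs((point - center) / spread)
    return bool(np.any(z > threshold))

new_dataset = np.array([[0.1, -0.2],   # typical point
                        [9.0,  9.0]])  # deviates strongly from normal
flags = [is_anomaly(p) for p in new_dataset]
```

A neural approach (e.g., an autoencoder scoring reconstruction error) follows the same pattern: fit to unlabeled data, then score deviation.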
In at least one embodiment, OpenVINO supports neural networks such as convolutional neural networks (CNNs), recurrent and/or attention-based neural networks, and/or various other neural network models. In at least one embodiment, OpenVINO supports various software libraries such as OpenCV, OpenCL, and/or variations thereof. In at least one embodiment, OpenVINO supports neural network models for various tasks and operations, such as classification, segmentation, object detection, face recognition, speech recognition, pose estimation (e.g., humans and/or objects), monocular depth estimation, image inpainting, style transfer, action recognition, colorization, and/or variations thereof. In at least one embodiment, OpenVINO comprises one or more software tools and/or modules for model optimization, also referred to as a model optimizer. In at least one embodiment, a model optimizer is a command line tool that facilitates transitions between training and deployment of neural network models. In at least one embodiment, a model optimizer optimizes neural network models for execution on various devices and/or processing units, such as a GPU, CPU, PPU, GPGPU, and/or variations thereof. In at least one embodiment, a model optimizer generates an internal representation of a model, and optimizes said model to generate an intermediate representation. In at least one embodiment, a model optimizer reduces a number of layers of a model. In at least one embodiment, a model optimizer removes layers of a model that are utilized for training. 
In at least one embodiment, a model optimizer performs various neural network operations, such as modifying inputs to a model (e.g., resizing inputs to a model), modifying a size of inputs of a model (e.g., modifying a batch size of a model), modifying a model structure (e.g., modifying layers of a model), normalization, standardization, quantization (e.g., converting weights of a model from a first representation, such as floating point, to a second representation, such as integer), and/or variations thereof. In at least one embodiment, OpenVINO comprises one or more software libraries for inferencing, also referred to as an inference engine. In at least one embodiment, an inference engine is a C++ library, or any suitable programming language library. In at least one embodiment, an inference engine is utilized to infer input data. In at least one embodiment, an inference engine implements various classes to infer input data and generate one or more results. In at least one embodiment, an inference engine implements one or more API functions to process an intermediate representation, set input and/or output formats, and/or execute a model on one or more devices. In at least one embodiment, OpenVINO provides various abilities for heterogeneous execution of one or more neural network models. In at least one embodiment, heterogeneous execution, or heterogeneous computing, refers to one or more computing processes and/or systems that utilize one or more types of processors and/or cores. In at least one embodiment, OpenVINO provides various software functions to execute a program on one or more devices. In at least one embodiment, OpenVINO provides various software functions to execute a program and/or portions of a program on different devices. In at least one embodiment, OpenVINO provides various software functions to, for example, run a first portion of code on a CPU and a second portion of code on a GPU and/or FPGA. 
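Among the optimizer operations described above, quantization — converting weights from a floating-point representation to an integer representation — is easy to illustrate. The symmetric per-tensor int8 scheme below is a common textbook approach used here as an assumption for the sketch; it is not OpenVINO's actual implementation.

```python
import numpy as np

def quantize_weights(weights, num_bits=8):
    """Convert float weights to integers with a symmetric per-tensor scale."""
    qmax = 2 ** (num_bits - 1) - 1              # 127 for int8
    scale = float(np.max(np.abs(weights))) / qmax
    quantized = np.round(weights / scale).astype(np.int8)
    return quantized, scale

def dequantize_weights(quantized, scale):
    """Recover approximate float weights for comparison."""
    return quantized.astype(np.float32) * scale

w = np.array([0.5, -1.27, 0.02, 1.0], dtype=np.float32)
q, scale = quantize_weights(w)
w_restored = dequantize_weights(q, scale)
```

Storing `q` (int8) instead of `w` (float32) cuts weight storage by roughly 4x at the cost of a small, bounded rounding error — the trade-off a model optimizer manages when preparing models for deployment.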
In at least one embodiment, OpenVINO provides various software functions to execute one or more layers of a neural network on one or more devices (e.g., a first set of layers on a first device, such as a GPU, and a second set of layers on a second device, such as a CPU). In at least one embodiment, OpenVINO includes various functionality similar to functionalities associated with a CUDA programming model, such as various neural network model operations associated with frameworks such as TensorFlow, PyTorch, and/or variations thereof. In at least one embodiment, one or more CUDA programming model operations are performed using OpenVINO. In at least one embodiment, various systems, methods, and/or techniques described herein are implemented using OpenVINO.

Data Center

FIG. 9 illustrates an example data center 900, in which at least one embodiment may be used. In at least one embodiment, data center 900 includes a data center infrastructure layer 910, a framework layer 920, a software layer 930 and an application layer 940. In at least one embodiment, as shown in FIG. 9, data center infrastructure layer 910 may include a resource orchestrator 912, grouped computing resources 914, and node computing resources (“node C.R.s”) 916(1)-916(N), where “N” represents a positive integer (which may be a different integer “N” than used in other figures). In at least one embodiment, node C.R.s 916(1)-916(N) may include, but are not limited to, any number of central processing units (“CPUs”) or other processors (including accelerators, field programmable gate arrays (FPGAs), graphics processors, etc.), memory storage devices 918(1)-918(N) (e.g., dynamic random access memory, solid state storage or disk drives), network input/output (“NW I/O”) devices, network switches, virtual machines (“VMs”), power modules, and cooling modules, etc. In at least one embodiment, one or more node C.R.s from among node C.R.s 916(1)-916(N) may be a server having one or more of the above-mentioned computing resources.
In at least one embodiment, grouped computing resources 914 may include separate groupings of node C.R.s housed within one or more racks (not shown), or many racks housed in data centers at various geographical locations (also not shown). In at least one embodiment, separate groupings of node C.R.s within grouped computing resources 914 may include grouped compute, network, memory or storage resources that may be configured or allocated to support one or more workloads. In at least one embodiment, several node C.R.s including CPUs or processors may be grouped within one or more racks to provide compute resources to support one or more workloads. In at least one embodiment, one or more racks may also include any number of power modules, cooling modules, and network switches, in any combination. In at least one embodiment, resource orchestrator 912 may configure or otherwise control one or more node C.R.s 916(1)-916(N) and/or grouped computing resources 914. In at least one embodiment, resource orchestrator 912 may include a software design infrastructure (“SDI”) management entity for data center 900. In at least one embodiment, resource orchestrator 912 may include hardware, software or some combination thereof. In at least one embodiment, as shown in FIG. 9, framework layer 920 includes a job scheduler 922, a configuration manager 924, a resource manager 926 and a distributed file system 928. In at least one embodiment, framework layer 920 may include a framework to support software 932 of software layer 930 and/or one or more application(s) 942 of application layer 940. In at least one embodiment, software 932 or application(s) 942 may respectively include web-based service software or applications, such as those provided by Amazon Web Services, Google Cloud and Microsoft Azure.
In at least one embodiment, framework layer920may be, but is not limited to, a type of free and open-source software web application framework such as Apache Spark™ (hereinafter “Spark”) that may utilize distributed file system928for large-scale data processing (e.g., “big data”). In at least one embodiment, job scheduler922may include a Spark driver to facilitate scheduling of workloads supported by various layers of data center900. In at least one embodiment, configuration manager924may be capable of configuring different layers such as software layer930and framework layer920including Spark and distributed file system928for supporting large-scale data processing. In at least one embodiment, resource manager926may be capable of managing clustered or grouped computing resources mapped to or allocated for support of distributed file system928and job scheduler922. In at least one embodiment, clustered or grouped computing resources may include grouped computing resources914at data center infrastructure layer910. In at least one embodiment, resource manager926may coordinate with resource orchestrator912to manage these mapped or allocated computing resources. In at least one embodiment, software932included in software layer930may include software used by at least portions of node C.R.s916(1)-916(N), grouped computing resources914, and/or distributed file system928of framework layer920. In at least one embodiment, one or more types of software may include, but are not limited to, Internet web page search software, e-mail virus scan software, database software, and streaming video content software. In at least one embodiment, application(s)942included in application layer940may include one or more types of applications used by at least portions of node C.R.s916(1)-916(N), grouped computing resources914, and/or distributed file system928of framework layer920. 
In at least one embodiment, one or more types of applications may include, but are not limited to, any number of a genomics application, a cognitive compute application, and a machine learning application, including training or inferencing software, machine learning framework software (e.g., PyTorch, TensorFlow, Caffe, etc.) or other machine learning applications used in conjunction with one or more embodiments. In at least one embodiment, any of configuration manager 924, resource manager 926, and resource orchestrator 912 may implement any number and type of self-modifying actions based on any amount and type of data acquired in any technically feasible fashion. In at least one embodiment, self-modifying actions may relieve a data center operator of data center 900 from making possibly bad configuration decisions and possibly avoid underutilized and/or poor performing portions of a data center. In at least one embodiment, data center 900 may include tools, services, software or other resources to train one or more machine learning models or predict or infer information using one or more machine learning models according to one or more embodiments described herein. For example, in at least one embodiment, a machine learning model may be trained by calculating weight parameters according to a neural network architecture using software and computing resources described above with respect to data center 900. In at least one embodiment, trained machine learning models corresponding to one or more neural networks may be used to infer or predict information using resources described above with respect to data center 900 by using weight parameters calculated through one or more training techniques described herein. In at least one embodiment, data center 900 may use CPUs, application-specific integrated circuits (ASICs), GPUs, FPGAs, or other hardware to perform training and/or inferencing using above-described resources.
Moreover, one or more software and/or hardware resources described above may be configured as a service to allow users to train or perform inferencing of information, such as image recognition, speech recognition, or other artificial intelligence services. Inference and/or training logic 715 is used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 715 are provided herein in conjunction with FIGS. 7A and/or 7B. In at least one embodiment, inference and/or training logic 715 may be used in the system of FIG. 9 for inferencing or predicting operations based, at least in part, on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein. In at least one embodiment, such components can be used to perform image segmentation as discussed above, such as with respect to a system of FIG. 1, 2A, or 3, or processes of FIG. 4 or 5. In at least one embodiment, this can include generating a first image of an object based, at least in part, upon adding noise to, and removing this noise from, a second image of this object.

Autonomous Vehicle

FIG. 10A illustrates an example of an autonomous vehicle 1000, according to at least one embodiment. In at least one embodiment, autonomous vehicle 1000 (alternatively referred to herein as “vehicle 1000”) may be, without limitation, a passenger vehicle, such as a car, a truck, a bus, and/or another type of vehicle that accommodates one or more passengers. In at least one embodiment, vehicle 1000 may be a semi-tractor-trailer truck used for hauling cargo. In at least one embodiment, vehicle 1000 may be an airplane, robotic vehicle, or other kind of vehicle.
Autonomous vehicles may be described in terms of automation levels, defined by National Highway Traffic Safety Administration (“NHTSA”), a division of US Department of Transportation, and Society of Automotive Engineers (“SAE”) “Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles” (e.g., Standard No. J3016-201806, published on Jun. 15, 2018, Standard No. J3016-201609, published on Sep. 30, 2016, and previous and future versions of this standard). In at least one embodiment, vehicle1000may be capable of functionality in accordance with one or more of Level 1 through Level 5 of autonomous driving levels. For example, in at least one embodiment, vehicle1000may be capable of conditional automation (Level 3), high automation (Level 4), and/or full automation (Level 5), depending on embodiment. In at least one embodiment, vehicle1000may include, without limitation, components such as a chassis, a vehicle body, wheels (e.g., 2, 4, 6, 8, 18, etc.), tires, axles, and other components of a vehicle. In at least one embodiment, vehicle1000may include, without limitation, a propulsion system1050, such as an internal combustion engine, hybrid electric power plant, an all-electric engine, and/or another propulsion system type. In at least one embodiment, propulsion system1050may be connected to a drive train of vehicle1000, which may include, without limitation, a transmission, to enable propulsion of vehicle1000. In at least one embodiment, propulsion system1050may be controlled in response to receiving signals from a throttle/accelerator(s)1052. In at least one embodiment, a steering system1054, which may include, without limitation, a steering wheel, is used to steer vehicle1000(e.g., along a desired path or route) when propulsion system1050is operating (e.g., when vehicle1000is in motion). In at least one embodiment, steering system1054may receive signals from steering actuator(s)1056. 
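The SAE automation levels referenced above can be captured in a small lookup table; the sketch below uses names following SAE J3016 terminology, and the helper function is a hypothetical illustration of checking whether a configuration claims conditional automation or higher.

```python
# SAE J3016 driving automation levels (names per the standard's taxonomy).
SAE_LEVELS = {
    0: "No Driving Automation",
    1: "Driver Assistance",
    2: "Partial Driving Automation",
    3: "Conditional Driving Automation",
    4: "High Driving Automation",
    5: "Full Driving Automation",
}

def supports_conditional_or_higher(level: int) -> bool:
    """Hypothetical check: does this configuration claim Level 3-5 capability,
    matching the conditional/high/full automation range discussed above?"""
    return level >= 3

capable = [lvl for lvl in SAE_LEVELS if supports_conditional_or_higher(lvl)]
```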
In at least one embodiment, a steering wheel may be optional for full automation (Level 5) functionality. In at least one embodiment, a brake sensor system1046may be used to operate vehicle brakes in response to receiving signals from brake actuator(s)1048and/or brake sensors. In at least one embodiment, controller(s)1036, which may include, without limitation, one or more system on chips (“SoCs”) (not shown inFIG.10A) and/or graphics processing unit(s) (“GPU(s)”), provide signals (e.g., representative of commands) to one or more components and/or systems of vehicle1000. For instance, in at least one embodiment, controller(s)1036may send signals to operate vehicle brakes via brake actuator(s)1048, to operate steering system1054via steering actuator(s)1056, to operate propulsion system1050via throttle/accelerator(s)1052. In at least one embodiment, controller(s)1036may include one or more onboard (e.g., integrated) computing devices that process sensor signals, and output operation commands (e.g., signals representing commands) to enable autonomous driving and/or to assist a human driver in driving vehicle1000. In at least one embodiment, controller(s)1036may include a first controller for autonomous driving functions, a second controller for functional safety functions, a third controller for artificial intelligence functionality (e.g., computer vision), a fourth controller for infotainment functionality, a fifth controller for redundancy in emergency conditions, and/or other controllers. In at least one embodiment, a single controller may handle two or more of above functionalities, two or more controllers may handle a single functionality, and/or any combination thereof. In at least one embodiment, controller(s)1036provide signals for controlling one or more components and/or systems of vehicle1000in response to sensor data received from one or more sensors (e.g., sensor inputs). 
In at least one embodiment, sensor data may be received from, for example and without limitation, global navigation satellite systems (“GNSS”) sensor(s)1058(e.g., Global Positioning System sensor(s)), RADAR sensor(s)1060, ultrasonic sensor(s)1062, LIDAR sensor(s)1064, inertial measurement unit (“IMU”) sensor(s)1066(e.g., accelerometer(s), gyroscope(s), a magnetic compass or magnetic compasses, magnetometer(s), etc.), microphone(s)1096, stereo camera(s)1068, wide-view camera(s)1070(e.g., fisheye cameras), infrared camera(s)1072, surround camera(s)1074(e.g., 360 degree cameras), long-range cameras (not shown inFIG.10A), mid-range camera(s) (not shown inFIG.10A), speed sensor(s)1044(e.g., for measuring speed of vehicle1000), vibration sensor(s)1042, steering sensor(s)1040, brake sensor(s) (e.g., as part of brake sensor system1046), and/or other sensor types. In at least one embodiment, one or more of controller(s)1036may receive inputs (e.g., represented by input data) from an instrument cluster1032of vehicle1000and provide outputs (e.g., represented by output data, display data, etc.) via a human-machine interface (“HMI”) display1034, an audible annunciator, a loudspeaker, and/or via other components of vehicle1000. In at least one embodiment, outputs may include information such as vehicle velocity, speed, time, map data (e.g., a High Definition map (not shown inFIG.10A)), location data (e.g., vehicle's1000location, such as on a map), direction, location of other vehicles (e.g., an occupancy grid), information about objects and status of objects as perceived by controller(s)1036, etc. For example, in at least one embodiment, HMI display1034may display information about presence of one or more objects (e.g., a street sign, caution sign, traffic light changing, etc.), and/or information about driving maneuvers vehicle has made, is making, or will make (e.g., changing lanes now, taking exit34B in two miles, etc.). 
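A highly simplified sketch of the control flow described above — a controller consuming sensor inputs and emitting command signals for brake, throttle, and steering actuators — might look as follows. All field names, thresholds, and gains here are invented for illustration; they are not the controllers 1036 logic.

```python
def controller_step(sensor_data):
    """One hypothetical control cycle: map sensor inputs to actuator commands."""
    commands = {"throttle": 0.0, "brake": 0.0, "steering": 0.0}

    # Brake if an obstacle is close (e.g., from a RADAR/LIDAR range estimate).
    if sensor_data["obstacle_distance_m"] < 20.0:
        commands["brake"] = 1.0
    elif sensor_data["speed_mps"] < sensor_data["target_speed_mps"]:
        commands["throttle"] = 0.5   # accelerate toward the target speed

    # Proportional steering toward the desired heading, clamped to [-1, 1].
    heading_error = sensor_data["desired_heading"] - sensor_data["heading"]
    commands["steering"] = max(-1.0, min(1.0, 0.1 * heading_error))
    return commands

commands = controller_step({"obstacle_distance_m": 12.0,
                            "speed_mps": 15.0,
                            "target_speed_mps": 25.0,
                            "desired_heading": 0.0,
                            "heading": 5.0})
```

A real stack replaces these hand-written rules with perception, planning, and control modules, but the interface shape — sensor signals in, actuator command signals out — is the same.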
In at least one embodiment, vehicle 1000 further includes a network interface 1024 which may use wireless antenna(s) 1026 and/or modem(s) to communicate over one or more networks. For example, in at least one embodiment, network interface 1024 may be capable of communication over Long-Term Evolution (“LTE”), Wideband Code Division Multiple Access (“WCDMA”), Universal Mobile Telecommunications System (“UMTS”), Global System for Mobile communication (“GSM”), IMT-CDMA Multi-Carrier (“CDMA2000”) networks, etc. In at least one embodiment, wireless antenna(s) 1026 may also enable communication between objects in an environment (e.g., vehicles, mobile devices, etc.), using local area network(s), such as Bluetooth, Bluetooth Low Energy (“LE”), Z-Wave, ZigBee, etc., and/or low power wide-area network(s) (“LPWANs”), such as LoRaWAN, SigFox, etc. protocols. Inference and/or training logic 715 is used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 715 are provided herein in conjunction with FIGS. 7A and/or 7B. In at least one embodiment, inference and/or training logic 715 may be used in the system of FIG. 10A for inferencing or predicting operations based, at least in part, on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein. In at least one embodiment, such components can be used to perform image segmentation as discussed above, such as with respect to a system of FIG. 1, 2A, or 3, or processes of FIG. 4 or 5. In at least one embodiment, this can include verifying whether a network of computing nodes is properly configured based, at least in part, on one or more expected data strings generated by the network of computing nodes.

FIG. 10B illustrates an example of camera locations and fields of view for autonomous vehicle 1000 of FIG. 10A, according to at least one embodiment.
In at least one embodiment, cameras and respective fields of view are one example embodiment and are not intended to be limiting. For instance, in at least one embodiment, additional and/or alternative cameras may be included and/or cameras may be located at different locations on vehicle 1000. In at least one embodiment, camera types for cameras may include, but are not limited to, digital cameras that may be adapted for use with components and/or systems of vehicle 1000. In at least one embodiment, camera(s) may operate at automotive safety integrity level (“ASIL”) B and/or at another ASIL. In at least one embodiment, camera types may be capable of any image capture rate, such as 60 frames per second (fps), 120 fps, 240 fps, etc., depending on embodiment. In at least one embodiment, cameras may be capable of using rolling shutters, global shutters, another type of shutter, or a combination thereof. In at least one embodiment, a color filter array may include a red clear clear clear (“RCCC”) color filter array, a red clear clear blue (“RCCB”) color filter array, a red blue green clear (“RBGC”) color filter array, a Foveon X3 color filter array, a Bayer sensor (“RGGB”) color filter array, a monochrome sensor color filter array, and/or another type of color filter array. In at least one embodiment, clear pixel cameras, such as cameras with an RCCC, an RCCB, and/or an RBGC color filter array, may be used in an effort to increase light sensitivity. In at least one embodiment, one or more of camera(s) may be used to perform advanced driver assistance systems (“ADAS”) functions (e.g., as part of a redundant or fail-safe design). For example, in at least one embodiment, a Multi-Function Mono Camera may be installed to provide functions including lane departure warning, traffic sign assist and intelligent headlamp control. In at least one embodiment, one or more of camera(s) (e.g., all cameras) may record and provide image data (e.g., video) simultaneously.
In at least one embodiment, one or more camera may be mounted in a mounting assembly, such as a custom designed (three-dimensional (“3D”) printed) assembly, in order to cut out stray light and reflections from within vehicle1000(e.g., reflections from dashboard reflected in windshield mirrors) which may interfere with camera image data capture abilities. With reference to wing-mirror mounting assemblies, in at least one embodiment, wing-mirror assemblies may be custom 3D printed so that a camera mounting plate matches a shape of a wing-mirror. In at least one embodiment, camera(s) may be integrated into wing-mirrors. In at least one embodiment, for side-view cameras, camera(s) may also be integrated within four pillars at each corner of a cabin. In at least one embodiment, cameras with a field of view that include portions of an environment in front of vehicle1000(e.g., front-facing cameras) may be used for surround view, to help identify forward facing paths and obstacles, as well as aid in, with help of one or more of controller(s)1036and/or control SoCs, providing information critical to generating an occupancy grid and/or determining preferred vehicle paths. In at least one embodiment, front-facing cameras may be used to perform many similar ADAS functions as LIDAR, including, without limitation, emergency braking, pedestrian detection, and collision avoidance. In at least one embodiment, front-facing cameras may also be used for ADAS functions and systems including, without limitation, Lane Departure Warnings (“LDW”), Autonomous Cruise Control (“ACC”), and/or other functions such as traffic sign recognition. In at least one embodiment, a variety of cameras may be used in a front-facing configuration, including, for example, a monocular camera platform that includes a CMOS (“complementary metal oxide semiconductor”) color imager. 
In at least one embodiment, a wide-view camera1070may be used to perceive objects coming into view from a periphery (e.g., pedestrians, crossing traffic or bicycles). Although only one wide-view camera1070is illustrated inFIG.10B, in other embodiments, there may be any number (including zero) wide-view cameras on vehicle1000. In at least one embodiment, any number of long-range camera(s)1098(e.g., a long-view stereo camera pair) may be used for depth-based object detection, especially for objects for which a neural network has not yet been trained. In at least one embodiment, long-range camera(s)1098may also be used for object detection and classification, as well as basic object tracking. In at least one embodiment, any number of stereo camera(s)1068may also be included in a front-facing configuration. In at least one embodiment, one or more of stereo camera(s)1068may include an integrated control unit comprising a scalable processing unit, which may provide a programmable logic (“FPGA”) and a multi-core micro-processor with an integrated Controller Area Network (“CAN”) or Ethernet interface on a single chip. In at least one embodiment, such a unit may be used to generate a 3D map of an environment of vehicle1000, including a distance estimate for all points in an image. In at least one embodiment, one or more of stereo camera(s)1068may include, without limitation, compact stereo vision sensor(s) that may include, without limitation, two camera lenses (one each on left and right) and an image processing chip that may measure distance from vehicle1000to target object and use generated information (e.g., metadata) to activate autonomous emergency braking and lane departure warning functions. In at least one embodiment, other types of stereo camera(s)1068may be used in addition to, or alternatively from, those described herein. 
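The distance estimation performed by a stereo pair such as stereo camera(s)1068rests on the classic pinhole stereo relation Z = f·B/d; a minimal sketch (illustrative only; the parameter values in the usage note are hypothetical):

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Classic pinhole stereo relation: Z = f * B / d.

    focal_px: focal length in pixels; baseline_m: lens separation in meters;
    disparity_px: horizontal pixel shift of the target between left/right images.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth estimate")
    return focal_px * baseline_m / disparity_px
```

For example, a (hypothetical) 1000-pixel focal length and 0.5 m baseline with a 10-pixel disparity gives a 50 m depth estimate; in the integrated control unit described above, such per-pixel estimates would form the distance map used for emergency braking and lane departure warnings.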
In at least one embodiment, cameras with a field of view that include portions of environment to sides of vehicle1000(e.g., side-view cameras) may be used for surround view, providing information used to create and update an occupancy grid, as well as to generate side impact collision warnings. For example, in at least one embodiment, surround camera(s)1074(e.g., four surround cameras as illustrated inFIG.10B) could be positioned on vehicle1000. In at least one embodiment, surround camera(s)1074may include, without limitation, any number and combination of wide-view cameras, fisheye camera(s), 360 degree camera(s), and/or similar cameras. For instance, in at least one embodiment, four fisheye cameras may be positioned on a front, a rear, and sides of vehicle1000. In at least one embodiment, vehicle1000may use three surround camera(s)1074(e.g., left, right, and rear), and may leverage one or more other camera(s) (e.g., a forward-facing camera) as a fourth surround-view camera. In at least one embodiment, cameras with a field of view that include portions of an environment behind vehicle1000(e.g., rear-view cameras) may be used for parking assistance, surround view, rear collision warnings, and creating and updating an occupancy grid. In at least one embodiment, a wide variety of cameras may be used including, but not limited to, cameras that are also suitable as a front-facing camera(s) (e.g., long-range cameras1098and/or mid-range camera(s)1076, stereo camera(s)1068, infrared camera(s)1072, etc.,) as described herein. Inference and/or training logic715are used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic715are provided herein in conjunction withFIGS.7A and/or7B. 
In at least one embodiment, inference and/or training logic715may be used in systemFIG.10Bfor inferencing or predicting operations based, at least in part, on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein. In at least one embodiment, such components can be used to perform image segmentation as discussed above, such as with respect to a system ofFIG.1,2A, or3, or processes ofFIG.4or5. In at least one embodiment, this can include verifying whether a network of computing nodes is properly configured based, at least in part, on one or more expected data strings generated by the network of computing nodes. FIG.10Cis a block diagram illustrating an example system architecture for autonomous vehicle1000ofFIG.10A, according to at least one embodiment. In at least one embodiment, each of components, features, and systems of vehicle1000inFIG.10Cis illustrated as being connected via a bus1002. In at least one embodiment, bus1002may include, without limitation, a CAN data interface (alternatively referred to herein as a “CAN bus”). In at least one embodiment, a CAN may be a network inside vehicle1000used to aid in control of various features and functionality of vehicle1000, such as actuation of brakes, acceleration, braking, steering, windshield wipers, etc. In at least one embodiment, bus1002may be configured to have dozens or even hundreds of nodes, each with its own unique identifier (e.g., a CAN ID). In at least one embodiment, bus1002may be read to find steering wheel angle, ground speed, engine revolutions per minute (“RPMs”), button positions, and/or other vehicle status indicators. In at least one embodiment, bus1002may be a CAN bus that is ASIL B compliant. In at least one embodiment, in addition to, or alternatively from CAN, FlexRay and/or Ethernet protocols may be used.
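Reading vehicle status indicators off bus1002amounts to matching a frame's CAN ID and decoding its payload bytes; a minimal sketch, in which the CAN IDs and byte layouts are hypothetical (real layouts are defined per vehicle, typically in a DBC file):

```python
import struct

# Hypothetical CAN IDs and payload layouts -- real signal definitions are
# vehicle-specific; this only illustrates reading signals off a CAN bus.
STEERING_ID = 0x025
SPEED_ID = 0x0F1

def decode_frame(can_id: int, payload: bytes):
    """Decode a raw 8-byte CAN payload into a named vehicle signal."""
    if can_id == STEERING_ID:
        # assumed layout: signed 16-bit angle in 0.1-degree units, little-endian
        raw, = struct.unpack_from("<h", payload, 0)
        return ("steering_angle_deg", raw / 10.0)
    if can_id == SPEED_ID:
        # assumed layout: unsigned 16-bit ground speed in 0.01 km/h units
        raw, = struct.unpack_from("<H", payload, 0)
        return ("ground_speed_kmh", raw / 100.0)
    return ("unknown", None)
```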
In at least one embodiment, there may be any number of busses forming bus1002, which may include, without limitation, zero or more CAN busses, zero or more FlexRay busses, zero or more Ethernet busses, and/or zero or more other types of busses using different protocols. In at least one embodiment, two or more busses may be used to perform different functions, and/or may be used for redundancy. For example, a first bus may be used for collision avoidance functionality and a second bus may be used for actuation control. In at least one embodiment, each bus of bus1002may communicate with any of components of vehicle1000, and two or more busses of bus1002may communicate with corresponding components. In at least one embodiment, each of any number of system(s) on chip(s) (“SoC(s)”)1004(such as SoC1004(A) and SoC1004(B)), each of controller(s)1036, and/or each computer within vehicle may have access to same input data (e.g., inputs from sensors of vehicle1000), and may be connected to a common bus, such as a CAN bus. In at least one embodiment, vehicle1000may include one or more controller(s)1036, such as those described herein with respect toFIG.10A. In at least one embodiment, controller(s)1036may be used for a variety of functions. In at least one embodiment, controller(s)1036may be coupled to any of various other components and systems of vehicle1000, and may be used for control of vehicle1000, artificial intelligence of vehicle1000, infotainment for vehicle1000, and/or other functions. In at least one embodiment, vehicle1000may include any number of SoCs1004. In at least one embodiment, each of SoCs1004may include, without limitation, central processing units (“CPU(s)”)1006, graphics processing units (“GPU(s)”)1008, processor(s)1010, cache(s)1012, accelerator(s)1014, data store(s)1016, and/or other components and features not illustrated. In at least one embodiment, SoC(s)1004may be used to control vehicle1000in a variety of platforms and systems.
For example, in at least one embodiment, SoC(s)1004may be combined in a system (e.g., system of vehicle1000) with a High Definition (“HD”) map1022which may obtain map refreshes and/or updates via network interface1024from one or more servers (not shown inFIG.10C). In at least one embodiment, CPU(s)1006may include a CPU cluster or CPU complex (alternatively referred to herein as a “CCPLEX”). In at least one embodiment, CPU(s)1006may include multiple cores and/or level two (“L2”) caches. For instance, in at least one embodiment, CPU(s)1006may include eight cores in a coherent multi-processor configuration. In at least one embodiment, CPU(s)1006may include four dual-core clusters where each cluster has a dedicated L2 cache (e.g., a 2 megabyte (MB) L2 cache). In at least one embodiment, CPU(s)1006(e.g., CCPLEX) may be configured to support simultaneous cluster operations enabling any combination of clusters of CPU(s)1006to be active at any given time. In at least one embodiment, one or more of CPU(s)1006may implement power management capabilities that include, without limitation, one or more of following features: individual hardware blocks may be clock-gated automatically when idle to save dynamic power; each core clock may be gated when such core is not actively executing instructions due to execution of Wait for Interrupt (“WFI”)/Wait for Event (“WFE”) instructions; each core may be independently power-gated; each core cluster may be independently clock-gated when all cores are clock-gated or power-gated; and/or each core cluster may be independently power-gated when all cores are power-gated. In at least one embodiment, CPU(s)1006may further implement an enhanced algorithm for managing power states, where allowed power states and expected wakeup times are specified, and hardware/microcode determines which best power state to enter for core, cluster, and CCPLEX. 
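The power state management described above, where allowed power states and expected wakeup times are specified and hardware/microcode determines a best state to enter, can be sketched as follows (illustrative only; the tuple fields, units, and example states are assumptions):

```python
def choose_power_state(allowed_states, expected_idle_us: float):
    """Pick the deepest allowed state whose entry+exit latency fits the idle window.

    allowed_states: list of (name, power_saved_uw, entry_exit_latency_us) tuples
    (an assumed shape). Returns the name of the state saving the most power among
    those whose latency is covered by the expected idle time, falling back to the
    lowest-latency state when no deep state fits.
    """
    viable = [s for s in allowed_states if s[2] <= expected_idle_us]
    if not viable:
        return min(allowed_states, key=lambda s: s[2])[0]
    return max(viable, key=lambda s: s[1])[0]
```

This mirrors the trade-off in the text: clock gating is cheap to enter and exit, power gating saves more but needs a longer idle window to pay off.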
In at least one embodiment, processing cores may support simplified power state entry sequences in software with work offloaded to microcode. In at least one embodiment, GPU(s)1008may include an integrated GPU (alternatively referred to herein as an “iGPU”). In at least one embodiment, GPU(s)1008may be programmable and may be efficient for parallel workloads. In at least one embodiment, GPU(s)1008may use an enhanced tensor instruction set. In at least one embodiment, GPU(s)1008may include one or more streaming microprocessors, where each streaming microprocessor may include a level one (“L1”) cache (e.g., an L1 cache with at least 96 KB storage capacity), and two or more streaming microprocessors may share an L2 cache (e.g., an L2 cache with a 512 KB storage capacity). In at least one embodiment, GPU(s)1008may include at least eight streaming microprocessors. In at least one embodiment, GPU(s)1008may use compute application programming interface(s) (API(s)). In at least one embodiment, GPU(s)1008may use one or more parallel computing platforms and/or programming models (e.g., NVIDIA's CUDA model). In at least one embodiment, one or more of GPU(s)1008may be power-optimized for best performance in automotive and embedded use cases. For example, in at least one embodiment, GPU(s)1008could be fabricated on Fin field-effect transistor (“FinFET”) circuitry. In at least one embodiment, each streaming microprocessor may incorporate a number of mixed-precision processing cores partitioned into multiple blocks. For example, and without limitation, 64 FP32 cores and 32 FP64 cores could be partitioned into four processing blocks. In at least one embodiment, each processing block could be allocated 16 FP32 cores, 8 FP64 cores, 16 INT32 cores, two mixed-precision NVIDIA Tensor cores for deep learning matrix arithmetic, a level zero (“L0”) instruction cache, a scheduler (e.g., warp scheduler) or sequencer, a dispatch unit, and/or a 64 KB register file.
In at least one embodiment, streaming microprocessors may include independent parallel integer and floating-point data paths to provide for efficient execution of workloads with a mix of computation and addressing calculations. In at least one embodiment, streaming microprocessors may include independent thread scheduling capability to enable finer-grain synchronization and cooperation between parallel threads. In at least one embodiment, streaming microprocessors may include a combined L1 data cache and shared memory unit in order to improve performance while simplifying programming. In at least one embodiment, one or more of GPU(s)1008may include a high bandwidth memory (“HBM”) and/or a 16 GB HBM2 memory subsystem to provide, in some examples, about 900 GB/second peak memory bandwidth. In at least one embodiment, in addition to, or alternatively from, HBM memory, a synchronous graphics random-access memory (“SGRAM”) may be used, such as a graphics double data rate type five synchronous random-access memory (“GDDR5”). In at least one embodiment, GPU(s)1008may include unified memory technology. In at least one embodiment, address translation services (“ATS”) support may be used to allow GPU(s)1008to access CPU(s)1006page tables directly. In at least one embodiment, when a memory management unit (“MMU”) of a GPU of GPU(s)1008experiences a miss, an address translation request may be transmitted to CPU(s)1006. In response, a CPU of CPU(s)1006may look in its page tables for a virtual-to-physical mapping for an address and transmit translation back to GPU(s)1008, in at least one embodiment. In at least one embodiment, unified memory technology may allow a single unified virtual address space for memory of both CPU(s)1006and GPU(s)1008, thereby simplifying GPU(s)1008programming and porting of applications to GPU(s)1008.
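The ATS flow described above, in which a GPU-side MMU miss is resolved against CPU page tables, can be modeled as follows (a toy sketch; real address translation services operate on hardware page tables and TLBs, not Python dicts):

```python
class UnifiedAddressSpace:
    """Toy model of ATS: a GPU MMU that falls back to CPU page tables on a miss.

    Names and structure are illustrative only.
    """

    def __init__(self, cpu_page_table):
        self.cpu_page_table = cpu_page_table  # virtual page -> physical page
        self.gpu_tlb = {}                     # GPU-side cached translations

    def gpu_translate(self, vaddr: int, page_size: int = 4096) -> int:
        """Translate a virtual address, caching the mapping on the GPU side."""
        vpage, offset = divmod(vaddr, page_size)
        if vpage not in self.gpu_tlb:
            # MMU miss: send an address translation request to CPU page tables
            self.gpu_tlb[vpage] = self.cpu_page_table[vpage]
        return self.gpu_tlb[vpage] * page_size + offset
```

A single mapping shared this way is what gives CPU and GPU one unified virtual address space.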
In at least one embodiment, GPU(s)1008may include any number of access counters that may keep track of frequency of access of GPU(s)1008to memory of other processors. In at least one embodiment, access counter(s) may help ensure that memory pages are moved to physical memory of a processor that is accessing pages most frequently, thereby improving efficiency for memory ranges shared between processors. In at least one embodiment, one or more of SoC(s)1004may include any number of cache(s)1012, including those described herein. For example, in at least one embodiment, cache(s)1012could include a level three (“L3”) cache that is available to both CPU(s)1006and GPU(s)1008(e.g., that is connected to CPU(s)1006and GPU(s)1008). In at least one embodiment, cache(s)1012may include a write-back cache that may keep track of states of lines, such as by using a cache coherence protocol (e.g., MEI, MESI, MSI, etc.). In at least one embodiment, a L3 cache may include 4 MB of memory or more, depending on embodiment, although smaller cache sizes may be used. In at least one embodiment, one or more of SoC(s)1004may include one or more accelerator(s)1014(e.g., hardware accelerators, software accelerators, or a combination thereof). In at least one embodiment, SoC(s)1004may include a hardware acceleration cluster that may include optimized hardware accelerators and/or large on-chip memory. In at least one embodiment, large on-chip memory (e.g., 4 MB of SRAM), may enable a hardware acceleration cluster to accelerate neural networks and other calculations. In at least one embodiment, a hardware acceleration cluster may be used to complement GPU(s)1008and to off-load some of tasks of GPU(s)1008(e.g., to free up more cycles of GPU(s)1008for performing other tasks). In at least one embodiment, accelerator(s)1014could be used for targeted workloads (e.g., perception, convolutional neural networks (“CNNs”), recurrent neural networks (“RNNs”), etc.) 
that are stable enough to be amenable to acceleration. In at least one embodiment, a CNN may include a region-based or regional convolutional neural networks (“RCNNs”) and Fast RCNNs (e.g., as used for object detection) or other type of CNN. In at least one embodiment, accelerator(s)1014(e.g., hardware acceleration cluster) may include one or more deep learning accelerator (“DLA”). In at least one embodiment, DLA(s) may include, without limitation, one or more Tensor processing units (“TPUs”) that may be configured to provide an additional ten trillion operations per second for deep learning applications and inferencing. In at least one embodiment, TPUs may be accelerators configured to, and optimized for, performing image processing functions (e.g., for CNNs, RCNNs, etc.). In at least one embodiment, DLA(s) may further be optimized for a specific set of neural network types and floating point operations, as well as inferencing. In at least one embodiment, design of DLA(s) may provide more performance per millimeter than a typical general-purpose GPU, and typically vastly exceeds performance of a CPU. In at least one embodiment, TPU(s) may perform several functions, including a single-instance convolution function, supporting, for example, INT8, INT16, and FP16 data types for both features and weights, as well as post-processor functions. In at least one embodiment, DLA(s) may quickly and efficiently execute neural networks, especially CNNs, on processed or unprocessed data for any of a variety of functions, including, for example and without limitation: a CNN for object identification and detection using data from camera sensors; a CNN for distance estimation using data from camera sensors; a CNN for emergency vehicle detection and identification using data from microphones; a CNN for facial recognition and vehicle owner identification using data from camera sensors; and/or a CNN for security and/or safety related events.
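The access counters described earlier, which track how frequently GPU(s)1008access memory of other processors so that pages can migrate to the processor touching them most, can be sketched as follows (illustrative only; the count threshold and migration policy are assumptions):

```python
from collections import Counter

class PageAccessTracker:
    """Sketch of access counters guiding page migration (illustrative only).

    Counts per-(processor, page) accesses and suggests moving a page to the
    processor that touches it most once a count threshold is crossed.
    """

    def __init__(self, threshold: int = 3):
        self.threshold = threshold
        self.counts = Counter()  # (processor, page) -> access count

    def record(self, processor: str, page: int):
        self.counts[(processor, page)] += 1

    def preferred_home(self, page: int):
        """Return the most frequent accessor if it crossed the threshold, else None."""
        by_proc = {p: c for (p, pg), c in self.counts.items() if pg == page}
        if not by_proc:
            return None
        proc, hits = max(by_proc.items(), key=lambda kv: kv[1])
        return proc if hits >= self.threshold else None
```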
In at least one embodiment, DLA(s) may perform any function of GPU(s)1008, and by using an inference accelerator, for example, a designer may target either DLA(s) or GPU(s)1008for any function. For example, in at least one embodiment, a designer may focus processing of CNNs and floating point operations on DLA(s) and leave other functions to GPU(s)1008and/or accelerator(s)1014. In at least one embodiment, accelerator(s)1014may include programmable vision accelerator (“PVA”), which may alternatively be referred to herein as a computer vision accelerator. In at least one embodiment, PVA may be designed and configured to accelerate computer vision algorithms for advanced driver assistance system (“ADAS”)1038, autonomous driving, augmented reality (“AR”) applications, and/or virtual reality (“VR”) applications. In at least one embodiment, PVA may provide a balance between performance and flexibility. For example, in at least one embodiment, each PVA may include, for example and without limitation, any number of reduced instruction set computer (“RISC”) cores, direct memory access (“DMA”), and/or any number of vector processors. In at least one embodiment, RISC cores may interact with image sensors (e.g., image sensors of any cameras described herein), image signal processor(s), etc. In at least one embodiment, each RISC core may include any amount of memory. In at least one embodiment, RISC cores may use any of a number of protocols, depending on embodiment. In at least one embodiment, RISC cores may execute a real-time operating system (“RTOS”). In at least one embodiment, RISC cores may be implemented using one or more integrated circuit devices, application specific integrated circuits (“ASICs”), and/or memory devices. For example, in at least one embodiment, RISC cores could include an instruction cache and/or a tightly coupled RAM. In at least one embodiment, DMA may enable components of PVA to access system memory independently of CPU(s)1006. 
In at least one embodiment, DMA may support any number of features used to provide optimization to a PVA including, but not limited to, supporting multi-dimensional addressing and/or circular addressing. In at least one embodiment, DMA may support up to six or more dimensions of addressing, which may include, without limitation, block width, block height, block depth, horizontal block stepping, vertical block stepping, and/or depth stepping. In at least one embodiment, vector processors may be programmable processors that may be designed to efficiently and flexibly execute programming for computer vision algorithms and provide signal processing capabilities. In at least one embodiment, a PVA may include a PVA core and two vector processing subsystem partitions. In at least one embodiment, a PVA core may include a processor subsystem, DMA engine(s) (e.g., two DMA engines), and/or other peripherals. In at least one embodiment, a vector processing subsystem may operate as a primary processing engine of a PVA, and may include a vector processing unit (“VPU”), an instruction cache, and/or vector memory (e.g., “VMEM”). In at least one embodiment, VPU core may include a digital signal processor such as, for example, a single instruction, multiple data (“SIMD”), very long instruction word (“VLIW”) digital signal processor. In at least one embodiment, a combination of SIMD and VLIW may enhance throughput and speed. In at least one embodiment, each of vector processors may include an instruction cache and may be coupled to dedicated memory. As a result, in at least one embodiment, each of vector processors may be configured to execute independently of other vector processors. In at least one embodiment, vector processors that are included in a particular PVA may be configured to employ data parallelism. 
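The multi-dimensional DMA addressing described above (block width, height, depth, and stepping) can be sketched as an enumeration of the linear addresses a strided 3D block transfer touches (illustrative only; a real DMA engine would issue these as hardware transfers):

```python
def block_addresses(base: int, width: int, height: int, depth: int,
                    row_stride: int, plane_stride: int):
    """Enumerate linear addresses for a width x height x depth DMA block.

    Rows advance by row_stride (vertical block stepping) and planes by
    plane_stride (depth stepping) through a flat address space.
    """
    for z in range(depth):
        for y in range(height):
            row_base = base + z * plane_stride + y * row_stride
            for x in range(width):
                yield row_base + x
```

Circular addressing would additionally wrap these addresses modulo a buffer size; that variant is omitted for brevity.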
For instance, in at least one embodiment, plurality of vector processors included in a single PVA may execute a common computer vision algorithm, but on different regions of an image. In at least one embodiment, vector processors included in a particular PVA may simultaneously execute different computer vision algorithms, on one image, or even execute different algorithms on sequential images or portions of an image. In at least one embodiment, among other things, any number of PVAs may be included in hardware acceleration cluster and any number of vector processors may be included in each PVA. In at least one embodiment, PVA may include additional error correcting code (“ECC”) memory, to enhance overall system safety. In at least one embodiment, accelerator(s)1014may include a computer vision network on-chip and static random-access memory (“SRAM”), for providing a high-bandwidth, low latency SRAM for accelerator(s)1014. In at least one embodiment, on-chip memory may include at least 4 MB SRAM, comprising, for example and without limitation, eight field-configurable memory blocks, that may be accessible by both a PVA and a DLA. In at least one embodiment, each pair of memory blocks may include an advanced peripheral bus (“APB”) interface, configuration circuitry, a controller, and a multiplexer. In at least one embodiment, any type of memory may be used. In at least one embodiment, a PVA and a DLA may access memory via a backbone that provides a PVA and a DLA with high-speed access to memory. In at least one embodiment, a backbone may include a computer vision network on-chip that interconnects a PVA and a DLA to memory (e.g., using APB). In at least one embodiment, a computer vision network on-chip may include an interface that determines, before transmission of any control signal/address/data, that both a PVA and a DLA provide ready and valid signals. 
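The data-parallel pattern described above, in which multiple vector processors execute a common computer vision algorithm on different regions of an image, can be sketched as a row-strip partition (illustrative only):

```python
def split_rows(image_height: int, num_processors: int):
    """Partition image rows into contiguous strips, one per vector processor.

    Every processor runs the same algorithm, each on its own (start, end) row
    range; leftover rows are spread across the first strips.
    """
    base, extra = divmod(image_height, num_processors)
    strips, start = [], 0
    for i in range(num_processors):
        rows = base + (1 if i < extra else 0)
        strips.append((start, start + rows))
        start += rows
    return strips
```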
In at least one embodiment, an interface may provide for separate phases and separate channels for transmitting control signals/addresses/data, as well as burst-type communications for continuous data transfer. In at least one embodiment, an interface may comply with International Organization for Standardization (“ISO”) 26262 or International Electrotechnical Commission (“IEC”) 61508 standards, although other standards and protocols may be used. In at least one embodiment, one or more of SoC(s)1004may include a real-time ray-tracing hardware accelerator. In at least one embodiment, real-time ray-tracing hardware accelerator may be used to quickly and efficiently determine positions and extents of objects (e.g., within a world model), to generate real-time visualization simulations, for RADAR signal interpretation, for sound propagation synthesis and/or analysis, for simulation of SONAR systems, for general wave propagation simulation, for comparison to LIDAR data for purposes of localization and/or other functions, and/or for other uses. In at least one embodiment, accelerator(s)1014can have a wide array of uses for autonomous driving. In at least one embodiment, a PVA may be used for key processing stages in ADAS and autonomous vehicles. In at least one embodiment, a PVA's capabilities are a good match for algorithmic domains needing predictable processing, at low power and low latency. In other words, a PVA performs well on semi-dense or dense regular computation, even on small data sets, which might require predictable run-times with low latency and low power. In at least one embodiment, such as in vehicle1000, PVAs might be designed to run classic computer vision algorithms, as they can be efficient at object detection and operating on integer math. For example, according to at least one embodiment of technology, a PVA is used to perform computer stereo vision. 
In at least one embodiment, a semi-global matching-based algorithm may be used in some examples, although this is not intended to be limiting. In at least one embodiment, applications for Level 3-5 autonomous driving use motion estimation/stereo matching on-the-fly (e.g., structure from motion, pedestrian recognition, lane detection, etc.). In at least one embodiment, a PVA may perform computer stereo vision functions on inputs from two monocular cameras. In at least one embodiment, a PVA may be used to perform dense optical flow. For example, in at least one embodiment, a PVA could process raw RADAR data (e.g., using a 4D Fast Fourier Transform) to provide processed RADAR data. In at least one embodiment, a PVA is used for time of flight depth processing, by processing raw time of flight data to provide processed time of flight data, for example. In at least one embodiment, a DLA may be used to run any type of network to enhance control and driving safety, including for example and without limitation, a neural network that outputs a measure of confidence for each object detection. In at least one embodiment, confidence may be represented or interpreted as a probability, or as providing a relative “weight” of each detection compared to other detections. In at least one embodiment, a confidence measure enables a system to make further decisions regarding which detections should be considered as true positive detections rather than false positive detections. In at least one embodiment, a system may set a threshold value for confidence and consider only detections exceeding threshold value as true positive detections. In an embodiment in which an automatic emergency braking (“AEB”) system is used, false positive detections would cause vehicle to automatically perform emergency braking, which is obviously undesirable. In at least one embodiment, highly confident detections may be considered as triggers for AEB. 
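The confidence gating described above, in which only detections exceeding a threshold are treated as true positives (e.g., as triggers for AEB), can be sketched as follows (the threshold value and detection fields are illustrative):

```python
def accept_detections(detections, threshold: float = 0.9):
    """Keep only detections whose confidence exceeds the threshold.

    Mirrors the gating described above: low-confidence detections are dropped
    so they cannot trigger actions such as automatic emergency braking.
    """
    return [d for d in detections if d["confidence"] > threshold]
```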
In at least one embodiment, a DLA may run a neural network for regressing confidence value. In at least one embodiment, neural network may take as its input at least some subset of parameters, such as bounding box dimensions, ground plane estimate obtained (e.g., from another subsystem), output from IMU sensor(s)1066that correlates with vehicle1000orientation, distance, 3D location estimates of object obtained from neural network and/or other sensors (e.g., LIDAR sensor(s)1064or RADAR sensor(s)1060), among others. In at least one embodiment, one or more of SoC(s)1004may include data store(s)1016(e.g., memory). In at least one embodiment, data store(s)1016may be on-chip memory of SoC(s)1004, which may store neural networks to be executed on GPU(s)1008and/or a DLA. In at least one embodiment, data store(s)1016may be large enough in capacity to store multiple instances of neural networks for redundancy and safety. In at least one embodiment, data store(s)1016may comprise L2 or L3 cache(s). In at least one embodiment, one or more of SoC(s)1004may include any number of processor(s)1010(e.g., embedded processors). In at least one embodiment, processor(s)1010may include a boot and power management processor that may be a dedicated processor and subsystem to handle boot power and management functions and related security enforcement. In at least one embodiment, a boot and power management processor may be a part of a boot sequence of SoC(s)1004and may provide runtime power management services. In at least one embodiment, a boot power and management processor may provide clock and voltage programming, assistance in system low power state transitions, management of SoC(s)1004thermals and temperature sensors, and/or management of SoC(s)1004power states. 
In at least one embodiment, each temperature sensor may be implemented as a ring-oscillator whose output frequency is proportional to temperature, and SoC(s)1004may use ring-oscillators to detect temperatures of CPU(s)1006, GPU(s)1008, and/or accelerator(s)1014. In at least one embodiment, if temperatures are determined to exceed a threshold, then a boot and power management processor may enter a temperature fault routine and put SoC(s)1004into a lower power state and/or put vehicle1000into a chauffeur to safe stop mode (e.g., bring vehicle1000to a safe stop). In at least one embodiment, processor(s)1010may further include a set of embedded processors that may serve as an audio processing engine which may be an audio subsystem that enables full hardware support for multi-channel audio over multiple interfaces, and a broad and flexible range of audio I/O interfaces. In at least one embodiment, an audio processing engine is a dedicated processor core with a digital signal processor with dedicated RAM. In at least one embodiment, processor(s)1010may further include an always-on processor engine that may provide necessary hardware features to support low power sensor management and wake use cases. In at least one embodiment, an always-on processor engine may include, without limitation, a processor core, a tightly coupled RAM, supporting peripherals (e.g., timers and interrupt controllers), various I/O controller peripherals, and routing logic. In at least one embodiment, processor(s)1010may further include a safety cluster engine that includes, without limitation, a dedicated processor subsystem to handle safety management for automotive applications. In at least one embodiment, a safety cluster engine may include, without limitation, two or more processor cores, a tightly coupled RAM, support peripherals (e.g., timers, an interrupt controller, etc.), and/or routing logic. 
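The ring-oscillator temperature sensing and fault response described above can be sketched as follows (the calibration constant and fault threshold are hypothetical; real sensors require per-chip calibration):

```python
def temperature_from_ring_oscillator(freq_hz: float, hz_per_degree: float) -> float:
    """Convert a ring-oscillator output frequency to a temperature reading.

    Per the model above, output frequency is proportional to temperature;
    hz_per_degree is a hypothetical calibration constant.
    """
    return freq_hz / hz_per_degree

def thermal_action(temp_c: float, fault_threshold_c: float = 105.0) -> str:
    """Decide the response described above (threshold value is illustrative)."""
    return "enter_temperature_fault_routine" if temp_c > fault_threshold_c else "normal"
```

In the fault routine, the SoC would drop to a lower power state and/or bring the vehicle to a safe stop, as the text describes.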
In a safety mode, two or more cores may operate, in at least one embodiment, in a lockstep mode and function as a single core with comparison logic to detect any differences between their operations. In at least one embodiment, processor(s)1010may further include a real-time camera engine that may include, without limitation, a dedicated processor subsystem for handling real-time camera management. In at least one embodiment, processor(s)1010may further include a high-dynamic range signal processor that may include, without limitation, an image signal processor that is a hardware engine that is part of a camera processing pipeline. In at least one embodiment, processor(s)1010may include a video image compositor that may be a processing block (e.g., implemented on a microprocessor) that implements video post-processing functions needed by a video playback application to produce a final image for a player window. In at least one embodiment, a video image compositor may perform lens distortion correction on wide-view camera(s)1070, surround camera(s)1074, and/or on in-cabin monitoring camera sensor(s). In at least one embodiment, in-cabin monitoring camera sensor(s) are preferably monitored by a neural network running on another instance of SoC1004, configured to identify in cabin events and respond accordingly. In at least one embodiment, an in-cabin system may perform, without limitation, lip reading to activate cellular service and place a phone call, dictate emails, change a vehicle's destination, activate or change a vehicle's infotainment system and settings, or provide voice-activated web surfing. In at least one embodiment, certain functions are available to a driver when a vehicle is operating in an autonomous mode and are disabled otherwise. In at least one embodiment, a video image compositor may include enhanced temporal noise reduction for both spatial and temporal noise reduction. 
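The enhanced temporal noise reduction just mentioned can be sketched per pixel as a motion-weighted blend of current and previous frames (the weighting scheme and 0.5 base weight are illustrative assumptions, not the compositor's actual filter):

```python
def temporal_denoise(current: float, previous: float, motion: float) -> float:
    """Blend a pixel with its value from the previous frame.

    With no motion (motion=0) the previous frame contributes strongly; as
    motion rises toward 1 the result leans on current-frame (spatial)
    information only.
    """
    motion = min(max(motion, 0.0), 1.0)
    prev_weight = 0.5 * (1.0 - motion)
    return (1.0 - prev_weight) * current + prev_weight * previous
```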
For example, in at least one embodiment, where motion occurs in a video, noise reduction weights spatial information appropriately, decreasing weights of information provided by adjacent frames. In at least one embodiment, where an image or portion of an image does not include motion, temporal noise reduction performed by a video image compositor may use information from a previous image to reduce noise in a current image. In at least one embodiment, a video image compositor may also be configured to perform stereo rectification on input stereo lens frames. In at least one embodiment, a video image compositor may further be used for user interface composition when an operating system desktop is in use, and GPU(s) 1008 are not required to continuously render new surfaces. In at least one embodiment, when GPU(s) 1008 are powered on and active doing 3D rendering, a video image compositor may be used to offload GPU(s) 1008 to improve performance and responsiveness. In at least one embodiment, one or more SoC of SoC(s) 1004 may further include a mobile industry processor interface ("MIPI") camera serial interface for receiving video and input from cameras, a high-speed interface, and/or a video input block that may be used for a camera and related pixel input functions. In at least one embodiment, one or more of SoC(s) 1004 may further include an input/output controller(s) that may be controlled by software and may be used for receiving I/O signals that are uncommitted to a specific role. In at least one embodiment, one or more SoC of SoC(s) 1004 may further include a broad range of peripheral interfaces to enable communication with peripherals, audio encoders/decoders ("codecs"), power management, and/or other devices. In at least one embodiment, SoC(s) 1004 may be used to process data from cameras (e.g., connected over Gigabit Multimedia Serial Link and Ethernet channels), sensors (e.g., LIDAR sensor(s) 1064, RADAR sensor(s) 1060, etc.
that may be connected over Ethernet channels), data from bus 1002 (e.g., speed of vehicle 1000, steering wheel position, etc.), data from GNSS sensor(s) 1058 (e.g., connected over an Ethernet bus or a CAN bus), etc. In at least one embodiment, one or more SoC of SoC(s) 1004 may further include dedicated high-performance mass storage controllers that may include their own DMA engines, and that may be used to free CPU(s) 1006 from routine data management tasks. In at least one embodiment, SoC(s) 1004 may be an end-to-end platform with a flexible architecture that spans automation Levels 3-5, thereby providing a comprehensive functional safety architecture that leverages and makes efficient use of computer vision and ADAS techniques for diversity and redundancy, and provides a platform for a flexible, reliable driving software stack, along with deep learning tools. In at least one embodiment, SoC(s) 1004 may be faster, more reliable, and even more energy-efficient and space-efficient than conventional systems. For example, in at least one embodiment, accelerator(s) 1014, when combined with CPU(s) 1006, GPU(s) 1008, and data store(s) 1016, may provide for a fast, efficient platform for Level 3-5 autonomous vehicles. In at least one embodiment, computer vision algorithms may be executed on CPUs, which may be configured using a high-level programming language, such as C, to execute a wide variety of processing algorithms across a wide variety of visual data. However, in at least one embodiment, CPUs are oftentimes unable to meet performance requirements of many computer vision applications, such as those related to execution time and power consumption, for example. In at least one embodiment, many CPUs are unable to execute complex object detection algorithms in real-time, which is used in in-vehicle ADAS applications and in practical Level 3-5 autonomous vehicles.
Embodiments described herein allow for multiple neural networks to be performed simultaneously and/or sequentially, and for results to be combined together to enable Level 3-5 autonomous driving functionality. For example, in at least one embodiment, a CNN executing on a DLA or a discrete GPU (e.g., GPU(s) 1020) may include text and word recognition, allowing reading and understanding of traffic signs, including signs for which a neural network has not been specifically trained. In at least one embodiment, a DLA may further include a neural network that is able to identify, interpret, and provide semantic understanding of a sign, and to pass that semantic understanding to path planning modules running on a CPU Complex. In at least one embodiment, multiple neural networks may be run simultaneously, as for Level 3, 4, or 5 driving. For example, in at least one embodiment, a warning sign stating "Caution: flashing lights indicate icy conditions," along with an electric light, may be independently or collectively interpreted by several neural networks. In at least one embodiment, such warning sign itself may be identified as a traffic sign by a first deployed neural network (e.g., a neural network that has been trained), text "flashing lights indicate icy conditions" may be interpreted by a second deployed neural network, which informs a vehicle's path planning software (preferably executing on a CPU Complex) that when flashing lights are detected, icy conditions exist. In at least one embodiment, a flashing light may be identified by operating a third deployed neural network over multiple frames, informing a vehicle's path-planning software of a presence (or an absence) of flashing lights. In at least one embodiment, all three neural networks may run simultaneously, such as within a DLA and/or on GPU(s) 1008.
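The combination of the three networks in the icy-conditions example above can be sketched as a simple fusion step. The function name, the boolean/string inputs standing in for network outputs, and the returned hazard labels are all illustrative assumptions; real deployed networks would produce detections, text strings, and temporal classifications that a planner consumes.

```python
# Hypothetical sketch: fuse outputs of three deployed networks -- a sign
# detector, a text interpreter, and a multi-frame flashing-light detector.

def interpret_warning(sign_detected, sign_text, flashing_lights_detected):
    """Combine per-network outputs into a single hazard label for path planning."""
    if sign_detected and "icy conditions" in sign_text and flashing_lights_detected:
        return "ICY_CONDITIONS"
    return "NO_HAZARD"
```

The point of the sketch is the structure: each network contributes one piece of evidence, and only the combination (sign present, text interpreted, lights actually flashing) triggers the hazard that is passed to the path-planning software.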
In at least one embodiment, a CNN for facial recognition and vehicle owner identification may use data from camera sensors to identify presence of an authorized driver and/or owner of vehicle 1000. In at least one embodiment, an always-on sensor processing engine may be used to unlock a vehicle when an owner approaches a driver door and turns on lights, and, in a security mode, to disable such vehicle when an owner leaves such vehicle. In this way, SoC(s) 1004 provide for security against theft and/or carjacking. In at least one embodiment, a CNN for emergency vehicle detection and identification may use data from microphones 1096 to detect and identify emergency vehicle sirens. In at least one embodiment, SoC(s) 1004 use a CNN for classifying environmental and urban sounds, as well as classifying visual data. In at least one embodiment, a CNN running on a DLA is trained to identify a relative closing speed of an emergency vehicle (e.g., by using a Doppler effect). In at least one embodiment, a CNN may also be trained to identify emergency vehicles specific to a local area in which a vehicle is operating, as identified by GNSS sensor(s) 1058. In at least one embodiment, when operating in Europe, a CNN will seek to detect European sirens, and when in North America, a CNN will seek to identify only North American sirens. In at least one embodiment, once an emergency vehicle is detected, a control program may be used to execute an emergency vehicle safety routine, slowing a vehicle, pulling over to a side of a road, parking a vehicle, and/or idling a vehicle, with assistance of ultrasonic sensor(s) 1062, until emergency vehicles pass. In at least one embodiment, vehicle 1000 may include CPU(s) 1018 (e.g., discrete CPU(s), or dCPU(s)), that may be coupled to SoC(s) 1004 via a high-speed interconnect (e.g., PCIe). In at least one embodiment, CPU(s) 1018 may include an X86 processor, for example.
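The GNSS-driven selection of a region-specific siren model described above can be sketched as a lookup. The model identifiers, region codes, and function name are hypothetical placeholders; in practice the GNSS fix would be mapped to a region and the corresponding trained CNN weights loaded on the DLA.

```python
# Hypothetical sketch: choose a region-specific siren-detection model
# based on the region derived from GNSS data. Identifiers are illustrative.

def select_siren_model(gnss_region):
    """Return the siren-detection model to deploy for the current region."""
    models = {
        "EU": "european_siren_cnn",        # trained on European siren patterns
        "NA": "north_american_siren_cnn",  # trained on North American sirens
    }
    return models.get(gnss_region, "generic_siren_cnn")
```

A fallback model for regions without a dedicated network keeps the behavior well defined everywhere the vehicle operates.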
CPU(s) 1018 may be used to perform any of a variety of functions, including arbitrating potentially inconsistent results between ADAS sensors and SoC(s) 1004, and/or monitoring status and health of controller(s) 1036 and/or an infotainment system on a chip ("infotainment SoC") 1030, for example. In at least one embodiment, SoC(s) 1004 includes one or more interconnects, and an interconnect can include a peripheral component interconnect express (PCIe). In at least one embodiment, vehicle 1000 may include GPU(s) 1020 (e.g., discrete GPU(s), or dGPU(s)), that may be coupled to SoC(s) 1004 via a high-speed interconnect (e.g., NVIDIA's NVLINK channel). In at least one embodiment, GPU(s) 1020 may provide additional artificial intelligence functionality, such as by executing redundant and/or different neural networks, and may be used to train and/or update neural networks based at least in part on input (e.g., sensor data) from sensors of a vehicle 1000. In at least one embodiment, vehicle 1000 may further include network interface 1024, which may include, without limitation, wireless antenna(s) 1026 (e.g., one or more wireless antennas for different communication protocols, such as a cellular antenna, a Bluetooth antenna, etc.). In at least one embodiment, network interface 1024 may be used to enable wireless connectivity to Internet cloud services (e.g., with server(s) and/or other network devices), with other vehicles, and/or with computing devices (e.g., client devices of passengers). In at least one embodiment, to communicate with other vehicles, a direct link may be established between vehicle 1000 and another vehicle and/or an indirect link may be established (e.g., across networks and over the Internet). In at least one embodiment, direct links may be provided using a vehicle-to-vehicle communication link.
In at least one embodiment, a vehicle-to-vehicle communication link may provide vehicle 1000 information about vehicles in proximity to vehicle 1000 (e.g., vehicles in front of, on a side of, and/or behind vehicle 1000). In at least one embodiment, such aforementioned functionality may be part of a cooperative adaptive cruise control functionality of vehicle 1000. In at least one embodiment, network interface 1024 may include an SoC that provides modulation and demodulation functionality and enables controller(s) 1036 to communicate over wireless networks. In at least one embodiment, network interface 1024 may include a radio frequency front-end for up-conversion from baseband to radio frequency, and down conversion from radio frequency to baseband. In at least one embodiment, frequency conversions may be performed in any technically feasible fashion. For example, frequency conversions could be performed through well-known processes, and/or using super-heterodyne processes. In at least one embodiment, radio frequency front end functionality may be provided by a separate chip. In at least one embodiment, network interfaces may include wireless functionality for communicating over LTE, WCDMA, UMTS, GSM, CDMA2000, Bluetooth, Bluetooth LE, Wi-Fi, Z-Wave, ZigBee, LoRaWAN, and/or other wireless protocols. In at least one embodiment, vehicle 1000 may further include data store(s) 1028, which may include, without limitation, off-chip (e.g., off SoC(s) 1004) storage. In at least one embodiment, data store(s) 1028 may include, without limitation, one or more storage elements including RAM, SRAM, dynamic random-access memory ("DRAM"), video random-access memory ("VRAM"), flash memory, hard disks, and/or other components and/or devices that may store at least one bit of data. In at least one embodiment, vehicle 1000 may further include GNSS sensor(s) 1058 (e.g., GPS and/or assisted GPS sensors), to assist in mapping, perception, occupancy grid generation, and/or path planning functions.
In at least one embodiment, any number of GNSS sensor(s) 1058 may be used, including, for example and without limitation, a GPS using a USB connector with an Ethernet-to-Serial (e.g., RS-232) bridge. In at least one embodiment, vehicle 1000 may further include RADAR sensor(s) 1060. In at least one embodiment, RADAR sensor(s) 1060 may be used by vehicle 1000 for long-range vehicle detection, even in darkness and/or severe weather conditions. In at least one embodiment, RADAR functional safety levels may be ASIL B. In at least one embodiment, RADAR sensor(s) 1060 may use a CAN bus and/or bus 1002 (e.g., to transmit data generated by RADAR sensor(s) 1060) for control and to access object tracking data, with access to Ethernet channels to access raw data in some examples. In at least one embodiment, a wide variety of RADAR sensor types may be used. For example, and without limitation, RADAR sensor(s) 1060 may be suitable for front, rear, and side RADAR use. In at least one embodiment, one or more sensor of RADAR sensor(s) 1060 is a Pulse Doppler RADAR sensor. In at least one embodiment, RADAR sensor(s) 1060 may include different configurations, such as long-range with narrow field of view, short-range with wide field of view, short-range side coverage, etc. In at least one embodiment, long-range RADAR may be used for adaptive cruise control functionality. In at least one embodiment, long-range RADAR systems may provide a broad field of view realized by two or more independent scans, such as within a 250 m (meter) range. In at least one embodiment, RADAR sensor(s) 1060 may help in distinguishing between static and moving objects, and may be used by ADAS system 1038 for emergency brake assist and forward collision warning. In at least one embodiment, sensor(s) 1060 included in a long-range RADAR system may include, without limitation, monostatic multimodal RADAR with multiple (e.g., six or more) fixed RADAR antennae and a high-speed CAN and FlexRay interface.
In at least one embodiment, with six antennae, a central four antennae may create a focused beam pattern, designed to record vehicle's 1000 surroundings at higher speeds with minimal interference from traffic in adjacent lanes. In at least one embodiment, another two antennae may expand field of view, making it possible to quickly detect vehicles entering or leaving a lane of vehicle 1000. In at least one embodiment, mid-range RADAR systems may include, as an example, a range of up to 160 m (front) or 80 m (rear), and a field of view of up to 42 degrees (front) or 150 degrees (rear). In at least one embodiment, short-range RADAR systems may include, without limitation, any number of RADAR sensor(s) 1060 designed to be installed at both ends of a rear bumper. When installed at both ends of a rear bumper, in at least one embodiment, a RADAR sensor system may create two beams that constantly monitor blind spots in a rear direction and next to a vehicle. In at least one embodiment, short-range RADAR systems may be used in ADAS system 1038 for blind spot detection and/or lane change assist. In at least one embodiment, vehicle 1000 may further include ultrasonic sensor(s) 1062. In at least one embodiment, ultrasonic sensor(s) 1062, which may be positioned at a front, a back, and/or side location of vehicle 1000, may be used for parking assist and/or to create and update an occupancy grid. In at least one embodiment, a wide variety of ultrasonic sensor(s) 1062 may be used, and different ultrasonic sensor(s) 1062 may be used for different ranges of detection (e.g., 2.5 m, 4 m). In at least one embodiment, ultrasonic sensor(s) 1062 may operate at functional safety levels of ASIL B. In at least one embodiment, vehicle 1000 may include LIDAR sensor(s) 1064. In at least one embodiment, LIDAR sensor(s) 1064 may be used for object and pedestrian detection, emergency braking, collision avoidance, and/or other functions.
In at least one embodiment, LIDAR sensor(s) 1064 may operate at functional safety level ASIL B. In at least one embodiment, vehicle 1000 may include multiple LIDAR sensors 1064 (e.g., two, four, six, etc.) that may use an Ethernet channel (e.g., to provide data to a Gigabit Ethernet switch). In at least one embodiment, LIDAR sensor(s) 1064 may be capable of providing a list of objects and their distances for a 360-degree field of view. In at least one embodiment, commercially available LIDAR sensor(s) 1064 may have an advertised range of approximately 100 m, with an accuracy of 2 cm to 3 cm, and with support for a 100 Mbps Ethernet connection, for example. In at least one embodiment, one or more non-protruding LIDAR sensors may be used. In such an embodiment, LIDAR sensor(s) 1064 may include a small device that may be embedded into a front, a rear, a side, and/or a corner location of vehicle 1000. In at least one embodiment, LIDAR sensor(s) 1064, in such an embodiment, may provide up to a 120-degree horizontal and 35-degree vertical field-of-view, with a 200 m range even for low-reflectivity objects. In at least one embodiment, front-mounted LIDAR sensor(s) 1064 may be configured for a horizontal field of view between 45 degrees and 135 degrees. In at least one embodiment, LIDAR technologies, such as 3D flash LIDAR, may also be used. In at least one embodiment, 3D flash LIDAR uses a flash of a laser as a transmission source, to illuminate surroundings of vehicle 1000 up to approximately 200 m. In at least one embodiment, a flash LIDAR unit includes, without limitation, a receptor, which records laser pulse transit time and reflected light on each pixel, which in turn corresponds to a range from vehicle 1000 to objects. In at least one embodiment, flash LIDAR may allow for highly accurate and distortion-free images of surroundings to be generated with every laser flash. In at least one embodiment, four flash LIDAR sensors may be deployed, one at each side of vehicle 1000.
In at least one embodiment, 3D flash LIDAR systems include, without limitation, a solid-state 3D staring array LIDAR camera with no moving parts other than a fan (e.g., a non-scanning LIDAR device). In at least one embodiment, a flash LIDAR device may use a 5 nanosecond class I (eye-safe) laser pulse per frame and may capture reflected laser light as a 3D range point cloud and co-registered intensity data. In at least one embodiment, vehicle 1000 may further include IMU sensor(s) 1066. In at least one embodiment, IMU sensor(s) 1066 may be located at a center of a rear axle of vehicle 1000. In at least one embodiment, IMU sensor(s) 1066 may include, for example and without limitation, accelerometer(s), magnetometer(s), gyroscope(s), magnetic compass(es), and/or other sensor types. In at least one embodiment, such as in six-axis applications, IMU sensor(s) 1066 may include, without limitation, accelerometers and gyroscopes. In at least one embodiment, such as in nine-axis applications, IMU sensor(s) 1066 may include, without limitation, accelerometers, gyroscopes, and magnetometers. In at least one embodiment, IMU sensor(s) 1066 may be implemented as a miniature, high performance GPS-Aided Inertial Navigation System ("GPS/INS") that combines micro-electro-mechanical systems ("MEMS") inertial sensors, a high-sensitivity GPS receiver, and advanced Kalman filtering algorithms to provide estimates of position, velocity, and attitude. In at least one embodiment, IMU sensor(s) 1066 may enable vehicle 1000 to estimate its heading without requiring input from a magnetic sensor by directly observing and correlating changes in velocity from a GPS to IMU sensor(s) 1066. In at least one embodiment, IMU sensor(s) 1066 and GNSS sensor(s) 1058 may be combined in a single integrated unit. In at least one embodiment, vehicle 1000 may include microphone(s) 1096 placed in and/or around vehicle 1000.
In at least one embodiment, microphone(s) 1096 may be used for emergency vehicle detection and identification, among other things. In at least one embodiment, vehicle 1000 may further include any number of camera types, including stereo camera(s) 1068, wide-view camera(s) 1070, infrared camera(s) 1072, surround camera(s) 1074, long-range camera(s) 1098, mid-range camera(s) 1076, and/or other camera types. In at least one embodiment, cameras may be used to capture image data around an entire periphery of vehicle 1000. In at least one embodiment, which types of cameras are used depends on vehicle 1000. In at least one embodiment, any combination of camera types may be used to provide necessary coverage around vehicle 1000. In at least one embodiment, a number of cameras deployed may differ depending on embodiment. For example, in at least one embodiment, vehicle 1000 could include six cameras, seven cameras, ten cameras, twelve cameras, or another number of cameras. In at least one embodiment, cameras may support, as an example and without limitation, Gigabit Multimedia Serial Link ("GMSL") and/or Gigabit Ethernet communications. In at least one embodiment, each camera might be as described with more detail previously herein with respect to FIG. 10A and FIG. 10B. In at least one embodiment, vehicle 1000 may further include vibration sensor(s) 1042. In at least one embodiment, vibration sensor(s) 1042 may measure vibrations of components of vehicle 1000, such as axle(s). For example, in at least one embodiment, changes in vibrations may indicate a change in road surfaces. In at least one embodiment, when two or more vibration sensors 1042 are used, differences between vibrations may be used to determine friction or slippage of a road surface (e.g., when a difference in vibration is between a power-driven axle and a freely rotating axle). In at least one embodiment, vehicle 1000 may include ADAS system 1038.
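The axle-vibration comparison described above can be sketched as a simple difference test. The function name, the scalar vibration metric, and the 0.3 threshold are illustrative assumptions; a real implementation would compare spectral features of the vibration signals rather than single scalars.

```python
# Hypothetical sketch: compare vibration levels of a power-driven axle and
# a freely rotating axle; a large difference suggests reduced friction.

def road_slip_indicator(driven_axle_vib, free_axle_vib, threshold=0.3):
    """Return True when the vibration difference between axles exceeds
    the threshold, indicating possible slippage of the road surface."""
    return abs(driven_axle_vib - free_axle_vib) > threshold
```

The freely rotating axle serves as a reference for the road surface itself, so any excess vibration on the driven axle can be attributed to wheel slip rather than surface roughness.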
In at least one embodiment, ADAS system 1038 may include, without limitation, an SoC, in some examples. In at least one embodiment, ADAS system 1038 may include, without limitation, any number and combination of an autonomous/adaptive/automatic cruise control ("ACC") system, a cooperative adaptive cruise control ("CACC") system, a forward crash warning ("FCW") system, an automatic emergency braking ("AEB") system, a lane departure warning ("LDW") system, a lane keep assist ("LKA") system, a blind spot warning ("BSW") system, a rear cross-traffic warning ("RCTW") system, a collision warning ("CW") system, a lane centering ("LC") system, and/or other systems, features, and/or functionality. In at least one embodiment, an ACC system may use RADAR sensor(s) 1060, LIDAR sensor(s) 1064, and/or any number of camera(s). In at least one embodiment, an ACC system may include a longitudinal ACC system and/or a lateral ACC system. In at least one embodiment, a longitudinal ACC system monitors and controls distance to another vehicle immediately ahead of vehicle 1000 and automatically adjusts speed of vehicle 1000 to maintain a safe distance from vehicles ahead. In at least one embodiment, a lateral ACC system performs distance keeping, and advises vehicle 1000 to change lanes when necessary. In at least one embodiment, a lateral ACC is related to other ADAS applications, such as LC and CW. In at least one embodiment, a CACC system uses information from other vehicles that may be received via network interface 1024 and/or wireless antenna(s) 1026 from other vehicles via a wireless link, or indirectly, over a network connection (e.g., over the Internet). In at least one embodiment, direct links may be provided by a vehicle-to-vehicle ("V2V") communication link, while indirect links may be provided by an infrastructure-to-vehicle ("I2V") communication link.
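The longitudinal ACC behavior described above (maintain a safe following distance, otherwise track a set speed) can be sketched as a small step controller. This is a minimal illustration under assumed parameters: the two-second time gap, the 1.0 m/s speed step, and the function name are hypothetical, not the actual control law, which would use smooth acceleration commands and filtered gap measurements.

```python
# Hypothetical sketch of a longitudinal ACC speed command: restore a
# time-gap-based safe distance when too close, else converge to set speed.

def acc_speed_command(ego_speed, gap_m, set_speed, time_gap_s=2.0):
    """Return the next speed command (m/s) for a simple longitudinal ACC step."""
    safe_gap = ego_speed * time_gap_s          # desired following distance
    if gap_m < safe_gap:
        return max(0.0, ego_speed - 1.0)       # ease off to restore the gap
    return min(set_speed, ego_speed + 1.0)     # otherwise approach set speed
```

Called once per control cycle, the command nudges the vehicle's speed toward whichever constraint is binding: the safe gap to the lead vehicle or the driver's set speed.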
In general, V2V communication provides information about immediately preceding vehicles (e.g., vehicles immediately ahead of and in same lane as vehicle 1000), while I2V communication provides information about traffic further ahead. In at least one embodiment, a CACC system may include either or both I2V and V2V information sources. In at least one embodiment, given information of vehicles ahead of vehicle 1000, a CACC system may be more reliable, and it has the potential to improve traffic flow smoothness and reduce congestion on the road. In at least one embodiment, an FCW system is designed to alert a driver to a hazard, so that such driver may take corrective action. In at least one embodiment, an FCW system uses a front-facing camera and/or RADAR sensor(s) 1060, coupled to a dedicated processor, DSP, FPGA, and/or ASIC, that is electrically coupled to provide driver feedback, such as a display, speaker, and/or vibrating component. In at least one embodiment, an FCW system may provide a warning, such as in form of a sound, visual warning, vibration, and/or a quick brake pulse. In at least one embodiment, an AEB system detects an impending forward collision with another vehicle or other object, and may automatically apply brakes if a driver does not take corrective action within a specified time or distance parameter. In at least one embodiment, an AEB system may use front-facing camera(s) and/or RADAR sensor(s) 1060, coupled to a dedicated processor, DSP, FPGA, and/or ASIC. In at least one embodiment, when an AEB system detects a hazard, it will typically first alert a driver to take corrective action to avoid collision and, if that driver does not take corrective action, that AEB system may automatically apply brakes in an effort to prevent, or at least mitigate, an impact of a predicted collision. In at least one embodiment, an AEB system may include techniques such as dynamic brake support and/or crash imminent braking.
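The AEB escalation described above (warn first, then brake automatically if the driver does not act within a time parameter) can be sketched as a state decision on time-to-collision. The threshold values and function name are illustrative assumptions; production systems derive these parameters from speed, road conditions, and certified safety analyses.

```python
# Hypothetical sketch of AEB escalation: monitor -> warn -> auto-brake,
# deferring to the driver whenever the driver is already braking.

def aeb_action(time_to_collision_s, driver_braking,
               warn_ttc=2.5, brake_ttc=1.2):
    """Return the AEB stage for the current time-to-collision estimate."""
    if time_to_collision_s > warn_ttc:
        return "MONITOR"          # no imminent hazard
    if driver_braking:
        return "DRIVER_BRAKING"   # driver is taking corrective action
    if time_to_collision_s > brake_ttc:
        return "WARN"             # alert the driver first
    return "AUTO_BRAKE"           # apply brakes to prevent or mitigate impact
```

Checking for driver action before escalating mirrors the sequence in the text: automatic braking is a last resort when the warning goes unheeded.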
In at least one embodiment, an LDW system provides visual, audible, and/or tactile warnings, such as steering wheel or seat vibrations, to alert driver when vehicle 1000 crosses lane markings. In at least one embodiment, an LDW system does not activate when a driver indicates an intentional lane departure, such as by activating a turn signal. In at least one embodiment, an LDW system may use front-side facing cameras, coupled to a dedicated processor, DSP, FPGA, and/or ASIC, that is electrically coupled to provide driver feedback, such as a display, speaker, and/or vibrating component. In at least one embodiment, an LKA system is a variation of an LDW system. In at least one embodiment, an LKA system provides steering input or braking to correct vehicle 1000 if vehicle 1000 starts to exit its lane. In at least one embodiment, a BSW system detects and warns a driver of vehicles in an automobile's blind spot. In at least one embodiment, a BSW system may provide a visual, audible, and/or tactile alert to indicate that merging or changing lanes is unsafe. In at least one embodiment, a BSW system may provide an additional warning when a driver uses a turn signal. In at least one embodiment, a BSW system may use rear-side facing camera(s) and/or RADAR sensor(s) 1060, coupled to a dedicated processor, DSP, FPGA, and/or ASIC, that is electrically coupled to driver feedback, such as a display, speaker, and/or vibrating component. In at least one embodiment, an RCTW system may provide visual, audible, and/or tactile notification when an object is detected outside a rear-camera range when vehicle 1000 is backing up. In at least one embodiment, an RCTW system includes an AEB system to ensure that vehicle brakes are applied to avoid a crash.
In at least one embodiment, an RCTW system may use one or more rear-facing RADAR sensor(s) 1060, coupled to a dedicated processor, DSP, FPGA, and/or ASIC, that is electrically coupled to provide driver feedback, such as a display, speaker, and/or vibrating component. In at least one embodiment, conventional ADAS systems may be prone to false positive results which may be annoying and distracting to a driver, but typically are not catastrophic, because conventional ADAS systems alert a driver and allow that driver to decide whether a safety condition truly exists and act accordingly. In at least one embodiment, vehicle 1000 itself decides, in case of conflicting results, whether to heed result from a primary computer or a secondary computer (e.g., a first controller or a second controller of controllers 1036). For example, in at least one embodiment, ADAS system 1038 may be a backup and/or secondary computer for providing perception information to a backup computer rationality module. In at least one embodiment, a backup computer rationality monitor may run redundant diverse software on hardware components to detect faults in perception and dynamic driving tasks. In at least one embodiment, outputs from ADAS system 1038 may be provided to a supervisory MCU. In at least one embodiment, if outputs from a primary computer and outputs from a secondary computer conflict, a supervisory MCU determines how to reconcile conflict to ensure safe operation. In at least one embodiment, a primary computer may be configured to provide a supervisory MCU with a confidence score, indicating that primary computer's confidence in a chosen result. In at least one embodiment, if that confidence score exceeds a threshold, that supervisory MCU may follow that primary computer's direction, regardless of whether that secondary computer provides a conflicting or inconsistent result.
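The supervisory MCU's confidence-based arbitration described above can be sketched as a small decision function. The 0.8 threshold, the result strings, and the conservative fallback label are hypothetical; the source only specifies that a sufficiently high primary confidence overrides the secondary computer, and that conflicts below threshold are arbitrated.

```python
# Hypothetical sketch of supervisory-MCU arbitration between a primary and
# a secondary computer, using the primary's self-reported confidence score.

def arbitrate(primary_result, secondary_result, confidence, threshold=0.8):
    """Follow the primary computer when its confidence exceeds the threshold;
    otherwise accept only agreeing results, falling back conservatively."""
    if confidence > threshold:
        return primary_result            # trust primary regardless of conflict
    if primary_result == secondary_result:
        return primary_result            # computers agree
    return "SAFE_FALLBACK"               # illustrative conservative outcome
```

In the described system this decision could additionally be informed by a neural network that learns when the secondary computer's output can be trusted; the sketch keeps only the threshold logic stated in the text.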
In at least one embodiment, where a confidence score does not meet a threshold, and where primary and secondary computers indicate different results (e.g., a conflict), a supervisory MCU may arbitrate between computers to determine an appropriate outcome. In at least one embodiment, a supervisory MCU may be configured to run a neural network(s) that is trained and configured to determine, based at least in part on outputs from a primary computer and outputs from a secondary computer, conditions under which that secondary computer provides false alarms. In at least one embodiment, neural network(s) in a supervisory MCU may learn when a secondary computer's output may be trusted, and when it cannot. For example, in at least one embodiment, when that secondary computer is a RADAR-based FCW system, a neural network(s) in that supervisory MCU may learn when an FCW system is identifying metallic objects that are not, in fact, hazards, such as a drainage grate or manhole cover that triggers an alarm. In at least one embodiment, when a secondary computer is a camera-based LDW system, a neural network in a supervisory MCU may learn to override LDW when bicyclists or pedestrians are present and a lane departure is, in fact, a safest maneuver. In at least one embodiment, a supervisory MCU may include at least one of a DLA or a GPU suitable for running neural network(s) with associated memory. In at least one embodiment, a supervisory MCU may comprise and/or be included as a component of SoC(s) 1004. In at least one embodiment, ADAS system 1038 may include a secondary computer that performs ADAS functionality using traditional rules of computer vision. In at least one embodiment, that secondary computer may use classic computer vision rules (if-then), and presence of a neural network(s) in a supervisory MCU may improve reliability, safety and performance.
For example, in at least one embodiment, diverse implementation and intentional non-identity makes an overall system more fault-tolerant, especially to faults caused by software (or software-hardware interface) functionality. For example, in at least one embodiment, if there is a software bug or error in software running on a primary computer, and non-identical software code running on a secondary computer provides a consistent overall result, then a supervisory MCU may have greater confidence that an overall result is correct, and a bug in software or hardware on that primary computer is not causing a material error. In at least one embodiment, an output of ADAS system 1038 may be fed into a primary computer's perception block and/or a primary computer's dynamic driving task block. For example, in at least one embodiment, if ADAS system 1038 indicates a forward crash warning due to an object immediately ahead, a perception block may use this information when identifying objects. In at least one embodiment, a secondary computer may have its own neural network that is trained and thus reduces a risk of false positives, as described herein. In at least one embodiment, vehicle 1000 may further include infotainment SoC 1030 (e.g., an in-vehicle infotainment system (IVI)). Although illustrated and described as an SoC, infotainment system SoC 1030, in at least one embodiment, may not be an SoC, and may include, without limitation, two or more discrete components.
In at least one embodiment, infotainment SoC 1030 may include, without limitation, a combination of hardware and software that may be used to provide audio (e.g., music, a personal digital assistant, navigational instructions, news, radio, etc.), video (e.g., TV, movies, streaming, etc.), phone (e.g., hands-free calling), network connectivity (e.g., LTE, WiFi, etc.), and/or information services (e.g., navigation systems, rear-parking assistance, a radio data system, vehicle related information such as fuel level, total distance covered, brake fluid level, oil level, door open/close, air filter information, etc.) to vehicle 1000. For example, infotainment SoC 1030 could include radios, disk players, navigation systems, video players, USB and Bluetooth connectivity, carputers, in-car entertainment, WiFi, steering wheel audio controls, hands free voice control, a heads-up display ("HUD"), HMI display 1034, a telematics device, a control panel (e.g., for controlling and/or interacting with various components, features, and/or systems), and/or other components. In at least one embodiment, infotainment SoC 1030 may further be used to provide information (e.g., visual and/or audible) to user(s) of vehicle 1000, such as information from ADAS system 1038, autonomous driving information such as planned vehicle maneuvers, trajectories, surrounding environment information (e.g., intersection information, vehicle information, road information, etc.), and/or other information. In at least one embodiment, infotainment SoC 1030 may include any amount and type of GPU functionality. In at least one embodiment, infotainment SoC 1030 may communicate over bus 1002 with other devices, systems, and/or components of vehicle 1000. In at least one embodiment, infotainment SoC 1030 may be coupled to a supervisory MCU such that a GPU of an infotainment system may perform some self-driving functions in event that primary controller(s) 1036 (e.g., primary and/or backup computers of vehicle 1000) fail.
In at least one embodiment, infotainment SoC 1030 may put vehicle 1000 into a chauffeur-to-safe-stop mode, as described herein. In at least one embodiment, vehicle 1000 may further include instrument cluster 1032 (e.g., a digital dash, an electronic instrument cluster, a digital instrument panel, etc.). In at least one embodiment, instrument cluster 1032 may include, without limitation, a controller and/or supercomputer (e.g., a discrete controller or supercomputer). In at least one embodiment, instrument cluster 1032 may include, without limitation, any number and combination of a set of instrumentation such as a speedometer, fuel level, oil pressure, tachometer, odometer, turn indicators, gearshift position indicator, seat belt warning light(s), parking-brake warning light(s), engine-malfunction light(s), supplemental restraint system (e.g., airbag) information, lighting controls, safety system controls, navigation information, etc. In some examples, information may be displayed and/or shared among infotainment SoC 1030 and instrument cluster 1032. In at least one embodiment, instrument cluster 1032 may be included as part of infotainment SoC 1030, or vice versa. Inference and/or training logic 715 are used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 715 are provided herein in conjunction with FIGS. 7A and/or 7B. In at least one embodiment, inference and/or training logic 715 may be used in the system of FIG. 10C for inferencing or predicting operations based, at least in part, on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein. In at least one embodiment, such components can be used to perform image segmentation as discussed above, such as with respect to a system of FIG. 1, 2A, or 3, or processes of FIG. 4 or 5.
In at least one embodiment, this can include verifying whether a network of computing nodes is properly configured based, at least in part, on one or more expected data strings generated by the network of computing nodes. FIG. 10D is a diagram of a system for communication between cloud-based server(s) and autonomous vehicle 1000 of FIG. 10A, according to at least one embodiment. In at least one embodiment, a system may include, without limitation, server(s) 1078, network(s) 1090, and any number and type of vehicles, including vehicle 1000. In at least one embodiment, server(s) 1078 may include, without limitation, a plurality of GPUs 1084(A)-1084(H) (collectively referred to herein as GPUs 1084), PCIe switches 1082(A)-1082(D) (collectively referred to herein as PCIe switches 1082), and/or CPUs 1080(A)-1080(B) (collectively referred to herein as CPUs 1080). In at least one embodiment, GPUs 1084, CPUs 1080, and PCIe switches 1082 may be interconnected with high-speed interconnects such as, for example and without limitation, NVLink interfaces 1088 developed by NVIDIA and/or PCIe connections 1086. In at least one embodiment, GPUs 1084 are connected via an NVLink and/or NVSwitch SoC, and GPUs 1084 and PCIe switches 1082 are connected via PCIe interconnects. Although eight GPUs 1084, two CPUs 1080, and four PCIe switches 1082 are illustrated, this is not intended to be limiting. In at least one embodiment, each of server(s) 1078 may include, without limitation, any number of GPUs 1084, CPUs 1080, and/or PCIe switches 1082, in any combination. For example, in at least one embodiment, server(s) 1078 could each include eight, sixteen, thirty-two, and/or more GPUs 1084. In at least one embodiment, server(s) 1078 may receive, over network(s) 1090 and from vehicles, image data representative of images showing unexpected or changed road conditions, such as recently commenced road-work.
In at least one embodiment, server(s) 1078 may transmit, over network(s) 1090 and to vehicles, neural networks 1092, updated or otherwise, and/or map information 1094, including, without limitation, information regarding traffic and road conditions. In at least one embodiment, updates to map information 1094 may include, without limitation, updates for HD map 1022, such as information regarding construction sites, potholes, detours, flooding, and/or other obstructions. In at least one embodiment, neural networks 1092 and/or map information 1094 may have resulted from new training and/or experiences represented in data received from any number of vehicles in an environment, and/or based at least in part on training performed at a data center (e.g., using server(s) 1078 and/or other servers). In at least one embodiment, server(s) 1078 may be used to train machine learning models (e.g., neural networks) based at least in part on training data. In at least one embodiment, training data may be generated by vehicles, and/or may be generated in a simulation (e.g., using a game engine). In at least one embodiment, any amount of training data is tagged (e.g., where an associated neural network benefits from supervised learning) and/or undergoes other pre-processing. In at least one embodiment, any amount of training data is not tagged and/or pre-processed (e.g., where an associated neural network does not require supervised learning). In at least one embodiment, once machine learning models are trained, machine learning models may be used by vehicles (e.g., transmitted to vehicles over network(s) 1090), and/or machine learning models may be used by server(s) 1078 to remotely monitor vehicles. In at least one embodiment, server(s) 1078 may receive data from vehicles and apply data to up-to-date real-time neural networks for real-time intelligent inferencing.
In at least one embodiment, server(s) 1078 may include deep-learning supercomputers and/or dedicated AI computers powered by GPU(s) 1084, such as DGX and DGX Station machines developed by NVIDIA. However, in at least one embodiment, server(s) 1078 may include deep learning infrastructure that uses CPU-powered data centers. In at least one embodiment, deep-learning infrastructure of server(s) 1078 may be capable of fast, real-time inferencing, and may use that capability to evaluate and verify health of processors, software, and/or associated hardware in vehicle 1000. For example, in at least one embodiment, deep-learning infrastructure may receive periodic updates from vehicle 1000, such as a sequence of images and/or objects that vehicle 1000 has located in that sequence of images (e.g., via computer vision and/or other machine learning object classification techniques). In at least one embodiment, deep-learning infrastructure may run its own neural network to identify objects and compare them with objects identified by vehicle 1000 and, if results do not match and deep-learning infrastructure concludes that AI in vehicle 1000 is malfunctioning, then server(s) 1078 may transmit a signal to vehicle 1000 instructing a fail-safe computer of vehicle 1000 to assume control, notify passengers, and complete a safe parking maneuver. In at least one embodiment, server(s) 1078 may include GPU(s) 1084 and one or more programmable inference accelerators (e.g., NVIDIA's TensorRT 3 devices). In at least one embodiment, a combination of GPU-powered servers and inference acceleration may make real-time responsiveness possible. In at least one embodiment, such as where performance is less critical, servers powered by CPUs, FPGAs, and other processors may be used for inferencing. In at least one embodiment, hardware structure(s) 715 are used to perform one or more embodiments. Details regarding hardware structure(s) 715 are provided herein in conjunction with FIGS. 7A and/or 7B.
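The server-side verification step described above can be sketched as a set comparison between the objects the vehicle reported and the objects the infrastructure's own network detects. The function name, label-set comparison, and overlap threshold are illustrative assumptions, not the platform's API:

```python
# Illustrative sketch (names and threshold are assumptions): server-side
# infrastructure re-runs detection on images uploaded by the vehicle and
# compares the result with the objects the vehicle reported, returning a
# fail-safe command when agreement is persistently low.

def verify_vehicle_perception(vehicle_objects, server_objects,
                              min_overlap=0.8):
    """Compare two collections of detected object labels; return a command."""
    vehicle, server = set(vehicle_objects), set(server_objects)
    if not server:  # nothing detected server-side: no evidence of a fault
        return "ok"
    # Fraction of server-side detections the vehicle also reported.
    overlap = len(vehicle & server) / len(server)
    # Low agreement suggests the in-vehicle AI may be malfunctioning.
    return "ok" if overlap >= min_overlap else "engage_failsafe"
```

A production system would match detections spatially (e.g., by bounding-box overlap) over a sequence of frames before concluding anything, but the agree/disagree decision is the core of the health check.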
Computer Systems
FIG. 11 is a block diagram illustrating an exemplary computer system, which may be a system with interconnected devices and components, a system-on-a-chip (SOC), or some combination thereof formed with a processor that may include execution units to execute an instruction, according to at least one embodiment. In at least one embodiment, a computer system 1100 may include, without limitation, a component, such as a processor 1102, to employ execution units including logic to perform algorithms to process data, in accordance with the present disclosure, such as in the embodiments described herein. In at least one embodiment, computer system 1100 may include processors, such as PENTIUM® Processor family, Xeon™, Itanium®, XScale™ and/or StrongARM™, Intel® Core™, or Intel® Nervana™ microprocessors available from Intel Corporation of Santa Clara, California, although other systems (including PCs having other microprocessors, engineering workstations, set-top boxes and the like) may also be used. In at least one embodiment, computer system 1100 may execute a version of the WINDOWS operating system available from Microsoft Corporation of Redmond, Wash., although other operating systems (UNIX and Linux, for example), embedded software, and/or graphical user interfaces may also be used. Embodiments may be used in other devices such as handheld devices and embedded applications. Some examples of handheld devices include cellular phones, Internet Protocol devices, digital cameras, personal digital assistants ("PDAs"), and handheld PCs. In at least one embodiment, embedded applications may include a microcontroller, a digital signal processor ("DSP"), system on a chip, network computers ("NetPCs"), set-top boxes, network hubs, wide area network ("WAN") switches, or any other system that may perform one or more instructions in accordance with at least one embodiment.
In at least one embodiment, computer system 1100 may include, without limitation, processor 1102 that may include, without limitation, one or more execution units 1108 to perform machine learning model training and/or inferencing according to techniques described herein. In at least one embodiment, computer system 1100 is a single processor desktop or server system, but in another embodiment, computer system 1100 may be a multiprocessor system. In at least one embodiment, processor 1102 may include, without limitation, a complex instruction set computer ("CISC") microprocessor, a reduced instruction set computing ("RISC") microprocessor, a very long instruction word ("VLIW") microprocessor, a processor implementing a combination of instruction sets, or any other processor device, such as a digital signal processor, for example. In at least one embodiment, processor 1102 may be coupled to a processor bus 1110 that may transmit data signals between processor 1102 and other components in computer system 1100. In at least one embodiment, processor 1102 may include, without limitation, a Level 1 ("L1") internal cache memory ("cache") 1104. In at least one embodiment, processor 1102 may have a single internal cache or multiple levels of internal cache. In at least one embodiment, cache memory may reside external to processor 1102. Other embodiments may also include a combination of both internal and external caches depending on particular implementation and needs. In at least one embodiment, a register file 1106 may store different types of data in various registers including, without limitation, integer registers, floating point registers, status registers, and an instruction pointer register. In at least one embodiment, execution unit 1108, including, without limitation, logic to perform integer and floating point operations, also resides in processor 1102.
In at least one embodiment, processor 1102 may also include a microcode ("ucode") read only memory ("ROM") that stores microcode for certain macro instructions. In at least one embodiment, execution unit 1108 may include logic to handle a packed instruction set 1109. In at least one embodiment, by including packed instruction set 1109 in an instruction set of a general-purpose processor, along with associated circuitry to execute instructions, operations used by many multimedia applications may be performed using packed data in processor 1102. In at least one embodiment, many multimedia applications may be accelerated and executed more efficiently by using a full width of a processor's data bus for performing operations on packed data, which may eliminate a need to transfer smaller units of data across that processor's data bus to perform one or more operations one data element at a time. In at least one embodiment, execution unit 1108 may also be used in microcontrollers, embedded processors, graphics devices, DSPs, and other types of logic circuits. In at least one embodiment, computer system 1100 may include, without limitation, a memory 1120. In at least one embodiment, memory 1120 may be a Dynamic Random Access Memory ("DRAM") device, a Static Random Access Memory ("SRAM") device, a flash memory device, or another memory device. In at least one embodiment, memory 1120 may store instruction(s) 1119 and/or data 1121 represented by data signals that may be executed by processor 1102. In at least one embodiment, a system logic chip may be coupled to processor bus 1110 and memory 1120. In at least one embodiment, a system logic chip may include, without limitation, a memory controller hub ("MCH") 1116, and processor 1102 may communicate with MCH 1116 via processor bus 1110. In at least one embodiment, MCH 1116 may provide a high bandwidth memory path 1118 to memory 1120 for instruction and data storage and for storage of graphics commands, data and textures.
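The packed-data idea above can be made concrete with a small software emulation: four 16-bit values travel in one 64-bit word, and one wide operation replaces four element-at-a-time operations. The masking below stands in for the lane isolation that real SIMD hardware provides; it is a toy model, not the packed instruction set 1109 itself:

```python
# Toy illustration of packed-data execution: four 16-bit lanes carried
# in one 64-bit word, added in a single wide operation (SWAR-style),
# with inter-lane carries suppressed by masking.
import struct

def pack4x16(values):
    """Pack four 16-bit integers into one 64-bit word (little-endian)."""
    return struct.unpack("<Q", struct.pack("<4H", *values))[0]

def unpack4x16(word):
    """Split one 64-bit word back into its four 16-bit lanes."""
    return list(struct.unpack("<4H", struct.pack("<Q", word)))

def packed_add(a_word, b_word):
    """Add four 16-bit lanes at once without carries crossing lanes."""
    mask = 0x7FFF7FFF7FFF7FFF   # low 15 bits of each lane
    high = 0x8000800080008000   # each lane's top bit
    low_sum = (a_word & mask) + (b_word & mask)   # carries stay in-lane
    return low_sum ^ ((a_word ^ b_word) & high)   # fold in the top bits
```

Each lane wraps modulo 2^16 independently, mirroring how a packed-add instruction treats its data bus as parallel narrow ALUs rather than one wide adder.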
In at least one embodiment, MCH 1116 may direct data signals between processor 1102, memory 1120, and other components in computer system 1100 and bridge data signals between processor bus 1110, memory 1120, and a system I/O interface 1122. In at least one embodiment, a system logic chip may provide a graphics port for coupling to a graphics controller. In at least one embodiment, MCH 1116 may be coupled to memory 1120 through high bandwidth memory path 1118, and a graphics/video card 1112 may be coupled to MCH 1116 through an Accelerated Graphics Port ("AGP") interconnect 1114. In at least one embodiment, computer system 1100 may use system I/O interface 1122 as a proprietary hub interface bus to couple MCH 1116 to an I/O controller hub ("ICH") 1130. In at least one embodiment, ICH 1130 may provide direct connections to some I/O devices via a local I/O bus. In at least one embodiment, a local I/O bus may include, without limitation, a high-speed I/O bus for connecting peripherals to memory 1120, a chipset, and processor 1102. Examples may include, without limitation, an audio controller 1129, a firmware hub ("flash BIOS") 1128, a wireless transceiver 1126, a data storage 1124, a legacy I/O controller 1123 containing user input and keyboard interfaces 1125, a serial expansion port 1127, such as a Universal Serial Bus ("USB") port, and a network controller 1134. In at least one embodiment, data storage 1124 may comprise a hard disk drive, a floppy disk drive, a CD-ROM device, a flash memory device, or other mass storage device. In at least one embodiment, FIG. 11 illustrates a system, which includes interconnected hardware devices or "chips", whereas in other embodiments, FIG. 11 may illustrate an exemplary SoC. In at least one embodiment, devices illustrated in FIG. 11 may be interconnected with proprietary interconnects, standardized interconnects (e.g., PCIe) or some combination thereof.
In at least one embodiment, one or more components of computer system 1100 are interconnected using compute express link (CXL) interconnects. Inference and/or training logic 715 are used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 715 are provided herein in conjunction with FIGS. 7A and/or 7B. In at least one embodiment, inference and/or training logic 715 may be used in the system of FIG. 11 for inferencing or predicting operations based, at least in part, on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein. In at least one embodiment, such components can be used to perform image segmentation as discussed above, such as with respect to a system of FIG. 1, 2A, or 3, or processes of FIG. 4 or 5. In at least one embodiment, this can include verifying whether a network of computing nodes is properly configured based, at least in part, on one or more expected data strings generated by the network of computing nodes. FIG. 12 is a block diagram illustrating an electronic device 1200 for utilizing a processor 1210, according to at least one embodiment. In at least one embodiment, electronic device 1200 may be, for example and without limitation, a notebook, a tower server, a rack server, a blade server, a laptop, a desktop, a tablet, a mobile device, a phone, an embedded computer, or any other suitable electronic device. In at least one embodiment, electronic device 1200 may include, without limitation, processor 1210 communicatively coupled to any suitable number or kind of components, peripherals, modules, or devices.
In at least one embodiment, processor 1210 is coupled using a bus or interface, such as an I2C bus, a System Management Bus ("SMBus"), a Low Pin Count (LPC) bus, a Serial Peripheral Interface ("SPI"), a High Definition Audio ("HDA") bus, a Serial Advance Technology Attachment ("SATA") bus, a Universal Serial Bus ("USB") (versions 1, 2, 3, etc.), or a Universal Asynchronous Receiver/Transmitter ("UART") bus. In at least one embodiment, FIG. 12 illustrates a system, which includes interconnected hardware devices or "chips", whereas in other embodiments, FIG. 12 may illustrate an exemplary SoC. In at least one embodiment, devices illustrated in FIG. 12 may be interconnected with proprietary interconnects, standardized interconnects (e.g., PCIe) or some combination thereof. In at least one embodiment, one or more components of FIG. 12 are interconnected using compute express link (CXL) interconnects. In at least one embodiment, FIG. 12 may include a display 1224, a touch screen 1225, a touch pad 1230, a Near Field Communications unit ("NFC") 1245, a sensor hub 1240, a thermal sensor 1246, an Express Chipset ("EC") 1235, a Trusted Platform Module ("TPM") 1238, BIOS/firmware/flash memory ("BIOS, FW Flash") 1222, a DSP 1260, a drive 1220 such as a Solid State Disk ("SSD") or a Hard Disk Drive ("HDD"), a wireless local area network unit ("WLAN") 1250, a Bluetooth unit 1252, a Wireless Wide Area Network unit ("WWAN") 1256, a Global Positioning System (GPS) unit 1255, a camera ("USB 3.0 camera") 1254 such as a USB 3.0 camera, and/or a Low Power Double Data Rate ("LPDDR") memory unit ("LPDDR3") 1215 implemented in, for example, an LPDDR3 standard. These components may each be implemented in any suitable manner. In at least one embodiment, other components may be communicatively coupled to processor 1210 through components described herein. In at least one embodiment, an accelerometer 1241, an ambient light sensor ("ALS") 1242, a compass 1243, and a gyroscope 1244 may be communicatively coupled to sensor hub 1240.
In at least one embodiment, a thermal sensor 1239, a fan 1237, a keyboard 1236, and touch pad 1230 may be communicatively coupled to EC 1235. In at least one embodiment, speakers 1263, headphones 1264, and a microphone ("mic") 1265 may be communicatively coupled to an audio unit ("audio codec and class D amp") 1262, which may in turn be communicatively coupled to DSP 1260. In at least one embodiment, audio unit 1262 may include, for example and without limitation, an audio coder/decoder ("codec") and a class D amplifier. In at least one embodiment, a SIM card ("SIM") 1257 may be communicatively coupled to WWAN unit 1256. In at least one embodiment, components such as WLAN unit 1250 and Bluetooth unit 1252, as well as WWAN unit 1256, may be implemented in a Next Generation Form Factor ("NGFF"). Inference and/or training logic 715 are used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 715 are provided herein in conjunction with FIGS. 7A and/or 7B. In at least one embodiment, inference and/or training logic 715 may be used in the system of FIG. 12 for inferencing or predicting operations based, at least in part, on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein. In at least one embodiment, such components can be used to perform image segmentation as discussed above, such as with respect to a system of FIG. 1, 2A, or 3, or processes of FIG. 4 or 5. In at least one embodiment, this can include verifying whether a network of computing nodes is properly configured based, at least in part, on one or more expected data strings generated by the network of computing nodes. FIG. 13 illustrates a computer system 1300, according to at least one embodiment. In at least one embodiment, computer system 1300 is configured to implement various processes and methods described throughout this disclosure.
In at least one embodiment, computer system 1300 comprises, without limitation, at least one central processing unit ("CPU") 1302 that is connected to a communication bus 1310 implemented using any suitable protocol, such as PCI ("Peripheral Component Interconnect"), peripheral component interconnect express ("PCI-Express"), AGP ("Accelerated Graphics Port"), HyperTransport, or any other bus or point-to-point communication protocol(s). In at least one embodiment, computer system 1300 includes, without limitation, a main memory 1304 and control logic (e.g., implemented as hardware, software, or a combination thereof), and data are stored in main memory 1304, which may take the form of random access memory ("RAM"). In at least one embodiment, a network interface subsystem ("network interface") 1322 provides an interface to other computing devices and networks for receiving data from and transmitting data to other systems with computer system 1300. In at least one embodiment, computer system 1300 includes, without limitation, input devices 1308, a parallel processing system 1312, and display devices 1306 that can be implemented using a conventional cathode ray tube ("CRT"), a liquid crystal display ("LCD"), a light emitting diode ("LED") display, a plasma display, or other suitable display technologies. In at least one embodiment, user input is received from input devices 1308 such as a keyboard, mouse, touchpad, microphone, etc. In at least one embodiment, each module described herein can be situated on a single semiconductor platform to form a processing system. Inference and/or training logic 715 are used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 715 are provided herein in conjunction with FIGS. 7A and/or 7B.
In at least one embodiment, inference and/or training logic 715 may be used in the system of FIG. 13 for inferencing or predicting operations based, at least in part, on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein. In at least one embodiment, such components can be used to perform image segmentation as discussed above, such as with respect to a system of FIG. 1, 2A, or 3, or processes of FIG. 4 or 5. In at least one embodiment, this can include verifying whether a network of computing nodes is properly configured based, at least in part, on one or more expected data strings generated by the network of computing nodes. FIG. 14 illustrates a computer system 1400, according to at least one embodiment. In at least one embodiment, computer system 1400 includes, without limitation, a computer 1410 and a USB stick 1420. In at least one embodiment, computer 1410 may include, without limitation, any number and type of processor(s) (not shown) and a memory (not shown). In at least one embodiment, computer 1410 includes, without limitation, a server, a cloud instance, a laptop, and a desktop computer. In at least one embodiment, USB stick 1420 includes, without limitation, a processing unit 1430, a USB interface 1440, and USB interface logic 1450. In at least one embodiment, processing unit 1430 may be any instruction execution system, apparatus, or device capable of executing instructions. In at least one embodiment, processing unit 1430 may include, without limitation, any number and type of processing cores (not shown). In at least one embodiment, processing unit 1430 comprises an application specific integrated circuit ("ASIC") that is optimized to perform any amount and type of operations associated with machine learning. For instance, in at least one embodiment, processing unit 1430 is a tensor processing unit ("TPU") that is optimized to perform machine learning inference operations.
In at least one embodiment, processing unit 1430 is a vision processing unit ("VPU") that is optimized to perform machine vision and machine learning inference operations. In at least one embodiment, USB interface 1440 may be any type of USB connector or USB socket. For instance, in at least one embodiment, USB interface 1440 is a USB 3.0 Type-C socket for data and power. In at least one embodiment, USB interface 1440 is a USB 3.0 Type-A connector. In at least one embodiment, USB interface logic 1450 may include any amount and type of logic that enables processing unit 1430 to interface with devices (e.g., computer 1410) via USB interface 1440. Inference and/or training logic 715 are used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 715 are provided herein in conjunction with FIGS. 7A and/or 7B. In at least one embodiment, inference and/or training logic 715 may be used in the system of FIG. 14 for inferencing or predicting operations based, at least in part, on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein. In at least one embodiment, such components can be used to perform image segmentation as discussed above, such as with respect to a system of FIG. 1, 2A, or 3, or processes of FIG. 4 or 5. In at least one embodiment, this can include verifying whether a network of computing nodes is properly configured based, at least in part, on one or more expected data strings generated by the network of computing nodes. FIG. 15A illustrates an exemplary architecture in which a plurality of GPUs 1510(1)-1510(N) is communicatively coupled to a plurality of multi-core processors 1505(1)-1505(M) over high-speed links 1540(1)-1540(N) (e.g., buses, point-to-point interconnects, etc.).
In at least one embodiment, high-speed links 1540(1)-1540(N) support a communication throughput of 4 GB/s, 30 GB/s, 80 GB/s or higher. In at least one embodiment, various interconnect protocols may be used including, but not limited to, PCIe 4.0 or 5.0 and NVLink 2.0. In various figures, "N" and "M" represent positive integers, values of which may be different from figure to figure. In at least one embodiment, one or more GPUs in a plurality of GPUs 1510(1)-1510(N) include one or more graphics cores (also referred to simply as "cores") 1800 as disclosed in FIGS. 18A and 18B. In at least one embodiment, one or more graphics cores 1800 may be referred to as streaming multiprocessors ("SMs"), stream processors ("SPs"), stream processing units ("SPUs"), compute units ("CUs"), execution units ("EUs"), and/or slices, where a slice in this context can refer to a portion of processing resources in a processing unit (e.g., 16 cores, a ray tracing unit, a thread director or scheduler). In addition, and in at least one embodiment, two or more of GPUs 1510 are interconnected over high-speed links 1529(1)-1529(2), which may be implemented using similar or different protocols/links than those used for high-speed links 1540(1)-1540(N). Similarly, two or more of multi-core processors 1505 may be connected over a high-speed link 1528, which may be a symmetric multi-processor (SMP) bus operating at 20 GB/s, 30 GB/s, 120 GB/s or higher. Alternatively, all communication between various system components shown in FIG. 15A may be accomplished using similar protocols/links (e.g., over a common interconnection fabric). In at least one embodiment, each multi-core processor 1505 is communicatively coupled to a processor memory 1501(1)-1501(M), via memory interconnects 1526(1)-1526(M), respectively, and each GPU 1510(1)-1510(N) is communicatively coupled to GPU memory 1520(1)-1520(N) over GPU memory interconnects 1550(1)-1550(N), respectively.
In at least one embodiment, memory interconnects 1526 and 1550 may utilize similar or different memory access technologies. By way of example, and not limitation, processor memories 1501(1)-1501(M) and GPU memories 1520 may be volatile memories such as dynamic random access memories (DRAMs) (including stacked DRAMs), Graphics DDR SDRAM (GDDR) (e.g., GDDR5, GDDR6), or High Bandwidth Memory (HBM), and/or may be non-volatile memories such as 3D XPoint or Nano-Ram. In at least one embodiment, some portion of processor memories 1501 may be volatile memory and another portion may be non-volatile memory (e.g., using a two-level memory (2LM) hierarchy). As described herein, although various multi-core processors 1505 and GPUs 1510 may be physically coupled to a particular memory 1501, 1520, respectively, a unified memory architecture may be implemented in which a virtual system address space (also referred to as "effective address" space) is distributed among various physical memories. For example, processor memories 1501(1)-1501(M) may each comprise 64 GB of system memory address space and GPU memories 1520(1)-1520(N) may each comprise 32 GB of system memory address space, resulting in a total of 256 GB addressable memory when M=2 and N=4. Other values for N and M are possible. FIG. 15B illustrates additional details for an interconnection between a multi-core processor 1507 and a graphics acceleration module 1546 in accordance with one exemplary embodiment. In at least one embodiment, graphics acceleration module 1546 may include one or more GPU chips integrated on a line card which is coupled to processor 1507 via high-speed link 1540 (e.g., a PCIe bus, NVLink, etc.).  In at least one embodiment, graphics acceleration module 1546 may alternatively be integrated on a package or chip with processor 1507.
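The unified address-space sizing quoted above (64 GB per processor memory, 32 GB per GPU memory, M = 2, N = 4) reduces to a one-line computation; the helper name below is just for this worked check:

```python
# Worked check of the unified virtual address-space arithmetic: the
# total addressable space is the sum of all processor memories plus
# all GPU memories contributed to the shared "effective address" space.

def unified_address_space_gb(m, cpu_gb, n, gpu_gb):
    """Total GB of unified address space for M CPU and N GPU memories."""
    return m * cpu_gb + n * gpu_gb
```

With M = 2 and N = 4 this gives 2 x 64 + 4 x 32 = 256 GB, matching the figure stated in the text.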
In at least one embodiment, processor 1507 includes a plurality of cores 1560A-1560D (which may be referred to as "execution units"), each with a translation lookaside buffer ("TLB") 1561A-1561D and one or more caches 1562A-1562D. In at least one embodiment, cores 1560A-1560D may include various other components for executing instructions and processing data that are not illustrated. In at least one embodiment, caches 1562A-1562D may comprise Level 1 (L1) and Level 2 (L2) caches. In addition, one or more shared caches 1556 may be included in caches 1562A-1562D and shared by sets of cores 1560A-1560D. For example, one embodiment of processor 1507 includes 24 cores, each with its own L1 cache, twelve shared L2 caches, and twelve shared L3 caches. In this embodiment, one or more L2 and L3 caches are shared by two adjacent cores. In at least one embodiment, processor 1507 and graphics acceleration module 1546 connect with system memory 1514, which may include processor memories 1501(1)-1501(M) of FIG. 15A. In at least one embodiment, coherency is maintained for data and instructions stored in various caches 1562A-1562D, 1556 and system memory 1514 via inter-core communication over a coherence bus 1564. In at least one embodiment, for example, each cache may have cache coherency logic/circuitry associated therewith to communicate over coherence bus 1564 in response to detected reads or writes to particular cache lines. In at least one embodiment, a cache snooping protocol is implemented over coherence bus 1564 to snoop cache accesses. In at least one embodiment, a proxy circuit 1525 communicatively couples graphics acceleration module 1546 to coherence bus 1564, allowing graphics acceleration module 1546 to participate in a cache coherence protocol as a peer of cores 1560A-1560D. In particular, in at least one embodiment, an interface 1535 provides connectivity to proxy circuit 1525 over high-speed link 1540 and an interface 1537 connects graphics acceleration module 1546 to high-speed link 1540.
In at least one embodiment, an accelerator integration circuit 1536 provides cache management, memory access, context management, and interrupt management services on behalf of a plurality of graphics processing engines 1531(1)-1531(N) of graphics acceleration module 1546. In at least one embodiment, graphics processing engines 1531(1)-1531(N) may each comprise a separate graphics processing unit (GPU). In at least one embodiment, plurality of graphics processing engines 1531(1)-1531(N) of graphics acceleration module 1546 include one or more graphics cores 1800 as discussed in connection with FIGS. 18A and 18B. In at least one embodiment, graphics processing engines 1531(1)-1531(N) alternatively may comprise different types of graphics processing engines within a GPU, such as graphics execution units, media processing engines (e.g., video encoders/decoders), samplers, and blit engines. In at least one embodiment, graphics acceleration module 1546 may be a GPU with a plurality of graphics processing engines 1531(1)-1531(N), or graphics processing engines 1531(1)-1531(N) may be individual GPUs integrated on a common package, line card, or chip. In at least one embodiment, accelerator integration circuit 1536 includes a memory management unit (MMU) 1539 for performing various memory management functions such as virtual-to-physical memory translations (also referred to as effective-to-real memory translations) and memory access protocols for accessing system memory 1514. In at least one embodiment, MMU 1539 may also include a translation lookaside buffer (TLB) (not shown) for caching virtual/effective to physical/real address translations. In at least one embodiment, a cache 1538 can store commands and data for efficient access by graphics processing engines 1531(1)-1531(N). In at least one embodiment, data stored in cache 1538 and graphics memories 1533(1)-1533(M) is kept coherent with core caches 1562A-1562D, 1556 and system memory 1514, possibly using a fetch unit 1544.
As mentioned, this may be accomplished via proxy circuit 1525 on behalf of cache 1538 and memories 1533(1)-1533(M) (e.g., sending updates to cache 1538 related to modifications/accesses of cache lines on processor caches 1562A-1562D, 1556 and receiving updates from cache 1538). In at least one embodiment, a set of registers 1545 store context data for threads executed by graphics processing engines 1531(1)-1531(N) and a context management circuit 1548 manages thread contexts. For example, context management circuit 1548 may perform save and restore operations to save and restore contexts of various threads during context switches (e.g., where a first thread is saved and a second thread is stored so that the second thread can be executed by a graphics processing engine). For example, on a context switch, context management circuit 1548 may store current register values to a designated region in memory (e.g., identified by a context pointer). It may then restore register values when returning to a context. In at least one embodiment, an interrupt management circuit 1547 receives and processes interrupts received from system devices. In at least one embodiment, virtual/effective addresses from a graphics processing engine 1531 are translated to real/physical addresses in system memory 1514 by MMU 1539. In at least one embodiment, accelerator integration circuit 1536 supports multiple (e.g., 4, 8, 16) graphics accelerator modules 1546 and/or other accelerator devices. In at least one embodiment, graphics accelerator module 1546 may be dedicated to a single application executed on processor 1507 or may be shared between multiple applications. In at least one embodiment, a virtualized graphics execution environment is presented in which resources of graphics processing engines 1531(1)-1531(N) are shared with multiple applications or virtual machines (VMs).
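The save/restore flow described above can be modeled with a short sketch. This is illustrative only: class and method names are hypothetical, and real context management is performed in hardware, not by any driver API shown here.

```python
# Illustrative model of context switching: on a switch, the outgoing
# thread's register values are stored to a region keyed by a context
# pointer, and the incoming thread's saved values are restored.
class ContextManagementCircuit:
    def __init__(self):
        self.save_regions = {}  # context pointer -> saved register values

    def context_switch(self, out_ctx, out_regs, in_ctx):
        """Save the outgoing thread's registers and return the incoming
        thread's previously saved registers (empty if it has never run)."""
        self.save_regions[out_ctx] = dict(out_regs)
        return dict(self.save_regions.get(in_ctx, {}))

# Usage: thread A (context 0x100) is switched out, later switched back in,
# and its register state is restored intact.
cm = ContextManagementCircuit()
cm.context_switch(0x100, {"r0": 7}, 0x200)       # A out, B in (B fresh)
restored = cm.context_switch(0x200, {"r0": 1}, 0x100)  # B out, A back in
```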
In at least one embodiment, resources may be subdivided into "slices" which are allocated to different VMs and/or applications based on processing requirements and priorities associated with VMs and/or applications. In at least one embodiment, accelerator integration circuit 1536 performs as a bridge to a system for graphics acceleration module 1546 and provides address translation and system memory cache services. In addition, in at least one embodiment, accelerator integration circuit 1536 may provide virtualization facilities for a host processor to manage virtualization of graphics processing engines 1531(1)-1531(N), interrupts, and memory management. In at least one embodiment, because hardware resources of graphics processing engines 1531(1)-1531(N) are mapped explicitly to a real address space seen by host processor 1507, any host processor can address these resources directly using an effective address value. In at least one embodiment, one function of accelerator integration circuit 1536 is physical separation of graphics processing engines 1531(1)-1531(N) so that they appear to a system as independent units. In at least one embodiment, one or more graphics memories 1533(1)-1533(M) are coupled to each of graphics processing engines 1531(1)-1531(N), respectively, and N=M. In at least one embodiment, graphics memories 1533(1)-1533(M) store instructions and data being processed by each of graphics processing engines 1531(1)-1531(N). In at least one embodiment, graphics memories 1533(1)-1533(M) may be volatile memories such as DRAMs (including stacked DRAMs), GDDR memory (e.g., GDDR5, GDDR6), or HBM, and/or may be non-volatile memories such as 3D XPoint or Nano-Ram.
In at least one embodiment, to reduce data traffic over high-speed link 1540, biasing techniques can be used to ensure that data stored in graphics memories 1533(1)-1533(M) is data that will be used most frequently by graphics processing engines 1531(1)-1531(N) and preferably not used by cores 1560A-1560D (at least not frequently). Similarly, in at least one embodiment, a biasing mechanism attempts to keep data needed by cores (and preferably not graphics processing engines 1531(1)-1531(N)) within caches 1562A-1562D, 1556 and system memory 1514. FIG. 15C illustrates another exemplary embodiment in which accelerator integration circuit 1536 is integrated within processor 1507. In this embodiment, graphics processing engines 1531(1)-1531(N) communicate directly over high-speed link 1540 to accelerator integration circuit 1536 via interface 1537 and interface 1535 (which, again, may be any form of bus or interface protocol). In at least one embodiment, accelerator integration circuit 1536 may perform similar operations as those described with respect to FIG. 15B, but potentially at a higher throughput given its close proximity to coherence bus 1564 and caches 1562A-1562D, 1556. In at least one embodiment, an accelerator integration circuit supports different programming models including a dedicated-process programming model (no graphics acceleration module virtualization) and shared programming models (with virtualization), which may include programming models which are controlled by accelerator integration circuit 1536 and programming models which are controlled by graphics acceleration module 1546. In at least one embodiment, graphics processing engines 1531(1)-1531(N) are dedicated to a single application or process under a single operating system. In at least one embodiment, a single application can funnel other application requests to graphics processing engines 1531(1)-1531(N), providing virtualization within a VM/partition.
In at least one embodiment, graphics processing engines 1531(1)-1531(N) may be shared by multiple VM/application partitions. In at least one embodiment, shared models may use a system hypervisor to virtualize graphics processing engines 1531(1)-1531(N) to allow access by each operating system. In at least one embodiment, for single-partition systems without a hypervisor, graphics processing engines 1531(1)-1531(N) are owned by an operating system. In at least one embodiment, an operating system can virtualize graphics processing engines 1531(1)-1531(N) to provide access to each process or application. In at least one embodiment, graphics acceleration module 1546 or an individual graphics processing engine 1531(1)-1531(N) selects a process element using a process handle. In at least one embodiment, process elements are stored in system memory 1514 and are addressable using an effective address to real address translation technique described herein. In at least one embodiment, a process handle may be an implementation-specific value provided to a host process when registering its context with graphics processing engine 1531(1)-1531(N) (that is, calling system software to add a process element to a process element linked list). In at least one embodiment, a lower 16 bits of a process handle may be an offset of a process element within a process element linked list. FIG. 15D illustrates an exemplary accelerator integration slice 1590. In at least one embodiment, a "slice" comprises a specified portion of processing resources of accelerator integration circuit 1536. In at least one embodiment, an application's effective address space 1582 within system memory 1514 stores process elements 1583. In at least one embodiment, process elements 1583 are stored in response to GPU invocations 1581 from applications 1580 executed on processor 1507. In at least one embodiment, a process element 1583 contains process state for corresponding application 1580.
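The handle layout described above can be sketched directly. Only the 16-bit width follows the text; the function name and example value are illustrative, since a process handle is stated to be implementation-specific:

```python
# The lower 16 bits of a process handle serve as the offset of a process
# element within a process element linked list.
def process_element_offset(process_handle):
    return process_handle & 0xFFFF  # mask off everything above bit 15

# A hypothetical 32-bit handle: upper bits carry implementation-specific
# data, lower 16 bits give the linked-list offset.
print(hex(process_element_offset(0xABCD1234)))  # 0x1234
```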
In at least one embodiment, a work descriptor (WD) 1584 contained in process element 1583 can be a single job requested by an application or may contain a pointer to a queue of jobs. In at least one embodiment, WD 1584 is a pointer to a job request queue in an application's effective address space 1582. In at least one embodiment, graphics acceleration module 1546 and/or individual graphics processing engines 1531(1)-1531(N) can be shared by all or a subset of processes in a system. In at least one embodiment, an infrastructure for setting up process states and sending a WD 1584 to a graphics acceleration module 1546 to start a job in a virtualized environment may be included. In at least one embodiment, a dedicated-process programming model is implementation-specific. In at least one embodiment, in this model, a single process owns graphics acceleration module 1546 or an individual graphics processing engine 1531. In at least one embodiment, when graphics acceleration module 1546 is owned by a single process, a hypervisor initializes accelerator integration circuit 1536 for an owning partition and an operating system initializes accelerator integration circuit 1536 for an owning process when graphics acceleration module 1546 is assigned. In at least one embodiment, in operation, a WD fetch unit 1591 in accelerator integration slice 1590 fetches next WD 1584, which includes an indication of work to be done by one or more graphics processing engines of graphics acceleration module 1546. In at least one embodiment, data from WD 1584 may be stored in registers 1545 and used by MMU 1539, interrupt management circuit 1547 and/or context management circuit 1548 as illustrated. For example, one embodiment of MMU 1539 includes segment/page walk circuitry for accessing segment/page tables 1586 within an OS virtual address space 1585. In at least one embodiment, interrupt management circuit 1547 may process interrupt events 1592 received from graphics acceleration module 1546.
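The two WD forms described above (a single job, or a pointer to a queue of jobs) can be sketched as follows. The dictionary representation and key names are hypothetical; a real WD format is stated to be implementation-specific:

```python
# A WD either holds one job directly or references a queue of jobs in
# the application's effective address space; a fetch unit drains
# whichever form is present.
def jobs_from_wd(wd):
    if wd.get("job") is not None:          # single job requested
        return [wd["job"]]
    return list(wd.get("job_queue", []))   # pointer to a queue of jobs

jobs_from_wd({"job": "draw_call"})          # one job
jobs_from_wd({"job_queue": ["a", "b"]})     # a queue of jobs
```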
In at least one embodiment, when performing graphics operations, an effective address 1593 generated by a graphics processing engine 1531(1)-1531(N) is translated to a real address by MMU 1539. In at least one embodiment, registers 1545 are duplicated for each graphics processing engine 1531(1)-1531(N) and/or graphics acceleration module 1546 and may be initialized by a hypervisor or an operating system. In at least one embodiment, each of these duplicated registers may be included in an accelerator integration slice 1590. Exemplary registers that may be initialized by a hypervisor are shown in Table 1.

TABLE 1
Hypervisor Initialized Registers

Register #  Description
1           Slice Control Register
2           Real Address (RA) Scheduled Processes Area Pointer
3           Authority Mask Override Register
4           Interrupt Vector Table Entry Offset
5           Interrupt Vector Table Entry Limit
6           State Register
7           Logical Partition ID
8           Real Address (RA) Hypervisor Accelerator Utilization Record Pointer
9           Storage Description Register

Exemplary registers that may be initialized by an operating system are shown in Table 2.

TABLE 2
Operating System Initialized Registers

Register #  Description
1           Process and Thread Identification
2           Effective Address (EA) Context Save/Restore Pointer
3           Virtual Address (VA) Accelerator Utilization Record Pointer
4           Virtual Address (VA) Storage Segment Table Pointer
5           Authority Mask
6           Work Descriptor

In at least one embodiment, each WD 1584 is specific to a particular graphics acceleration module 1546 and/or graphics processing engines 1531(1)-1531(N). In at least one embodiment, it contains all information required by a graphics processing engine 1531(1)-1531(N) to do work, or it can be a pointer to a memory location where an application has set up a command queue of work to be completed. FIG. 15E illustrates additional details for one exemplary embodiment of a shared model. This embodiment includes a hypervisor real address space 1598 in which a process element list 1599 is stored.
In at least one embodiment, hypervisor real address space 1598 is accessible via a hypervisor 1596 which virtualizes graphics acceleration module engines for operating system 1595. In at least one embodiment, shared programming models allow for all or a subset of processes from all or a subset of partitions in a system to use a graphics acceleration module 1546. In at least one embodiment, there are two programming models where graphics acceleration module 1546 is shared by multiple processes and partitions, namely time-sliced shared and graphics directed shared. In at least one embodiment, in this model, system hypervisor 1596 owns graphics acceleration module 1546 and makes its function available to all operating systems 1595. In at least one embodiment, for a graphics acceleration module 1546 to support virtualization by system hypervisor 1596, graphics acceleration module 1546 may adhere to certain requirements, such as: (1) an application's job request must be autonomous (that is, state does not need to be maintained between jobs), or graphics acceleration module 1546 must provide a context save and restore mechanism; (2) an application's job request is guaranteed by graphics acceleration module 1546 to complete in a specified amount of time, including any translation faults, or graphics acceleration module 1546 provides an ability to preempt processing of a job; and (3) graphics acceleration module 1546 must guarantee fairness between processes when operating in a directed shared programming model. In at least one embodiment, application 1580 is required to make an operating system 1595 system call with a graphics acceleration module type, a work descriptor (WD), an authority mask register (AMR) value, and a context save/restore area pointer (CSRP). In at least one embodiment, graphics acceleration module type describes a targeted acceleration function for a system call. In at least one embodiment, graphics acceleration module type may be a system-specific value.
In at least one embodiment, WD is formatted specifically for graphics acceleration module 1546 and can be in a form of a graphics acceleration module 1546 command, an effective address pointer to a user-defined structure, an effective address pointer to a queue of commands, or any other data structure to describe work to be done by graphics acceleration module 1546. In at least one embodiment, an AMR value is an AMR state to use for a current process. In at least one embodiment, a value passed to an operating system is similar to an application setting an AMR. In at least one embodiment, if accelerator integration circuit 1536 (not shown) and graphics acceleration module 1546 implementations do not support a User Authority Mask Override Register (UAMOR), an operating system may apply a current UAMOR value to an AMR value before passing an AMR in a hypervisor call. In at least one embodiment, hypervisor 1596 may optionally apply a current Authority Mask Override Register (AMOR) value before placing an AMR into process element 1583. In at least one embodiment, CSRP is one of registers 1545 containing an effective address of an area in an application's effective address space 1582 for graphics acceleration module 1546 to save and restore context state. In at least one embodiment, this pointer is optional if no state is required to be saved between jobs or when a job is preempted. In at least one embodiment, context save/restore area may be pinned system memory. Upon receiving a system call, operating system 1595 may verify that application 1580 has registered and been given authority to use graphics acceleration module 1546. In at least one embodiment, operating system 1595 then calls hypervisor 1596 with information shown in Table 3.
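The authority-mask flow above can be sketched as two masking steps. The bitwise-AND semantics and function names are assumptions for illustration; the text specifies only that an override value is "applied" to the AMR at each stage:

```python
# Illustrative authority-mask flow: the OS may apply a current UAMOR to
# the application-supplied AMR before the hypervisor call, and the
# hypervisor may apply an AMOR before placing the AMR into a process
# element. Here each override is modeled as a bitwise AND.
def amr_for_hypervisor_call(app_amr, uamor):
    return app_amr & uamor      # OS-applied override

def amr_for_process_element(amr, amor):
    return amr & amor           # hypervisor-applied override
```

In this model an override register can only clear authority bits, never grant ones the application did not request, which matches the intent of a mask override.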
TABLE 3
OS to Hypervisor Call Parameters

Parameter #  Description
1            A work descriptor (WD)
2            An Authority Mask Register (AMR) value (potentially masked)
3            An effective address (EA) Context Save/Restore Area Pointer (CSRP)
4            A process ID (PID) and optional thread ID (TID)
5            A virtual address (VA) accelerator utilization record pointer (AURP)
6            Virtual address of storage segment table pointer (SSTP)
7            A logical interrupt service number (LISN)

In at least one embodiment, upon receiving a hypervisor call, hypervisor 1596 verifies that operating system 1595 has registered and been given authority to use graphics acceleration module 1546. In at least one embodiment, hypervisor 1596 then puts process element 1583 into a process element linked list for a corresponding graphics acceleration module 1546 type. In at least one embodiment, a process element may include information shown in Table 4.

TABLE 4
Process Element Information

Element #  Description
1          A work descriptor (WD)
2          An Authority Mask Register (AMR) value (potentially masked)
3          An effective address (EA) Context Save/Restore Area Pointer (CSRP)
4          A process ID (PID) and optional thread ID (TID)
5          A virtual address (VA) accelerator utilization record pointer (AURP)
6          Virtual address of storage segment table pointer (SSTP)
7          A logical interrupt service number (LISN)
8          Interrupt vector table, derived from hypervisor call parameters
9          A state register (SR) value
10         A logical partition ID (LPID)
11         A real address (RA) hypervisor accelerator utilization record pointer
12         Storage Descriptor Register (SDR)

In at least one embodiment, hypervisor initializes a plurality of accelerator integration slice 1590 registers 1545. As illustrated in FIG. 15F, in at least one embodiment, a unified memory is used, addressable via a common virtual memory address space used to access physical processor memories 1501(1)-1501(N) and GPU memories 1520(1)-1520(N).
In this implementation, operations executed on GPUs 1510(1)-1510(N) utilize a same virtual/effective memory address space to access processor memories 1501(1)-1501(M) and vice versa, thereby simplifying programmability. In at least one embodiment, a first portion of a virtual/effective address space is allocated to processor memory 1501(1), a second portion to second processor memory 1501(N), a third portion to GPU memory 1520(1), and so on. In at least one embodiment, an entire virtual/effective memory space (sometimes referred to as an effective address space) is thereby distributed across each of processor memories 1501 and GPU memories 1520, allowing any processor or GPU to access any physical memory with a virtual address mapped to that memory. In at least one embodiment, bias/coherence management circuitry 1594A-1594E within one or more of MMUs 1539A-1539E ensures cache coherence between caches of one or more host processors (e.g., 1505) and GPUs 1510 and implements biasing techniques indicating physical memories in which certain types of data should be stored. In at least one embodiment, while multiple instances of bias/coherence management circuitry 1594A-1594E are illustrated in FIG. 15F, bias/coherence circuitry may be implemented within an MMU of one or more host processors 1505 and/or within accelerator integration circuit 1536. One embodiment allows GPU memories 1520 to be mapped as part of system memory, and accessed using shared virtual memory (SVM) technology, but without suffering performance drawbacks associated with full system cache coherence. In at least one embodiment, an ability for GPU memories 1520 to be accessed as system memory without onerous cache coherence overhead provides a beneficial operating environment for GPU offload. In at least one embodiment, this arrangement allows software of host processor 1505 to set up operands and access computation results, without overhead of traditional I/O DMA data copies.
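The portion-by-portion allocation described above can be sketched as a range map. The 64 GB and 32 GB sizes reuse the earlier hypothetical example, and the ordering (processor memories first, then GPU memories) and the naming are illustrative assumptions; real allocations are implementation-specific:

```python
# Sketch of a partitioned virtual/effective address space: consecutive
# ranges are allocated to each processor memory, then to each GPU memory,
# so any processor or GPU can address any physical memory by virtual address.
GB = 1 << 30

def build_address_map(m, n, cpu_gb=64, gpu_gb=32):
    ranges, base = [], 0
    for i in range(m):                                  # processor memories first
        ranges.append((base, base + cpu_gb * GB, f"processor_memory_{i + 1}"))
        base += cpu_gb * GB
    for j in range(n):                                  # then GPU memories
        ranges.append((base, base + gpu_gb * GB, f"gpu_memory_{j + 1}"))
        base += gpu_gb * GB
    return ranges

def backing_memory(ranges, vaddr):
    # Resolve which physical memory backs a given virtual address.
    for lo, hi, name in ranges:
        if lo <= vaddr < hi:
            return name
    raise ValueError("address not mapped")
```

With M=2 and N=4 as in the earlier example, addresses below 128 GB resolve to processor memories and the remainder of the 256 GB space resolves to GPU memories.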
In at least one embodiment, such traditional copies involve driver calls, interrupts and memory mapped I/O (MMIO) accesses that are all inefficient relative to simple memory accesses. In at least one embodiment, an ability to access GPU memories 1520 without cache coherence overheads can be critical to execution time of an offloaded computation. In at least one embodiment, in cases with substantial streaming write memory traffic, for example, cache coherence overhead can significantly reduce an effective write bandwidth seen by a GPU 1510. In at least one embodiment, efficiency of operand setup, efficiency of results access, and efficiency of GPU computation may play a role in determining effectiveness of a GPU offload. In at least one embodiment, selection of GPU bias and host processor bias is driven by a bias tracker data structure. In at least one embodiment, a bias table may be used, for example, which may be a page-granular structure (e.g., controlled at a granularity of a memory page) that includes 1 or 2 bits per GPU-attached memory page. In at least one embodiment, a bias table may be implemented in a stolen memory range of one or more GPU memories 1520, with or without a bias cache in a GPU 1510 (e.g., to cache frequently/recently used entries of a bias table). Alternatively, in at least one embodiment, an entire bias table may be maintained within a GPU. In at least one embodiment, a bias table entry associated with each access to a GPU-attached memory 1520 is accessed prior to actual access to a GPU memory, causing the following operations. In at least one embodiment, local requests from a GPU 1510 that find their page in GPU bias are forwarded directly to a corresponding GPU memory 1520. In at least one embodiment, local requests from a GPU that find their page in host bias are forwarded to processor 1505 (e.g., over a high-speed link as described herein).
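The page-granular bias table and the request-routing behavior above can be sketched together. The one-bit-per-page encoding (the text allows 1 or 2 bits) and the 4 KiB page size are illustrative assumptions; all names are hypothetical:

```python
# Sketch of a page-granular bias table: one bit per GPU-attached page
# (1 = GPU bias, 0 = host bias), packed into a byte array that could
# live in a stolen memory range of GPU memory.
PAGE_SHIFT = 12  # assume 4 KiB pages

class BiasTable:
    def __init__(self, num_pages):
        self.bits = bytearray((num_pages + 7) // 8)

    def set_bias(self, page, gpu_biased):
        if gpu_biased:
            self.bits[page // 8] |= 1 << (page % 8)
        else:
            self.bits[page // 8] &= ~(1 << (page % 8)) & 0xFF

    def is_gpu_biased(self, addr):
        page = addr >> PAGE_SHIFT
        return bool((self.bits[page // 8] >> (page % 8)) & 1)

def route_local_gpu_request(table, addr):
    """Local GPU requests: GPU-biased pages go directly to the
    corresponding GPU memory; host-biased pages are forwarded to the
    host processor over the high-speed link."""
    return "gpu_memory" if table.is_gpu_biased(addr) else "host_processor"
```

The bit-packed layout reflects why the structure is cheap enough to keep per page: one bit per 4 KiB page costs about 32 KiB of table per gigabyte of GPU-attached memory under these assumptions.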
In at least one embodiment, requests from processor 1505 that find a requested page in host processor bias complete a request like a normal memory read. Alternatively, requests directed to a GPU-biased page may be forwarded to a GPU 1510. In at least one embodiment, a GPU may then transition a page to a host processor bias if it is not currently using the page. In at least one embodiment, a bias state of a page can be changed either by a software-based mechanism, a hardware-assisted software-based mechanism, or, for a limited set of cases, a purely hardware-based mechanism. In at least one embodiment, one mechanism for changing bias state employs an API call (e.g., OpenCL), which, in turn, calls a GPU's device driver which, in turn, sends a message (or enqueues a command descriptor) to a GPU directing it to change a bias state and, for some transitions, perform a cache flushing operation in a host. In at least one embodiment, a cache flushing operation is used for a transition from host processor 1505 bias to GPU bias, but not for an opposite transition. In at least one embodiment, cache coherency is maintained by temporarily rendering GPU-biased pages uncacheable by host processor 1505. In at least one embodiment, to access these pages, processor 1505 may request access from GPU 1510, which may or may not grant access right away. In at least one embodiment, thus, to reduce communication between processor 1505 and GPU 1510 it is beneficial to ensure that GPU-biased pages are those which are required by a GPU but not host processor 1505, and vice versa. Hardware structure(s) 715 are used to perform one or more embodiments. Details regarding a hardware structure(s) 715 may be provided herein in conjunction with FIGS. 7A and/or 7B. FIG. 16 illustrates exemplary integrated circuits and associated graphics processors that may be fabricated using one or more IP cores, according to various embodiments described herein.
In addition to what is illustrated, other logic and circuits may be included in at least one embodiment, including additional graphics processors/cores, peripheral interface controllers, or general-purpose processor cores. FIG. 16 is a block diagram illustrating an exemplary system on a chip integrated circuit 1600 that may be fabricated using one or more IP cores, according to at least one embodiment. In at least one embodiment, integrated circuit 1600 includes one or more application processor(s) 1605 (e.g., CPUs), at least one graphics processor 1610, and may additionally include an image processor 1615 and/or a video processor 1620, any of which may be a modular IP core. In at least one embodiment, integrated circuit 1600 includes peripheral or bus logic including a USB controller 1625, a UART controller 1630, an SPI/SDIO controller 1635, and an I2S/I2C controller 1640. In at least one embodiment, integrated circuit 1600 can include a display device 1645 coupled to one or more of a high-definition multimedia interface (HDMI) controller 1650 and a mobile industry processor interface (MIPI) display interface 1655. In at least one embodiment, storage may be provided by a flash memory subsystem 1660 including flash memory and a flash memory controller. In at least one embodiment, a memory interface may be provided via a memory controller 1665 for access to SDRAM or SRAM memory devices. In at least one embodiment, some integrated circuits additionally include an embedded security engine 1670. Inference and/or training logic 715 are used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 715 are provided herein in conjunction with FIGS. 7A and/or 7B.
In at least one embodiment, inference and/or training logic 715 may be used in integrated circuit 1600 for inferencing or predicting operations based, at least in part, on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein. In at least one embodiment, such components can be used to perform image segmentation as discussed above, such as with respect to a system of FIG. 1, 2A, or 3, or processes of FIG. 4 or 5. In at least one embodiment, this can include verifying whether a network of computing nodes is properly configured based, at least in part, on one or more expected data strings generated by the network of computing nodes. FIGS. 17A-17B illustrate exemplary integrated circuits and associated graphics processors that may be fabricated using one or more IP cores, according to various embodiments described herein. In addition to what is illustrated, other logic and circuits may be included in at least one embodiment, including additional graphics processors/cores, peripheral interface controllers, or general-purpose processor cores. FIGS. 17A-17B are block diagrams illustrating exemplary graphics processors for use within an SoC, according to embodiments described herein. FIG. 17A illustrates an exemplary graphics processor 1710 of a system on a chip integrated circuit that may be fabricated using one or more IP cores, according to at least one embodiment. FIG. 17B illustrates an additional exemplary graphics processor 1740 of a system on a chip integrated circuit that may be fabricated using one or more IP cores, according to at least one embodiment. In at least one embodiment, graphics processor 1710 of FIG. 17A is a low power graphics processor core. In at least one embodiment, graphics processor 1740 of FIG. 17B is a higher performance graphics processor core. In at least one embodiment, each of graphics processors 1710, 1740 can be variants of graphics processor 1610 of FIG. 16.
In at least one embodiment, graphics processor 1710 includes a vertex processor 1705 and one or more fragment processor(s) 1715A-1715N (e.g., 1715A, 1715B, 1715C, 1715D, through 1715N-1, and 1715N). In at least one embodiment, graphics processor 1710 can execute different shader programs via separate logic, such that vertex processor 1705 is optimized to execute operations for vertex shader programs, while one or more fragment processor(s) 1715A-1715N execute fragment (e.g., pixel) shading operations for fragment or pixel shader programs. In at least one embodiment, vertex processor 1705 performs a vertex processing stage of a 3D graphics pipeline and generates primitives and vertex data. In at least one embodiment, fragment processor(s) 1715A-1715N use primitive and vertex data generated by vertex processor 1705 to produce a framebuffer that is displayed on a display device. In at least one embodiment, fragment processor(s) 1715A-1715N are optimized to execute fragment shader programs as provided for in an OpenGL API, which may be used to perform similar operations as a pixel shader program as provided for in a Direct 3D API. In at least one embodiment, graphics processor 1710 additionally includes one or more memory management units (MMUs) 1720A-1720B, cache(s) 1725A-1725B, and circuit interconnect(s) 1730A-1730B. In at least one embodiment, one or more MMU(s) 1720A-1720B provide for virtual to physical address mapping for graphics processor 1710, including for vertex processor 1705 and/or fragment processor(s) 1715A-1715N, which may reference vertex or image/texture data stored in memory, in addition to vertex or image/texture data stored in one or more cache(s) 1725A-1725B.
In at least one embodiment, one or more MMU(s) 1720A-1720B may be synchronized with other MMUs within a system, including one or more MMUs associated with one or more application processor(s) 1605, image processors 1615, and/or video processors 1620 of FIG. 16, such that each processor 1605-1620 can participate in a shared or unified virtual memory system. In at least one embodiment, one or more circuit interconnect(s) 1730A-1730B enable graphics processor 1710 to interface with other IP cores within an SoC, either via an internal bus of the SoC or via a direct connection. In at least one embodiment, graphics processor 1740 includes one or more shader core(s) 1755A-1755N (e.g., 1755A, 1755B, 1755C, 1755D, 1755E, 1755F, through 1755N-1, and 1755N) as shown in FIG. 17B, which provides for a unified shader core architecture in which a single core or type of core can execute all types of programmable shader code, including shader program code to implement vertex shaders, fragment shaders, and/or compute shaders. In at least one embodiment, a number of shader cores can vary. In at least one embodiment, graphics processor 1740 includes an inter-core task manager 1745, which acts as a thread dispatcher to dispatch execution threads to one or more shader cores 1755A-1755N, and a tiling unit 1758 to accelerate tiling operations for tile-based rendering, in which rendering operations for a scene are subdivided in image space, for example to exploit local spatial coherence within a scene or to optimize use of internal caches. Inference and/or training logic 715 are used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 715 are provided herein in conjunction with FIGS. 7A and/or 7B.
In at least one embodiment, inference and/or training logic 715 may be used in integrated circuit 17A and/or 17B for inferencing or predicting operations based, at least in part, on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein. In at least one embodiment, such components can be used to perform image segmentation as discussed above, such as with respect to a system of FIG. 1, 2A, or 3, or processes of FIG. 4 or 5. In at least one embodiment, this can include verifying whether a network of computing nodes is properly configured based, at least in part, on one or more expected data strings generated by the network of computing nodes. FIGS. 18A-18B illustrate additional exemplary graphics processor logic according to embodiments described herein. FIG. 18A illustrates a graphics core 1800 that may be included within graphics processor 1610 of FIG. 16, in at least one embodiment, and may be a unified shader core 1755A-1755N as in FIG. 17B in at least one embodiment. FIG. 18B illustrates a highly-parallel general-purpose graphics processing unit ("GPGPU") 1830 suitable for deployment on a multi-chip module in at least one embodiment. In at least one embodiment, graphics core 1800 includes a shared instruction cache 1802, a texture unit 1818, and a cache/shared memory 1820 (e.g., including L1, L2, L3, last level cache, or other caches) that are common to execution resources within graphics core 1800. In at least one embodiment, graphics core 1800 can include multiple slices 1801A-1801N or a partition for each core, and a graphics processor can include multiple instances of graphics core 1800. In at least one embodiment, each slice 1801A-1801N refers to graphics core 1800. In at least one embodiment, slices 1801A-1801N have sub-slices, which are part of a slice 1801A-1801N. In at least one embodiment, slices 1801A-1801N are independent of other slices or dependent on other slices.
In at least one embodiment, slices1801A-1801N can include support logic including a local instruction cache1804A-1804N, a thread scheduler (sequencer)1806A-1806N, a thread dispatcher1808A-1808N, and a set of registers1810A-1810N. In at least one embodiment, slices1801A-1801N can include a set of additional function units (AFUs1812A-1812N), floating-point units (FPUs1814A-1814N), integer arithmetic logic units (ALUs1816A-1816N), address computational units (ACUs1813A-1813N), double-precision floating-point units (DPFPUs1815A-1815N), and matrix processing units (MPUs1817A-1817N). In at least one embodiment, each slice1801A-1801N includes one or more engines for floating point and integer vector operations and one or more engines to accelerate convolution and matrix operations in AI, machine learning, or large dataset workloads. In at least one embodiment, one or more slices1801A-1801N include one or more vector engines to compute a vector (e.g., compute mathematical operations for vectors). In at least one embodiment, a vector engine can compute a vector operation in 16-bit floating point (also referred to as “FP16”), 32-bit floating point (also referred to as “FP32”), or 64-bit floating point (also referred to as “FP64”). In at least one embodiment, one or more slices1801A-1801N includes 16 vector engines that are paired with 16 matrix math units to compute matrix/tensor operations, where vector engines and math units are exposed via matrix extensions. In at least one embodiment, a slice is a specified portion of processing resources of a processing unit, e.g., 16 cores and a ray tracing unit or 8 cores, a thread scheduler, a thread dispatcher, and additional functional units for a processor. In at least one embodiment, graphics core1800includes one or more matrix engines to compute matrix operations, e.g., when computing tensor operations. 
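The matrix/tensor operation that the matrix math units and matrix engines above accelerate is, at its core, a general matrix multiplication. The following plain-Python sketch shows that operation functionally; it is an illustration of the mathematics, not of any hardware implementation:

```python
def matmul(a, b):
    """Plain-Python GEMM: the matrix/tensor operation a matrix engine accelerates."""
    rows, inner, cols = len(a), len(b), len(b[0])
    assert all(len(row) == inner for row in a), "inner dimensions must agree"
    return [[sum(a[i][k] * b[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

a = [[1, 2], [3, 4]]
b = [[5, 6], [7, 8]]
assert matmul(a, b) == [[19, 22], [43, 50]]
```

A hardware matrix engine performs many of the inner multiply-accumulate steps of this triple loop in parallel, which is why pairing vector engines with matrix units pays off for convolution and large-dataset workloads.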
In at least one embodiment, one or more slices1801A-1801N includes one or more ray tracing units to compute ray tracing operations (e.g., 16 ray tracing units per slice1801A-1801N). In at least one embodiment, a ray tracing unit computes ray traversal, triangle intersection, bounding box intersection, or other ray tracing operations. In at least one embodiment, one or more slices1801A-1801N includes a media slice that encodes, decodes, and/or transcodes data; scales and/or format converts data; and/or performs video quality operations on video data. In at least one embodiment, one or more slices1801A-1801N are linked to L2 cache and memory fabric, link connectors, high-bandwidth memory (HBM) (e.g., HBM2e, HBM3) stacks, and a media engine. In at least one embodiment, one or more slices1801A-1801N include multiple cores (e.g., 16 cores) and multiple ray tracing units (e.g., 16) paired to each core. In at least one embodiment, one or more slices1801A-1801N has one or more L1 caches. In at least one embodiment, one or more slices1801A-1801N include one or more vector engines; one or more instruction caches to store instructions; one or more L1 caches to cache data; one or more shared local memories (SLMs) to store data, e.g., corresponding to instructions; one or more samplers to sample data; one or more ray tracing units to perform ray tracing operations; one or more geometries to perform operations in geometry pipelines and/or apply geometric transformations to vertices or polygons; one or more rasterizers to describe an image in vector graphics format (e.g., shape) and convert it into a raster image (e.g., a series of pixels, dots, or lines, which when displayed together, create an image that is represented by shapes); one or more Hierarchical Depth Buffers (HiZ) to buffer data; and/or one or more pixel backends. In at least one embodiment, a slice1801A-1801N includes a memory fabric, e.g., an L2 cache. 
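One of the ray tracing operations named above, bounding box intersection, is commonly computed with the "slab" method: intersect the ray with each pair of axis-aligned planes and check whether the intervals overlap. The sketch below is a functional illustration of that test, not a description of any embodiment's circuitry:

```python
def ray_aabb_intersect(origin, direction, box_min, box_max):
    """Slab test: True if a ray hits an axis-aligned bounding box."""
    t_near, t_far = float("-inf"), float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if d == 0.0:
            # Ray parallel to this slab: must already lie between the planes.
            if not (lo <= o <= hi):
                return False
            continue
        t1, t2 = (lo - o) / d, (hi - o) / d
        t_near = max(t_near, min(t1, t2))
        t_far = min(t_far, max(t1, t2))
    return t_near <= t_far and t_far >= 0.0

assert ray_aabb_intersect((0, 0, 0), (1, 1, 1), (1, 1, 1), (2, 2, 2))
assert not ray_aabb_intersect((0, 0, 0), (1, 0, 0), (1, 1, 1), (2, 2, 2))
```

A hardware ray tracing unit evaluates many such box and triangle tests per cycle while traversing an acceleration structure; the arithmetic per box is exactly the handful of subtractions, divisions, and comparisons shown here.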
In at least one embodiment, FPUs1814A-1814N can perform single-precision (32-bit) and half-precision (16-bit) floating point operations, while DPFPUs1815A-1815N perform double precision (64-bit) floating point operations. In at least one embodiment, ALUs1816A-1816N can perform variable precision integer operations at 8-bit, 16-bit, and 32-bit precision, and can be configured for mixed precision operations. In at least one embodiment, MPUs1817A-1817N can also be configured for mixed precision matrix operations, including half-precision floating point and 8-bit integer operations. In at least one embodiment, MPUs1817A-1817N can perform a variety of matrix operations to accelerate machine learning application frameworks, including enabling support for accelerated general matrix to matrix multiplication (GEMM). In at least one embodiment, AFUs1812A-1812N can perform additional logic operations not supported by floating-point or integer units, including trigonometric operations (e.g., sine, cosine). Inference and/or training logic715are used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic715are provided herein in conjunction withFIGS.7A and/or7B. In at least one embodiment, inference and/or training logic715may be used in graphics core1800for inferencing or predicting operations based, at least in part, on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein. In at least one embodiment, graphics core1800includes an interconnect and a link fabric sublayer that is attached to a switch and a GPU-GPU bridge that enables multiple graphics processors1800(e.g., 8) to be interlinked without glue to each other with load/store units (LSUs), data transfer units, and sync semantics across multiple graphics processors1800. 
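The mixed-precision behavior described above, half-precision inputs with a higher-precision accumulator, can be emulated with Python's standard `struct` module, which supports the IEEE half-precision `'e'` format. This is only an illustrative numeric sketch of the precision behavior, not an account of any MPU's internal datapath:

```python
import struct

def to_fp16(x):
    """Round a Python float to IEEE half precision (what FP16 inputs see)."""
    return struct.unpack("e", struct.pack("e", x))[0]

def mixed_precision_dot(a, b):
    """FP16 multiplicands with a higher-precision accumulator."""
    acc = 0.0                       # accumulator kept in double precision
    for x, y in zip(a, b):
        acc += to_fp16(x) * to_fp16(y)
    return acc

# 0.1 is not exactly representable in fp16, so each product carries a small
# rounding error, but the wide accumulator keeps the sum stable.
v = [0.1] * 4
assert abs(mixed_precision_dot(v, v) - 0.04) < 1e-3
```

Keeping the accumulator wider than the multiplicands is the standard reason mixed-precision GEMM can match the accuracy of single-precision training while halving the input bandwidth.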
In at least one embodiment, interconnects include standardized interconnects (e.g., PCIe) or some combination thereof. In at least one embodiment, graphics core1800includes multiple tiles. In at least one embodiment, a tile is an individual die or one or more dies, where individual dies can be connected with an interconnect (e.g., embedded multi-die interconnect bridge (EMIB)). In at least one embodiment, graphics core1800includes a compute tile, a memory tile (e.g., where a memory tile can be exclusively accessed by different tiles or different chipsets such as a Rambo tile), substrate tile, a base tile, an HBM tile, a link tile, and an EMIB tile, where all tiles are packaged together in graphics core1800as part of a GPU. In at least one embodiment, graphics core1800can include multiple tiles in a single package (also referred to as a “multi tile package”). In at least one embodiment, a compute tile can have 8 graphics cores1800, an L1 cache; and a base tile can have a host interface with PCIe 5.0, HBM2e, MDFI, and EMIB, a link tile with 8 links, 8 ports with an embedded switch. In at least one embodiment, tiles are connected with face-to-face (F2F) chip-on-chip bonding through fine-pitched, 36-micron microbumps (e.g., copper pillars). In at least one embodiment, graphics core1800includes memory fabric, which includes memory, and is a tile that is accessible by multiple tiles. In at least one embodiment, graphics core1800stores, accesses, or loads its own hardware contexts in memory, where a hardware context is a set of data loaded from registers before a process resumes, and where a hardware context can indicate a state of hardware (e.g., state of a GPU). In at least one embodiment, graphics core1800includes serializer/deserializer (SERDES) circuitry that converts a serial data stream to a parallel data stream, or converts a parallel data stream to a serial data stream. 
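The SERDES conversion just described, parallel bytes to a serial bit stream and back, can be sketched functionally as follows. The MSB-first bit ordering is an assumption made for this illustration; real SERDES links also add line coding and clock recovery that are omitted here:

```python
def serialize(byte_stream):
    """Parallel-to-serial: flatten bytes into a stream of bits, MSB first."""
    return [(byte >> i) & 1 for byte in byte_stream for i in range(7, -1, -1)]

def deserialize(bits):
    """Serial-to-parallel: regroup 8 bits at a time back into bytes."""
    assert len(bits) % 8 == 0
    return bytes(
        sum(bit << shift for bit, shift in zip(bits[i:i + 8], range(7, -1, -1)))
        for i in range(0, len(bits), 8)
    )

data = b"\xa5\x3c"
assert deserialize(serialize(data)) == data  # lossless round trip
```

The key property, visible in the round-trip assertion, is that serialization is information-preserving: the parallel word width is traded for clock rate on a narrow link.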
In at least one embodiment, graphics core1800includes a high speed coherent unified fabric (GPU to GPU), load/store units, bulk data transfer and sync semantics, and connected GPUs through an embedded switch, where a GPU-GPU bridge is controlled by a controller. In at least one embodiment, graphics core1800provides an API, where said API abstracts hardware of graphics core1800and accesses libraries with instructions to perform math operations (e.g., math kernel library), deep neural network operations (e.g., deep neural network library), vector operations, collective communications, thread building blocks, video processing, data analytics library, and/or ray tracing operations. In at least one embodiment, such components can be used to perform image segmentation as discussed above, such as with respect to a system ofFIG.1,2A, or3, or processes ofFIG.4or5. In at least one embodiment, this can include verifying whether a network of computing nodes is properly configured based, at least in part, on one or more expected data strings generated by the network of computing nodes. FIG.18Billustrates a general-purpose graphics processing unit (GPGPU)1830that can be configured to enable highly-parallel compute operations to be performed by an array of graphics processing units, in at least one embodiment. In at least one embodiment, GPGPU1830can be linked directly to other instances of GPGPU1830to create a multi-GPU cluster to improve training speed for deep neural networks. In at least one embodiment, GPGPU1830includes a host interface1832to enable a connection with a host processor. In at least one embodiment, host interface1832is a PCI Express interface. In at least one embodiment, host interface1832can be a vendor-specific communications interface or communications fabric. 
In at least one embodiment, GPGPU1830receives commands from a host processor and uses a global scheduler1834(which may be referred to as a thread sequencer and/or asynchronous compute engine) to distribute execution threads associated with those commands to a set of compute clusters1836A-1836H. In at least one embodiment, compute clusters1836A-1836H share a cache memory1838. In at least one embodiment, cache memory1838can serve as a higher-level cache for cache memories within compute clusters1836A-1836H. In at least one embodiment, GPGPU1830includes memory1844A-1844B coupled with compute clusters1836A-1836H via a set of memory controllers1842A-1842B (e.g., one or more controllers for HBM2e). In at least one embodiment, memory1844A-1844B can include various types of memory devices including dynamic random access memory (DRAM) or graphics random access memory, such as synchronous graphics random access memory (SGRAM), including graphics double data rate (GDDR) memory. In at least one embodiment, compute clusters1836A-1836H each include a set of graphics cores, such as graphics core1800ofFIG.18A, which can include multiple types of integer and floating point logic units that can perform computational operations at a range of precisions, including precisions suited for machine learning computations. For example, in at least one embodiment, at least a subset of floating point units in each of compute clusters1836A-1836H can be configured to perform 16-bit or 32-bit floating point operations, while a different subset of floating point units can be configured to perform 64-bit floating point operations. In at least one embodiment, multiple instances of GPGPU1830can be configured to operate as a compute cluster. In at least one embodiment, communication used by compute clusters1836A-1836H for synchronization and data exchange varies across embodiments. In at least one embodiment, multiple instances of GPGPU1830communicate over host interface1832. 
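The role of the global scheduler above, taking execution threads from host commands and spreading them over a set of compute clusters, can be sketched with a simple round-robin policy. The policy and the data shapes are illustrative assumptions; a real scheduler would account for load, dependencies, and preemption:

```python
from collections import defaultdict

def distribute(work_items, num_clusters):
    """Round-robin dispatch of execution threads across compute clusters."""
    assignment = defaultdict(list)
    for i, item in enumerate(work_items):
        assignment[i % num_clusters].append(item)
    return dict(assignment)

placement = distribute([f"thread{i}" for i in range(5)], 2)
assert placement == {0: ["thread0", "thread2", "thread4"],
                     1: ["thread1", "thread3"]}
```

Even this trivial policy shows the scheduler's contract: every thread lands on exactly one cluster, and work is spread so no cluster idles while others queue.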
In at least one embodiment, GPGPU1830includes an I/O hub1839that couples GPGPU1830with a GPU link1840that enables a direct connection to other instances of GPGPU1830. In at least one embodiment, GPU link1840is coupled to a dedicated GPU-to-GPU bridge that enables communication and synchronization between multiple instances of GPGPU1830. In at least one embodiment, GPU link1840couples with a high-speed interconnect to transmit and receive data to other GPGPUs or parallel processors. In at least one embodiment, multiple instances of GPGPU1830are located in separate data processing systems and communicate via a network device that is accessible via host interface1832. In at least one embodiment, GPU link1840can be configured to enable a connection to a host processor in addition to or as an alternative to host interface1832. In at least one embodiment, GPGPU1830can be configured to train neural networks. In at least one embodiment, GPGPU1830can be used within an inferencing platform. In at least one embodiment, in which GPGPU1830is used for inferencing, GPGPU1830may include fewer compute clusters1836A-1836H relative to when GPGPU1830is used for training a neural network. In at least one embodiment, memory technology associated with memory1844A-1844B may differ between inferencing and training configurations, with higher bandwidth memory technologies devoted to training configurations. In at least one embodiment, an inferencing configuration of GPGPU1830can support inferencing specific instructions. For example, in at least one embodiment, an inferencing configuration can provide support for one or more 8-bit integer dot product instructions, which may be used during inferencing operations for deployed neural networks. Inference and/or training logic715are used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic715are provided herein in conjunction withFIGS.7A and/or7B. 
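An 8-bit integer dot product instruction of the kind mentioned above typically multiplies four packed int8 values pairwise and adds the products into a 32-bit accumulator. The following sketch shows that semantics functionally; the function name and argument layout are illustrative, not the encoding of any particular instruction set:

```python
def int8_dot4(a, b, acc=0):
    """Dot product of four 8-bit integers with wide (32-bit-style) accumulation,
    the shape of an inferencing-oriented int8 dot-product instruction."""
    assert len(a) == len(b) == 4
    assert all(-128 <= x <= 127 for x in a + b), "operands must fit in int8"
    return acc + sum(x * y for x, y in zip(a, b))

assert int8_dot4([1, 2, 3, 4], [5, 6, 7, 8]) == 70
assert int8_dot4([1, 2, 3, 4], [5, 6, 7, 8], acc=30) == 100
```

Chaining the accumulator across calls is what makes one such instruction per cycle sufficient to evaluate long quantized dot products in deployed neural networks.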
In at least one embodiment, inference and/or training logic715may be used in GPGPU1830for inferencing or predicting operations based, at least in part, on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein. In at least one embodiment, such components can be used to perform image segmentation as discussed above, such as with respect to a system ofFIG.1,2A, or3, or processes ofFIG.4or5. In at least one embodiment, this can include verifying whether a network of computing nodes is properly configured based, at least in part, on one or more expected data strings generated by the network of computing nodes. FIG.19is a block diagram illustrating a computing system1900according to at least one embodiment. In at least one embodiment, computing system1900includes a processing subsystem1901having one or more processor(s)1902and a system memory1904communicating via an interconnection path that may include a memory hub1905. In at least one embodiment, memory hub1905may be a separate component within a chipset component or may be integrated within one or more processor(s)1902. In at least one embodiment, memory hub1905couples with an I/O subsystem1911via a communication link1906. In at least one embodiment, I/O subsystem1911includes an I/O hub1907that can enable computing system1900to receive input from one or more input device(s)1908. In at least one embodiment, I/O hub1907can enable a display controller, which may be included in one or more processor(s)1902, to provide outputs to one or more display device(s)1910A. In at least one embodiment, one or more display device(s)1910A coupled with I/O hub1907can include a local, internal, or embedded display device. In at least one embodiment, processing subsystem1901includes one or more parallel processor(s)1912coupled to memory hub1905via a bus or other communication link1913. 
In at least one embodiment, communication link1913may use one of any number of standards-based communication link technologies or protocols, such as, but not limited to PCI Express, or may be a vendor-specific communications interface or communications fabric. In at least one embodiment, one or more parallel processor(s)1912form a computationally focused parallel or vector processing system that can include a large number of processing cores and/or processing clusters, such as a many-integrated core (MIC) processor. In at least one embodiment, some or all of parallel processor(s)1912form a graphics processing subsystem that can output pixels to one of one or more display device(s)1910A coupled via I/O Hub1907. In at least one embodiment, parallel processor(s)1912can also include a display controller and display interface (not shown) to enable a direct connection to one or more display device(s)1910B. In at least one embodiment, parallel processor(s)1912include one or more cores, such as graphics cores1800discussed herein. In at least one embodiment, a system storage unit1914can connect to I/O hub1907to provide a storage mechanism for computing system1900. In at least one embodiment, an I/O switch1916can be used to provide an interface mechanism to enable connections between I/O hub1907and other components, such as a network adapter1918and/or a wireless network adapter1919that may be integrated into a platform, and various other devices that can be added via one or more add-in device(s)1920. In at least one embodiment, network adapter1918can be an Ethernet adapter or another wired network adapter. In at least one embodiment, wireless network adapter1919can include one or more of a Wi-Fi, Bluetooth, near field communication (NFC), or other network device that includes one or more wireless radios. 
In at least one embodiment, computing system1900can include other components not explicitly shown, including USB or other port connections, optical storage drives, video capture devices, and the like, which may also be connected to I/O hub1907. In at least one embodiment, communication paths interconnecting various components inFIG.19may be implemented using any suitable protocols, such as PCI (Peripheral Component Interconnect) based protocols (e.g., PCI-Express), or other bus or point-to-point communication interfaces and/or protocol(s), such as NV-Link high-speed interconnect, or interconnect protocols. In at least one embodiment, parallel processor(s)1912incorporate circuitry optimized for graphics and video processing, including, for example, video output circuitry, and constitute a graphics processing unit (GPU), e.g., parallel processor(s)1912includes graphics core1800. In at least one embodiment, parallel processor(s)1912incorporate circuitry optimized for general purpose processing. In at least one embodiment, components of computing system1900may be integrated with one or more other system elements on a single integrated circuit. For example, in at least one embodiment, parallel processor(s)1912, memory hub1905, processor(s)1902, and I/O hub1907can be integrated into a system on chip (SoC) integrated circuit. In at least one embodiment, components of computing system1900can be integrated into a single package to form a system in package (SIP) configuration. In at least one embodiment, at least a portion of components of computing system1900can be integrated into a multi-chip module (MCM), which can be interconnected with other multi-chip modules into a modular computing system. Inference and/or training logic715are used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic715are provided herein in conjunction withFIGS.7A and/or7B. 
In at least one embodiment, inference and/or training logic715may be used in system1900ofFIG.19for inferencing or predicting operations based, at least in part, on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein. In at least one embodiment, such components can be used to perform image segmentation as discussed above, such as with respect to a system ofFIG.1,2A, or3, or processes ofFIG.4or5. In at least one embodiment, this can include verifying whether a network of computing nodes is properly configured based, at least in part, on one or more expected data strings generated by the network of computing nodes. Processors FIG.20Aillustrates a parallel processor2000according to at least one embodiment. In at least one embodiment, various components of parallel processor2000may be implemented using one or more integrated circuit devices, such as programmable processors, application specific integrated circuits (ASICs), or field programmable gate arrays (FPGA). In at least one embodiment, illustrated parallel processor2000is a variant of one or more parallel processor(s)1912shown inFIG.19according to an exemplary embodiment. In at least one embodiment, a parallel processor2000includes one or more graphics cores1800. In at least one embodiment, parallel processor2000includes a parallel processing unit2002. In at least one embodiment, parallel processing unit2002includes an I/O unit2004that enables communication with other devices, including other instances of parallel processing unit2002. In at least one embodiment, I/O unit2004may be directly connected to other devices. In at least one embodiment, I/O unit2004connects with other devices via use of a hub or switch interface, such as a memory hub2005. In at least one embodiment, connections between memory hub2005and I/O unit2004form a communication link2013. 
In at least one embodiment, I/O unit2004connects with a host interface2006and a memory crossbar2016, where host interface2006receives commands directed to performing processing operations and memory crossbar2016receives commands directed to performing memory operations. In at least one embodiment, when host interface2006receives a command buffer via I/O unit2004, host interface2006can direct work operations to perform those commands to a front end2008. In at least one embodiment, front end2008couples with a scheduler2010(which may be referred to as a sequencer), which is configured to distribute commands or other work items to a processing cluster array2012. In at least one embodiment, scheduler2010ensures that processing cluster array2012is properly configured and in a valid state before tasks are distributed to a cluster of processing cluster array2012. In at least one embodiment, scheduler2010is implemented via firmware logic executing on a microcontroller. In at least one embodiment, microcontroller implemented scheduler2010is configurable to perform complex scheduling and work distribution operations at coarse and fine granularity, enabling rapid preemption and context switching of threads executing on processing array2012. In at least one embodiment, host software can provide workloads for scheduling on processing cluster array2012via one of multiple graphics processing paths. In at least one embodiment, workloads can then be automatically distributed across processing cluster array2012by scheduler2010logic within a microcontroller including scheduler2010. In at least one embodiment, processing cluster array2012can include up to “N” processing clusters (e.g., cluster2014A, cluster2014B, through cluster2014N), where “N” represents a positive integer (which may be a different integer “N” than used in other figures). In at least one embodiment, each cluster2014A-2014N of processing cluster array2012can execute a large number of concurrent threads. 
In at least one embodiment, scheduler2010can allocate work to clusters2014A-2014N of processing cluster array2012using various scheduling and/or work distribution algorithms, which may vary depending on workload arising for each type of program or computation. In at least one embodiment, scheduling can be handled dynamically by scheduler2010, or can be assisted in part by compiler logic during compilation of program logic configured for execution by processing cluster array2012. In at least one embodiment, different clusters2014A-2014N of processing cluster array2012can be allocated for processing different types of programs or for performing different types of computations. In at least one embodiment, processing cluster array2012can be configured to perform various types of parallel processing operations. In at least one embodiment, processing cluster array2012is configured to perform general-purpose parallel compute operations. For example, in at least one embodiment, processing cluster array2012can include logic to execute processing tasks including filtering of video and/or audio data, performing modeling operations, including physics operations, and performing data transformations. In at least one embodiment, processing cluster array2012is configured to perform parallel graphics processing operations. In at least one embodiment, processing cluster array2012can include additional logic to support execution of such graphics processing operations, including but not limited to, texture sampling logic to perform texture operations, as well as tessellation logic and other vertex processing logic. In at least one embodiment, processing cluster array2012can be configured to execute graphics processing related shader programs such as, but not limited to, vertex shaders, tessellation shaders, geometry shaders, and pixel shaders. In at least one embodiment, parallel processing unit2002can transfer data from system memory via I/O unit2004for processing. 
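One of the dynamic work-distribution algorithms that a scheduler like the one above might employ is "least loaded first": each incoming task goes to whichever cluster currently has the smallest accumulated load. The sketch below illustrates that policy with a heap; the task names and costs are invented for the example and are not drawn from any embodiment:

```python
import heapq

def schedule(tasks, num_clusters):
    """Dynamically assign each (name, cost) task to the least-loaded cluster."""
    heap = [(0, c) for c in range(num_clusters)]  # (accumulated load, cluster id)
    placement = {c: [] for c in range(num_clusters)}
    for name, cost in tasks:
        load, c = heapq.heappop(heap)             # cluster with the least work so far
        placement[c].append(name)
        heapq.heappush(heap, (load + cost, c))
    return placement

tasks = [("vertex", 4), ("pixel", 1), ("compute", 2), ("blit", 1)]
assert schedule(tasks, 2) == {0: ["vertex"], 1: ["pixel", "compute", "blit"]}
```

Note how the heavy "vertex" task ends up alone on one cluster while the three light tasks share the other, which is the load-balancing behavior that distinguishes dynamic scheduling from static round-robin assignment.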
In at least one embodiment, during processing, transferred data can be stored to on-chip memory (e.g., parallel processor memory2022) during processing, then written back to system memory. In at least one embodiment, when parallel processing unit2002is used to perform graphics processing, scheduler2010can be configured to divide a processing workload into approximately equal sized tasks, to better enable distribution of graphics processing operations to multiple clusters2014A-2014N of processing cluster array2012. In at least one embodiment, portions of processing cluster array2012can be configured to perform different types of processing. For example, in at least one embodiment, a first portion may be configured to perform vertex shading and topology generation, a second portion may be configured to perform tessellation and geometry shading, and a third portion may be configured to perform pixel shading or other screen space operations, to produce a rendered image for display. In at least one embodiment, intermediate data produced by one or more of clusters2014A-2014N may be stored in buffers to allow intermediate data to be transmitted between clusters2014A-2014N for further processing. In at least one embodiment, processing cluster array2012can receive processing tasks to be executed via scheduler2010, which receives commands defining processing tasks from front end2008. In at least one embodiment, processing tasks can include indices of data to be processed, e.g., surface (patch) data, primitive data, vertex data, and/or pixel data, as well as state parameters and commands defining how data is to be processed (e.g., what program is to be executed). In at least one embodiment, scheduler2010may be configured to fetch indices corresponding to tasks or may receive indices from front end2008. 
In at least one embodiment, front end2008can be configured to ensure processing cluster array2012is configured to a valid state before a workload specified by incoming command buffers (e.g., batch-buffers, push buffers, etc.) is initiated. In at least one embodiment, each of one or more instances of parallel processing unit2002can couple with a parallel processor memory2022. In at least one embodiment, parallel processor memory2022can be accessed via memory crossbar2016, which can receive memory requests from processing cluster array2012as well as I/O unit2004. In at least one embodiment, memory crossbar2016can access parallel processor memory2022via a memory interface2018. In at least one embodiment, memory interface2018can include multiple partition units (e.g., partition unit2020A, partition unit2020B, through partition unit2020N) that can each couple to a portion (e.g., memory unit) of parallel processor memory2022. In at least one embodiment, a number of partition units2020A-2020N is configured to be equal to a number of memory units, such that a first partition unit2020A has a corresponding first memory unit2024A, a second partition unit2020B has a corresponding memory unit2024B, and an N-th partition unit2020N has a corresponding N-th memory unit2024N. In at least one embodiment, a number of partition units2020A-2020N may not be equal to a number of memory units. In at least one embodiment, memory units2024A-2024N can include various types of memory devices, including dynamic random access memory (DRAM) or graphics random access memory, such as synchronous graphics random access memory (SGRAM), including graphics double data rate (GDDR) memory. In at least one embodiment, memory units2024A-2024N may also include 3D stacked memory, including but not limited to high bandwidth memory (HBM), HBM2e, or HBM3. 
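The one-to-one pairing of partition units and memory units above is usually exploited by interleaving the address space: consecutive fixed-size granules of a render target map to successive partition units, so neighboring accesses fan out across all memory units. The granule size below is an illustrative assumption, not a value from the specification:

```python
MEMORY_UNIT_STRIDE = 256  # bytes per interleave granule (illustrative only)

def partition_for(address, num_partitions):
    """Map an address to its partition unit by interleaving fixed-size granules."""
    return (address // MEMORY_UNIT_STRIDE) % num_partitions

# Consecutive granules land on different partition units, spreading bandwidth.
hits = [partition_for(a, 4) for a in range(0, 1024, 256)]
assert hits == [0, 1, 2, 3]
```

This striding is what lets the partition units "write portions of each render target in parallel," as the following passage describes, since a contiguous framebuffer region touches every memory unit rather than hammering one.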
In at least one embodiment, render targets, such as frame buffers or texture maps may be stored across memory units2024A-2024N, allowing partition units2020A-2020N to write portions of each render target in parallel to efficiently use available bandwidth of parallel processor memory2022. In at least one embodiment, a local instance of parallel processor memory2022may be excluded in favor of a unified memory design that utilizes system memory in conjunction with local cache memory. In at least one embodiment, any one of clusters2014A-2014N of processing cluster array2012can process data that will be written to any of memory units2024A-2024N within parallel processor memory2022. In at least one embodiment, memory crossbar2016can be configured to transfer an output of each cluster2014A-2014N to any partition unit2020A-2020N or to another cluster2014A-2014N, which can perform additional processing operations on an output. In at least one embodiment, each cluster2014A-2014N can communicate with memory interface2018through memory crossbar2016to read from or write to various external memory devices. In at least one embodiment, memory crossbar2016has a connection to memory interface2018to communicate with I/O unit2004, as well as a connection to a local instance of parallel processor memory2022, enabling processing units within different processing clusters2014A-2014N to communicate with system memory or other memory that is not local to parallel processing unit2002. In at least one embodiment, memory crossbar2016can use virtual channels to separate traffic streams between clusters2014A-2014N and partition units2020A-2020N. In at least one embodiment, multiple instances of parallel processing unit2002can be provided on a single add-in card, or multiple add-in cards can be interconnected. 
In at least one embodiment, different instances of parallel processing unit2002can be configured to interoperate even if different instances have different numbers of processing cores, different amounts of local parallel processor memory, and/or other configuration differences. For example, in at least one embodiment, some instances of parallel processing unit2002can include higher precision floating point units relative to other instances. In at least one embodiment, systems incorporating one or more instances of parallel processing unit2002or parallel processor2000can be implemented in a variety of configurations and form factors, including but not limited to desktop, laptop, or handheld personal computers, servers, workstations, game consoles, and/or embedded systems. FIG.20Bis a block diagram of a partition unit2020according to at least one embodiment. In at least one embodiment, partition unit2020is an instance of one of partition units2020A-2020N ofFIG.20A. In at least one embodiment, partition unit2020includes an L2 cache2021, a frame buffer interface2025, and a ROP2026(raster operations unit). In at least one embodiment, L2 cache2021is a read/write cache that is configured to perform load and store operations received from memory crossbar2016and ROP2026. In at least one embodiment, read misses and urgent write-back requests are output by L2 cache2021to frame buffer interface2025for processing. In at least one embodiment, updates can also be sent to a frame buffer via frame buffer interface2025for processing. In at least one embodiment, frame buffer interface2025interfaces with one of memory units in parallel processor memory, such as memory units2024A-2024N ofFIG.20A(e.g., within parallel processor memory2022). In at least one embodiment, ROP2026is a processing unit that performs raster operations such as stencil, z test, blending, etc. In at least one embodiment, ROP2026then outputs processed graphics data that is stored in graphics memory. 
In at least one embodiment, ROP2026includes compression logic to compress depth or color data that is written to memory and decompress depth or color data that is read from memory. In at least one embodiment, compression logic can be lossless compression logic that makes use of one or more of multiple compression algorithms. In at least one embodiment, a type of compression that is performed by ROP2026can vary based on statistical characteristics of data to be compressed. For example, in at least one embodiment, delta color compression is performed on depth and color data on a per-tile basis. In at least one embodiment, ROP2026is included within each processing cluster (e.g., cluster2014A-2014N ofFIG.20A) instead of within partition unit2020. In at least one embodiment, read and write requests for pixel data are transmitted over memory crossbar2016instead of pixel fragment data. In at least one embodiment, processed graphics data may be displayed on a display device, such as one of one or more display device(s)1910ofFIG.19, routed for further processing by processor(s)1902, or routed for further processing by one of processing entities within parallel processor2000ofFIG.20A. FIG.20Cis a block diagram of a processing cluster2014within a parallel processing unit according to at least one embodiment. In at least one embodiment, a processing cluster is an instance of one of processing clusters2014A-2014N ofFIG.20A. In at least one embodiment, processing cluster2014can be configured to execute many threads in parallel, where “thread” refers to an instance of a particular program executing on a particular set of input data. In at least one embodiment, single-instruction, multiple-data (SIMD) instruction issue techniques are used to support parallel execution of a large number of threads without providing multiple independent instruction units. 
In at least one embodiment, single-instruction, multiple-thread (SIMT) techniques are used to support parallel execution of a large number of generally synchronized threads, using a common instruction unit configured to issue instructions to a set of processing engines within each one of processing clusters. In at least one embodiment, operation of processing cluster2014can be controlled via a pipeline manager2032that distributes processing tasks to SIMT parallel processors. In at least one embodiment, pipeline manager2032receives instructions from scheduler2010ofFIG.20Aand manages execution of those instructions via a graphics multiprocessor2034and/or a texture unit2036. In at least one embodiment, graphics multiprocessor2034is an exemplary instance of a SIMT parallel processor. However, in at least one embodiment, various types of SIMT parallel processors of differing architectures may be included within processing cluster2014. In at least one embodiment, one or more instances of graphics multiprocessor2034can be included within a processing cluster2014. In at least one embodiment, graphics multiprocessor2034can process data and a data crossbar2040can be used to distribute processed data to one of multiple possible destinations, including other shader units. In at least one embodiment, pipeline manager2032can facilitate distribution of processed data by specifying destinations for processed data to be distributed via data crossbar2040. In at least one embodiment, each graphics multiprocessor2034within processing cluster2014can include an identical set of functional execution logic (e.g., arithmetic logic units, load-store units, etc.). In at least one embodiment, functional execution logic can be configured in a pipelined manner in which new instructions can be issued before previous instructions are complete. 
In at least one embodiment, functional execution logic supports a variety of operations including integer and floating point arithmetic, comparison operations, Boolean operations, bit-shifting, and computation of various algebraic functions. In at least one embodiment, same functional-unit hardware can be leveraged to perform different operations and any combination of functional units may be present. In at least one embodiment, instructions transmitted to processing cluster2014constitute a thread. In at least one embodiment, a set of threads executing across a set of parallel processing engines is a thread group. In at least one embodiment, a thread group executes a common program on different input data. In at least one embodiment, each thread within a thread group can be assigned to a different processing engine within a graphics multiprocessor2034. In at least one embodiment, a thread group may include fewer threads than a number of processing engines within graphics multiprocessor2034. In at least one embodiment, when a thread group includes fewer threads than a number of processing engines, one or more of processing engines may be idle during cycles in which that thread group is being processed. In at least one embodiment, a thread group may also include more threads than a number of processing engines within graphics multiprocessor2034. In at least one embodiment, when a thread group includes more threads than number of processing engines within graphics multiprocessor2034, processing can be performed over consecutive clock cycles. In at least one embodiment, multiple thread groups can be executed concurrently on a graphics multiprocessor2034. In at least one embodiment, graphics multiprocessor2034includes an internal cache memory to perform load and store operations. In at least one embodiment, graphics multiprocessor2034can forego an internal cache and use a cache memory (e.g., L1 cache2048) within processing cluster2014. 
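The arithmetic behind thread-group sizing described above — extra cycles when a group has more threads than engines, idle engines when it has fewer — reduces to ceiling division. A small sketch, with hypothetical function names:

```cpp
#include <cassert>
#include <cstdint>

// Cycles needed to process a thread group on a fixed set of engines.
uint32_t cyclesForGroup(uint32_t threads, uint32_t engines) {
    return (threads + engines - 1) / engines;  // ceiling division
}

// Engines left idle during the final cycle of the group.
uint32_t idleEngines(uint32_t threads, uint32_t engines) {
    uint32_t rem = threads % engines;
    return rem == 0 ? 0 : engines - rem;
}
```

For example, a 48-thread group on 32 engines takes two consecutive cycles, with 16 engines idle in the second one.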
In at least one embodiment, each graphics multiprocessor2034also has access to L2 caches within partition units (e.g., partition units2020A-2020N ofFIG.20A) that are shared among all processing clusters2014and may be used to transfer data between threads. In at least one embodiment, graphics multiprocessor2034may also access off-chip global memory, which can include one or more of local parallel processor memory and/or system memory. In at least one embodiment, any memory external to parallel processing unit2002may be used as global memory. In at least one embodiment, processing cluster2014includes multiple instances of graphics multiprocessor2034and can share common instructions and data, which may be stored in L1 cache2048. In at least one embodiment, each processing cluster2014may include an MMU2045(memory management unit) that is configured to map virtual addresses into physical addresses. In at least one embodiment, one or more instances of MMU2045may reside within memory interface2018ofFIG.20A. In at least one embodiment, MMU2045includes a set of page table entries (PTEs) used to map a virtual address to a physical address of a tile and optionally a cache line index. In at least one embodiment, MMU2045may include address translation lookaside buffers (TLB) or caches that may reside within graphics multiprocessor2034or L1 cache2048or processing cluster2014. In at least one embodiment, a physical address is processed to distribute surface data access locally to allow for efficient request interleaving among partition units. In at least one embodiment, a cache line index may be used to determine whether a request for a cache line is a hit or miss. In at least one embodiment, a processing cluster2014may be configured such that each graphics multiprocessor2034is coupled to a texture unit2036for performing texture mapping operations, e.g., determining texture sample positions, reading texture data, and filtering texture data.
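The PTE-based translation described above — mapping a virtual address to a physical address and deriving a cache line index — can be sketched as follows. Page and cache-line sizes here are illustrative assumptions, and the page table is modeled as a plain map rather than a hardware TLB:

```cpp
#include <cassert>
#include <cstdint>
#include <unordered_map>

// Hypothetical MMU sketch: split a virtual address into a virtual page
// number and an in-page offset, look the page up in a PTE map, and derive
// a cache line index from the offset.
constexpr uint64_t kPageBits = 12;  // 4 KiB pages (illustrative)
constexpr uint64_t kLineBits = 7;   // 128-byte cache lines (illustrative)

bool translate(const std::unordered_map<uint64_t, uint64_t>& pageTable,
               uint64_t vaddr, uint64_t* paddr, uint64_t* lineIndex) {
    auto it = pageTable.find(vaddr >> kPageBits);
    if (it == pageTable.end()) return false;  // PTE miss
    uint64_t offset = vaddr & ((1ull << kPageBits) - 1);
    *paddr = (it->second << kPageBits) | offset;
    *lineIndex = offset >> kLineBits;
    return true;
}
```

The line index is what a cache would compare against its tags to decide hit versus miss, per the description above.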
In at least one embodiment, texture data is read from an internal texture L1 cache (not shown) or from an L1 cache within graphics multiprocessor2034and is fetched from an L2 cache, local parallel processor memory, or system memory, as needed. In at least one embodiment, each graphics multiprocessor2034outputs processed tasks to data crossbar2040to provide a processed task to another processing cluster2014for further processing or to store a processed task in an L2 cache, local parallel processor memory, or system memory via memory crossbar2016. In at least one embodiment, a preROP2042(pre-raster operations unit) is configured to receive data from graphics multiprocessor2034, and direct data to ROP units, which may be located with partition units as described herein (e.g., partition units2020A-2020N ofFIG.20A). In at least one embodiment, preROP2042unit can perform optimizations for color blending, organizing pixel color data, and performing address translations. Inference and/or training logic715are used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic715are provided herein in conjunction withFIGS.7A and/or7B. In at least one embodiment, inference and/or training logic715may be used in graphics processing cluster2014for inferencing or predicting operations based, at least in part, on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein. In at least one embodiment, such components can be used to perform image segmentation as discussed above, such as with respect to a system ofFIG.1,2A, or3, or processes ofFIG.4or5. In at least one embodiment, this can include verifying whether a network of computing nodes is properly configured based, at least in part, on one or more expected data strings generated by the network of computing nodes.
FIG.20Dshows a graphics multiprocessor2034according to at least one embodiment. In at least one embodiment, graphics multiprocessor2034couples with pipeline manager2032of processing cluster2014. In at least one embodiment, graphics multiprocessor2034has an execution pipeline including but not limited to an instruction cache2052, an instruction unit2054, an address mapping unit2056, a register file2058, one or more general purpose graphics processing unit (GPGPU) cores2062, and one or more load/store units2066, where one or more load/store units2066can perform load/store operations to load/store instructions corresponding to performing an operation. In at least one embodiment, GPGPU cores2062and load/store units2066are coupled with cache memory2072and shared memory2070via a memory and cache interconnect2068. In at least one embodiment, instruction cache2052receives a stream of instructions to execute from pipeline manager2032. In at least one embodiment, instructions are cached in instruction cache2052and dispatched for execution by an instruction unit2054. In at least one embodiment, instruction unit2054can dispatch instructions as thread groups (e.g., warps, wavefronts, waves), with each thread of thread group assigned to a different execution unit within GPGPU cores2062. In at least one embodiment, an instruction can access any of a local, shared, or global address space by specifying an address within a unified address space. In at least one embodiment, address mapping unit2056can be used to translate addresses in a unified address space into a distinct memory address that can be accessed by load/store units2066. In at least one embodiment, register file2058provides a set of registers for functional units of graphics multiprocessor2034. In at least one embodiment, register file2058provides temporary storage for operands connected to data paths of functional units (e.g., GPGPU cores2062, load/store units2066) of graphics multiprocessor2034. 
In at least one embodiment, register file2058is divided between each of functional units such that each functional unit is allocated a dedicated portion of register file2058. In at least one embodiment, register file2058is divided between different warps (which may be referred to as wavefronts and/or waves) being executed by graphics multiprocessor2034. In at least one embodiment, GPGPU cores2062can each include floating point units (FPUs) and/or integer arithmetic logic units (ALUs) that are used to execute instructions of graphics multiprocessor2034. In at least one embodiment, GPGPU cores2062can be similar in architecture or can differ in architecture. In at least one embodiment, a first portion of GPGPU cores2062include a single precision FPU and an integer ALU while a second portion of GPGPU cores include a double precision FPU. In at least one embodiment, FPUs can implement IEEE 754-2008 standard floating point arithmetic or enable variable precision floating point arithmetic. In at least one embodiment, graphics multiprocessor2034can additionally include one or more fixed function or special function units to perform specific functions such as copy rectangle or pixel blending operations. In at least one embodiment, one or more of GPGPU cores2062can also include fixed or special function logic. In at least one embodiment, GPGPU cores2062include SIMD logic capable of performing a single instruction on multiple sets of data. In at least one embodiment, GPGPU cores2062can physically execute SIMD4, SIMD8, and SIMD16 instructions and logically execute SIMD1, SIMD2, and SIMD32 instructions. In at least one embodiment, SIMD instructions for GPGPU cores can be generated at compile time by a shader compiler or automatically generated when executing programs written and compiled for single program multiple data (SPMD) or SIMT architectures. 
In at least one embodiment, multiple threads of a program configured for an SIMT execution model can be executed via a single SIMD instruction. For example, in at least one embodiment, eight SIMT threads that perform same or similar operations can be executed in parallel via a single SIMD8 logic unit. In at least one embodiment, memory and cache interconnect2068is an interconnect network that connects each functional unit of graphics multiprocessor2034to register file2058and to shared memory2070. In at least one embodiment, memory and cache interconnect2068is a crossbar interconnect that allows load/store unit2066to implement load and store operations between shared memory2070and register file2058. In at least one embodiment, register file2058can operate at a same frequency as GPGPU cores2062, thus data transfer between GPGPU cores2062and register file2058can have very low latency. In at least one embodiment, shared memory2070can be used to enable communication between threads that execute on functional units within graphics multiprocessor2034. In at least one embodiment, cache memory2072can be used as a data cache, for example, to cache texture data communicated between functional units and texture unit2036. In at least one embodiment, shared memory2070can also be used as a program managed cache. In at least one embodiment, threads executing on GPGPU cores2062can programmatically store data within shared memory in addition to automatically cached data that is stored within cache memory2072. In at least one embodiment, a parallel processor or GPGPU as described herein is communicatively coupled to host/processor cores to accelerate graphics operations, machine-learning operations, pattern analysis operations, and various general purpose GPU (GPGPU) functions. In at least one embodiment, a GPU may be communicatively coupled to host processor/cores over a bus or other interconnect (e.g., a high-speed interconnect such as PCIe or NVLink).
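The relationship between physically and logically executed SIMD widths described above — e.g., a logical SIMD32 operation issued as multiple passes over narrower physical lanes — can be sketched scalar-style. Lane width and names are illustrative:

```cpp
#include <cassert>
#include <cstddef>

// Sketch: execute a logically 32-wide add on an 8-wide physical unit by
// issuing four consecutive passes, mirroring SIMT threads mapped onto a
// narrower SIMD datapath. The inner loop stands in for one SIMD8 issue.
constexpr size_t kPhysicalWidth = 8;

void simdAdd32(const float* a, const float* b, float* out) {
    for (size_t pass = 0; pass < 32 / kPhysicalWidth; ++pass)
        for (size_t lane = 0; lane < kPhysicalWidth; ++lane) {
            size_t i = pass * kPhysicalWidth + lane;
            out[i] = a[i] + b[i];
        }
}
```

All 32 logical lanes see the same instruction; only the number of clock cycles differs from a machine with wider physical SIMD.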
In at least one embodiment, a GPU may be integrated on a package or chip as cores and communicatively coupled to cores over an internal processor bus/interconnect internal to a package or chip. In at least one embodiment, regardless of a manner in which a GPU is connected, processor cores may allocate work to such GPU in a form of sequences of commands/instructions contained in a work descriptor. In at least one embodiment, that GPU then uses dedicated circuitry/logic for efficiently processing these commands/instructions. Inference and/or training logic715are used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic715are provided herein in conjunction withFIGS.7A and/or7B. In at least one embodiment, inference and/or training logic715may be used in graphics multiprocessor2034for inferencing or predicting operations based, at least in part, on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein. In at least one embodiment, such components can be used to perform image segmentation as discussed above, such as with respect to a system ofFIG.1,2A, or3, or processes ofFIG.4or5. In at least one embodiment, this can include verifying whether a network of computing nodes is properly configured based, at least in part, on one or more expected data strings generated by the network of computing nodes. FIG.21illustrates a multi-GPU computing system2100, according to at least one embodiment. In at least one embodiment, multi-GPU computing system2100can include a processor2102coupled to multiple general purpose graphics processing units (GPGPUs)2106A-D via a host interface switch2104. In at least one embodiment, host interface switch2104is a PCI express switch device that couples processor2102to a PCI express bus over which processor2102can communicate with GPGPUs2106A-D.
In at least one embodiment, GPGPUs2106A-D can interconnect via a set of high-speed point-to-point GPU-to-GPU links2116. In at least one embodiment, GPU-to-GPU links2116connect to each of GPGPUs2106A-D via a dedicated GPU link. In at least one embodiment, P2P GPU links2116enable direct communication between each of GPGPUs2106A-D without requiring communication over host interface bus2104to which processor2102is connected. In at least one embodiment, with GPU-to-GPU traffic directed to P2P GPU links2116, host interface bus2104remains available for system memory access or to communicate with other instances of multi-GPU computing system2100, for example, via one or more network devices. While in at least one embodiment GPGPUs2106A-D connect to processor2102via host interface switch2104, in at least one embodiment processor2102includes direct support for P2P GPU links2116and can connect directly to GPGPUs2106A-D. Inference and/or training logic715are used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic715are provided herein in conjunction withFIGS.7A and/or7B. In at least one embodiment, inference and/or training logic715may be used in multi-GPU computing system2100for inferencing or predicting operations based, at least in part, on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein. In at least one embodiment, multi-GPU computing system2100includes one or more graphics cores1800. In at least one embodiment, such components can be used to perform image segmentation as discussed above, such as with respect to a system ofFIG.1,2A, or3, or processes ofFIG.4or5.
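The routing behavior described above — peer traffic taking a direct P2P link when one exists, keeping the host interface free for system memory traffic — can be condensed into a single decision function. This is a hypothetical sketch; the names and the convention of a negative destination meaning "system memory" are assumptions:

```cpp
#include <cassert>
#include <string>

// Illustrative routing choice for GPU-originated traffic in a multi-GPU
// system: destination < 0 means system memory (host interface); otherwise
// use a direct peer link if present, falling back to the host interface.
std::string routeTraffic(int srcGpu, int dstGpu, bool p2pLinkPresent) {
    if (dstGpu < 0) return "host-interface";
    if (srcGpu == dstGpu) return "local";
    return p2pLinkPresent ? "p2p-link" : "host-interface";
}
```

The payoff of the P2P path is that GPU-to-GPU transfers no longer contend with the processor's own system memory accesses on the shared host bus.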
In at least one embodiment, this can include verifying whether a network of computing nodes is properly configured based, at least in part, on one or more expected data strings generated by the network of computing nodes. FIG.22is a block diagram of a graphics processor2200, according to at least one embodiment. In at least one embodiment, graphics processor2200includes a ring interconnect2202, a pipeline front-end2204, a media engine2237, and graphics cores2280A-2280N. In at least one embodiment, ring interconnect2202couples graphics processor2200to other processing units, including other graphics processors or one or more general-purpose processor cores. In at least one embodiment, graphics processor2200is one of many processors integrated within a multi-core processing system. In at least one embodiment, graphics processor2200includes graphics core1800. In at least one embodiment, graphics processor2200receives batches of commands via ring interconnect2202. In at least one embodiment, incoming commands are interpreted by a command streamer2203in pipeline front-end2204. In at least one embodiment, graphics processor2200includes scalable execution logic to perform 3D geometry processing and media processing via graphics core(s)2280A-2280N. In at least one embodiment, for 3D geometry processing commands, command streamer2203supplies commands to geometry pipeline2236. In at least one embodiment, for at least some media processing commands, command streamer2203supplies commands to a video front end2234, which couples with media engine2237. In at least one embodiment, media engine2237includes a Video Quality Engine (VQE)2230for video and image post-processing and a multi-format encode/decode (MFX)2233engine to provide hardware-accelerated media data encoding and decoding. In at least one embodiment, geometry pipeline2236and media engine2237each generate execution threads for thread execution resources provided by at least one graphics core2280. 
In at least one embodiment, graphics processor2200includes scalable thread execution resources featuring graphics cores2280A-2280N (which can be modular and are sometimes referred to as core slices), each having multiple sub-cores2250A-50N,2260A-2260N (sometimes referred to as core sub-slices). In at least one embodiment, graphics processor2200can have any number of graphics cores2280A. In at least one embodiment, graphics processor2200includes a graphics core2280A having at least a first sub-core2250A and a second sub-core2260A. In at least one embodiment, graphics processor2200is a low power processor with a single sub-core (e.g.,2250A). In at least one embodiment, graphics processor2200includes multiple graphics cores2280A-2280N, each including a set of first sub-cores2250A-2250N and a set of second sub-cores2260A-2260N. In at least one embodiment, each sub-core in first sub-cores2250A-2250N includes at least a first set of execution units2252A-2252N and media/texture samplers2254A-2254N. In at least one embodiment, each sub-core in second sub-cores2260A-2260N includes at least a second set of execution units2262A-2262N and samplers2264A-2264N. In at least one embodiment, each sub-core2250A-2250N,2260A-2260N shares a set of shared resources2270A-2270N. In at least one embodiment, shared resources include shared cache memory and pixel operation logic. In at least one embodiment, graphics processor2200includes load/store units in pipeline front-end2204. Inference and/or training logic715are used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic715are provided herein in conjunction withFIGS.7A and/or7B. 
In at least one embodiment, inference and/or training logic715may be used in graphics processor2200for inferencing or predicting operations based, at least in part, on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein. In at least one embodiment, such components can be used to perform image segmentation as discussed above, such as with respect to a system ofFIG.1,2A, or3, or processes ofFIG.4or5. In at least one embodiment, this can include verifying whether a network of computing nodes is properly configured based, at least in part, on one or more expected data strings generated by the network of computing nodes. FIG.23is a block diagram illustrating micro-architecture for a processor2300that may include logic circuits to perform instructions, according to at least one embodiment. In at least one embodiment, processor2300may perform instructions, including x86 instructions, ARM instructions, specialized instructions for application-specific integrated circuits (ASICs), etc. In at least one embodiment, processor2300may include registers to store packed data, such as 64-bit wide MMX™ registers in microprocessors enabled with MMX technology from Intel Corporation of Santa Clara, Calif. In at least one embodiment, MMX registers, available in both integer and floating point forms, may operate with packed data elements that accompany single instruction, multiple data (“SIMD”) and streaming SIMD extensions (“SSE”) instructions. In at least one embodiment, 128-bit wide XMM registers relating to SSE2, SSE3, SSE4, AVX, or beyond (referred to generically as “SSEx”) technology may hold such packed data operands. In at least one embodiment, processor2300may perform instructions to accelerate machine learning or deep learning algorithms, training, or inferencing.
In at least one embodiment, processor2300includes an in-order front end (“front end”)2301to fetch instructions to be executed and prepare instructions to be used later in a processor pipeline. In at least one embodiment, front end2301may include several units. In at least one embodiment, an instruction prefetcher2326fetches instructions from memory and feeds instructions to an instruction decoder2328which in turn decodes or interprets instructions. For example, in at least one embodiment, instruction decoder2328decodes a received instruction into one or more operations called “micro-instructions” or “micro-operations” (also called “micro ops” or “uops” or “μ-ops”) that a machine may execute. In at least one embodiment, instruction decoder2328parses an instruction into an opcode and corresponding data and control fields that may be used by micro-architecture to perform operations in accordance with at least one embodiment. In at least one embodiment, a trace cache2330may assemble decoded uops into program ordered sequences or traces in a uop queue2334for execution. In at least one embodiment, when trace cache2330encounters a complex instruction, a microcode ROM2332provides uops needed to complete an operation. In at least one embodiment, some instructions may be converted into a single micro-op, whereas others need several micro-ops to complete full operation. In at least one embodiment, if more than four micro-ops are needed to complete an instruction, instruction decoder2328may access microcode ROM2332to perform that instruction. In at least one embodiment, an instruction may be decoded into a small number of micro-ops for processing at instruction decoder2328. In at least one embodiment, an instruction may be stored within microcode ROM2332should a number of micro-ops be needed to accomplish such operation. 
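The dispatch decision described above — the decoder emits micro-ops directly for simple instructions, while instructions needing more than four micro-ops are serviced from microcode ROM2332— can be sketched as a tiny selector. Function and label names are illustrative:

```cpp
#include <cassert>
#include <string>

// Sketch of the front-end uop-source decision: up to four micro-ops come
// straight from the decoder; more complex instructions are sequenced out
// of the microcode ROM. The threshold of four follows the description.
std::string uopSource(int uopsNeeded) {
    return uopsNeeded > 4 ? "microcode-rom" : "decoder";
}
```

A single-uop add would come from the decoder, while a complex string operation requiring dozens of micro-ops would be sequenced from microcode before the front end resumes fetching from the trace cache.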
In at least one embodiment, trace cache2330refers to an entry point programmable logic array (“PLA”) to determine a correct micro-instruction pointer for reading microcode sequences to complete one or more instructions from microcode ROM2332in accordance with at least one embodiment. In at least one embodiment, after microcode ROM2332finishes sequencing micro-ops for an instruction, front end2301of a machine may resume fetching micro-ops from trace cache2330. In at least one embodiment, out-of-order execution engine (“out of order engine”)2303may prepare instructions for execution. In at least one embodiment, out-of-order execution logic has a number of buffers to smooth out and re-order flow of instructions to optimize performance as they go down a pipeline and get scheduled for execution. In at least one embodiment, out-of-order execution engine2303includes, without limitation, an allocator/register renamer2340, a memory uop queue2342, an integer/floating point uop queue2344, a memory scheduler2346, a fast scheduler2302, a slow/general floating point scheduler (“slow/general FP scheduler”)2304, and a simple floating point scheduler (“simple FP scheduler”)2306. In at least one embodiment, fast scheduler2302, slow/general floating point scheduler2304, and simple floating point scheduler2306are also collectively referred to herein as “uop schedulers2302,2304,2306.” In at least one embodiment, allocator/register renamer2340allocates machine buffers and resources that each uop needs in order to execute. In at least one embodiment, allocator/register renamer2340renames logic registers onto entries in a register file. In at least one embodiment, allocator/register renamer2340also allocates an entry for each uop in one of two uop queues, memory uop queue2342for memory operations and integer/floating point uop queue2344for non-memory operations, in front of memory scheduler2346and uop schedulers2302,2304,2306.
In at least one embodiment, uop schedulers2302,2304,2306determine when a uop is ready to execute based on readiness of their dependent input register operand sources and availability of execution resources uops need to complete their operation. In at least one embodiment, fast scheduler2302may schedule on each half of a main clock cycle while slow/general floating point scheduler2304and simple floating point scheduler2306may schedule once per main processor clock cycle. In at least one embodiment, uop schedulers2302,2304,2306arbitrate for dispatch ports to schedule uops for execution. In at least one embodiment, execution block2311includes, without limitation, an integer register file/bypass network2308, a floating point register file/bypass network (“FP register file/bypass network”)2310, address generation units (“AGUs”)2312and2314, fast Arithmetic Logic Units (ALUs) (“fast ALUs”)2316and2318, a slow Arithmetic Logic Unit (“slow ALU”)2320, a floating point ALU (“FP”)2322, and a floating point move unit (“FP move”)2324. In at least one embodiment, integer register file/bypass network2308and floating point register file/bypass network2310are also referred to herein as “register files2308,2310.” In at least one embodiment, AGUs2312and2314, fast ALUs2316and2318, slow ALU2320, floating point ALU2322, and floating point move unit2324are also referred to herein as “execution units2312,2314,2316,2318,2320,2322, and2324.” In at least one embodiment, execution block2311may include, without limitation, any number (including zero) and type of register files, bypass networks, address generation units, and execution units, in any combination. In at least one embodiment, register networks2308,2310may be arranged between uop schedulers2302,2304,2306, and execution units2312,2314,2316,2318,2320,2322, and2324. In at least one embodiment, integer register file/bypass network2308performs integer operations.
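The readiness condition described above — a uop dispatches only when its source operands are ready and a suitable dispatch port is free — can be sketched as a scoreboard-style check. The structure and names are illustrative, not a model of any specific scheduler:

```cpp
#include <cassert>

// Minimal readiness check: both source operands must be ready and the
// dispatch port this uop needs must currently be free.
struct Uop {
    bool src0Ready;
    bool src1Ready;
    int  port;  // dispatch port required by this uop
};

bool canDispatch(const Uop& u, const bool portFree[], int numPorts) {
    return u.src0Ready && u.src1Ready &&
           u.port >= 0 && u.port < numPorts && portFree[u.port];
}
```

In a real out-of-order core this check runs every scheduling slot, and multiple ready uops arbitrate for the same port, which is the contention the text refers to.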
In at least one embodiment, floating point register file/bypass network2310performs floating point operations. In at least one embodiment, each of register networks2308,2310may include, without limitation, a bypass network that may bypass or forward just completed results that have not yet been written into a register file to new dependent uops. In at least one embodiment, register networks2308,2310may communicate data with each other. In at least one embodiment, integer register file/bypass network2308may include, without limitation, two separate register files, one register file for a low-order thirty-two bits of data and a second register file for a high order thirty-two bits of data. In at least one embodiment, floating point register file/bypass network2310may include, without limitation, 128-bit wide entries because floating point instructions typically have operands from 64 to 128 bits in width. In at least one embodiment, execution units2312,2314,2316,2318,2320,2322,2324may execute instructions. In at least one embodiment, register networks2308,2310store integer and floating point data operand values that micro-instructions need to execute. In at least one embodiment, processor2300may include, without limitation, any number and combination of execution units2312,2314,2316,2318,2320,2322,2324. In at least one embodiment, floating point ALU2322and floating point move unit2324, may execute floating point, MMX, SIMD, AVX and SSE, or other operations, including specialized machine learning instructions. In at least one embodiment, floating point ALU2322may include, without limitation, a 64-bit by 64-bit floating point divider to execute divide, square root, and remainder micro ops. In at least one embodiment, instructions involving a floating point value may be handled with floating point hardware. In at least one embodiment, ALU operations may be passed to fast ALUs2316,2318. 
In at least one embodiment, fast ALUs2316,2318may execute fast operations with an effective latency of half a clock cycle. In at least one embodiment, most complex integer operations go to slow ALU2320as slow ALU2320may include, without limitation, integer execution hardware for long-latency type of operations, such as a multiplier, shifts, flag logic, and branch processing. In at least one embodiment, memory load/store operations may be executed by AGUs2312,2314. In at least one embodiment, fast ALU2316, fast ALU2318, and slow ALU2320may perform integer operations on 64-bit data operands. In at least one embodiment, fast ALU2316, fast ALU2318, and slow ALU2320may be implemented to support a variety of data bit sizes including sixteen, thirty-two, 128, 256, etc. In at least one embodiment, floating point ALU2322and floating point move unit2324may be implemented to support a range of operands having bits of various widths, such as 128-bit wide packed data operands in conjunction with SIMD and multimedia instructions. In at least one embodiment, uop schedulers2302,2304,2306dispatch dependent operations before a parent load has finished executing. In at least one embodiment, as uops may be speculatively scheduled and executed in processor2300, processor2300may also include logic to handle memory misses. In at least one embodiment, if a data load misses in a data cache, there may be dependent operations in flight in a pipeline that have left a scheduler with temporarily incorrect data. In at least one embodiment, a replay mechanism tracks and re-executes instructions that use incorrect data. In at least one embodiment, dependent operations might need to be replayed and independent ones may be allowed to complete. In at least one embodiment, schedulers and a replay mechanism of at least one embodiment of a processor may also be designed to catch instruction sequences for text string comparison operations. 
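The speculative-dispatch-and-replay behavior described above can be sketched in simplified form (a hypothetical Python illustration; function and parameter names are assumptions, and a real replay mechanism operates on uops in hardware, not Python callables):

```python
def schedule_with_replay(load_hit, load_value, dependents):
    """Speculatively dispatch dependents of a load assuming a cache hit;
    if the load actually missed, replay them once correct data arrives."""
    # Speculative pass: dependents are dispatched before the load resolves,
    # so on a miss they consume a placeholder (temporarily incorrect) value.
    speculative_value = load_value if load_hit else 0
    results = [(op.__name__, op(speculative_value)) for op in dependents]
    if load_hit:
        return results
    # Replay pass: the load missed, so the tracked dependent operations are
    # re-executed with the correct value returned from memory.
    return [(op.__name__, op(load_value)) for op in dependents]
```

The key property the sketch shows is that, hit or miss, the final results are identical; a miss only costs the extra replay pass, while independent operations (not shown) would be allowed to complete normally.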
In at least one embodiment, “registers” may refer to on-board processor storage locations that may be used as part of instructions to identify operands. In at least one embodiment, registers may be those that may be usable from outside of a processor (from a programmer's perspective). In at least one embodiment, registers might not be limited to a particular type of circuit. Rather, in at least one embodiment, a register may store data, provide data, and perform functions described herein. In at least one embodiment, registers described herein may be implemented by circuitry within a processor using any number of different techniques, such as dedicated physical registers, dynamically allocated physical registers using register renaming, combinations of dedicated and dynamically allocated physical registers, etc. In at least one embodiment, integer registers store 32-bit integer data. A register file of at least one embodiment also contains eight multimedia SIMD registers for packed data. 
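The "dynamically allocated physical registers using register renaming" technique mentioned above can be sketched as follows (an illustrative Python model, not the patented circuitry; the class and method names are assumptions):

```python
class RenameMap:
    """Register renaming: each write to an architectural register claims a
    fresh physical register from a free list, removing false write-after-write
    and write-after-read dependences between instructions."""

    def __init__(self, num_phys):
        self.map = {}                      # architectural reg -> current physical reg
        self.free = list(range(num_phys))  # free physical registers

    def rename_dest(self, arch):
        phys = self.free.pop(0)            # allocate a new physical register
        old = self.map.get(arch)           # previous mapping, freed at retirement
        self.map[arch] = phys
        return phys, old

    def rename_src(self, arch):
        return self.map[arch]              # readers see the latest mapping

    def retire(self, old_phys):
        # Once no in-flight uop can still read old_phys, recycle it.
        if old_phys is not None:
            self.free.append(old_phys)
```

Two back-to-back writes to the same architectural register receive distinct physical registers, so they can be in flight simultaneously without overwriting each other.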
In at least one embodiment, processor2300or each core of processor2300includes one or more prefetchers, one or more fetchers, one or more pre-decoders, one or more decoders to decode data (e.g., instructions), one or more instruction queues to process instructions (e.g., corresponding to operations or API calls), one or more micro-operation (μOP) caches to store μOPs, one or more micro-operation (μOP) queues, an in-order execution engine, one or more load buffers, one or more store buffers, one or more reorder buffers, one or more fill buffers, an out-of-order execution engine, one or more ports, one or more shift and/or shifter units, one or more fused multiply accumulate (FMA) units, one or more load and store units (“LSUs”) to perform load or store operations corresponding to loading/storing data (e.g., instructions) to perform an operation (e.g., perform an API call), one or more matrix multiply accumulate (MMA) units, and/or one or more shuffle units to perform any function further described herein with respect to said processor2300. In at least one embodiment, processor2300can access, use, perform, or execute instructions corresponding to calling an API. In at least one embodiment, processor2300includes one or more ultra path interconnects (UPIs), e.g., that is a point-to-point processor interconnect; one or more PCIe's; one or more accelerators to accelerate computations or operations; and/or one or more memory controllers. In at least one embodiment, processor2300includes a shared last level cache (LLC) that is coupled to one or more memory controllers, which can enable shared memory access across processor cores. In at least one embodiment, processor2300or a core of processor2300has a mesh architecture where processor cores, on-chip caches, memory controllers, and I/O controllers are organized in rows and columns, with wires and switches connecting them at each intersection to allow for turns. 
In at least one embodiment, processor2300has one or more high-bandwidth memories (HBMs, e.g., HBM3e) to store data or cache data, e.g., in Double Data Rate 5 Synchronous Dynamic Random-Access Memory (DDR5 SDRAM). In at least one embodiment, one or more components of processor2300are interconnected using compute express link (CXL) interconnects. In at least one embodiment, a memory controller uses a “least recently used” (LRU) approach to determine what gets stored in a cache. In at least one embodiment, processor2300includes one or more PCIe's (e.g., PCIe 5.0). Inference and/or training logic715are used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic715are provided herein in conjunction withFIGS.7A and/or7B. In at least one embodiment, portions or all of inference and/or training logic715may be incorporated into execution block2311and other memory or registers shown or not shown. For example, in at least one embodiment, training and/or inferencing techniques described herein may use one or more of ALUs illustrated in execution block2311. Moreover, weight parameters may be stored in on-chip or off-chip memory and/or registers (shown or not shown) that configure ALUs of execution block2311to perform one or more machine learning algorithms, neural network architectures, use cases, or training techniques described herein. In at least one embodiment, such components can be used to perform image segmentation as discussed above, such as with respect to a system ofFIG.1,2A, or3, or processes ofFIG.4or5. In at least one embodiment, this can include verifying whether a network of computing nodes is properly configured based, at least in part, on one or more expected data strings generated by the network of computing nodes. FIG.24illustrates a deep learning application processor2400, according to at least one embodiment. 
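The "least recently used" replacement policy mentioned above can be sketched with a small software model (an illustrative Python sketch of LRU replacement in general, not the memory controller of any embodiment; class and method names are assumptions):

```python
from collections import OrderedDict

class LRUCache:
    """LRU replacement: when the cache overflows, evict the entry that has
    gone longest without being read or written."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()  # ordered oldest-first

    def get(self, key):
        if key not in self.entries:
            return None               # cache miss
        self.entries.move_to_end(key)  # touch: mark as most recently used
        return self.entries[key]

    def put(self, key, value):
        if key in self.entries:
            self.entries.move_to_end(key)
        self.entries[key] = value
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict least recently used
```

A hardware memory controller would implement the recency ordering with per-way age bits or a pseudo-LRU tree rather than a linked structure, but the eviction decision it makes is the same.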
In at least one embodiment, deep learning application processor2400uses instructions that, if executed by deep learning application processor2400, cause deep learning application processor2400to perform some or all of processes and techniques described throughout this disclosure. In at least one embodiment, deep learning application processor2400is an application-specific integrated circuit (ASIC). In at least one embodiment, application processor2400performs matrix multiply operations either “hard-wired” into hardware, as a result of performing one or more instructions, or both. In at least one embodiment, deep learning application processor2400includes, without limitation, processing clusters2410(1)-2410(12), Inter-Chip Links (“ICLs”)2420(1)-2420(12), Inter-Chip Controllers (“ICCs”)2430(1)-2430(2), high-bandwidth memory second generation (“HBM2”)2440(1)-2440(4), memory controllers (“Mem Ctrlrs”)2442(1)-2442(4), high bandwidth memory physical layer (“HBM PHY”)2444(1)-2444(4), a management-controller central processing unit (“management-controller CPU”)2450, a Serial Peripheral Interface, Inter-Integrated Circuit, and General Purpose Input/Output block (“SPI, I2C, GPIO”)2460, a peripheral component interconnect express controller and direct memory access block (“PCIe Controller and DMA”)2470, and a sixteen-lane peripheral component interconnect express port (“PCI Express×16”)2480. In at least one embodiment, processing clusters2410may perform deep learning operations, including inference or prediction operations based on weight parameters calculated using one or more training techniques, including those described herein. In at least one embodiment, each processing cluster2410may include, without limitation, any number and type of processors. In at least one embodiment, deep learning application processor2400may include any number and type of processing clusters2410. In at least one embodiment, Inter-Chip Links2420are bi-directional. 
In at least one embodiment, Inter-Chip Links2420and Inter-Chip Controllers2430enable multiple deep learning application processors2400to exchange information, including activation information resulting from performing one or more machine learning algorithms embodied in one or more neural networks. In at least one embodiment, deep learning application processor2400may include any number (including zero) and type of ICLs2420and ICCs2430. In at least one embodiment, HBM2s2440provide a total of 32 Gigabytes (GB) of memory. In at least one embodiment, HBM22440(i) is associated with both memory controller2442(i) and HBM PHY2444(i) where “i” is an arbitrary integer. In at least one embodiment, any number of HBM2s2440may provide any type and total amount of high bandwidth memory and may be associated with any number (including zero) and type of memory controllers2442and HBM PHYs2444. In at least one embodiment, SPI, I2C, GPIO2460, PCIe Controller and DMA2470, and/or PCIe2480may be replaced with any number and type of blocks that enable any number and type of communication standards in any technically feasible fashion. Inference and/or training logic715are used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic715are provided herein in conjunction withFIGS.7A and/or7B. In at least one embodiment, deep learning application processor is used to train a machine learning model, such as a neural network, to predict or infer information provided to deep learning application processor2400. In at least one embodiment, deep learning application processor2400is used to infer or predict information based on a trained machine learning model (e.g., neural network) that has been trained by another processor or system or by deep learning application processor2400. In at least one embodiment, processor2400may be used to perform one or more neural network use cases described herein. 
In at least one embodiment, such components can be used to perform image segmentation as discussed above, such as with respect to a system ofFIG.1,2A, or3, or processes ofFIG.4or5. In at least one embodiment, this can include verifying whether a network of computing nodes is properly configured based, at least in part, on one or more expected data strings generated by the network of computing nodes. FIG.25is a block diagram of a neuromorphic processor2500, according to at least one embodiment. In at least one embodiment, neuromorphic processor2500may receive one or more inputs from sources external to neuromorphic processor2500. In at least one embodiment, these inputs may be transmitted to one or more neurons2502within neuromorphic processor2500. In at least one embodiment, neurons2502and components thereof may be implemented using circuitry or logic, including one or more arithmetic logic units (ALUs). In at least one embodiment, neuromorphic processor2500may include, without limitation, thousands or millions of instances of neurons2502, but any suitable number of neurons2502may be used. In at least one embodiment, each instance of neuron2502may include a neuron input2504and a neuron output2506. In at least one embodiment, neurons2502may generate outputs that may be transmitted to inputs of other instances of neurons2502. For example, in at least one embodiment, neuron inputs2504and neuron outputs2506may be interconnected via synapses2508. In at least one embodiment, neurons2502and synapses2508may be interconnected such that neuromorphic processor2500operates to process or analyze information received by neuromorphic processor2500. In at least one embodiment, neurons2502may transmit an output pulse (or “fire” or “spike”) when inputs received through neuron input2504exceed a threshold. In at least one embodiment, neurons2502may sum or integrate signals received at neuron inputs2504. 
For example, in at least one embodiment, neurons2502may be implemented as leaky integrate-and-fire neurons, wherein if a sum (referred to as a “membrane potential”) exceeds a threshold value, neuron2502may generate an output (or “fire”) using a transfer function such as a sigmoid or threshold function. In at least one embodiment, a leaky integrate-and-fire neuron may sum signals received at neuron inputs2504into a membrane potential and may also apply a decay factor (or leak) to reduce a membrane potential. In at least one embodiment, a leaky integrate-and-fire neuron may fire if multiple input signals are received at neuron inputs2504rapidly enough to exceed a threshold value (i.e., before a membrane potential decays too low to fire). In at least one embodiment, neurons2502may be implemented using circuits or logic that receive inputs, integrate inputs into a membrane potential, and decay a membrane potential. In at least one embodiment, inputs may be averaged, or any other suitable transfer function may be used. Furthermore, in at least one embodiment, neurons2502may include, without limitation, comparator circuits or logic that generate an output spike at neuron output2506when result of applying a transfer function to neuron input2504exceeds a threshold. In at least one embodiment, once neuron2502fires, it may disregard previously received input information by, for example, resetting a membrane potential to 0 or another suitable default value. In at least one embodiment, once membrane potential is reset to 0, neuron2502may resume normal operation after a suitable period of time (or refractory period). In at least one embodiment, neurons2502may be interconnected through synapses2508. In at least one embodiment, synapses2508may operate to transmit signals from an output of a first neuron2502to an input of a second neuron2502. In at least one embodiment, neurons2502may transmit information over more than one instance of synapse2508. 
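The leaky integrate-and-fire behavior described above (integrate inputs into a membrane potential, apply a decay factor, fire and reset when a threshold is crossed) can be sketched as follows (a hypothetical Python model for illustration; parameter names and default values are assumptions, not those of any embodiment):

```python
class LIFNeuron:
    """Leaky integrate-and-fire neuron: inputs accumulate into a membrane
    potential that decays each step; crossing the threshold emits a spike
    and resets the potential to a default value."""

    def __init__(self, threshold=1.0, decay=0.5, reset=0.0):
        self.threshold = threshold
        self.decay = decay        # leak factor applied each time step
        self.reset = reset        # potential after firing
        self.potential = reset

    def step(self, input_current):
        # Leak first, then integrate the new input into the membrane potential.
        self.potential = self.potential * self.decay + input_current
        if self.potential >= self.threshold:
            self.potential = self.reset  # fire and reset
            return 1                     # output spike
        return 0                         # no spike this step
```

Note how the decay models the "leak": inputs that arrive too slowly decay away before the potential can reach the threshold, so only sufficiently rapid input trains cause the neuron to fire.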
In at least one embodiment, one or more instances of neuron output2506may be connected, via an instance of synapse2508, to an instance of neuron input2504in same neuron2502. In at least one embodiment, an instance of neuron2502generating an output to be transmitted over an instance of synapse2508may be referred to as a “pre-synaptic neuron” with respect to that instance of synapse2508. In at least one embodiment, an instance of neuron2502receiving an input transmitted over an instance of synapse2508may be referred to as a “post-synaptic neuron” with respect to that instance of synapse2508. Because an instance of neuron2502may receive inputs from one or more instances of synapse2508, and may also transmit outputs over one or more instances of synapse2508, a single instance of neuron2502may therefore be both a “pre-synaptic neuron” and “post-synaptic neuron,” with respect to various instances of synapses2508, in at least one embodiment. In at least one embodiment, neurons2502may be organized into one or more layers. In at least one embodiment, each instance of neuron2502may have one neuron output2506that may fan out through one or more synapses2508to one or more neuron inputs2504. In at least one embodiment, neuron outputs2506of neurons2502in a first layer2510may be connected to neuron inputs2504of neurons2502in a second layer2512. In at least one embodiment, layer2510may be referred to as a “feed-forward layer.” In at least one embodiment, each instance of neuron2502in an instance of first layer2510may fan out to each instance of neuron2502in second layer2512. In at least one embodiment, first layer2510may be referred to as a “fully connected feed-forward layer.” In at least one embodiment, each instance of neuron2502in an instance of second layer2512may fan out to fewer than all instances of neuron2502in a third layer2514. 
In at least one embodiment, second layer2512may be referred to as a “sparsely connected feed-forward layer.” In at least one embodiment, neurons2502in second layer2512may fan out to neurons2502in multiple other layers, including to neurons2502also in second layer2512. In at least one embodiment, second layer2512may be referred to as a “recurrent layer.” In at least one embodiment, neuromorphic processor2500may include, without limitation, any suitable combination of recurrent layers and feed-forward layers, including, without limitation, both sparsely connected feed-forward layers and fully connected feed-forward layers. In at least one embodiment, neuromorphic processor2500may include, without limitation, a reconfigurable interconnect architecture or dedicated hard-wired interconnects to connect synapse2508to neurons2502. In at least one embodiment, neuromorphic processor2500may include, without limitation, circuitry or logic that allows synapses to be allocated to different neurons2502as needed based on neural network topology and neuron fan-in/out. For example, in at least one embodiment, synapses2508may be connected to neurons2502using an interconnect fabric, such as network-on-chip, or with dedicated connections. In at least one embodiment, synapse interconnections and components thereof may be implemented using circuitry or logic. In at least one embodiment, such components can be used to perform image segmentation as discussed above, such as with respect to a system ofFIG.1,2A, or3, or processes ofFIG.4or5. In at least one embodiment, this can include verifying whether a network of computing nodes is properly configured based, at least in part, on one or more expected data strings generated by the network of computing nodes. FIG.26is a block diagram of a processing system, according to at least one embodiment. 
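The distinction drawn above between fully connected, sparsely connected, and recurrent layers can be made concrete with a small connectivity sketch (an illustrative Python model; the function name, the `fanout` parameter, and the choice of which targets a sparse neuron connects to are all assumptions for illustration):

```python
def connect_layers(pre, post, fanout=None):
    """Build (pre-synaptic, post-synaptic) synapse pairs from one layer of
    neuron ids to another. fanout=None yields a fully connected feed-forward
    layer; a smaller fanout yields a sparsely connected one. Passing the same
    layer as both pre and post yields recurrent (intra-layer) synapses."""
    synapses = []
    for p in pre:
        # Sparse case: each pre-synaptic neuron fans out to fewer than all
        # post-synaptic neurons (here simply the first `fanout`, for clarity).
        targets = post if fanout is None else post[:fanout]
        for q in targets:
            synapses.append((p, q))
    return synapses
```

A fully connected layer of m pre-synaptic and n post-synaptic neurons produces m x n synapses, while a sparse fanout of k produces m x k; recurrent connectivity falls out of reusing one layer as both source and destination.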
In at least one embodiment, system2600includes one or more processors2602and one or more graphics processors2608, and may be a single processor desktop system, a multiprocessor workstation system, or a server system having a large number of processors2602or processor cores2607. In at least one embodiment, system2600is a processing platform incorporated within a system-on-a-chip (SoC) integrated circuit for use in mobile, handheld, or embedded devices. In at least one embodiment, one or more graphics processors2608include one or more graphics cores1800. In at least one embodiment, system2600can include, or be incorporated within a server-based gaming platform, a game console, including a game and media console, a mobile gaming console, a handheld game console, or an online game console. In at least one embodiment, system2600is a mobile phone, a smart phone, a tablet computing device or a mobile Internet device. In at least one embodiment, processing system2600can also include, couple with, or be integrated within a wearable device, such as a smart watch wearable device, a smart eyewear device, an augmented reality device, or a virtual reality device. In at least one embodiment, processing system2600is a television or set top box device having one or more processors2602and a graphical interface generated by one or more graphics processors2608. In at least one embodiment, one or more processors2602each include one or more processor cores2607to process instructions which, when executed, perform operations for system and user software. In at least one embodiment, each of one or more processor cores2607is configured to process a specific instruction sequence2609. In at least one embodiment, instruction sequence2609may facilitate Complex Instruction Set Computing (CISC), Reduced Instruction Set Computing (RISC), or computing via a Very Long Instruction Word (VLIW). 
In at least one embodiment, processor cores2607may each process a different instruction sequence2609, which may include instructions to facilitate emulation of other instruction sequences. In at least one embodiment, processor core2607may also include other processing devices, such as a Digital Signal Processor (DSP). In at least one embodiment, processor2602includes a cache memory2604. In at least one embodiment, processor2602can have a single internal cache or multiple levels of internal cache. In at least one embodiment, cache memory is shared among various components of processor2602. In at least one embodiment, processor2602also uses an external cache (e.g., a Level-3 (L3) cache or Last Level Cache (LLC)) (not shown), which may be shared among processor cores2607using known cache coherency techniques. In at least one embodiment, a register file2606is additionally included in processor2602, which may include different types of registers for storing different types of data (e.g., integer registers, floating point registers, status registers, and an instruction pointer register). In at least one embodiment, register file2606may include general-purpose registers or other registers. In at least one embodiment, one or more processor(s)2602are coupled with one or more interface bus(es)2610to transmit communication signals such as address, data, or control signals between processor2602and other components in system2600. In at least one embodiment, interface bus2610can be a processor bus, such as a version of a Direct Media Interface (DMI) bus. In at least one embodiment, interface bus2610is not limited to a DMI bus, and may include one or more Peripheral Component Interconnect buses (e.g., PCI, PCI Express), memory busses, or other types of interface busses. In at least one embodiment, processor(s)2602include an integrated memory controller2616and a platform controller hub2630. 
In at least one embodiment, memory controller2616facilitates communication between a memory device and other components of system2600, while platform controller hub (PCH)2630provides connections to I/O devices via a local I/O bus. In at least one embodiment, a memory device2620can be a dynamic random access memory (DRAM) device, a static random access memory (SRAM) device, flash memory device, phase-change memory device, or some other memory device having suitable performance to serve as process memory. In at least one embodiment, memory device2620can operate as system memory for system2600, to store data2622and instructions2621for use when one or more processors2602executes an application or process. In at least one embodiment, memory controller2616also couples with an optional external graphics processor2612, which may communicate with one or more graphics processors2608in processors2602to perform graphics and media operations. In at least one embodiment, a display device2611can connect to processor(s)2602. In at least one embodiment, display device2611can include one or more of an internal display device, as in a mobile electronic device or a laptop device, or an external display device attached via a display interface (e.g., DisplayPort, etc.). In at least one embodiment, display device2611can include a head mounted display (HMD) such as a stereoscopic display device for use in virtual reality (VR) applications or augmented reality (AR) applications. In at least one embodiment, platform controller hub2630enables peripherals to connect to memory device2620and processor2602via a high-speed I/O bus. In at least one embodiment, I/O peripherals include, but are not limited to, an audio controller2646, a network controller2634, a firmware interface2628, a wireless transceiver2626, touch sensors2625, a data storage device2624(e.g., hard disk drive, flash memory, etc.). 
In at least one embodiment, data storage device2624can connect via a storage interface (e.g., SATA) or via a peripheral bus, such as a Peripheral Component Interconnect bus (e.g., PCI, PCI Express). In at least one embodiment, touch sensors2625can include touch screen sensors, pressure sensors, or fingerprint sensors. In at least one embodiment, wireless transceiver2626can be a Wi-Fi transceiver, a Bluetooth transceiver, or a mobile network transceiver such as a 3G, 4G, or Long Term Evolution (LTE) transceiver. In at least one embodiment, firmware interface2628enables communication with system firmware, and can be, for example, a unified extensible firmware interface (UEFI). In at least one embodiment, network controller2634can enable a network connection to a wired network. In at least one embodiment, a high-performance network controller (not shown) couples with interface bus2610. In at least one embodiment, audio controller2646is a multi-channel high definition audio controller. In at least one embodiment, system2600includes an optional legacy I/O controller2640for coupling legacy (e.g., Personal System 2 (PS/2)) devices to system2600. In at least one embodiment, platform controller hub2630can also connect to one or more Universal Serial Bus (USB) controllers2642that connect input devices, such as keyboard and mouse2643combinations, a camera2644, or other USB input devices. In at least one embodiment, an instance of memory controller2616and platform controller hub2630may be integrated into a discrete external graphics processor, such as external graphics processor2612. In at least one embodiment, platform controller hub2630and/or memory controller2616may be external to one or more processor(s)2602. 
For example, in at least one embodiment, system2600can include an external memory controller2616and platform controller hub2630, which may be configured as a memory controller hub and peripheral controller hub within a system chipset that is in communication with processor(s)2602. Inference and/or training logic715are used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic715are provided herein in conjunction withFIGS.7A and/or7B. In at least one embodiment, portions or all of inference and/or training logic715may be incorporated into graphics processor2608. For example, in at least one embodiment, training and/or inferencing techniques described herein may use one or more of ALUs embodied in a 3D pipeline. Moreover, in at least one embodiment, inferencing and/or training operations described herein may be done using logic other than logic illustrated inFIG.7A or7B. In at least one embodiment, weight parameters may be stored in on-chip or off-chip memory and/or registers (shown or not shown) that configure ALUs of graphics processor2608to perform one or more machine learning algorithms, neural network architectures, use cases, or training techniques described herein. In at least one embodiment, such components can be used to perform image segmentation as discussed above, such as with respect to a system ofFIG.1,2A, or3, or processes ofFIG.4or5. In at least one embodiment, this can include verifying whether a network of computing nodes is properly configured based, at least in part, on one or more expected data strings generated by the network of computing nodes. FIG.27is a block diagram of a processor2700having one or more processor cores2702A-2702N, an integrated memory controller2714, and an integrated graphics processor2708, according to at least one embodiment. 
In at least one embodiment, processor2700can include additional cores up to and including additional core2702N represented by dashed lined boxes. In at least one embodiment, each of processor cores2702A-2702N includes one or more internal cache units2704A-2704N. In at least one embodiment, each processor core also has access to one or more shared cache units2706. In at least one embodiment, graphics processor2708includes one or more graphics cores1800. In at least one embodiment, internal cache units2704A-2704N and shared cache units2706represent a cache memory hierarchy within processor2700. In at least one embodiment, cache memory units2704A-2704N may include at least one level of instruction and data cache within each processor core and one or more levels of shared mid-level cache, such as a Level 2 (L2), Level 3 (L3), Level 4 (L4), or other levels of cache, where a highest level of cache before external memory is classified as an LLC. In at least one embodiment, cache coherency logic maintains coherency between various cache units2706and2704A-2704N. In at least one embodiment, processor2700may also include a set of one or more bus controller units2716and a system agent core2710. In at least one embodiment, bus controller units2716manage a set of peripheral buses, such as one or more PCI or PCI express busses. In at least one embodiment, system agent core2710provides management functionality for various processor components. In at least one embodiment, system agent core2710includes one or more integrated memory controllers2714to manage access to various external memory devices (not shown). In at least one embodiment, one or more of processor cores2702A-2702N include support for simultaneous multi-threading. In at least one embodiment, system agent core2710includes components for coordinating and operating cores2702A-2702N during multi-threaded processing. 
In at least one embodiment, system agent core2710may additionally include a power control unit (PCU), which includes logic and components to regulate one or more power states of processor cores2702A-2702N and graphics processor2708. In at least one embodiment, processor2700additionally includes graphics processor2708to execute graphics processing operations. In at least one embodiment, graphics processor2708couples with shared cache units2706, and system agent core2710, including one or more integrated memory controllers2714. In at least one embodiment, system agent core2710also includes a display controller2711to drive graphics processor output to one or more coupled displays. In at least one embodiment, display controller2711may also be a separate module coupled with graphics processor2708via at least one interconnect, or may be integrated within graphics processor2708. In at least one embodiment, a ring-based interconnect unit2712is used to couple internal components of processor2700. In at least one embodiment, an alternative interconnect unit may be used, such as a point-to-point interconnect, a switched interconnect, or other techniques. In at least one embodiment, graphics processor2708couples with ring interconnect2712via an I/O link2713. In at least one embodiment, I/O link2713represents at least one of multiple varieties of I/O interconnects, including an on package I/O interconnect which facilitates communication between various processor components and a high-performance embedded memory module2718, such as an eDRAM module. In at least one embodiment, each of processor cores2702A-2702N and graphics processor2708use embedded memory module2718as a shared Last Level Cache. In at least one embodiment, processor cores2702A-2702N are homogeneous cores executing a common instruction set architecture. 
In at least one embodiment, processor cores2702A-2702N are heterogeneous in terms of instruction set architecture (ISA), where one or more of processor cores2702A-2702N execute a common instruction set, while one or more other cores of processor cores2702A-2702N execute a subset of a common instruction set or a different instruction set. In at least one embodiment, processor cores2702A-2702N are heterogeneous in terms of microarchitecture, where one or more cores having a relatively higher power consumption couple with one or more power cores having a lower power consumption. In at least one embodiment, processor2700can be implemented on one or more chips or as an SoC integrated circuit. Inference and/or training logic715are used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic715are provided herein in conjunction withFIGS.7A and/or7B. In at least one embodiment, portions or all of inference and/or training logic715may be incorporated into graphics processor2708. For example, in at least one embodiment, training and/or inferencing techniques described herein may use one or more of ALUs embodied in a 3D pipeline, graphics core(s)2702, shared function logic, or other logic inFIG.27. Moreover, in at least one embodiment, inferencing and/or training operations described herein may be done using logic other than logic illustrated inFIG.7A or7B. In at least one embodiment, weight parameters may be stored in on-chip or off-chip memory and/or registers (shown or not shown) that configure ALUs of processor2700to perform one or more machine learning algorithms, neural network architectures, use cases, or training techniques described herein. In at least one embodiment, such components can be used to perform image segmentation as discussed above, such as with respect to a system ofFIG.1,2A, or3, or processes ofFIG.4or5. 
In at least one embodiment, this can include verifying whether a network of computing nodes is properly configured based, at least in part, on one or more expected data strings generated by the network of computing nodes. FIG. 28 is a block diagram of a graphics processor 2800, which may be a discrete graphics processing unit, or may be a graphics processor integrated with a plurality of processing cores. In at least one embodiment, graphics processor 2800 communicates via a memory-mapped I/O interface to registers on graphics processor 2800 and with commands placed into memory. In at least one embodiment, graphics processor 2800 includes a memory interface 2814 to access memory. In at least one embodiment, memory interface 2814 is an interface to local memory, one or more internal caches, one or more shared external caches, and/or to system memory. In at least one embodiment, graphics processor 2800 includes graphics core 1800. In at least one embodiment, graphics processor 2800 also includes a display controller 2802 to drive display output data to a display device 2820. In at least one embodiment, display controller 2802 includes hardware for one or more overlay planes for display device 2820 and composition of multiple layers of video or user interface elements. In at least one embodiment, display device 2820 can be an internal or external display device. In at least one embodiment, display device 2820 is a head-mounted display device, such as a virtual reality (VR) display device or an augmented reality (AR) display device.
In at least one embodiment, graphics processor 2800 includes a video codec engine 2806 to encode, decode, or transcode media to, from, or between one or more media encoding formats, including, but not limited to, Moving Picture Experts Group (MPEG) formats such as MPEG-2, Advanced Video Coding (AVC) formats such as H.264/MPEG-4 AVC, as well as the Society of Motion Picture & Television Engineers (SMPTE) 421M/VC-1, and Joint Photographic Experts Group (JPEG) formats such as JPEG, and Motion JPEG (MJPEG) formats. In at least one embodiment, graphics processor 2800 includes a block image transfer (BLIT) engine 2804 to perform two-dimensional (2D) rasterizer operations including, for example, bit-boundary block transfers. However, in at least one embodiment, 2D graphics operations are performed using one or more components of a graphics processing engine (GPE) 2810. In at least one embodiment, GPE 2810 is a compute engine for performing graphics operations, including three-dimensional (3D) graphics operations and media operations. In at least one embodiment, GPE 2810 includes a 3D pipeline 2812 for performing 3D operations, such as rendering three-dimensional images and scenes using processing functions that act upon 3D primitive shapes (e.g., rectangle, triangle, etc.). In at least one embodiment, 3D pipeline 2812 includes programmable and fixed function elements that perform various tasks and/or spawn execution threads to a 3D/Media sub-system 2815. While 3D pipeline 2812 can be used to perform media operations, in at least one embodiment, GPE 2810 also includes a media pipeline 2816 that is used to perform media operations, such as video post-processing and image enhancement. In at least one embodiment, media pipeline 2816 includes fixed function or programmable logic units to perform one or more specialized media operations, such as video decode acceleration, video de-interlacing, and video encode acceleration in place of, or on behalf of, video codec engine 2806.
In at least one embodiment, media pipeline 2816 additionally includes a thread spawning unit to spawn threads for execution on 3D/Media sub-system 2815. In at least one embodiment, spawned threads perform computations for media operations on one or more graphics execution units included in 3D/Media sub-system 2815. In at least one embodiment, 3D/Media subsystem 2815 includes logic for executing threads spawned by 3D pipeline 2812 and media pipeline 2816. In at least one embodiment, 3D pipeline 2812 and media pipeline 2816 send thread execution requests to 3D/Media subsystem 2815, which includes thread dispatch logic for arbitrating and dispatching various requests to available thread execution resources. In at least one embodiment, execution resources include an array of graphics execution units to process 3D and media threads. In at least one embodiment, 3D/Media subsystem 2815 includes one or more internal caches for thread instructions and data. In at least one embodiment, subsystem 2815 also includes shared memory, including registers and addressable memory, to share data between threads and to store output data. Inference and/or training logic 715 are used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 715 are provided herein in conjunction with FIGS. 7A and/or 7B. In at least one embodiment, portions or all of inference and/or training logic 715 may be incorporated into graphics processor 2800. For example, in at least one embodiment, training and/or inferencing techniques described herein may use one or more of ALUs embodied in 3D pipeline 2812. Moreover, in at least one embodiment, inferencing and/or training operations described herein may be done using logic other than logic illustrated in FIG. 7A or 7B.
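The arbitration just described, in which thread dispatch logic accepts execution requests from the 3D and media pipelines and hands them to whatever thread execution resources are free, can be sketched in miniature. This is an illustrative model only; the names (`Dispatcher`, `submit`, `dispatch`, `retire`) are invented for this example and are not taken from the embodiments above.

```python
from collections import deque

class Dispatcher:
    """Toy model of 3D/Media subsystem thread-dispatch arbitration."""
    def __init__(self, num_units):
        self.pending = deque()                 # FIFO of thread execution requests
        self.free_units = list(range(num_units))
        self.running = {}                      # unit id -> request currently executing

    def submit(self, request):
        """A pipeline (3D or media) posts a thread execution request."""
        self.pending.append(request)

    def dispatch(self):
        """Assign pending requests to available execution units, oldest first."""
        started = []
        while self.pending and self.free_units:
            unit = self.free_units.pop(0)
            req = self.pending.popleft()
            self.running[unit] = req
            started.append((unit, req))
        return started

    def retire(self, unit):
        """A unit finishes its thread and becomes available again."""
        del self.running[unit]
        self.free_units.append(unit)

d = Dispatcher(num_units=2)
for r in ["3d-vertex", "media-decode", "3d-pixel"]:
    d.submit(r)
started = d.dispatch()   # two units available, so the third request must wait
```

With two units, the first two requests start immediately and the third is dispatched only after a unit retires, which mirrors the arbitration role the text assigns to the thread dispatch logic.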
In at least one embodiment, weight parameters may be stored in on-chip or off-chip memory and/or registers (shown or not shown) that configure ALUs of graphics processor 2800 to perform one or more machine learning algorithms, neural network architectures, use cases, or training techniques described herein. In at least one embodiment, such components can be used to perform image segmentation as discussed above, such as with respect to a system of FIG. 1, 2A, or 3, or processes of FIG. 4 or 5. In at least one embodiment, this can include verifying whether a network of computing nodes is properly configured based, at least in part, on one or more expected data strings generated by the network of computing nodes. FIG. 29 is a block diagram of a graphics processing engine 2910 of a graphics processor in accordance with at least one embodiment. In at least one embodiment, graphics processing engine (GPE) 2910 is a version of GPE 2810 shown in FIG. 28. In at least one embodiment, a media pipeline 2916 is optional and may not be explicitly included within GPE 2910. In at least one embodiment, a separate media and/or image processor is coupled to GPE 2910. In at least one embodiment, GPE 2910 is coupled to or includes a command streamer 2903, which provides a command stream to a 3D pipeline 2912 and/or media pipeline 2916. In at least one embodiment, command streamer 2903 is coupled to memory, which can be system memory, or one or more of internal cache memory and shared cache memory. In at least one embodiment, command streamer 2903 receives commands from memory and sends commands to 3D pipeline 2912 and/or media pipeline 2916. In at least one embodiment, commands are instructions, primitives, or micro-operations fetched from a ring buffer, which stores commands for 3D pipeline 2912 and media pipeline 2916. In at least one embodiment, a ring buffer can additionally include batch command buffers storing batches of multiple commands.
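A command streamer that fetches commands from a ring buffer and routes them to the 3D or media pipeline, as described above, can be modeled loosely as follows. `CommandRing`, `write`, `fetch`, and `stream` are hypothetical names for illustration; a hardware ring buffer uses registered head/tail pointers and memory fetches rather than Python lists.

```python
class CommandRing:
    """Toy ring buffer from which a command streamer fetches commands."""
    def __init__(self, size):
        self.buf = [None] * size
        self.head = 0     # next slot the streamer reads
        self.tail = 0     # next slot software writes
        self.count = 0

    def write(self, cmd):
        if self.count == len(self.buf):
            raise BufferError("ring full")
        self.buf[self.tail] = cmd
        self.tail = (self.tail + 1) % len(self.buf)
        self.count += 1

    def fetch(self):
        if self.count == 0:
            return None
        cmd = self.buf[self.head]
        self.head = (self.head + 1) % len(self.buf)
        self.count -= 1
        return cmd

def stream(ring, pipelines):
    """Drain the ring, routing each (pipeline, opcode) command by its tag."""
    routed = {name: [] for name in pipelines}
    while (cmd := ring.fetch()) is not None:
        routed[cmd[0]].append(cmd[1])
    return routed

ring = CommandRing(4)
ring.write(("3d", "DRAW"))
ring.write(("media", "DECODE"))
ring.write(("3d", "DISPATCH"))
out = stream(ring, ["3d", "media"])
```

Commands reach each pipeline in submission order, and the modular head/tail arithmetic is what makes the buffer a ring rather than a simple queue.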
In at least one embodiment, commands for 3D pipeline 2912 can also include references to data stored in memory, such as, but not limited to, vertex and geometry data for 3D pipeline 2912 and/or image data and memory objects for media pipeline 2916. In at least one embodiment, 3D pipeline 2912 and media pipeline 2916 process commands and data by performing operations or by dispatching one or more execution threads to a graphics core array 2914. In at least one embodiment, graphics core array 2914 includes one or more blocks of graphics cores (e.g., graphics core(s) 2915A, graphics core(s) 2915B), each block including one or more graphics cores. In at least one embodiment, graphics core(s) 2915A, 2915B may be referred to as execution units ("EUs"). In at least one embodiment, each graphics core includes a set of graphics execution resources that includes general-purpose and graphics-specific execution logic to perform graphics and compute operations, as well as fixed function texture processing and/or machine learning and artificial intelligence acceleration logic, including inference and/or training logic 715 in FIG. 7A and FIG. 7B. In at least one embodiment, 3D pipeline 2912 includes fixed function and programmable logic to process one or more shader programs, such as vertex shaders, geometry shaders, pixel shaders, fragment shaders, compute shaders, or other shader programs, by processing instructions and dispatching execution threads to graphics core array 2914. In at least one embodiment, graphics core array 2914 provides a unified block of execution resources for use in processing shader programs. In at least one embodiment, a multi-purpose execution logic (e.g., execution units) within graphics core(s) 2915A-2915B of graphics core array 2914 includes support for various 3D API shader languages and can execute multiple simultaneous execution threads associated with multiple shaders.
In at least one embodiment, graphics core array 2914 also includes execution logic to perform media functions, such as video and/or image processing. In at least one embodiment, execution units additionally include general-purpose logic that is programmable to perform parallel general-purpose computational operations, in addition to graphics processing operations. In at least one embodiment, threads executing on graphics core array 2914 can output data to memory in a unified return buffer (URB) 2918. In at least one embodiment, URB 2918 can store data for multiple threads. In at least one embodiment, URB 2918 may be used to send data between different threads executing on graphics core array 2914. In at least one embodiment, URB 2918 may additionally be used for synchronization between threads on graphics core array 2914 and fixed function logic within shared function logic 2920. In at least one embodiment, graphics core array 2914 is scalable, such that graphics core array 2914 includes a variable number of graphics cores, each having a variable number of execution units based on a target power and performance level of GPE 2910. In at least one embodiment, execution resources are dynamically scalable, such that execution resources may be enabled or disabled as needed. In at least one embodiment, graphics core array 2914 is coupled to shared function logic 2920 that includes multiple resources that are shared between graphics cores in graphics core array 2914. In at least one embodiment, shared functions performed by shared function logic 2920 are embodied in hardware logic units that provide specialized supplemental functionality to graphics core array 2914. In at least one embodiment, shared function logic 2920 includes but is not limited to a sampler unit 2921, a math unit 2922, and inter-thread communication (ITC) logic 2923. In at least one embodiment, one or more cache(s) 2925 are included in, or coupled to, shared function logic 2920.
In at least one embodiment, a shared function is used if demand for a specialized function is insufficient for inclusion within graphics core array 2914. In at least one embodiment, a single instantiation of a specialized function is used in shared function logic 2920 and shared among other execution resources within graphics core array 2914. In at least one embodiment, specific shared functions within shared function logic 2920 that are used extensively by graphics core array 2914 may be included within shared function logic 2926 within graphics core array 2914. In at least one embodiment, shared function logic 2926 within graphics core array 2914 can include some or all logic within shared function logic 2920. In at least one embodiment, all logic elements within shared function logic 2920 may be duplicated within shared function logic 2926 of graphics core array 2914. In at least one embodiment, shared function logic 2920 is excluded in favor of shared function logic 2926 within graphics core array 2914. Inference and/or training logic 715 are used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 715 are provided herein in conjunction with FIGS. 7A and/or 7B. In at least one embodiment, portions or all of inference and/or training logic 715 may be incorporated into graphics processor 2910. For example, in at least one embodiment, training and/or inferencing techniques described herein may use one or more of ALUs embodied in 3D pipeline 2912, graphics core(s) 2915, shared function logic 2926, shared function logic 2920, or other logic in FIG. 29. Moreover, in at least one embodiment, inferencing and/or training operations described herein may be done using logic other than logic illustrated in FIG. 7A or 7B.
In at least one embodiment, weight parameters may be stored in on-chip or off-chip memory and/or registers (shown or not shown) that configure ALUs of graphics processor 2910 to perform one or more machine learning algorithms, neural network architectures, use cases, or training techniques described herein. In at least one embodiment, such components can be used to perform image segmentation as discussed above, such as with respect to a system of FIG. 1, 2A, or 3, or processes of FIG. 4 or 5. In at least one embodiment, this can include verifying whether a network of computing nodes is properly configured based, at least in part, on one or more expected data strings generated by the network of computing nodes. FIG. 30 is a block diagram of hardware logic of a graphics processor core 3000, according to at least one embodiment described herein. In at least one embodiment, graphics processor core 3000 includes graphics core 1800. In at least one embodiment, graphics processor core 3000 is included within a graphics core array. In at least one embodiment, graphics processor core 3000, sometimes referred to as a core slice, can be one or multiple graphics cores within a modular graphics processor. In at least one embodiment, graphics processor core 3000 is exemplary of one graphics core slice, and a graphics processor as described herein may include multiple graphics core slices based on target power and performance envelopes. In at least one embodiment, each graphics core 3000 can include a fixed function block 3030 coupled with multiple sub-cores 3001A-3001F, also referred to as sub-slices, that include modular blocks of general-purpose and fixed function logic. In at least one embodiment, fixed function block 3030 includes a geometry and fixed function pipeline 3036 that can be shared by all sub-cores in graphics processor 3000, for example, in lower performance and/or lower power graphics processor implementations.
In at least one embodiment, geometry and fixed function pipeline 3036 includes a 3D fixed function pipeline, a video front-end unit, a thread spawner and thread dispatcher, and a unified return buffer manager, which manages unified return buffers. In at least one embodiment, fixed function block 3030 also includes a graphics SoC interface 3037, a graphics microcontroller 3038, and a media pipeline 3039. In at least one embodiment, graphics SoC interface 3037 provides an interface between graphics core 3000 and other processor cores within a system on a chip integrated circuit. In at least one embodiment, graphics microcontroller 3038 is a programmable sub-processor that is configurable to manage various functions of graphics processor 3000, including thread dispatch, scheduling, and pre-emption. In at least one embodiment, media pipeline 3039 includes logic to facilitate decoding, encoding, pre-processing, and/or post-processing of multimedia data, including image and video data. In at least one embodiment, media pipeline 3039 implements media operations via requests to compute or sampling logic within sub-cores 3001A-3001F. In at least one embodiment, SoC interface 3037 enables graphics core 3000 to communicate with general-purpose application processor cores (e.g., CPUs) and/or other components within an SoC, including memory hierarchy elements such as a shared last level cache memory, system RAM, and/or embedded on-chip or on-package DRAM. In at least one embodiment, SoC interface 3037 can also enable communication with fixed function devices within an SoC, such as camera imaging pipelines, and enables use of and/or implements global memory atomics that may be shared between graphics core 3000 and CPUs within an SoC. In at least one embodiment, graphics SoC interface 3037 can also implement power management controls for graphics processor core 3000 and enable an interface between a clock domain of graphics processor core 3000 and other clock domains within an SoC.
In at least one embodiment, SoC interface 3037 enables receipt of command buffers from a command streamer and global thread dispatcher that are configured to provide commands and instructions to each of one or more graphics cores within a graphics processor. In at least one embodiment, commands and instructions can be dispatched to media pipeline 3039, when media operations are to be performed, or a geometry and fixed function pipeline (e.g., geometry and fixed function pipeline 3036, and/or a geometry and fixed function pipeline 3014) when graphics processing operations are to be performed. In at least one embodiment, graphics microcontroller 3038 can be configured to perform various scheduling and management tasks for graphics core 3000. In at least one embodiment, graphics microcontroller 3038 can perform graphics and/or compute workload scheduling on various graphics parallel engines within execution unit (EU) arrays 3002A-3002F, 3004A-3004F within sub-cores 3001A-3001F. In at least one embodiment, host software executing on a CPU core of an SoC including graphics core 3000 can submit workloads to one of multiple graphics processor paths, which invokes a scheduling operation on an appropriate graphics engine. In at least one embodiment, scheduling operations include determining which workload to run next, submitting a workload to a command streamer, pre-empting existing workloads running on an engine, monitoring progress of a workload, and notifying host software when a workload is complete. In at least one embodiment, graphics microcontroller 3038 can also facilitate low-power or idle states for graphics core 3000, providing graphics core 3000 with an ability to save and restore registers within graphics core 3000 across low-power state transitions independently from an operating system and/or graphics driver software on a system. In at least one embodiment, graphics core 3000 may have greater than or fewer than illustrated sub-cores 3001A-3001F, up to N modular sub-cores.
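The scheduling duties listed above, deciding which workload runs next, handing it to a command streamer, and notifying host software on completion, can be sketched as a toy scheduler. The class and method names (`MicrocontrollerScheduler`, `submit`, `next_workload`, `run_one`) are illustrative assumptions, not part of the described embodiments, and the priority ordering shown is one arbitrary policy among many a real microcontroller could apply.

```python
import heapq

class MicrocontrollerScheduler:
    """Toy sketch of the microcontroller's scheduling loop."""
    def __init__(self):
        self.queue = []        # (priority, seq, workload); lower priority value runs first
        self.seq = 0           # tie-breaker preserving submission order
        self.completed = []    # stands in for host-software completion notifications

    def submit(self, workload, priority=0):
        """Host software submits a workload for scheduling."""
        heapq.heappush(self.queue, (priority, self.seq, workload))
        self.seq += 1

    def next_workload(self):
        """Determine which workload to run next."""
        return heapq.heappop(self.queue)[2] if self.queue else None

    def run_one(self):
        """Run the next workload to completion and record the notification."""
        w = self.next_workload()
        if w is not None:
            self.completed.append(w)
        return w

s = MicrocontrollerScheduler()
s.submit("compute-batch", priority=1)
s.submit("3d-frame", priority=0)
first = s.run_one()   # the lower-priority-value 3D frame is chosen first
```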
For each set of N sub-cores, in at least one embodiment, graphics core 3000 can also include shared function logic 3010, shared and/or cache memory 3012, geometry/fixed function pipeline 3014, as well as additional fixed function logic 3016 to accelerate various graphics and compute processing operations. In at least one embodiment, shared function logic 3010 can include logic units (e.g., sampler, math, and/or inter-thread communication logic) that can be shared by each N sub-cores within graphics core 3000. In at least one embodiment, shared and/or cache memory 3012 can be a last-level cache for N sub-cores 3001A-3001F within graphics core 3000 and can also serve as shared memory that is accessible by multiple sub-cores. In at least one embodiment, geometry/fixed function pipeline 3014 can be included instead of geometry/fixed function pipeline 3036 within fixed function block 3030 and can include similar logic units. In at least one embodiment, graphics core 3000 includes additional fixed function logic 3016 that can include various fixed function acceleration logic for use by graphics core 3000. In at least one embodiment, additional fixed function logic 3016 includes an additional geometry pipeline for use in position-only shading. In position-only shading, at least two geometry pipelines exist: a full geometry pipeline within geometry and fixed function pipelines 3014, 3036, and a cull pipeline, which is an additional geometry pipeline that may be included within additional fixed function logic 3016. In at least one embodiment, a cull pipeline is a trimmed-down version of a full geometry pipeline. In at least one embodiment, a full pipeline and a cull pipeline can execute different instances of an application, each instance having a separate context. In at least one embodiment, position-only shading can hide long cull runs of discarded triangles, enabling shading to be completed earlier in some instances.
For example, in at least one embodiment, cull pipeline logic within additional fixed function logic 3016 can execute position shaders in parallel with a main application and generally generates critical results faster than a full pipeline, as a cull pipeline fetches and shades position attributes of vertices, without performing rasterization and rendering of pixels to a frame buffer. In at least one embodiment, a cull pipeline can use generated critical results to compute visibility information for all triangles without regard to whether those triangles are culled. In at least one embodiment, a full pipeline (which in this instance may be referred to as a replay pipeline) can consume visibility information to skip culled triangles to shade only visible triangles that are finally passed to a rasterization phase. In at least one embodiment, additional fixed function logic 3016 can also include machine-learning acceleration logic, such as fixed function matrix multiplication logic, for implementations including optimizations for machine learning training or inferencing. In at least one embodiment, each graphics sub-core 3001A-3001F includes a set of execution resources that may be used to perform graphics, media, and compute operations in response to requests by graphics pipeline, media pipeline, or shader programs. In at least one embodiment, graphics sub-cores 3001A-3001F include multiple EU arrays 3002A-3002F, 3004A-3004F, thread dispatch and inter-thread communication (TD/IC) logic 3003A-3003F, a 3D (e.g., texture) sampler 3005A-3005F, a media sampler 3006A-3006F, a shader processor 3007A-3007F, and shared local memory (SLM) 3008A-3008F.
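The cull-then-replay flow above can be illustrated with a deliberately tiny model: a position-only pass marks each triangle visible or culled, and a replay pass shades only the visible ones. The function names and the one-dimensional "viewport" test are invented for this sketch; a real cull pipeline operates on transformed vertex positions against full clip volumes.

```python
def cull_pass(triangles, viewport=(0.0, 1.0)):
    """Position-only pass: a triangle is visible if any vertex falls in the viewport.
    Each triangle is a tuple of 1-D vertex positions (a toy stand-in for clip tests)."""
    lo, hi = viewport
    return [any(lo <= x <= hi for x in tri) for tri in triangles]

def shade(tri):
    """Stand-in for full vertex/pixel shading of one triangle."""
    return ("shaded", tri)

def replay_pass(triangles, visible):
    """Replay (full) pipeline: consume visibility info, shade only visible triangles."""
    return [shade(tri) for tri, vis in zip(triangles, visible) if vis]

tris = [(0.1, 0.2, 0.3),   # fully inside the viewport
        (2.0, 3.0, 4.0),   # fully outside -> culled, never shaded
        (0.9, 1.5, 2.0)]   # partially inside -> kept
vis = cull_pass(tris)
shaded = replay_pass(tris, vis)
```

The point of the split is visible even in the toy: the expensive `shade` step runs on two triangles instead of three, while the cheap position-only pass touched all of them.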
In at least one embodiment, EU arrays 3002A-3002F, 3004A-3004F each include multiple execution units, which are general-purpose graphics processing units capable of performing floating-point and integer/fixed-point logic operations in service of a graphics, media, or compute operation, including graphics, media, or compute shader programs. In at least one embodiment, TD/IC logic 3003A-3003F performs local thread dispatch and thread control operations for execution units within a sub-core and facilitates communication between threads executing on execution units of a sub-core. In at least one embodiment, 3D samplers 3005A-3005F can read texture or other 3D graphics related data into memory. In at least one embodiment, 3D samplers can read texture data differently based on a configured sample state and texture format associated with a given texture. In at least one embodiment, media samplers 3006A-3006F can perform similar read operations based on a type and format associated with media data. In at least one embodiment, each graphics sub-core 3001A-3001F can alternately include a unified 3D and media sampler. In at least one embodiment, threads executing on execution units within each of sub-cores 3001A-3001F can make use of shared local memory 3008A-3008F within each sub-core, to enable threads executing within a thread group to execute using a common pool of on-chip memory. Inference and/or training logic 715 are used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 715 are provided herein in conjunction with FIGS. 7A and/or 7B. In at least one embodiment, portions or all of inference and/or training logic 715 may be incorporated into graphics processor 3000.
For example, in at least one embodiment, training and/or inferencing techniques described herein may use one or more of ALUs embodied in a 3D pipeline, graphics microcontroller 3038, geometry and fixed function pipelines 3014 and 3036, or other logic in FIG. 30. Moreover, in at least one embodiment, inferencing and/or training operations described herein may be done using logic other than logic illustrated in FIG. 7A or 7B. In at least one embodiment, weight parameters may be stored in on-chip or off-chip memory and/or registers (shown or not shown) that configure ALUs of graphics processor 3000 to perform one or more machine learning algorithms, neural network architectures, use cases, or training techniques described herein. In at least one embodiment, such components can be used to perform image segmentation as discussed above, such as with respect to a system of FIG. 1, 2A, or 3, or processes of FIG. 4 or 5. In at least one embodiment, this can include verifying whether a network of computing nodes is properly configured based, at least in part, on one or more expected data strings generated by the network of computing nodes. FIGS. 31A-31B illustrate thread execution logic 3100 including an array of processing elements of a graphics processor core according to at least one embodiment. FIG. 31A illustrates at least one embodiment, in which thread execution logic 3100 is used. FIG. 31B illustrates exemplary internal details of a graphics execution unit 3108, according to at least one embodiment. As illustrated in FIG. 31A, in at least one embodiment, thread execution logic 3100 includes a shader processor 3102, a thread dispatcher 3104, an instruction cache 3106, a scalable execution unit array including a plurality of execution units 3107A-3107N and 3108A-3108N, a sampler 3110, a data cache 3112, and a data port 3114.
In at least one embodiment, a scalable execution unit array can dynamically scale by enabling or disabling one or more execution units (e.g., any of execution units 3108A-N or 3107A-N) based on computational requirements of a workload, for example. In at least one embodiment, scalable execution units are interconnected via an interconnect fabric that links to each execution unit. In at least one embodiment, thread execution logic 3100 includes one or more connections to memory, such as system memory or cache memory, through one or more of instruction cache 3106, data port 3114, sampler 3110, and execution units 3107 or 3108. In at least one embodiment, each execution unit (e.g., 3107A) is a stand-alone programmable general-purpose computational unit that is capable of executing multiple simultaneous hardware threads while processing multiple data elements in parallel for each thread. In at least one embodiment, the array of execution units 3107 and/or 3108 is scalable to include any number of individual execution units. In at least one embodiment, execution units 3107 and/or 3108 are primarily used to execute shader programs. In at least one embodiment, shader processor 3102 can process various shader programs and dispatch execution threads associated with shader programs via a thread dispatcher 3104. In at least one embodiment, thread dispatcher 3104 includes logic to arbitrate thread initiation requests from graphics and media pipelines and instantiate requested threads on one or more execution units in execution units 3107 and/or 3108. For example, in at least one embodiment, a geometry pipeline can dispatch vertex, tessellation, or geometry shaders to thread execution logic for processing. In at least one embodiment, thread dispatcher 3104 can also process runtime thread spawning requests from executing shader programs.
In at least one embodiment, execution units 3107 and/or 3108 support an instruction set that includes native support for many standard 3D graphics shader instructions, such that shader programs from graphics libraries (e.g., Direct 3D and OpenGL) are executed with a minimal translation. In at least one embodiment, execution units support vertex and geometry processing (e.g., vertex programs, geometry programs, and/or vertex shaders), pixel processing (e.g., pixel shaders, fragment shaders) and general-purpose processing (e.g., compute and media shaders). In at least one embodiment, each of execution units 3107 and/or 3108, which include one or more arithmetic logic units (ALUs), is capable of multi-issue single instruction multiple data (SIMD) execution, and multi-threaded operation enables an efficient execution environment despite higher-latency memory accesses. In at least one embodiment, each hardware thread within each execution unit has a dedicated high-bandwidth register file and associated independent thread state. In at least one embodiment, execution is multi-issue per clock to pipelines capable of integer, single and double precision floating point operations, SIMD branch capability, logical operations, transcendental operations, and other miscellaneous operations. In at least one embodiment, while waiting for data from memory or one of shared functions, dependency logic within execution units 3107 and/or 3108 causes a waiting thread to sleep until requested data has been returned. In at least one embodiment, while an awaiting thread is sleeping, hardware resources may be devoted to processing other threads. For example, in at least one embodiment, during a delay associated with a vertex shader operation, an execution unit can perform operations for a pixel shader, fragment shader, or another type of shader program, including a different vertex shader.
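The latency-hiding behavior just described, where a thread sleeping on a memory return frees the unit to run other ready threads, can be simulated in a few lines. The `run` function and its thread records (`remaining`, `ready_at`) are invented for this sketch; they model one execution unit issuing one instruction per cycle, skipping any thread whose data has not yet arrived.

```python
def run(threads):
    """Toy latency-hiding loop: a thread 'asleep' on a memory load is skipped
    until its ready cycle; the unit runs whichever other thread is ready."""
    timeline = []
    cycle = 0
    while any(t["remaining"] for t in threads):
        ran = False
        for t in threads:
            if t["remaining"] and t["ready_at"] <= cycle:
                t["remaining"] -= 1          # issue one instruction for this thread
                timeline.append((cycle, t["name"]))
                ran = True
                break
        if not ran:
            timeline.append((cycle, "stall"))  # nothing ready: the unit idles
        cycle += 1
    return timeline

threads = [
    {"name": "vertex", "remaining": 2, "ready_at": 3},  # sleeping on a memory load
    {"name": "pixel",  "remaining": 3, "ready_at": 0},  # ready immediately
]
timeline = run(threads)
```

While the vertex thread waits for its data (cycles 0 through 2), the unit stays busy with the pixel thread, so the three-cycle memory delay produces no stall cycles at all.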
In at least one embodiment, each execution unit in execution units 3107 and/or 3108 operates on arrays of data elements. In at least one embodiment, a number of data elements is an "execution size," or number of channels for an instruction. In at least one embodiment, an execution channel is a logical unit of execution for data element access, masking, and flow control within instructions. In at least one embodiment, a number of channels may be independent of a number of physical arithmetic logic units (ALUs) or floating point units (FPUs) for a particular graphics processor. In at least one embodiment, execution units 3107 and/or 3108 support integer and floating-point data types. In at least one embodiment, an execution unit instruction set includes SIMD instructions. In at least one embodiment, various data elements can be stored as a packed data type in a register, and an execution unit will process various elements based on data size of elements. For example, in at least one embodiment, when operating on a 256-bit wide vector, 256 bits of a vector are stored in a register and an execution unit operates on a vector as four separate 64-bit packed data elements (Quad-Word (QW) size data elements), eight separate 32-bit packed data elements (Double Word (DW) size data elements), sixteen separate 16-bit packed data elements (Word (W) size data elements), or thirty-two separate 8-bit data elements (byte (B) size data elements). However, in at least one embodiment, different vector widths and register sizes are possible. In at least one embodiment, one or more execution units can be combined into a fused execution unit 3109A-3109N having thread control logic (3111A-3111N) that is common to fused EUs, such as execution unit 3107A fused with execution unit 3108A into fused execution unit 3109A. In at least one embodiment, multiple EUs can be fused into an EU group.
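The packed-element arithmetic in the example above (a 256-bit register viewed as 4 QW, 8 DW, 16 W, or 32 B elements) can be checked directly, and the standard-library `struct` module can reinterpret the same 32 bytes at two element widths the way a SIMD unit would. This is a software illustration of the data layout only, not of the execution units themselves.

```python
import struct

REGISTER_BITS = 256

def element_count(element_bits):
    """How many packed elements of a given width fit in a 256-bit register."""
    return REGISTER_BITS // element_bits

# Matches the widths in the text: QW = 64, DW = 32, W = 16, B = 8 bits.
counts = {bits: element_count(bits) for bits in (64, 32, 16, 8)}

# Reinterpret the same 32 bytes (256 bits) at two different element widths.
raw = struct.pack("<8I", *range(8))   # eight little-endian 32-bit double words
as_dw = struct.unpack("<8I", raw)     # view as 8 x DW
as_w = struct.unpack("<16H", raw)     # same bits viewed as 16 x W
```

Because the register contents do not change between views, double word 1 (value 1) reappears as the low word at word index 2 with a zero high word at index 3.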
In at least one embodiment, each EU in a fused EU group can be configured to execute a separate SIMD hardware thread, with a number of EUs in a fused EU group possibly varying according to various embodiments. In at least one embodiment, various SIMD widths can be performed per-EU, including but not limited to SIMD8, SIMD16, and SIMD32. In at least one embodiment, each fused graphics execution unit 3109A-3109N includes at least two execution units. For example, in at least one embodiment, fused execution unit 3109A includes a first EU 3107A, second EU 3108A, and thread control logic 3111A that is common to first EU 3107A and second EU 3108A. In at least one embodiment, thread control logic 3111A controls threads executed on fused graphics execution unit 3109A, allowing each EU within fused execution units 3109A-3109N to execute using a common instruction pointer register. In at least one embodiment, one or more internal instruction caches (e.g., 3106) are included in thread execution logic 3100 to cache thread instructions for execution units. In at least one embodiment, one or more data caches (e.g., 3112) are included to cache thread data during thread execution. In at least one embodiment, sampler 3110 is included to provide texture sampling for 3D operations and media sampling for media operations. In at least one embodiment, sampler 3110 includes specialized texture or media sampling functionality to process texture or media data during a sampling process before providing sampled data to an execution unit. During execution, in at least one embodiment, graphics and media pipelines send thread initiation requests to thread execution logic 3100 via thread spawning and dispatch logic. In at least one embodiment, once a group of geometric objects has been processed and rasterized into pixel data, pixel processor logic (e.g., pixel shader logic, fragment shader logic, etc.)
within shader processor3102is invoked to further compute output information and cause results to be written to output surfaces (e.g., color buffers, depth buffers, stencil buffers, etc.). In at least one embodiment, a pixel shader or a fragment shader calculates values of various vertex attributes that are to be interpolated across a rasterized object. In at least one embodiment, pixel processor logic within shader processor3102then executes an application programming interface (API)-supplied pixel or fragment shader program. In at least one embodiment, to execute a shader program, shader processor3102dispatches threads to an execution unit (e.g.,3108A) via thread dispatcher3104. In at least one embodiment, shader processor3102uses texture sampling logic in sampler3110to access texture data in texture maps stored in memory. In at least one embodiment, arithmetic operations on texture data and input geometry data compute pixel color data for each geometric fragment, or discard one or more pixels from further processing. In at least one embodiment, data port3114provides a memory access mechanism for thread execution logic3100to output processed data to memory for further processing on a graphics processor output pipeline. In at least one embodiment, data port3114includes or couples to one or more cache memories (e.g., data cache3112) to cache data for memory access via a data port. As illustrated inFIG.31B, in at least one embodiment, a graphics execution unit3108can include an instruction fetch unit3137, a general register file array (GRF)3124, an architectural register file array (ARF)3126, a thread arbiter3122, a send unit3130, a branch unit3132, a set of SIMD floating point units (FPUs)3134, and a set of dedicated integer SIMD ALUs3135. In at least one embodiment, GRF3124and ARF3126include a set of general register files and architecture register files associated with each simultaneous hardware thread that may be active in graphics execution unit3108.
In at least one embodiment, per thread architectural state is maintained in ARF3126, while data used during thread execution is stored in GRF3124. In at least one embodiment, execution state of each thread, including instruction pointers for each thread, can be held in thread-specific registers in ARF3126. In at least one embodiment, graphics execution unit3108has an architecture that is a combination of Simultaneous Multi-Threading (SMT) and fine-grained Interleaved Multi-Threading (IMT). In at least one embodiment, architecture has a modular configuration that can be fine-tuned at design time based on a target number of simultaneous threads and number of registers per execution unit, where execution unit resources are divided across logic used to execute multiple simultaneous threads. In at least one embodiment, graphics execution unit3108can co-issue multiple instructions, which may each be different instructions. In at least one embodiment, thread arbiter3122of graphics execution unit3108can dispatch instructions to one of send unit3130, branch unit3132, or SIMD FPU(s)3134for execution. In at least one embodiment, each execution thread can access 128 general-purpose registers within GRF3124, where each register can store 32 bytes, accessible as a SIMD 8-element vector of 32-bit data elements. In at least one embodiment, each execution unit thread has access to 4 kilobytes within GRF3124, although embodiments are not so limited, and greater or fewer register resources may be provided in other embodiments. In at least one embodiment, up to seven threads can execute simultaneously, although a number of threads per execution unit can also vary according to embodiments. In at least one embodiment, in which seven threads may access 4 kilobytes, GRF3124can store a total of 28 kilobytes.
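The register-file capacity described above follows from simple arithmetic, sketched below as an illustrative check (the constants mirror the figures in the text; none of this models real hardware behavior):

```python
# Sketch of the GRF capacity arithmetic described above: 128 registers of
# 32 bytes per thread gives 4 kilobytes per thread, and seven simultaneous
# threads give 28 kilobytes total. Illustrative only.

REGISTERS_PER_THREAD = 128
BYTES_PER_REGISTER = 32   # accessible as a SIMD 8-element vector of 32-bit elements
MAX_THREADS = 7

bytes_per_thread = REGISTERS_PER_THREAD * BYTES_PER_REGISTER
total_bytes = bytes_per_thread * MAX_THREADS

print(bytes_per_thread, total_bytes)
```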
In at least one embodiment, flexible addressing modes can permit registers to be addressed together to build effectively wider registers or to represent strided rectangular block data structures. In at least one embodiment, memory operations, sampler operations, and other longer-latency system communications are dispatched via “send” instructions that are executed by message passing to send unit3130. In at least one embodiment, branch instructions are dispatched to branch unit3132to facilitate SIMD divergence and eventual convergence. In at least one embodiment, graphics execution unit3108includes one or more SIMD floating point units (FPU(s))3134to perform floating-point operations. In at least one embodiment, FPU(s)3134also support integer computation. In at least one embodiment, FPU(s)3134can SIMD execute up to M number of 32-bit floating-point (or integer) operations, or SIMD execute up to 2M 16-bit integer or 16-bit floating-point operations. In at least one embodiment, at least one FPU provides extended math capability to support high-throughput transcendental math functions and double precision 64-bit floating-point. In at least one embodiment, a set of 8-bit integer SIMD ALUs3135are also present, and may be specifically optimized to perform operations associated with machine learning computations. In at least one embodiment, arrays of multiple instances of graphics execution unit3108can be instantiated in a graphics sub-core grouping (e.g., a sub-slice). In at least one embodiment, execution unit3108can execute instructions across a plurality of execution channels. In at least one embodiment, each thread executed on graphics execution unit3108is executed on a different channel. Inference and/or training logic715are used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic715are provided herein in conjunction withFIGS.7A and/or7B. 
In at least one embodiment, portions or all of inference and/or training logic715may be incorporated into thread execution logic3100. Moreover, in at least one embodiment, inferencing and/or training operations described herein may be done using logic other than logic illustrated inFIG.7A or7B. In at least one embodiment, weight parameters may be stored in on-chip or off-chip memory and/or registers (shown or not shown) that configure ALUs of thread execution logic3100to perform one or more machine learning algorithms, neural network architectures, use cases, or training techniques described herein. In at least one embodiment, such components can be used to perform image segmentation as discussed above, such as with respect to a system ofFIG.1,2A, or3, or processes ofFIG.4or5. In at least one embodiment, this can include verifying whether a network of computing nodes is properly configured based, at least in part, on one or more expected data strings generated by the network of computing nodes. FIG.32illustrates a parallel processing unit (“PPU”)3200, according to at least one embodiment. In at least one embodiment, PPU3200is configured with machine-readable code that, if executed by PPU3200, causes PPU3200to perform some or all of processes and techniques described throughout this disclosure. In at least one embodiment, PPU3200is a multi-threaded processor that is implemented on one or more integrated circuit devices and that utilizes multithreading as a latency-hiding technique designed to process computer-readable instructions (also referred to as machine-readable instructions or simply instructions) on multiple threads in parallel. In at least one embodiment, PPU3200includes one or more graphics cores1800. In at least one embodiment, a thread refers to a thread of execution and is an instantiation of a set of instructions configured to be executed by PPU3200.
In at least one embodiment, PPU3200is a graphics processing unit (“GPU”) configured to implement a graphics rendering pipeline for processing three-dimensional (“3D”) graphics data in order to generate two-dimensional (“2D”) image data for display on a display device such as a liquid crystal display (“LCD”) device. In at least one embodiment, PPU3200is utilized to perform computations such as linear algebra operations and machine-learning operations.FIG.32illustrates an example parallel processor for illustrative purposes only and should be construed as a non-limiting example of processor architectures contemplated within scope of this disclosure and that any suitable processor may be employed to supplement and/or substitute for same. In at least one embodiment, one or more PPUs3200are configured to accelerate High Performance Computing (“HPC”), data center, and machine learning applications. In at least one embodiment, PPU3200is configured to accelerate deep learning systems and applications including following non-limiting examples: autonomous vehicle platforms, deep learning, high-accuracy speech, image, text recognition systems, intelligent video analytics, molecular simulations, drug discovery, disease diagnosis, weather forecasting, big data analytics, astronomy, molecular dynamics simulation, financial modeling, robotics, factory automation, real-time language translation, online search optimizations, and personalized user recommendations, and more. In at least one embodiment, PPU3200includes, without limitation, an Input/Output (“I/O”) unit3206, a front-end unit3210, a scheduler (sequencer) unit3212, a work distribution unit3214, a hub3216, a crossbar (“XBar”)3220, one or more general processing clusters (“GPCs”)3218, and one or more partition units (“memory partition units”)3222. In at least one embodiment, PPU3200is connected to a host processor or other PPUs3200via one or more high-speed GPU interconnects (“GPU interconnects”)3208. 
In at least one embodiment, PPU3200is connected to a host processor or other peripheral devices via a system bus3202. In at least one embodiment, PPU3200is connected to a local memory comprising one or more memory devices (“memory”)3204. In at least one embodiment, memory devices3204include, without limitation, one or more dynamic random access memory (“DRAM”) devices. In at least one embodiment, one or more DRAM devices are configured and/or configurable as high-bandwidth memory (“HBM”) subsystems, with multiple DRAM dies stacked within each device. In at least one embodiment, high-speed GPU interconnect3208may refer to a wire-based multi-lane communications link that is used by systems to scale and include one or more PPUs3200combined with one or more central processing units (“CPUs”), supports cache coherence between PPUs3200and CPUs, and CPU mastering. In at least one embodiment, data and/or commands are transmitted by high-speed GPU interconnect3208through hub3216to/from other units of PPU3200such as one or more copy engines, video encoders, video decoders, power management units, and other components which may not be explicitly illustrated inFIG.32. In at least one embodiment, I/O unit3206is configured to transmit and receive communications (e.g., commands, data) from a host processor (not illustrated inFIG.32) over system bus3202. In at least one embodiment, I/O unit3206communicates with host processor directly via system bus3202or through one or more intermediate devices such as a memory bridge. In at least one embodiment, I/O unit3206may communicate with one or more other processors, such as one or more of PPUs3200via system bus3202. In at least one embodiment, I/O unit3206implements a Peripheral Component Interconnect Express (“PCIe”) interface for communications over a PCIe bus. In at least one embodiment, I/O unit3206implements interfaces for communicating with external devices. 
In at least one embodiment, I/O unit3206decodes packets received via system bus3202. In at least one embodiment, at least some packets represent commands configured to cause PPU3200to perform various operations. In at least one embodiment, I/O unit3206transmits decoded commands to various other units of PPU3200as specified by commands. In at least one embodiment, commands are transmitted to front-end unit3210and/or transmitted to hub3216or other units of PPU3200such as one or more copy engines, a video encoder, a video decoder, a power management unit, etc. (not explicitly illustrated inFIG.32). In at least one embodiment, I/O unit3206is configured to route communications between and among various logical units of PPU3200. In at least one embodiment, a program executed by host processor encodes a command stream in a buffer that provides workloads to PPU3200for processing. In at least one embodiment, a workload comprises instructions and data to be processed by those instructions. In at least one embodiment, a buffer is a region in a memory that is accessible (e.g., read/write) by both a host processor and PPU3200—a host interface unit may be configured to access that buffer in a system memory connected to system bus3202via memory requests transmitted over system bus3202by I/O unit3206. In at least one embodiment, a host processor writes a command stream to a buffer and then transmits a pointer to a start of a command stream to PPU3200such that front-end unit3210receives pointers to one or more command streams and manages one or more command streams, reading commands from command streams and forwarding commands to various units of PPU3200. In at least one embodiment, front-end unit3210is coupled to scheduler unit3212(which may be referred to as a sequencer unit, a thread sequencer, and/or an asynchronous compute engine) that configures various GPCs3218to process tasks defined by one or more command streams. 
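The command-stream handoff described above (a host writes commands into a shared buffer, then passes a pointer to the stream's start so the front-end unit can read and forward them) can be sketched as a minimal producer/consumer model. All class and method names here are hypothetical; this models the data flow only, not real hardware:

```python
# Minimal sketch of the command-stream pattern described above: a host
# writes commands into a buffer shared with the device, then hands the
# front end a pointer (index) to the start of the stream. Names are
# hypothetical illustrations.
from collections import deque

class CommandBuffer:
    def __init__(self, size: int):
        self.slots = [None] * size   # region readable/writable by host and PPU

    def write_stream(self, start: int, commands) -> int:
        """Host side: place a command stream into the buffer."""
        for offset, cmd in enumerate(commands):
            self.slots[start + offset] = cmd
        return start                  # pointer handed to the front-end unit

class FrontEnd:
    def __init__(self, buffer: CommandBuffer):
        self.buffer = buffer
        self.pending = deque()

    def receive_pointer(self, start: int, count: int):
        """Device side: read commands from the stream and queue them for dispatch."""
        for i in range(start, start + count):
            self.pending.append(self.buffer.slots[i])

buf = CommandBuffer(16)
ptr = buf.write_stream(0, ["SET_STATE", "DISPATCH", "FENCE"])
fe = FrontEnd(buf)
fe.receive_pointer(ptr, 3)
print(list(fe.pending))
```

The essential property mirrored here is that only a pointer crosses from host to device; the command payload stays in the shared buffer.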
In at least one embodiment, scheduler unit3212is configured to track state information related to various tasks managed by scheduler unit3212where state information may indicate which of GPCs3218a task is assigned to, whether task is active or inactive, a priority level associated with task, and so forth. In at least one embodiment, scheduler unit3212manages execution of a plurality of tasks on one or more of GPCs3218. In at least one embodiment, scheduler unit3212is coupled to work distribution unit3214that is configured to dispatch tasks for execution on GPCs3218. In at least one embodiment, work distribution unit3214tracks a number of scheduled tasks received from scheduler unit3212and work distribution unit3214manages a pending task pool and an active task pool for each of GPCs3218. In at least one embodiment, pending task pool comprises a number of slots (e.g., 32 slots) that contain tasks assigned to be processed by a particular GPC3218; an active task pool may comprise a number of slots (e.g., 4 slots) for tasks that are actively being processed by GPCs3218such that as one of GPCs3218completes execution of a task, that task is evicted from that active task pool for GPC3218and another task from a pending task pool is selected and scheduled for execution on GPC3218. In at least one embodiment, if an active task is idle on GPC3218, such as while waiting for a data dependency to be resolved, then that active task is evicted from GPC3218and returned to that pending task pool while another task in that pending task pool is selected and scheduled for execution on GPC3218. In at least one embodiment, work distribution unit3214communicates with one or more GPCs3218via XBar3220. In at least one embodiment, XBar3220is an interconnect network that couples many of units of PPU3200to other units of PPU3200and can be configured to couple work distribution unit3214to a particular GPC3218. 
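The pending/active pool bookkeeping described above can be sketched as follows. Each GPC's small active pool is refilled from a larger pending pool, and a task that completes (or idles on a dependency) is evicted so a pending task can take its slot. Pool sizes and all names are illustrative, not taken from any real scheduler:

```python
# Sketch of the pending/active task-pool management described above.
from collections import deque

class GpcTaskPools:
    def __init__(self, pending_slots=32, active_slots=4):
        self.pending = deque(maxlen=pending_slots)
        self.active = []
        self.active_slots = active_slots

    def submit(self, task):
        self.pending.append(task)
        self._refill()

    def _refill(self):
        # Move pending tasks into free active slots.
        while len(self.active) < self.active_slots and self.pending:
            self.active.append(self.pending.popleft())

    def complete(self, task):
        """Task finished: evict from the active pool and schedule a pending one."""
        self.active.remove(task)
        self._refill()

    def stall(self, task):
        """Task idle (e.g., waiting on a data dependency): return it to pending."""
        self.active.remove(task)
        self.pending.append(task)
        self._refill()

pools = GpcTaskPools(active_slots=2)
for t in ["t0", "t1", "t2", "t3"]:
    pools.submit(t)
pools.complete("t0")   # a pending task moves into the freed active slot
pools.stall("t1")      # t1 returns to pending; another task becomes active
print(pools.active, list(pools.pending))
```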
In at least one embodiment, one or more other units of PPU3200may also be connected to XBar3220via hub3216. In at least one embodiment, tasks are managed by scheduler unit3212and dispatched to one of GPCs3218by work distribution unit3214. In at least one embodiment, GPC3218is configured to process task and generate results. In at least one embodiment, results may be consumed by other tasks within GPC3218, routed to a different GPC3218via XBar3220, or stored in memory3204. In at least one embodiment, results can be written to memory3204via partition units3222, which implement a memory interface for reading and writing data to/from memory3204. In at least one embodiment, results can be transmitted to another PPU or CPU via high-speed GPU interconnect3208. In at least one embodiment, PPU3200includes, without limitation, a number U of partition units3222that is equal to a number of separate and distinct memory devices3204coupled to PPU3200, as described in more detail herein in conjunction withFIG.34. In at least one embodiment, a host processor executes a driver kernel that implements an application programming interface (“API”) that enables one or more applications executing on a host processor to schedule operations for execution on PPU3200. In at least one embodiment, multiple compute applications are simultaneously executed by PPU3200and PPU3200provides isolation, quality of service (“QoS”), and independent address spaces for multiple compute applications. In at least one embodiment, an application generates instructions (e.g., in form of API calls) that cause a driver kernel to generate one or more tasks for execution by PPU3200and that driver kernel outputs tasks to one or more streams being processed by PPU3200. In at least one embodiment, each task comprises one or more groups of related threads, which may be referred to as a warp, wavefront, and/or wave. 
In at least one embodiment, a warp, wavefront, and/or wave comprises a plurality of related threads (e.g., 32 threads) that can be executed in parallel. In at least one embodiment, cooperating threads can refer to a plurality of threads including instructions to perform task and that exchange data through shared memory. In at least one embodiment, threads and cooperating threads are described in more detail in conjunction withFIG.34. Inference and/or training logic715are used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic715are provided herein in conjunction withFIGS.7A and/or7B. In at least one embodiment, deep learning application processor is used to train a machine learning model, such as a neural network, to predict or infer information provided to PPU3200. In at least one embodiment, deep learning application processor is used to infer or predict information based on a trained machine learning model (e.g., neural network) that has been trained by another processor or system or by PPU3200. In at least one embodiment, PPU3200may be used to perform one or more neural network use cases described herein. In at least one embodiment, such components can be used to perform image segmentation as discussed above, such as with respect to a system ofFIG.1,2A, or3, or processes ofFIG.4or5. In at least one embodiment, this can include verifying whether a network of computing nodes is properly configured based, at least in part, on one or more expected data strings generated by the network of computing nodes. FIG.33illustrates a general processing cluster (“GPC”)3300, according to at least one embodiment. In at least one embodiment, GPC3300is GPC3218ofFIG.32.
In at least one embodiment, each GPC3300includes, without limitation, a number of hardware units for processing tasks and each GPC3300includes, without limitation, a pipeline manager3302, a pre-raster operations unit (“preROP”)3304, a raster engine3308, a work distribution crossbar (“WDX”)3316, a memory management unit (“MMU”)3318, one or more Data Processing Clusters (“DPCs”)3306, and any suitable combination of parts. In at least one embodiment, operation of GPC3300is controlled by pipeline manager3302. In at least one embodiment, pipeline manager3302manages configuration of one or more DPCs3306for processing tasks allocated to GPC3300. In at least one embodiment, pipeline manager3302configures at least one of one or more DPCs3306to implement at least a portion of a graphics rendering pipeline. In at least one embodiment, DPC3306is configured to execute a vertex shader program on a programmable streaming multi-processor (“SM”)3314. In at least one embodiment, pipeline manager3302is configured to route packets received from a work distribution unit to appropriate logical units within GPC3300, in at least one embodiment, and some packets may be routed to fixed function hardware units in preROP3304and/or raster engine3308while other packets may be routed to DPCs3306for processing by a primitive engine3312or SM3314. In at least one embodiment, pipeline manager3302configures at least one of DPCs3306to implement a neural network model and/or a computing pipeline. In at least one embodiment, preROP unit3304is configured, in at least one embodiment, to route data generated by raster engine3308and DPCs3306to a Raster Operations (“ROP”) unit in partition unit3222, described in more detail above in conjunction withFIG.32. In at least one embodiment, preROP unit3304is configured to perform optimizations for color blending, organize pixel data, perform address translations, and more. 
In at least one embodiment, raster engine3308includes, without limitation, a number of fixed function hardware units configured to perform various raster operations, in at least one embodiment, and raster engine3308includes, without limitation, a setup engine, a coarse raster engine, a culling engine, a clipping engine, a fine raster engine, a tile coalescing engine, and any suitable combination thereof. In at least one embodiment, setup engine receives transformed vertices and generates plane equations associated with geometric primitive defined by vertices; plane equations are transmitted to a coarse raster engine to generate coverage information (e.g., an x, y coverage mask for a tile) for primitive; output of a coarse raster engine is transmitted to a culling engine where fragments associated with a primitive that fail a z-test are culled, and transmitted to a clipping engine where fragments lying outside a viewing frustum are clipped. In at least one embodiment, fragments that survive clipping and culling are passed to a fine raster engine to generate attributes for pixel fragments based on plane equations generated by a setup engine. In at least one embodiment, an output of raster engine3308comprises fragments to be processed by any suitable entity, such as by a fragment shader implemented within DPC3306. In at least one embodiment, each DPC3306included in GPC3300comprises, without limitation, an M-Pipe Controller (“MPC”)3310; primitive engine3312; one or more SMs3314; and any suitable combination thereof. In at least one embodiment, MPC3310controls operation of DPC3306, routing packets received from pipeline manager3302to appropriate units in DPC3306. In at least one embodiment, packets associated with a vertex are routed to primitive engine3312, which is configured to fetch vertex attributes associated with a vertex from memory; in contrast, packets associated with a shader program may be transmitted to SM3314. 
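The coverage-mask step described above can be sketched with a deliberately simplified example: a coarse raster stage produces an x,y coverage mask indicating which pixels of a tile a primitive touches. Here an axis-aligned rectangle stands in for a real primitive so the inside test stays trivial; this is illustrative only, not a real rasterizer:

```python
# Sketch of an x,y coverage mask for a tile, as produced by a coarse raster
# stage. An axis-aligned rectangle stands in for a real primitive; one bit
# per pixel, row-major, set when the pixel center lies inside the box.

TILE = 4  # 4x4 pixel tile

def coverage_mask(x0, y0, x1, y1):
    """Return a TILE*TILE-bit mask of covered pixel centers."""
    mask = 0
    for y in range(TILE):
        for x in range(TILE):
            cx, cy = x + 0.5, y + 0.5          # pixel center
            if x0 <= cx < x1 and y0 <= cy < y1:
                mask |= 1 << (y * TILE + x)
    return mask

# Rectangle covering the top-left 2x2 pixels of the tile:
mask = coverage_mask(0, 0, 2, 2)
print(f"{mask:016b}")
```

A real coarse raster engine evaluates the primitive's edge (plane) equations rather than a box test, but the output shape (a per-tile coverage mask consumed by culling and fine rasterization) is the same.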
In at least one embodiment, SM3314comprises, without limitation, a programmable streaming processor that is configured to process tasks represented by a number of threads. In at least one embodiment, SM3314is multi-threaded and configured to execute a plurality of threads (e.g., 32 threads) from a particular group of threads concurrently and implements a Single-Instruction, Multiple-Data (“SIMD”) architecture where each thread in a group of threads (e.g., a warp, wavefront, wave) is configured to process a different set of data based on same set of instructions. In at least one embodiment, all threads in group of threads execute a common set of instructions. In at least one embodiment, SM3314implements a Single-Instruction, Multiple Thread (“SIMT”) architecture wherein each thread in a group of threads is configured to process a different set of data based on that common set of instructions, but where individual threads in a group of threads are allowed to diverge during execution. In at least one embodiment, a program counter, call stack, and execution state is maintained for each warp (which may be referred to as wavefronts and/or waves), enabling concurrency between warps and serial execution within warps when threads within a warp diverge. In another embodiment, a program counter, call stack, and execution state is maintained for each individual thread, enabling equal concurrency between all threads, within and between warps. In at least one embodiment, execution state is maintained for each individual thread and threads executing common instructions may be converged and executed in parallel for better efficiency. At least one embodiment of SM3314is described in more detail herein. In at least one embodiment, MMU3318provides an interface between GPC3300and a memory partition unit (e.g., partition unit3222ofFIG.32) and MMU3318provides translation of virtual addresses into physical addresses, memory protection, and arbitration of memory requests. 
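The SIMT divergence behavior described above can be sketched in a few lines: every thread in a warp sees the same instruction stream, but an active mask selects which lanes execute each side of a branch; the two sides run serially and the warp reconverges afterward. The warp size and condition below are illustrative only:

```python
# Sketch of SIMT branch divergence and reconvergence within one warp.

WARP_SIZE = 8  # small warp for readability (real warps are e.g. 32 threads)

def run_warp(data):
    results = [None] * WARP_SIZE
    taken = [x % 2 == 0 for x in data]      # per-lane branch condition (active mask)

    # "if" side: only lanes whose mask bit is set execute
    for lane in range(WARP_SIZE):
        if taken[lane]:
            results[lane] = data[lane] * 10
    # "else" side: remaining lanes execute while the others sit idle
    for lane in range(WARP_SIZE):
        if not taken[lane]:
            results[lane] = data[lane] + 1
    # reconvergence point: all lanes proceed together again
    return results

print(run_warp(list(range(WARP_SIZE))))
```

The serial execution of the two loop bodies mirrors why divergent branches cost throughput: each side occupies the whole warp while only part of it does useful work.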
In at least one embodiment, MMU3318provides one or more translation lookaside buffers (“TLBs”) for performing translation of virtual addresses into physical addresses in memory. Inference and/or training logic715are used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic715are provided herein in conjunction withFIGS.7A and/or7B. In at least one embodiment, deep learning application processor is used to train a machine learning model, such as a neural network, to predict or infer information provided to GPC3300. In at least one embodiment, GPC3300is used to infer or predict information based on a trained machine learning model (e.g., neural network) that has been trained by another processor or system or by GPC3300. In at least one embodiment, GPC3300may be used to perform one or more neural network use cases described herein. In at least one embodiment, such components can be used to perform image segmentation as discussed above, such as with respect to a system ofFIG.1,2A, or3, or processes ofFIG.4or5. In at least one embodiment, this can include verifying whether a network of computing nodes is properly configured based, at least in part, on one or more expected data strings generated by the network of computing nodes. FIG.34illustrates a memory partition unit3400of a parallel processing unit (“PPU”), in accordance with at least one embodiment. In at least one embodiment, memory partition unit3400includes, without limitation, a Raster Operations (“ROP”) unit3402, a level two (“L2”) cache3404, a memory interface3406, and any suitable combination thereof. In at least one embodiment, memory interface3406is coupled to memory. In at least one embodiment, memory interface3406may implement 32, 64, 128, 1024-bit data buses, or the like, for high-speed data transfer.
In at least one embodiment, PPU incorporates U memory interfaces3406where U is a positive integer, with one memory interface3406per pair of partition units3400, where each pair of partition units3400is connected to a corresponding memory device. For example, in at least one embodiment, PPU may be connected to up to Y memory devices, such as high bandwidth memory stacks or graphics double-data-rate, version 5, synchronous dynamic random access memory (“GDDR5 SDRAM”). In at least one embodiment, memory interface3406implements a high bandwidth memory second generation (“HBM2”) memory interface and Y equals half of U. In at least one embodiment, HBM2 memory stacks are located on a physical package with a PPU, providing substantial power and area savings compared with conventional GDDR5 SDRAM systems. In at least one embodiment, each HBM2 stack includes, without limitation, four memory dies with Y=4, with each HBM2 stack including two 128-bit channels per die for a total of 8 channels and a data bus width of 1024 bits. In at least one embodiment, that memory supports Single-Error Correcting Double-Error Detecting (“SECDED”) Error Correction Code (“ECC”) to protect data. In at least one embodiment, ECC can provide higher reliability for compute applications that are sensitive to data corruption. In at least one embodiment, PPU implements a multi-level memory hierarchy. In at least one embodiment, memory partition unit3400supports a unified memory to provide a single unified virtual address space for central processing unit (“CPU”) and PPU memory, enabling data sharing between virtual memory systems. In at least one embodiment frequency of accesses by a PPU to a memory located on other processors is traced to ensure that memory pages are moved to physical memory of PPU that is accessing pages more frequently. 
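The HBM2 stack arithmetic above reduces to a quick illustrative check: four dies per stack with two 128-bit channels per die gives 8 channels and a 1024-bit data bus per stack:

```python
# Quick check of the HBM2 stack arithmetic described above. Illustrative only.

DIES_PER_STACK = 4
CHANNELS_PER_DIE = 2
BITS_PER_CHANNEL = 128

channels_total = DIES_PER_STACK * CHANNELS_PER_DIE
bus_width_bits = channels_total * BITS_PER_CHANNEL

print(channels_total, bus_width_bits)
```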
In at least one embodiment, high-speed GPU interconnect3208supports address translation services allowing PPU to directly access a CPU's page tables and providing full access to CPU memory by a PPU. In at least one embodiment, copy engines transfer data between multiple PPUs or between PPUs and CPUs. In at least one embodiment, copy engines can generate page faults for addresses that are not mapped into page tables and memory partition unit3400then services page faults, mapping addresses into page table, after which copy engine performs a transfer. In at least one embodiment, memory is pinned (i.e., non-pageable) for multiple copy engine operations between multiple processors, substantially reducing available memory. In at least one embodiment, with hardware page faulting, addresses can be passed to copy engines without regard as to whether memory pages are resident, and a copy process is transparent. Data from memory3204ofFIG.32or other system memory is fetched by memory partition unit3400and stored in L2 cache3404, which is located on-chip and is shared between various GPCs, in accordance with at least one embodiment. Each memory partition unit3400, in at least one embodiment, includes, without limitation, at least a portion of L2 cache associated with a corresponding memory device. In at least one embodiment, lower level caches are implemented in various units within GPCs. In at least one embodiment, each of SMs3314inFIG.33may implement a Level 1 (“L1”) cache wherein that L1 cache is private memory that is dedicated to a particular SM3314and data from L2 cache3404is fetched and stored in each L1 cache for processing in functional units of SMs3314. In at least one embodiment, L2 cache3404is coupled to memory interface3406and XBar3220shown inFIG.32. ROP unit3402performs graphics raster operations related to pixel color, such as color compression, pixel blending, and more, in at least one embodiment. 
ROP unit3402, in at least one embodiment, implements depth testing in conjunction with raster engine3308, receiving a depth for a sample location associated with a pixel fragment from a culling engine of raster engine3308. In at least one embodiment, depth is tested against a corresponding depth in a depth buffer for a sample location associated with a fragment. In at least one embodiment, if that fragment passes that depth test for that sample location, then ROP unit3402updates depth buffer and transmits a result of that depth test to raster engine3308. It will be appreciated that a number of partition units3400may be different than a number of GPCs and, therefore, each ROP unit3402can, in at least one embodiment, be coupled to each GPC. In at least one embodiment, ROP unit3402tracks packets received from different GPCs and determines whether a result generated by ROP unit3402is to be routed through XBar3220. FIG.35illustrates a streaming multi-processor (“SM”)3500, according to at least one embodiment. In at least one embodiment, SM3500is SM ofFIG.33. In at least one embodiment, SM3500includes, without limitation, an instruction cache3502, one or more scheduler units3504(which may be referred to as sequencer units), a register file3508, one or more processing cores (“cores”)3510, one or more special function units (“SFUs”)3512, one or more load/store units (“LSUs”)3514, an interconnect network3516, a shared memory/level one (“L1”) cache3518, and/or any suitable combination thereof. In at least one embodiment, LSUs3514perform load and store operations corresponding to loading/storing data (e.g., instructions) to perform an operation (e.g., perform an API, an API call).
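The depth-test step described above can be sketched as follows. A fragment's depth is compared against the depth buffer at its sample location, and the buffer is updated only when the fragment passes; a "less-than" comparison is assumed here purely for illustration:

```python
# Sketch of a ROP-style depth test: compare a fragment's depth against the
# depth buffer at its sample location, updating the buffer only on a pass.
# The less-than compare direction is an assumption for illustration.

def depth_test(depth_buffer, x, y, fragment_depth):
    """Return True (and update the buffer) if the fragment is closer."""
    if fragment_depth < depth_buffer[y][x]:
        depth_buffer[y][x] = fragment_depth
        return True
    return False

buf = [[1.0, 1.0], [1.0, 1.0]]          # cleared to the far plane
assert depth_test(buf, 0, 0, 0.5)       # closer fragment passes and writes
assert not depth_test(buf, 0, 0, 0.7)   # farther fragment is rejected
print(buf[0][0])
```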
In at least one embodiment, a work distribution unit dispatches tasks for execution on general processing clusters (“GPCs”) of parallel processing units (“PPUs”) and each task is allocated to a particular Data Processing Cluster (“DPC”) within a GPC and, if a task is associated with a shader program, that task is allocated to one of SMs3500(which may be referred to as CUs and/or slices). In at least one embodiment, scheduler unit3504(which may be referred to as a sequencer and/or asynchronous compute engine) receives tasks from a work distribution unit and manages instruction scheduling for one or more thread blocks assigned to SM3500. In at least one embodiment, scheduler unit3504schedules thread blocks for execution as warps (which may be referred to as wavefronts and/or waves) of parallel threads, wherein each thread block is allocated at least one warp. In at least one embodiment, each warp executes 32 parallel threads. In at least one embodiment, scheduler unit3504manages a plurality of different thread blocks, allocating warps to different thread blocks and then dispatching instructions from plurality of different cooperative groups to various functional units (e.g., processing cores3510, SFUs3512, and LSUs3514) during each clock cycle. In at least one embodiment, Cooperative Groups (which may also be referred to as wavefronts and/or waves) may refer to a programming model for organizing groups of communicating threads that allows developers to express granularity at which threads are communicating, enabling expression of richer, more efficient parallel decompositions. In at least one embodiment, cooperative launch APIs support synchronization amongst thread blocks for execution of parallel algorithms. In at least one embodiment, applications of conventional programming models provide a single, simple construct for synchronizing cooperating threads: a barrier across all threads of a thread block (e.g., syncthreads( ) function). 
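The warp partitioning described above can be sketched as follows. This is an illustrative Python sketch (not vendor code): a thread block of T threads is divided into warps of 32 parallel threads, and a block whose size is not a multiple of the warp width leaves a partially filled final warp.

```python
# Illustrative sketch of partitioning a thread block into warps of
# parallel threads, as described above. WARP_SIZE of 32 is assumed;
# wavefronts on some hardware use a different width.
WARP_SIZE = 32

def partition_into_warps(block_size, warp_size=WARP_SIZE):
    """Return a list of warps, each a list of thread IDs within the block."""
    return [list(range(start, min(start + warp_size, block_size)))
            for start in range(0, block_size, warp_size)]

warps = partition_into_warps(128)
print(len(warps))   # 128 threads -> 4 warps
print(warps[1][0])  # first thread of warp 1 is thread 32
```

A block of 100 threads, for example, yields three full warps and a final warp of 4 threads.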
However, in at least one embodiment, programmers may define groups of threads at smaller than thread block granularities and synchronize within defined groups to enable greater performance, design flexibility, and software reuse in form of collective group-wide function interfaces. In at least one embodiment, Cooperative Groups enables programmers to define groups of threads explicitly at sub-block (i.e., as small as a single thread) and multi-block granularities, and to perform collective operations such as synchronization on threads in a cooperative group. In at least one embodiment, that programming model supports clean composition across software boundaries, so that libraries and utility functions can synchronize safely within their local context without having to make assumptions about convergence. In at least one embodiment, Cooperative Groups primitives enable new patterns of cooperative parallelism, including, without limitation, producer-consumer parallelism, opportunistic parallelism, and global synchronization across an entire grid of thread blocks. In at least one embodiment, a dispatch unit3506is configured to transmit instructions to one or more functional units and scheduler unit3504and includes, without limitation, two dispatch units3506that enable two different instructions from a common warp to be dispatched during each clock cycle. In at least one embodiment, each scheduler unit3504includes a single dispatch unit3506or additional dispatch units3506. In at least one embodiment, each SM3500(which may be referred to as a CU and/or slice), in at least one embodiment, includes, without limitation, register file3508that provides a set of registers for functional units of SM3500. In at least one embodiment, register file3508is divided between each functional unit such that each functional unit is allocated a dedicated portion of register file3508. 
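The sub-block synchronization idea above can be illustrated with a loose Python analogy (this is not CUDA Cooperative Groups code): threads of a block are split into tiles smaller than the whole block, and each tile synchronizes only on its own barrier rather than on a block-wide one.

```python
import threading

# Python analogy (illustrative, not CUDA) for synchronizing groups of
# threads at smaller-than-block granularity: one barrier per tile, so
# threads wait only for members of their own tile.
def run_block(block_size=8, tile_size=4):
    results = []
    lock = threading.Lock()
    barriers = [threading.Barrier(tile_size)
                for _ in range(block_size // tile_size)]

    def thread_fn(tid):
        tile = tid // tile_size
        barriers[tile].wait()  # synchronize within this tile only
        with lock:
            results.append((tile, tid))

    threads = [threading.Thread(target=thread_fn, args=(t,))
               for t in range(block_size)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

out = run_block()
print(len(out))  # all 8 threads completed
```

Because each barrier spans only one tile, a tile can make progress without assumptions about threads outside it, which mirrors the composability property described above.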
In at least one embodiment, register file3508is divided between different warps being executed by SM3500and register file3508provides temporary storage for operands connected to data paths of functional units. In at least one embodiment, each SM3500comprises, without limitation, a plurality of L processing cores3510, where L is a positive integer. In at least one embodiment, SM3500includes, without limitation, a large number (e.g., 128 or more) of distinct processing cores3510. In at least one embodiment, each processing core3510includes, without limitation, a fully-pipelined, single-precision, double-precision, and/or mixed precision processing unit that includes, without limitation, a floating point arithmetic logic unit and an integer arithmetic logic unit. In at least one embodiment, floating point arithmetic logic units implement IEEE 754-2008 standard for floating point arithmetic. In at least one embodiment, processing cores3510include, without limitation, 64 single-precision (32-bit) floating point cores, 64 integer cores, 32 double-precision (64-bit) floating point cores, and 8 tensor cores. Tensor cores are configured to perform matrix operations in accordance with at least one embodiment. In at least one embodiment, one or more tensor cores are included in processing cores3510. In at least one embodiment, tensor cores are configured to perform deep learning matrix arithmetic, such as convolution operations for neural network training and inferencing. In at least one embodiment, each tensor core operates on a 4×4 matrix and performs a matrix multiply and accumulate operation, D=A×B+C, where A, B, C, and D are 4×4 matrices. In at least one embodiment, matrix multiply inputs A and B are 16-bit floating point matrices and accumulation matrices C and D are 16-bit floating point or 32-bit floating point matrices. In at least one embodiment, tensor cores operate on 16-bit floating point input data with 32-bit floating point accumulation. 
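The single fused operation described above, D=A×B+C on 4×4 matrices, can be sketched numerically. This is a plain-Python illustration of the arithmetic shape only; real tensor cores take 16-bit floating point inputs A and B and accumulate C and D in 16-bit or 32-bit floating point.

```python
# Sketch of one tensor core multiply-accumulate step: D = A x B + C
# on 4x4 matrices (plain Python floats stand in for FP16/FP32 here).
def mma_4x4(A, B, C):
    """4x4 matrix multiply-accumulate: returns D = A @ B + C."""
    n = 4
    return [[sum(A[i][k] * B[k][j] for k in range(n)) + C[i][j]
             for j in range(n)] for i in range(n)]

I = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
C = [[2.0] * 4 for _ in range(4)]
D = mma_4x4(I, I, C)
print(D[0][0])  # identity times identity, plus 2 -> 3.0
```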
In at least one embodiment, 16-bit floating point multiply uses 64 operations and results in a full precision product that is then accumulated using 32-bit floating point addition with other intermediate products for a 4×4×4 matrix multiply. Tensor cores are used to perform much larger two-dimensional or higher dimensional matrix operations, built up from these smaller elements, in at least one embodiment. In at least one embodiment, an API, such as a CUDA 9 C++ API, exposes specialized matrix load, matrix multiply and accumulate, and matrix store operations to efficiently use tensor cores from a CUDA-C++ program. In at least one embodiment, at a CUDA level, a warp-level interface assumes 16×16 size matrices spanning all 32 threads of warp (which may be referred to as a wavefront and/or wave). In at least one embodiment, each SM3500comprises, without limitation, M SFUs3512that perform special functions (e.g., attribute evaluation, reciprocal square root, and like). In at least one embodiment, SFUs3512include, without limitation, a tree traversal unit configured to traverse a hierarchical tree data structure. In at least one embodiment, SFUs3512include, without limitation, a texture unit configured to perform texture map filtering operations. In at least one embodiment, texture units are configured to load texture maps (e.g., a 2D array of texels) from memory and sample texture maps to produce sampled texture values for use in shader programs executed by SM3500. In at least one embodiment, texture maps are stored in shared memory/L1 cache3518. In at least one embodiment, texture units implement texture operations such as filtering operations using mip-maps (e.g., texture maps of varying levels of detail), in accordance with at least one embodiment. In at least one embodiment, each SM3500includes, without limitation, two texture units. 
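The composition of larger matrix operations from the 4×4×4 primitive, as with the warp-level 16×16 interface mentioned above, can be sketched as a tiled multiply. This is illustrative Python, not the CUDA warp-level API: a 16×16 product is accumulated from 4×4 multiply-accumulate steps.

```python
# Sketch (plain Python, not CUDA WMMA): a 16x16 matrix multiply built
# from 4x4 multiply-accumulate steps, mirroring how larger operations
# are composed from the 4x4x4 tensor core primitive described above.
N, T = 16, 4  # matrix size and tile size

def matmul_tiled(A, B):
    D = [[0.0] * N for _ in range(N)]
    for bi in range(0, N, T):
        for bj in range(0, N, T):
            for bk in range(0, N, T):
                # one 4x4x4 step: D_tile += A_tile @ B_tile
                for i in range(T):
                    for j in range(T):
                        D[bi + i][bj + j] += sum(
                            A[bi + i][bk + k] * B[bk + k][bj + j]
                            for k in range(T))
    return D

A = [[1.0] * N for _ in range(N)]
I = [[1.0 if i == j else 0.0 for j in range(N)] for i in range(N)]
print(matmul_tiled(A, I)[0][0])  # A times identity reproduces A -> 1.0
```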
Each SM3500comprises, without limitation, N LSUs3514that implement load and store operations between shared memory/L1 cache3518and register file3508, in at least one embodiment. Interconnect network3516connects each functional unit to register file3508and LSU3514to register file3508and shared memory/L1 cache3518in at least one embodiment. In at least one embodiment, interconnect network3516is a crossbar that can be configured to connect any functional units to any registers in register file3508and connect LSUs3514to register file3508and memory locations in shared memory/L1 cache3518. In at least one embodiment, shared memory/L1 cache3518is an array of on-chip memory that allows for data storage and communication between SM3500and primitive engine and between threads in SM3500, in at least one embodiment. In at least one embodiment, shared memory/L1 cache3518comprises, without limitation, 128 KB of storage capacity and is in a path from SM3500to a partition unit. In at least one embodiment, shared memory/L1 cache3518, in at least one embodiment, is used to cache reads and writes. In at least one embodiment, one or more of shared memory/L1 cache3518, L2 cache, and memory are backing stores. Combining data cache and shared memory functionality into a single memory block provides improved performance for both types of memory accesses, in at least one embodiment. In at least one embodiment, capacity is used or is usable as a cache by programs that do not use shared memory, such as if shared memory is configured to use half of a capacity, and texture and load/store operations can use remaining capacity. Integration within shared memory/L1 cache3518enables shared memory/L1 cache3518to function as a high-throughput conduit for streaming data while simultaneously providing high-bandwidth and low-latency access to frequently reused data, in accordance with at least one embodiment. 
In at least one embodiment, when configured for general purpose parallel computation, a simpler configuration can be used compared with graphics processing. In at least one embodiment, fixed function graphics processing units are bypassed, creating a much simpler programming model. In a general purpose parallel computation configuration, a work distribution unit assigns and distributes blocks of threads directly to DPCs, in at least one embodiment. In at least one embodiment, threads in a block execute a common program, using a unique thread ID in calculation to ensure each thread generates unique results, using SM3500to execute program and perform calculations, shared memory/L1 cache3518to communicate between threads, and LSU3514to read and write global memory through shared memory/L1 cache3518and memory partition unit. In at least one embodiment, when configured for general purpose parallel computation, SM3500writes commands that scheduler unit3504can use to launch new work on DPCs. In at least one embodiment, a PPU is included in or coupled to a desktop computer, a laptop computer, a tablet computer, servers, supercomputers, a smart-phone (e.g., a wireless, hand-held device), personal digital assistant (“PDA”), a digital camera, a vehicle, a head mounted display, a hand-held electronic device, and more. In at least one embodiment, a PPU is embodied on a single semiconductor substrate. In at least one embodiment, a PPU is included in a system-on-a-chip (“SoC”) along with one or more other devices such as additional PPUs, memory, a reduced instruction set computer (“RISC”) CPU, a memory management unit (“MMU”), a digital-to-analog converter (“DAC”), and like. In at least one embodiment, a PPU may be included on a graphics card that includes one or more memory devices. In at least one embodiment, that graphics card may be configured to interface with a PCIe slot on a motherboard of a desktop computer. 
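The unique-thread-ID calculation mentioned above can be sketched as follows (illustrative Python, not CUDA): each thread combines its block index and thread index so that every thread in the grid addresses a distinct element.

```python
# Sketch of the unique thread ID used in general purpose parallel
# computation: global ID = block index * block size + thread index,
# so each thread in the grid generates unique results.
def global_ids(grid_dim, block_dim):
    return [block * block_dim + thread
            for block in range(grid_dim)
            for thread in range(block_dim)]

ids = global_ids(grid_dim=4, block_dim=64)
print(len(ids), len(set(ids)))  # 256 threads, all IDs unique
```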
In at least one embodiment, that PPU may be an integrated graphics processing unit (“iGPU”) included in chipset of a motherboard. Inference and/or training logic715are used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic715are provided herein in conjunction withFIGS.7A and/or7B. In at least one embodiment, deep learning application processor is used to train a machine learning model, such as a neural network, to predict or infer information provided to SM3500. In at least one embodiment, SM3500is used to infer or predict information based on a trained machine learning model (e.g., neural network) that has been trained by another processor or system or by SM3500. In at least one embodiment, SM3500may be used to perform one or more neural network use cases described herein. In at least one embodiment, such components can be used to perform image segmentation as discussed above, such as with respect to a system ofFIG.1,2A, or3, or processes ofFIG.4or5. In at least one embodiment, this can include verifying whether a network of computing nodes is properly configured based, at least in part, on one or more expected data strings generated by the network of computing nodes. Embodiments are disclosed related to a virtualized computing platform for advanced computing, such as image inferencing and image processing in medical applications. Without limitation, embodiments may include radiography, magnetic resonance imaging (MRI), nuclear medicine, ultrasound, sonography, elastography, photoacoustic imaging, tomography, echocardiography, functional near-infrared spectroscopy, and magnetic particle imaging, or a combination thereof. 
In at least one embodiment, a virtualized computing platform and associated processes described herein may additionally or alternatively be used, without limitation, in forensic science analysis, sub-surface detection and imaging (e.g., oil exploration, archaeology, paleontology, etc.), topography, oceanography, geology, osteology, meteorology, intelligent area or object tracking and monitoring, sensor data processing (e.g., RADAR, SONAR, LIDAR, etc.), and/or genomics and gene sequencing. With reference toFIG.36,FIG.36is an example data flow diagram for a process3600of generating and deploying an image processing and inferencing pipeline, in accordance with at least one embodiment. In at least one embodiment, process3600may be deployed for use with imaging devices, processing devices, genomics devices, gene sequencing devices, radiology devices, and/or other device types at one or more facilities3602, such as medical facilities, hospitals, healthcare institutes, clinics, research or diagnostic labs, etc. In at least one embodiment, process3600may be deployed to perform genomics analysis and inferencing on sequencing data. Examples of genomic analyses that may be performed using systems and processes described herein include, without limitation, variant calling, mutation detection, and gene expression quantification. In at least one embodiment, process3600may be executed within a training system3604and/or a deployment system3606. In at least one embodiment, training system3604may be used to perform training, deployment, and implementation of machine learning models (e.g., neural networks, object detection algorithms, computer vision algorithms, etc.) for use in deployment system3606. In at least one embodiment, deployment system3606may be configured to offload processing and compute resources among a distributed computing environment to reduce infrastructure requirements at facility3602. 
In at least one embodiment, deployment system3606may provide a streamlined platform for selecting, customizing, and implementing virtual instruments for use with imaging devices (e.g., MRI, CT Scan, X-Ray, Ultrasound, etc.) or sequencing devices at facility3602. In at least one embodiment, virtual instruments may include software-defined applications for performing one or more processing operations with respect to imaging data generated by imaging devices, sequencing devices, radiology devices, and/or other device types. In at least one embodiment, one or more applications in a pipeline may use or call upon services (e.g., inference, visualization, compute, AI, etc.) of deployment system3606during execution of applications. In at least one embodiment, some of applications used in advanced processing and inferencing pipelines may use machine learning models or other AI to perform one or more processing steps. In at least one embodiment, machine learning models may be trained at facility3602using data3608(such as imaging data) generated at facility3602(and stored on one or more picture archiving and communication system (PACS) servers at facility3602), may be trained using imaging or sequencing data3608from another facility or facilities (e.g., a different hospital, lab, clinic, etc.), or a combination thereof. In at least one embodiment, training system3604may be used to provide applications, services, and/or other resources for generating working, deployable machine learning models for deployment system3606. In at least one embodiment, a model registry3624may be backed by object storage that may support versioning and object metadata. In at least one embodiment, object storage may be accessible through, for example, a cloud storage (e.g., a cloud3726ofFIG.37) compatible application programming interface (API) from within a cloud platform. 
In at least one embodiment, machine learning models within model registry3624may be uploaded, listed, modified, or deleted by developers or partners of a system interacting with an API. In at least one embodiment, an API may provide access to methods that allow users with appropriate credentials to associate models with applications, such that models may be executed as part of execution of containerized instantiations of applications. In at least one embodiment, a training pipeline3704(FIG.37) may include a scenario where facility3602is training their own machine learning model, or has an existing machine learning model that needs to be optimized or updated. In at least one embodiment, imaging data3608generated by imaging device(s), sequencing devices, and/or other device types may be received. In at least one embodiment, once imaging data3608is received, AI-assisted annotation3610may be used to aid in generating annotations corresponding to imaging data3608to be used as ground truth data for a machine learning model. In at least one embodiment, AI-assisted annotation3610may include one or more machine learning models (e.g., convolutional neural networks (CNNs)) that may be trained to generate annotations corresponding to certain types of imaging data3608(e.g., from certain devices) and/or certain types of anomalies in imaging data3608. In at least one embodiment, AI-assisted annotations3610may then be used directly, or may be adjusted or fine-tuned using an annotation tool (e.g., by a researcher, a clinician, a doctor, a scientist, etc.), to generate ground truth data. In at least one embodiment, in some examples, labeled clinic data3612(e.g., annotations provided by a clinician, doctor, scientist, technician, etc.) may be used as ground truth data for training a machine learning model. In at least one embodiment, AI-assisted annotations3610, labeled clinic data3612, or a combination thereof may be used as ground truth data for training a machine learning model. 
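The registry operations described above (upload, list, delete, with versioning and object metadata) can be sketched as follows. All names here are hypothetical and illustrative; this is not an actual product API.

```python
# Hypothetical sketch of a model registry supporting upload, list, and
# delete with versioning and per-version metadata, as described for
# model registry 3624. Names are illustrative only.
class ModelRegistry:
    def __init__(self):
        self._models = {}  # model name -> list of version entries

    def upload(self, name, metadata):
        """Add a new version of a model; returns the version number."""
        versions = self._models.setdefault(name, [])
        versions.append({"version": len(versions) + 1,
                         "metadata": metadata})
        return versions[-1]["version"]

    def list(self, name):
        """List version numbers for a model name."""
        return [v["version"] for v in self._models.get(name, [])]

    def delete(self, name):
        """Remove a model and all of its versions."""
        self._models.pop(name, None)

reg = ModelRegistry()
reg.upload("organ-seg", {"modality": "CT"})
v = reg.upload("organ-seg", {"modality": "CT", "notes": "retrained"})
print(v, reg.list("organ-seg"))  # second upload becomes version 2
```

Backing such an interface with versioned object storage, as the text describes, would let each version entry reference a stored artifact plus its object metadata.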
In at least one embodiment, a trained machine learning model may be referred to as an output model3616, and may be used by deployment system3606, as described herein. In at least one embodiment, training pipeline3704(FIG.37) may include a scenario where facility3602needs a machine learning model for use in performing one or more processing tasks for one or more applications in deployment system3606, but facility3602may not currently have such a machine learning model (or may not have a model that is optimized, efficient, or effective for such purposes). In at least one embodiment, an existing machine learning model may be selected from model registry3624. In at least one embodiment, model registry3624may include machine learning models trained to perform a variety of different inference tasks on imaging data. In at least one embodiment, machine learning models in model registry3624may have been trained on imaging data from different facilities than facility3602(e.g., facilities remotely located). In at least one embodiment, machine learning models may have been trained on imaging data from one location, two locations, or any number of locations. In at least one embodiment, when being trained on imaging data from a specific location, training may take place at that location, or at least in a manner that protects confidentiality of imaging data or restricts imaging data from being transferred off-premises (e.g., to comply with HIPAA regulations, privacy regulations, etc.). In at least one embodiment, once a model is trained—or partially trained—at one location, a machine learning model may be added to model registry3624. In at least one embodiment, a machine learning model may then be retrained, or updated, at any number of other facilities, and a retrained or updated model may be made available in model registry3624. 
In at least one embodiment, a machine learning model may then be selected from model registry3624—and referred to as output model3616—and may be used in deployment system3606to perform one or more processing tasks for one or more applications of a deployment system. In at least one embodiment, training pipeline3704(FIG.37) may be used in a scenario that includes facility3602requiring a machine learning model for use in performing one or more processing tasks for one or more applications in deployment system3606, but facility3602may not currently have such a machine learning model (or may not have a model that is optimized, efficient, or effective for such purposes). In at least one embodiment, a machine learning model selected from model registry3624might not be fine-tuned or optimized for imaging data3608generated at facility3602because of differences in populations, genetic variations, robustness of training data used to train a machine learning model, diversity in anomalies of training data, and/or other issues with training data. In at least one embodiment, AI-assisted annotation3610may be used to aid in generating annotations corresponding to imaging data3608to be used as ground truth data for retraining or updating a machine learning model. In at least one embodiment, labeled clinic data3612(e.g., annotations provided by a clinician, doctor, scientist, etc.) may be used as ground truth data for training a machine learning model. In at least one embodiment, retraining or updating a machine learning model may be referred to as model training3614. In at least one embodiment, model training3614—e.g., AI-assisted annotations3610, labeled clinic data3612, or a combination thereof—may be used as ground truth data for retraining or updating a machine learning model. In at least one embodiment, deployment system3606may include software3618, services3620, hardware3622, and/or other components, features, and functionality. 
In at least one embodiment, deployment system3606may include a software “stack,” such that software3618may be built on top of services3620and may use services3620to perform some or all of processing tasks, and services3620and software3618may be built on top of hardware3622and use hardware3622to execute processing, storage, and/or other compute tasks of deployment system3606. In at least one embodiment, software3618may include any number of different containers, where each container may execute an instantiation of an application. In at least one embodiment, each application may perform one or more processing tasks in an advanced processing and inferencing pipeline (e.g., inferencing, object detection, feature detection, segmentation, image enhancement, calibration, etc.). In at least one embodiment, for each type of imaging device (e.g., CT, MRI, X-Ray, ultrasound, sonography, echocardiography, etc.), sequencing device, radiology device, genomics device, etc., there may be any number of containers that may perform a data processing task with respect to imaging data3608(or other data types, such as those described herein) generated by a device. In at least one embodiment, an advanced processing and inferencing pipeline may be defined based on selections of different containers that are desired or required for processing imaging data3608, in addition to containers that receive and configure imaging data for use by each container and/or for use by facility3602after processing through a pipeline (e.g., to convert outputs back to a usable data type, such as digital imaging and communications in medicine (DICOM) data, radiology information system (RIS) data, clinical information system (CIS) data, remote procedure call (RPC) data, data substantially compliant with a representation state transfer (REST) interface, data substantially compliant with a file-based interface, and/or raw data, for storage and display at facility3602). 
In at least one embodiment, a combination of containers within software3618(e.g., that make up a pipeline) may be referred to as a virtual instrument (as described in more detail herein), and a virtual instrument may leverage services3620and hardware3622to execute some or all processing tasks of applications instantiated in containers. In at least one embodiment, a data processing pipeline may receive input data (e.g., imaging data3608) in a DICOM, RIS, CIS, REST compliant, RPC, raw, and/or other format in response to an inference request (e.g., a request from a user of deployment system3606, such as a clinician, a doctor, a radiologist, etc.). In at least one embodiment, input data may be representative of one or more images, video, and/or other data representations generated by one or more imaging devices, sequencing devices, radiology devices, genomics devices, and/or other device types. In at least one embodiment, data may undergo pre-processing as part of data processing pipeline to prepare data for processing by one or more applications. In at least one embodiment, post-processing may be performed on an output of one or more inferencing tasks or other processing tasks of a pipeline to prepare an output data for a next application and/or to prepare output data for transmission and/or use by a user (e.g., as a response to an inference request). In at least one embodiment, inferencing tasks may be performed by one or more machine learning models, such as trained or deployed neural networks, which may include output models3616of training system3604. In at least one embodiment, tasks of data processing pipeline may be encapsulated in a container(s) that each represent a discrete, fully functional instantiation of an application and virtualized computing environment that is able to reference machine learning models. 
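The pipeline shape described above, pre-processing, one or more inferencing tasks, then post-processing, can be sketched as stages applied in order to the request data. This is an illustrative Python sketch with hypothetical stage names, not the deployment system's implementation.

```python
# Illustrative sketch of a data processing pipeline: each stage (e.g., a
# containerized application) transforms the data and passes it on, with
# pre-processing first and post-processing last. Stage names are
# hypothetical.
def run_pipeline(data, stages):
    for stage in stages:
        data = stage(data)
    return data

stages = [
    lambda d: {**d, "preprocessed": True},       # e.g., resize/normalize
    lambda d: {**d, "segmentation": "mask"},     # inferencing task
    lambda d: {**d, "report": "anomaly: none"},  # post-processing
]
result = run_pipeline({"image": "scan.dcm"}, stages)
print(result["preprocessed"], result["report"])
```

Encapsulating each stage as a container, as the text describes, keeps every step a discrete, fully functional instantiation that can be swapped per request.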
In at least one embodiment, containers or applications may be published into a private (e.g., limited access) area of a container registry (described in more detail herein), and trained or deployed models may be stored in model registry3624and associated with one or more applications. In at least one embodiment, images of applications (e.g., container images) may be available in a container registry, and once selected by a user from a container registry for deployment in a pipeline, an image may be used to generate a container for an instantiation of an application for use by a user's system. In at least one embodiment, developers (e.g., software developers, clinicians, doctors, etc.) may develop, publish, and store applications (e.g., as containers) for performing image processing and/or inferencing on supplied data. In at least one embodiment, development, publishing, and/or storing may be performed using a software development kit (SDK) associated with a system (e.g., to ensure that an application and/or container developed is compliant with or compatible with a system). In at least one embodiment, an application that is developed may be tested locally (e.g., at a first facility, on data from a first facility) with an SDK which may support at least some of services3620as a system (e.g., system3700ofFIG.37). In at least one embodiment, because DICOM objects may contain anywhere from one to hundreds of images or other data types, and due to a variation in data, a developer may be responsible for managing (e.g., setting constructs for, building pre-processing into an application, etc.) extraction and preparation of incoming DICOM data. In at least one embodiment, once validated by system3700(e.g., for accuracy, safety, patient privacy, etc.), an application may be available in a container registry for selection and/or implementation by a user (e.g., a hospital, clinic, lab, healthcare provider, etc.) 
to perform one or more processing tasks with respect to data at a facility (e.g., a second facility) of a user. In at least one embodiment, developers may then share applications or containers through a network for access and use by users of a system (e.g., system3700ofFIG.37). In at least one embodiment, completed and validated applications or containers may be stored in a container registry and associated machine learning models may be stored in model registry3624. In at least one embodiment, a requesting entity (e.g., a user at a medical facility)—who provides an inference or image processing request—may browse a container registry and/or model registry3624for an application, container, dataset, machine learning model, etc., select a desired combination of elements for inclusion in data processing pipeline, and submit an image processing request. In at least one embodiment, a request may include input data (and associated patient data, in some examples) that is necessary to perform a request, and/or may include a selection of application(s) and/or machine learning models to be executed in processing a request. In at least one embodiment, a request may then be passed to one or more components of deployment system3606(e.g., a cloud) to perform processing of data processing pipeline. In at least one embodiment, processing by deployment system3606may include referencing selected elements (e.g., applications, containers, models, etc.) from a container registry and/or model registry3624. In at least one embodiment, once results are generated by a pipeline, results may be returned to a user for reference (e.g., for viewing in a viewing application suite executing on a local, on-premises workstation or terminal). In at least one embodiment, a radiologist may receive results from a data processing pipeline including any number of application and/or containers, where results may include anomaly detection in X-rays, CT scans, MRIs, etc. 
In at least one embodiment, to aid in processing or execution of applications or containers in pipelines, services3620may be leveraged. In at least one embodiment, services3620may include compute services, artificial intelligence (AI) services, visualization services, and/or other service types. In at least one embodiment, services3620may provide functionality that is common to one or more applications in software3618, so functionality may be abstracted to a service that may be called upon or leveraged by applications. In at least one embodiment, functionality provided by services3620may run dynamically and more efficiently, while also scaling well by allowing applications to process data in parallel (e.g., using a parallel computing platform3730(FIG.37)). In at least one embodiment, rather than each application that shares a same functionality offered by a service3620being required to have a respective instance of service3620, service3620may be shared between and among various applications. In at least one embodiment, services may include an inference server or engine that may be used for executing detection or segmentation tasks, as non-limiting examples. In at least one embodiment, a model training service may be included that may provide machine learning model training and/or retraining capabilities. In at least one embodiment, a data augmentation service may further be included that may provide GPU accelerated data (e.g., DICOM, RIS, CIS, REST compliant, RPC, raw, etc.) extraction, resizing, scaling, and/or other augmentation. In at least one embodiment, a visualization service may be used that may add image rendering effects—such as ray-tracing, rasterization, denoising, sharpening, etc.—to add realism to two-dimensional (2D) and/or three-dimensional (3D) models. 
In at least one embodiment, virtual instrument services may be included that provide for beam-forming, segmentation, inferencing, imaging, and/or support for other applications within pipelines of virtual instruments. In at least one embodiment, where a service3620includes an AI service (e.g., an inference service), one or more machine learning models associated with an application for anomaly detection (e.g., tumors, growth abnormalities, scarring, etc.) may be executed by calling upon (e.g., as an API call) an inference service (e.g., an inference server) to execute machine learning model(s), or processing thereof, as part of application execution. In at least one embodiment, where another application includes one or more machine learning models for segmentation tasks, an application may call upon an inference service to execute machine learning models for performing one or more of processing operations associated with segmentation tasks. In at least one embodiment, software3618implementing advanced processing and inferencing pipeline that includes segmentation application and anomaly detection application may be streamlined because each application may call upon a same inference service to perform one or more inferencing tasks. In at least one embodiment, hardware3622may include GPUs, CPUs, graphics cards, an AI/deep learning system (e.g., an AI supercomputer, such as NVIDIA's DGX supercomputer system), a cloud platform, or a combination thereof. In at least one embodiment, different types of hardware3622may be used to provide efficient, purpose-built support for software3618and services3620in deployment system3606. 
In at least one embodiment, use of GPU processing may be implemented for processing locally (e.g., at facility3602), within an AI/deep learning system, in a cloud system, and/or in other processing components of deployment system3606to improve efficiency, accuracy, and efficacy of image processing, image reconstruction, segmentation, MRI exams, stroke or heart attack detection (e.g., in real-time), image quality in rendering, etc. In at least one embodiment, a facility may include imaging devices, genomics devices, sequencing devices, and/or other device types on-premises that may leverage GPUs to generate imaging data representative of a subject's anatomy. In at least one embodiment, software3618and/or services3620may be optimized for GPU processing with respect to deep learning, machine learning, and/or high-performance computing, as non-limiting examples. In at least one embodiment, at least some of computing environment of deployment system3606and/or training system3604may be executed in a datacenter using one or more supercomputers or high performance computing systems, with GPU optimized software (e.g., hardware and software combination of NVIDIA's DGX system). In at least one embodiment, datacenters may be compliant with provisions of HIPAA, such that receipt, processing, and transmission of imaging data and/or other patient data is securely handled with respect to privacy of patient data. In at least one embodiment, hardware3622may include any number of GPUs that may be called upon to perform processing of data in parallel, as described herein. In at least one embodiment, cloud platform may further include GPU processing for GPU-optimized execution of deep learning tasks, machine learning tasks, or other computing tasks. In at least one embodiment, cloud platform (e.g., NVIDIA's NGC) may be executed using an AI/deep learning supercomputer(s) and/or GPU-optimized software (e.g., as provided on NVIDIA's DGX systems) as a hardware abstraction and scaling platform. 
In at least one embodiment, cloud platform may integrate an application container clustering system or orchestration system (e.g., KUBERNETES) on multiple GPUs to enable seamless scaling and load balancing. In at least one embodiment, such components can be used to perform image segmentation as discussed above, such as with respect to a system ofFIG.1,2A, or3, or processes ofFIG.4or5. In at least one embodiment, this can include verifying whether a network of computing nodes is properly configured based, at least in part, on one or more expected data strings generated by the network of computing nodes. FIG.37is a system diagram for an example system3700for generating and deploying an imaging deployment pipeline, in accordance with at least one embodiment. In at least one embodiment, system3700may be used to implement process3600ofFIG.36and/or other processes including advanced processing and inferencing pipelines. In at least one embodiment, system3700may include training system3604and deployment system3606. In at least one embodiment, training system3604and deployment system3606may be implemented using software3618, services3620, and/or hardware3622, as described herein. In at least one embodiment, system3700(e.g., training system3604and/or deployment system3606) may be implemented in a cloud computing environment (e.g., using cloud3726). In at least one embodiment, system3700may be implemented locally with respect to a healthcare services facility, or as a combination of both cloud and local computing resources. In at least one embodiment, in embodiments where cloud computing is implemented, patient data may be separated from, or unprocessed by, one or more components of system3700that would render processing non-compliant with HIPAA and/or other data handling and privacy regulations or laws. In at least one embodiment, access to APIs in cloud3726may be restricted to authorized users through enacted security measures or protocols. 
In at least one embodiment, a security protocol may include web tokens that may be signed by an authentication (e.g., AuthN, AuthZ, Gluecon, etc.) service and may carry appropriate authorization. In at least one embodiment, APIs of virtual instruments (described herein), or other instantiations of system3700, may be restricted to a set of public IPs that have been vetted or authorized for interaction. In at least one embodiment, various components of system3700may communicate between and among one another using any of a variety of different network types, including but not limited to local area networks (LANs) and/or wide area networks (WANs) via wired and/or wireless communication protocols. In at least one embodiment, communication between facilities and components of system3700(e.g., for transmitting inference requests, for receiving results of inference requests, etc.) may be communicated over a data bus or data busses, wireless data protocols (Wi-Fi), wired data protocols (e.g., Ethernet), etc. In at least one embodiment, training system3604may execute training pipelines3704, similar to those described herein with respect toFIG.36. In at least one embodiment, where one or more machine learning models are to be used in deployment pipelines3710by deployment system3606, training pipelines3704may be used to train or retrain one or more (e.g., pre-trained) models, and/or implement one or more of pre-trained models3706(e.g., without a need for retraining or updating). In at least one embodiment, as a result of training pipelines3704, output model(s)3616may be generated. 
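The signed-web-token protocol described above can be illustrated with a small HMAC-based sketch: a token carries claims plus a signature, and verification rejects any token whose signature does not match. This is a minimal stand-in, not the disclosed authentication service; the signing key, claim names, and helper functions are hypothetical, and a production system would use an established token standard and key management.

```python
import base64
import hashlib
import hmac
import json

SECRET = b"demo-signing-key"  # hypothetical; a real system uses an auth service

def sign_token(claims):
    """Encode claims and append an HMAC-SHA256 signature."""
    payload = base64.urlsafe_b64encode(json.dumps(claims, sort_keys=True).encode())
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return payload.decode() + "." + sig

def verify_token(token):
    """Return the claims if the signature is valid, else None."""
    payload, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # token does not carry appropriate authorization
    return json.loads(base64.urlsafe_b64decode(payload))

token = sign_token({"sub": "radiologist-1", "scope": ["inference:request"]})
claims = verify_token(token)
```

An API gateway in front of the system would perform the `verify_token` step before admitting an inference request.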
In at least one embodiment, training pipelines3704may include any number of processing steps, such as but not limited to imaging data (or other input data) conversion or adaption (e.g., using DICOM adapter3702A to convert DICOM images to another format suitable for processing by respective machine learning models, such as Neuroimaging Informatics Technology Initiative (NIfTI) format), AI-assisted annotation3610, labeling or annotating of imaging data3608to generate labeled clinic data3612, model selection from a model registry, model training3614, training, retraining, or updating models, and/or other processing steps. In at least one embodiment, for different machine learning models used by deployment system3606, different training pipelines3704may be used. In at least one embodiment, training pipeline3704similar to a first example described with respect toFIG.36may be used for a first machine learning model, training pipeline3704similar to a second example described with respect toFIG.36may be used for a second machine learning model, and training pipeline3704similar to a third example described with respect toFIG.36may be used for a third machine learning model. In at least one embodiment, any combination of tasks within training system3604may be used depending on what is required for each respective machine learning model. In at least one embodiment, one or more of machine learning models may already be trained and ready for deployment so machine learning models may not undergo any processing by training system3604, and may be implemented by deployment system3606. In at least one embodiment, output model(s)3616and/or pre-trained model(s)3706may include any types of machine learning models depending on implementation or embodiment. 
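The chaining of training-pipeline steps named above (format conversion, annotation, model training) can be sketched as a sequence of callables applied to an evolving state; different pipelines then just supply different step lists. The step names and state fields here are hypothetical placeholders for illustration.

```python
# Each step is a hypothetical callable that transforms pipeline state.
def convert_format(state):      # e.g., DICOM -> NIfTI adaptation
    state["format"] = "nifti"
    return state

def annotate(state):            # AI-assisted or manual labeling
    state["labeled"] = True
    return state

def train(state):               # model training / retraining
    state["model"] = "output_model"
    return state

def run_pipeline(steps, state):
    """Apply each processing step in order, as a training pipeline would."""
    for step in steps:
        state = step(state)
    return state

result = run_pipeline([convert_format, annotate, train], {"format": "dicom"})
```

A second machine learning model could reuse `run_pipeline` with a different combination of steps, matching the per-model pipeline selection described above.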
In at least one embodiment, and without limitation, machine learning models used by system3700may include machine learning model(s) using linear regression, logistic regression, decision trees, support vector machines (SVM), Naïve Bayes, k-nearest neighbor (KNN), K-means clustering, random forest, dimensionality reduction algorithms, gradient boosting algorithms, neural networks (e.g., auto-encoders, convolutional, recurrent, perceptrons, Long/Short Term Memory (LSTM), Hopfield, Boltzmann, deep belief, deconvolutional, generative adversarial, liquid state machine, etc.), and/or other types of machine learning models. In at least one embodiment, training pipelines3704may include AI-assisted annotation, as described in more detail herein with respect to at leastFIG.40B. In at least one embodiment, labeled clinic data3612(e.g., traditional annotation) may be generated by any number of techniques. In at least one embodiment, labels or other annotations may be generated within a drawing program (e.g., an annotation program), a computer aided design (CAD) program, a labeling program, another type of program suitable for generating annotations or labels for ground truth, and/or may be hand drawn, in some examples. In at least one embodiment, ground truth data may be synthetically produced (e.g., generated from computer models or renderings), real-world produced (e.g., designed and produced from real-world data), machine-automated (e.g., using feature analysis and learning to extract features from data and then generate labels), human annotated (e.g., labeler, or annotation expert, defines location of labels), and/or a combination thereof. In at least one embodiment, for each instance of imaging data3608(or other data type used by machine learning models), there may be corresponding ground truth data generated by training system3604. 
In at least one embodiment, AI-assisted annotation may be performed as part of deployment pipelines3710; either in addition to, or in lieu of AI-assisted annotation included in training pipelines3704. In at least one embodiment, system3700may include a multi-layer platform that may include a software layer (e.g., software3618) of diagnostic applications (or other application types) that may perform one or more medical imaging and diagnostic functions. In at least one embodiment, system3700may be communicatively coupled to (e.g., via encrypted links) PACS server networks of one or more facilities. In at least one embodiment, system3700may be configured to access and reference data (e.g., DICOM data, RIS data, CIS data, REST compliant data, RPC data, raw data, etc.) from PACS servers (e.g., via a DICOM adapter3702, or another data type adapter such as RIS, CIS, REST compliant, RPC, raw, etc.) to perform operations, such as training machine learning models, deploying machine learning models, image processing, inferencing, and/or other operations. In at least one embodiment, a software layer may be implemented as a secure, encrypted, and/or authenticated API through which applications or containers may be invoked (e.g., called) from an external environment(s) (e.g., facility3602). In at least one embodiment, applications may then call or execute one or more services3620for performing compute, AI, or visualization tasks associated with respective applications, and software3618and/or services3620may leverage hardware3622to perform processing tasks in an effective and efficient manner. In at least one embodiment, deployment system3606may execute deployment pipelines3710. 
In at least one embodiment, deployment pipelines3710may include any number of applications that may be sequentially, non-sequentially, or otherwise applied to imaging data (and/or other data types) generated by imaging devices, sequencing devices, genomics devices, etc.—including AI-assisted annotation, as described above. In at least one embodiment, as described herein, a deployment pipeline3710for an individual device may be referred to as a virtual instrument for a device (e.g., a virtual ultrasound instrument, a virtual CT scan instrument, a virtual sequencing instrument, etc.). In at least one embodiment, for a single device, there may be more than one deployment pipeline3710depending on information desired from data generated by a device. In at least one embodiment, where detections of anomalies are desired from an MRI machine, there may be a first deployment pipeline3710, and where image enhancement is desired from output of an MRI machine, there may be a second deployment pipeline3710. In at least one embodiment, applications available for deployment pipelines3710may include any application that may be used for performing processing tasks on imaging data or other data from devices. In at least one embodiment, different applications may be responsible for image enhancement, segmentation, reconstruction, anomaly detection, object detection, feature detection, treatment planning, dosimetry, beam planning (or other radiation treatment procedures), and/or other analysis, image processing, or inferencing tasks. In at least one embodiment, deployment system3606may define constructs for each of applications, such that users of deployment system3606(e.g., medical facilities, labs, clinics, etc.) may understand constructs and adapt applications for implementation within their respective facility. 
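The one-device, multiple-pipelines idea above (e.g., an anomaly-detection pipeline and a separate image-enhancement pipeline for the same MRI machine) can be sketched as a lookup keyed by device and desired output. The device names, goals, and stage lists are hypothetical illustrations, not the disclosed pipelines.

```python
# Hypothetical registry: a single device may map to more than one
# deployment pipeline, keyed by the information desired from its data.
virtual_instruments = {
    ("mri", "anomaly_detection"): ["reader", "anomaly_model", "report"],
    ("mri", "image_enhancement"): ["reader", "denoise", "sharpen"],
    ("ultrasound", "segmentation"): ["reader", "segmentation_model"],
}

def select_pipeline(device, goal):
    """Pick the deployment pipeline (virtual instrument) for a device
    based on what is desired from the device's data."""
    return virtual_instruments[(device, goal)]

anomaly_pipeline = select_pipeline("mri", "anomaly_detection")
enhance_pipeline = select_pipeline("mri", "image_enhancement")
```

Both pipelines consume output of the same MRI machine but run different downstream applications, as the text describes.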
In at least one embodiment, an application for image reconstruction may be selected for inclusion in deployment pipeline3710, but data type generated by an imaging device may be different from a data type used within an application. In at least one embodiment, DICOM adapter3702B (and/or a DICOM reader) or another data type adapter or reader (e.g., RIS, CIS, REST compliant, RPC, raw, etc.) may be used within deployment pipeline3710to convert data to a form useable by an application within deployment system3606. In at least one embodiment, data from DICOM, RIS, CIS, REST compliant, RPC, raw, and/or other data type libraries may be accumulated and pre-processed, including decoding, extracting, and/or performing any convolutions, color corrections, sharpness, gamma, and/or other augmentations to data. In at least one embodiment, DICOM, RIS, CIS, REST compliant, RPC, and/or raw data may be unordered and a pre-pass may be executed to organize or sort collected data. In at least one embodiment, because various applications may share common image operations, in some embodiments, a data augmentation library (e.g., as one of services3620) may be used to accelerate these operations. In at least one embodiment, to avoid bottlenecks of conventional processing approaches that rely on CPU processing, parallel computing platform3730may be used for GPU acceleration of these processing tasks. In at least one embodiment, an image reconstruction application may include a processing task that includes use of a machine learning model. In at least one embodiment, a user may desire to use their own machine learning model, or to select a machine learning model from model registry3624. In at least one embodiment, a user may implement their own machine learning model or select a machine learning model for inclusion in an application for performing a processing task. 
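The pre-pass mentioned above, organizing unordered incoming data before reconstruction, can be sketched as a sort over DICOM-like instance metadata. The metadata field names (`series`, `instance_number`) are hypothetical simplifications of real DICOM attributes.

```python
def prepass_sort(instances):
    """Pre-pass over unordered DICOM-like metadata: group by series,
    then order by instance number, as described above."""
    ordered = {}
    for inst in sorted(instances, key=lambda i: (i["series"], i["instance_number"])):
        ordered.setdefault(inst["series"], []).append(inst["instance_number"])
    return ordered

# Slices often arrive out of order from the scanner or network.
unordered = [
    {"series": "ct1", "instance_number": 3},
    {"series": "ct1", "instance_number": 1},
    {"series": "ct1", "instance_number": 2},
]
sorted_series = prepass_sort(unordered)  # {'ct1': [1, 2, 3]}
```

With slices ordered per series, a downstream reconstruction application can assemble a volume without re-checking ordering itself.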
In at least one embodiment, applications may be selectable and customizable, and by defining constructs of applications, deployment and implementation of applications for a particular user are presented as a more seamless user experience. In at least one embodiment, by leveraging other features of system3700—such as services3620and hardware3622—deployment pipelines3710may be even more user friendly, provide for easier integration, and produce more accurate, efficient, and timely results. In at least one embodiment, deployment system3606may include a user interface3714(e.g., a graphical user interface, a web interface, etc.) that may be used to select applications for inclusion in deployment pipeline(s)3710, arrange applications, modify or change applications or parameters or constructs thereof, use and interact with deployment pipeline(s)3710during set-up and/or deployment, and/or to otherwise interact with deployment system3606. In at least one embodiment, although not illustrated with respect to training system3604, user interface3714(or a different user interface) may be used for selecting models for use in deployment system3606, for selecting models for training, or retraining, in training system3604, and/or for otherwise interacting with training system3604. In at least one embodiment, pipeline manager3712may be used, in addition to an application orchestration system3728, to manage interaction between applications or containers of deployment pipeline(s)3710and services3620and/or hardware3622. In at least one embodiment, pipeline manager3712may be configured to facilitate interactions from application to application, from application to service3620, and/or from application or service to hardware3622. In at least one embodiment, although illustrated as included in software3618, this is not intended to be limiting, and in some examples (e.g., as illustrated inFIG.38) pipeline manager3712may be included in services3620. 
In at least one embodiment, application orchestration system3728(e.g., Kubernetes, DOCKER, etc.) may include a container orchestration system that may group applications into containers as logical units for coordination, management, scaling, and deployment. In at least one embodiment, by associating applications from deployment pipeline(s)3710(e.g., a reconstruction application, a segmentation application, etc.) with individual containers, each application may execute in a self-contained environment (e.g., at a kernel level) to increase speed and efficiency. In at least one embodiment, each application and/or container (or image thereof) may be individually developed, modified, and deployed (e.g., a first user or developer may develop, modify, and deploy a first application and a second user or developer may develop, modify, and deploy a second application separate from a first user or developer), which may allow for focus on, and attention to, a task of a single application and/or container(s) without being hindered by tasks of another application(s) or container(s). In at least one embodiment, communication, and cooperation between different containers or applications may be aided by pipeline manager3712and application orchestration system3728. In at least one embodiment, so long as an expected input and/or output of each container or application is known by a system (e.g., based on constructs of applications or containers), application orchestration system3728and/or pipeline manager3712may facilitate communication among and between, and sharing of resources among and between, each of applications or containers. In at least one embodiment, because one or more of applications or containers in deployment pipeline(s)3710may share same services and resources, application orchestration system3728may orchestrate, load balance, and determine sharing of services or resources between and among various applications or containers. 
In at least one embodiment, a scheduler may be used to track resource requirements of applications or containers, current usage or planned usage of these resources, and resource availability. In at least one embodiment, a scheduler may thus allocate resources to different applications and distribute resources between and among applications in view of requirements and availability of a system. In some examples, a scheduler (and/or other component of application orchestration system3728such as a sequencer and/or asynchronous compute engine) may determine resource availability and distribution based on constraints imposed on a system (e.g., user constraints), such as quality of service (QoS), urgency of need for data outputs (e.g., to determine whether to execute real-time processing or delayed processing), etc. In at least one embodiment, services3620leveraged by and shared by applications or containers in deployment system3606may include compute services3716, AI services3718, visualization services3720, and/or other service types. In at least one embodiment, applications may call (e.g., execute) one or more of services3620to perform processing operations for an application. In at least one embodiment, compute services3716may be leveraged by applications to perform super-computing or other high-performance computing (HPC) tasks. In at least one embodiment, compute service(s)3716may be leveraged to perform parallel processing (e.g., using a parallel computing platform3730) for processing data through one or more of applications and/or one or more tasks of a single application, substantially simultaneously. In at least one embodiment, parallel computing platform3730(e.g., NVIDIA's CUDA) may enable general purpose computing on GPUs (GPGPU) (e.g., GPUs3722). In at least one embodiment, a software layer of parallel computing platform3730may provide access to virtual instruction sets and parallel computational elements of GPUs, for execution of compute kernels. 
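The scheduling behavior described above, allocating resources in view of requirements, availability, and urgency, can be sketched as a greedy allocator: more urgent work (e.g., real-time QoS) is considered first, and requests that exceed remaining capacity are deferred. The request fields and priority convention (lower number = more urgent) are hypothetical.

```python
def schedule(requests, available_gpus):
    """Greedy sketch of resource allocation: grant GPUs in priority
    order while capacity remains; defer the rest for later processing."""
    granted, deferred = [], []
    for req in sorted(requests, key=lambda r: r["priority"]):
        if req["gpus"] <= available_gpus:
            available_gpus -= req["gpus"]
            granted.append(req["app"])
        else:
            deferred.append(req["app"])
    return granted, deferred

granted, deferred = schedule(
    [{"app": "recon", "gpus": 2, "priority": 1},        # real-time need
     {"app": "batch-train", "gpus": 4, "priority": 5},  # delayed processing
     {"app": "segment", "gpus": 1, "priority": 2}],
    available_gpus=3,
)
```

A production orchestration system would also track current and planned usage over time rather than making a one-shot decision.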
In at least one embodiment, parallel computing platform3730may include memory and, in some embodiments, a memory may be shared between and among multiple containers, and/or between and among different processing tasks within a single container. In at least one embodiment, inter-process communication (IPC) calls may be generated for multiple containers and/or for multiple processes within a container to use same data from a shared segment of memory of parallel computing platform3730(e.g., where multiple different stages of an application or multiple applications are processing same information). In at least one embodiment, rather than making a copy of data and moving data to different locations in memory (e.g., a read/write operation), same data in same location of a memory may be used for any number of processing tasks (e.g., at a same time, at different times, etc.). In at least one embodiment, as data is used to generate new data as a result of processing, this information of a new location of data may be stored and shared between various applications. In at least one embodiment, location of data and a location of updated or modified data may be part of a definition of how a payload is understood within containers. In at least one embodiment, AI services3718may be leveraged to perform inferencing services for executing machine learning model(s) associated with applications (e.g., tasked with performing one or more processing tasks of an application). In at least one embodiment, AI services3718may leverage AI system3724to execute machine learning model(s) (e.g., neural networks, such as CNNs) for segmentation, reconstruction, object detection, feature detection, classification, and/or other inferencing tasks. 
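The zero-copy pattern above, where multiple stages read the same bytes from one shared segment instead of copying data between locations, can be demonstrated with Python's standard-library shared memory. This is an illustration of the concept only; actual containers would attach across process boundaries, and the segment name exchange is simplified here.

```python
from multiprocessing import shared_memory

# Stage 1 ("producer" container) writes once into a shared segment.
shm = shared_memory.SharedMemory(create=True, size=8)
shm.buf[:8] = bytes(range(8))  # e.g., pixel data prepared by one stage

# Stage 2 ("consumer" container) attaches to the same segment by name
# and reads the same bytes without a read/write copy between stages.
reader = shared_memory.SharedMemory(name=shm.name)
data = bytes(reader.buf[:8])

# Cleanup: detach both views, then release the segment.
reader.close()
shm.close()
shm.unlink()
```

In practice the segment name (and the location of any updated data) is part of how a payload is understood between containers, as the text notes.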
In at least one embodiment, applications of deployment pipeline(s)3710may use one or more of output models3616from training system3604and/or other models of applications to perform inferencing on imaging data (e.g., DICOM data, RIS data, CIS data, REST compliant data, RPC data, raw data, etc.). In at least one embodiment, two or more examples of inferencing using application orchestration system3728(e.g., a scheduler, sequencer, and/or asynchronous compute engine) may be available. In at least one embodiment, a first category may include a high priority/low latency path that may achieve higher service level agreements, such as for performing inference on urgent requests during an emergency, or for a radiologist during diagnosis. In at least one embodiment, a second category may include a standard priority path that may be used for requests that may be non-urgent or where analysis may be performed at a later time. In at least one embodiment, application orchestration system3728may distribute resources (e.g., services3620and/or hardware3622) based on priority paths for different inferencing tasks of AI services3718. In at least one embodiment, shared storage may be mounted to AI services3718within system3700. In at least one embodiment, shared storage may operate as a cache (or other storage device type) and may be used to process inference requests from applications. In at least one embodiment, when an inference request is submitted, a request may be received by a set of API instances of deployment system3606, and one or more instances may be selected (e.g., for best fit, for load balancing, etc.) to process a request. In at least one embodiment, to process a request, a request may be entered into a database, a machine learning model may be located from model registry3624if not already in a cache, a validation step may ensure appropriate machine learning model is loaded into a cache (e.g., shared storage), and/or a copy of a model may be saved to a cache. 
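The cache behavior in the last sentence above, checking shared storage first and pulling from the model registry only on a miss, can be sketched as follows. The registry contents and counter are hypothetical, and the validation step is simplified to a plain copy.

```python
model_registry = {"liver-seg": "v3-weights"}  # hypothetical registry contents
cache = {}                                    # shared storage acting as a cache
registry_fetches = 0

def load_model(name):
    """Check the cache first; on a miss, locate the model in the
    registry and save a copy to the cache, as described above."""
    global registry_fetches
    if name not in cache:
        registry_fetches += 1
        cache[name] = model_registry[name]
    return cache[name]

first = load_model("liver-seg")   # miss: pulled from the model registry
second = load_model("liver-seg")  # hit: served from shared storage
```

Keeping the model in shared storage lets subsequent inference requests skip the registry lookup entirely.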
In at least one embodiment, a scheduler (e.g., of pipeline manager3712) may be used to launch an application that is referenced in a request if an application is not already running or if there are not enough instances of an application. In at least one embodiment, if an inference server is not already launched to execute a model, an inference server may be launched. In at least one embodiment, any number of inference servers may be launched per model. In at least one embodiment, in a pull model, in which inference servers are clustered, models may be cached whenever load balancing is advantageous. In at least one embodiment, inference servers may be statically loaded in corresponding, distributed servers. In at least one embodiment, inferencing may be performed using an inference server that runs in a container. In at least one embodiment, an instance of an inference server may be associated with a model (and optionally a plurality of versions of a model). In at least one embodiment, if an instance of an inference server does not exist when a request to perform inferencing on a model is received, a new instance may be loaded. In at least one embodiment, when starting an inference server, a model may be passed to an inference server such that a same container may be used to serve different models so long as inference server is running as a different instance. In at least one embodiment, during application execution, an inference request for a given application may be received, and a container (e.g., hosting an instance of an inference server) may be loaded (if not already), and a start procedure may be called. In at least one embodiment, pre-processing logic in a container may load, decode, and/or perform any additional pre-processing on incoming data (e.g., using a CPU(s) and/or GPU(s)). In at least one embodiment, once data is prepared for inference, a container may perform inferencing as necessary on data. 
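The lazy-launch behavior above, starting an inference server only when no instance exists for the requested model, can be sketched with a small pool. The class and method names are hypothetical; a real deployment would launch container instances rather than dictionary entries.

```python
class InferenceServerPool:
    """Sketch of lazily launched, per-model inference server instances."""

    def __init__(self):
        self.instances = {}

    def get_server(self, model_name):
        # Launch a new instance only if one is not already running
        # for this model; otherwise reuse the existing instance.
        if model_name not in self.instances:
            self.instances[model_name] = {"model": model_name, "running": True}
        return self.instances[model_name]

pool = InferenceServerPool()
a = pool.get_server("chest-ct")
b = pool.get_server("chest-ct")   # reuses the running instance
c = pool.get_server("hand-xray")  # different model: new instance launched
```

Passing the model at startup, as the text describes, is what lets one container image serve different models as different instances.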
In at least one embodiment, this may include a single inference call on one image (e.g., a hand X-ray), or may require inference on hundreds of images (e.g., a chest CT). In at least one embodiment, an application may summarize results before completing, which may include, without limitation, a single confidence score, pixel level-segmentation, voxel-level segmentation, generating a visualization, or generating text to summarize findings. In at least one embodiment, different models or applications may be assigned different priorities. For example, some models may have a real-time (turnaround time (TAT) less than one minute) priority while others may have lower priority (e.g., TAT less than 10 minutes). In at least one embodiment, model execution times may be measured from requesting institution or entity and may include partner network traversal time, as well as execution on an inference service. In at least one embodiment, transfer of requests between services3620and inference applications may be hidden behind a software development kit (SDK), and robust transport may be provided through a queue. In at least one embodiment, a request will be placed in a queue via an API for an individual application/tenant ID combination and an SDK will pull a request from a queue and give a request to an application. In at least one embodiment, a name of a queue may be provided in an environment from where an SDK will pick it up. In at least one embodiment, asynchronous communication through a queue may be useful as it may allow any instance of an application to pick up work as it becomes available. In at least one embodiment, results may be transferred back through a queue, to ensure no data is lost. 
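The per application/tenant queuing described above, where requests are placed on a queue keyed by an application/tenant ID combination and an SDK worker pulls them off, can be sketched with standard-library queues. The queue keying and function names are hypothetical stand-ins for the SDK described in the text.

```python
import queue

# One queue per (application, tenant) combination, per the text above.
queues = {}

def submit(app_id, tenant_id, request):
    """API side: place a request on the queue for this app/tenant pair."""
    queues.setdefault((app_id, tenant_id), queue.Queue()).put(request)

def sdk_pull(app_id, tenant_id):
    """SDK side: pull the next request and hand it to the application."""
    return queues[(app_id, tenant_id)].get_nowait()

submit("seg-app", "hospital-a", {"image": "xray_001"})
pulled = sdk_pull("seg-app", "hospital-a")
```

Because any worker holding the queue name can call `sdk_pull`, any instance of an application can pick up work as it becomes available, which is the property the text highlights.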
In at least one embodiment, queues may also provide an ability to segment work, as highest priority work may go to a queue with most instances of an application connected to it, while lowest priority work may go to a queue with a single instance connected to it that processes tasks in an order received. In at least one embodiment, an application may run on a GPU-accelerated instance generated in cloud3726, and an inference service may perform inferencing on a GPU. In at least one embodiment, visualization services3720may be leveraged to generate visualizations for viewing outputs of applications and/or deployment pipeline(s)3710. In at least one embodiment, GPUs3722may be leveraged by visualization services3720to generate visualizations. In at least one embodiment, rendering effects, such as ray-tracing, may be implemented by visualization services3720to generate higher quality visualizations. In at least one embodiment, visualizations may include, without limitation, 2D image renderings, 3D volume renderings, 3D volume reconstruction, 2D tomographic slices, virtual reality displays, augmented reality displays, etc. In at least one embodiment, virtualized environments may be used to generate a virtual interactive display or environment (e.g., a virtual environment) for interaction by users of a system (e.g., doctors, nurses, radiologists, etc.). In at least one embodiment, visualization services3720may include an internal visualizer, cinematics, and/or other rendering or image processing capabilities or functionality (e.g., ray tracing, rasterization, internal optics, etc.). In at least one embodiment, hardware3622may include GPUs3722, AI system3724, cloud3726, and/or any other hardware used for executing training system3604and/or deployment system3606. 
In at least one embodiment, GPUs3722(e.g., NVIDIA's TESLA and/or QUADRO GPUs) may include any number of GPUs that may be used for executing processing tasks of compute services3716, AI services3718, visualization services3720, other services, and/or any of features or functionality of software3618. For example, with respect to AI services3718, GPUs3722may be used to perform pre-processing on imaging data (or other data types used by machine learning models), post-processing on outputs of machine learning models, and/or to perform inferencing (e.g., to execute machine learning models). In at least one embodiment, cloud3726, AI system3724, and/or other components of system3700may use GPUs3722. In at least one embodiment, cloud3726may include a GPU-optimized platform for deep learning tasks. In at least one embodiment, AI system3724may use GPUs, and cloud3726— or at least a portion tasked with deep learning or inferencing—may be executed using one or more AI systems3724. As such, although hardware3622is illustrated as discrete components, this is not intended to be limiting, and any components of hardware3622may be combined with, or leveraged by, any other components of hardware3622. In at least one embodiment, AI system3724may include a purpose-built computing system (e.g., a super-computer or an HPC) configured for inferencing, deep learning, machine learning, and/or other artificial intelligence tasks. In at least one embodiment, AI system3724(e.g., NVIDIA's DGX) may include GPU-optimized software (e.g., a software stack) that may be executed using a plurality of GPUs3722, in addition to CPUs, RAM, storage, and/or other components, features, or functionality. In at least one embodiment, one or more AI systems3724may be implemented in cloud3726(e.g., in a data center) for performing some or all of AI-based processing tasks of system3700. 
In at least one embodiment, cloud 3726 may include a GPU-accelerated infrastructure (e.g., NVIDIA's NGC) that may provide a GPU-optimized platform for executing processing tasks of system 3700. In at least one embodiment, cloud 3726 may include an AI system(s) 3724 for performing one or more of AI-based tasks of system 3700 (e.g., as a hardware abstraction and scaling platform). In at least one embodiment, cloud 3726 may integrate with application orchestration system 3728, leveraging multiple GPUs, to enable seamless scaling and load balancing between and among applications and services 3620. In at least one embodiment, cloud 3726 may be tasked with executing at least some of services 3620 of system 3700, including compute services 3716, AI services 3718, and/or visualization services 3720, as described herein. In at least one embodiment, cloud 3726 may perform small and large batch inference (e.g., executing NVIDIA's TENSOR RT), provide an accelerated parallel computing API and platform 3730 (e.g., NVIDIA's CUDA), execute application orchestration system 3728 (e.g., KUBERNETES), provide a graphics rendering API and platform (e.g., for ray-tracing, 2D graphics, 3D graphics, and/or other rendering techniques to produce higher quality cinematics), and/or may provide other functionality for system 3700. In at least one embodiment, in an effort to preserve patient confidentiality (e.g., where patient data or records are to be used off-premises), cloud 3726 may include a registry—such as a deep learning container registry. In at least one embodiment, a registry may store containers for instantiations of applications that may perform pre-processing, post-processing, or other processing tasks on patient data.
In at least one embodiment, cloud 3726 may receive data that includes patient data as well as sensor data in containers, perform requested processing for just sensor data in those containers, and then forward a resultant output and/or visualizations to appropriate parties and/or devices (e.g., on-premises medical devices used for visualization or diagnoses), all without having to extract, store, or otherwise access patient data. In at least one embodiment, confidentiality of patient data is preserved in compliance with HIPAA and/or other data regulations. In at least one embodiment, such components can be used to perform image segmentation as discussed above, such as with respect to a system of FIG. 1, 2A, or 3, or processes of FIG. 4 or 5. In at least one embodiment, this can include verifying whether a network of computing nodes is properly configured based, at least in part, on one or more expected data strings generated by the network of computing nodes. FIG. 38 includes an example illustration of a deployment pipeline 3710A for processing imaging data, in accordance with at least one embodiment. In at least one embodiment, system 3700—and specifically deployment system 3606—may be used to customize, update, and/or integrate deployment pipeline(s) 3710A into one or more production environments. In at least one embodiment, deployment pipeline 3710A of FIG. 38 includes a non-limiting example of a deployment pipeline 3710A that may be custom defined by a particular user (or team of users) at a facility (e.g., at a hospital, clinic, lab, research environment, etc.). In at least one embodiment, to define deployment pipelines 3710A for a CT scanner 3802, a user may select—from a container registry, for example—one or more applications that perform specific functions or tasks with respect to imaging data generated by CT scanner 3802.
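The confidentiality pattern described above—processing only the sensor payload of a container that also carries patient data—can be sketched as follows. This is an illustrative sketch, not an actual API: the container fields, the placeholder processing, and the output shape are all assumptions.

```python
def process_container(container):
    """Process only sensor data; patient data is never read or forwarded."""
    sensor = container["sensor_data"]        # the only field this function reads
    result = [x * 2.0 for x in sensor]       # placeholder for requested processing
    return {"output": result}                # resultant output carries no patient fields

# A container bundles patient data alongside sensor data; the processing
# step touches only the sensor payload, preserving confidentiality.
container = {
    "patient_data": {"name": "REDACTED", "mrn": "REDACTED"},
    "sensor_data": [1.0, 2.5],
}
out = process_container(container)
```

The key property is structural: because the output dictionary is built only from `sensor_data`, patient data cannot leak into results forwarded off-premises.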
In at least one embodiment, applications may be applied to deployment pipeline 3710A as containers that may leverage services 3620 and/or hardware 3622 of system 3700. In addition, deployment pipeline 3710A may include additional processing tasks or applications that may be implemented to prepare data for use by applications (e.g., DICOM adapter 3702B and DICOM reader 3806 may be used in deployment pipeline 3710A to prepare data for use by CT reconstruction 3808, organ segmentation 3810, etc.). In at least one embodiment, deployment pipeline 3710A may be customized or selected for consistent deployment, one-time use, or for another frequency or interval. In at least one embodiment, a user may desire to have CT reconstruction 3808 and organ segmentation 3810 for several subjects over a specific interval, and thus may deploy pipeline 3710A for that period of time. In at least one embodiment, a user may select, for each request from system 3700, applications that a user wants to perform processing on that data for that request. In at least one embodiment, deployment pipeline 3710A may be adjusted at any interval and, because of adaptability and scalability of a container structure within system 3700, this may be a seamless process. In at least one embodiment, deployment pipeline 3710A of FIG. 38 may include CT scanner 3802 generating imaging data of a patient or subject. In at least one embodiment, imaging data from CT scanner 3802 may be stored on a PACS server(s) 3804 associated with a facility housing CT scanner 3802. In at least one embodiment, PACS server(s) 3804 may include software and/or hardware components that may directly interface with imaging modalities (e.g., CT scanner 3802) at a facility. In at least one embodiment, DICOM adapter 3702B may enable sending and receipt of DICOM objects using DICOM protocols. In at least one embodiment, DICOM adapter 3702B may aid in preparation or configuration of DICOM data from PACS server(s) 3804 for use by deployment pipeline 3710A.
In at least one embodiment, once DICOM data is processed through DICOM adapter 3702B, pipeline manager 3712 may route data through to deployment pipeline 3710A. In at least one embodiment, DICOM reader 3806 may extract image files and any associated metadata from DICOM data (e.g., raw sinogram data, as illustrated in visualization 3816A). In at least one embodiment, working files that are extracted may be stored in a cache for faster processing by other applications in deployment pipeline 3710A. In at least one embodiment, once DICOM reader 3806 has finished extracting and/or storing data, a signal of completion may be communicated to pipeline manager 3712. In at least one embodiment, pipeline manager 3712 may then initiate or call upon one or more other applications or containers in deployment pipeline 3710A. In at least one embodiment, CT reconstruction 3808 application and/or container may be executed once data (e.g., raw sinogram data) is available for processing by CT reconstruction 3808 application. In at least one embodiment, CT reconstruction 3808 may read raw sinogram data from a cache, reconstruct an image file out of raw sinogram data (e.g., as illustrated in visualization 3816B), and store a resulting image file in a cache. In at least one embodiment, at completion of reconstruction, pipeline manager 3712 may be signaled that a reconstruction task is complete. In at least one embodiment, once reconstruction is complete and a reconstructed image file is stored in a cache (or other storage device), organ segmentation 3810 application and/or container may be triggered by pipeline manager 3712. In at least one embodiment, organ segmentation 3810 application and/or container may read an image file from a cache, normalize or convert an image file to a format suitable for inference (e.g., convert an image file to an input resolution of a machine learning model), and run inference against a normalized image.
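The cache-and-signal flow described above—each stage reading its input from a shared cache, writing its output back, and signaling completion so a pipeline manager can trigger the next stage—can be sketched minimally. The stage names mirror the description, but the cache, the signal list, and the string-valued "files" are illustrative stand-ins, not the actual implementation.

```python
cache, signals = {}, []

def dicom_reader(raw):
    """Extract working data and store it in the cache, then signal completion."""
    cache["sinogram"] = raw
    signals.append("dicom_reader:done")

def ct_reconstruction():
    """Read raw sinogram data from the cache, reconstruct, store the result."""
    cache["image"] = f"reconstructed({cache['sinogram']})"
    signals.append("ct_reconstruction:done")

def organ_segmentation():
    """Read the reconstructed image from the cache and produce a mask file."""
    cache["mask"] = f"mask({cache['image']})"
    signals.append("organ_segmentation:done")

# Pipeline-manager stand-in: invoke each stage once the prior stage signals.
for stage in (lambda: dicom_reader("raw_sinogram"),
              ct_reconstruction,
              organ_segmentation):
    stage()
```

The cache decouples stages from one another: each container only needs to know which cache keys it reads and writes, which is why the manager can rearrange or parallelize stages whose inputs are already present.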
In at least one embodiment, to run inference on a normalized image, organ segmentation 3810 application and/or container may rely on services 3620, and pipeline manager 3712 and/or application orchestration system 3728 may facilitate use of services 3620 by organ segmentation 3810 application and/or container. In at least one embodiment, for example, organ segmentation 3810 application and/or container may leverage AI services 3718 to perform inferencing on a normalized image, and AI services 3718 may leverage hardware 3622 (e.g., AI system 3724) to execute AI services 3718. In at least one embodiment, a result of an inference may be a mask file (e.g., as illustrated in visualization 3816C) that may be stored in a cache (or other storage device). In at least one embodiment, once applications that process DICOM data and/or data extracted from DICOM data have completed processing, a signal may be generated for pipeline manager 3712. In at least one embodiment, pipeline manager 3712 may then execute DICOM writer 3812 to read results from a cache (or other storage device), package results into a DICOM format (e.g., as DICOM output 3814) for use by users at a facility who generated a request. In at least one embodiment, DICOM output 3814 may then be transmitted to DICOM adapter 3702B to prepare DICOM output 3814 for storage on PACS server(s) 3804 (e.g., for viewing by a DICOM viewer at a facility). In at least one embodiment, in response to a request for reconstruction and segmentation, visualizations 3816B and 3816C may be generated and available to a user for diagnoses, research, and/or for other purposes. Although illustrated as consecutive applications in deployment pipeline 3710A, CT reconstruction 3808 and organ segmentation 3810 applications may be processed in parallel in at least one embodiment.
In at least one embodiment, where applications do not have dependencies on one another, and data is available for each application (e.g., after DICOM reader 3806 extracts data), applications may be executed at a same time, substantially at a same time, or with some overlap. In at least one embodiment, where two or more applications require similar services 3620, a scheduler of system 3700 may be used to load balance and distribute compute or processing resources between and among various applications. In at least one embodiment, parallel computing platform 3730 may be used to perform parallel processing for applications to decrease run-time of deployment pipeline 3710A to provide real-time results. In at least one embodiment, and with reference to FIGS. 39A-39B, deployment system 3606 may be implemented as one or more virtual instruments to perform different functionalities—such as image processing, segmentation, enhancement, AI, visualization, and inferencing—with imaging devices (e.g., CT scanners, X-ray machines, MRI machines, etc.), sequencing devices, genomics devices, and/or other device types. In at least one embodiment, system 3700 may allow for creation and provision of virtual instruments that may include a software-defined deployment pipeline 3710 that may receive raw/unprocessed input data generated by a device(s) and output processed/reconstructed data. In at least one embodiment, deployment pipelines 3710 (e.g., 3710A and 3710B) that represent virtual instruments may implement intelligence into a pipeline, such as by leveraging machine learning models, to provide containerized inference support to a system. In at least one embodiment, virtual instruments may execute any number of containers each including instantiations of applications.
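Executing dependency-free applications at the same time, as described above, can be sketched with a thread pool once the shared input (e.g., data extracted by a DICOM reader) is available. The two "applications" here are placeholder functions, and the thread-pool dispatch is an assumed stand-in for an orchestration system or parallel computing platform.

```python
from concurrent.futures import ThreadPoolExecutor

def reconstruction(data):
    """Placeholder application with no dependency on detection."""
    return f"image({data})"

def detection(data):
    """Placeholder application with no dependency on reconstruction."""
    return f"detections({data})"

# Both applications read the same extracted input, so they can run
# concurrently (or with overlap) rather than strictly one after another.
extracted = "frame0"
with ThreadPoolExecutor(max_workers=2) as pool:
    futures = [pool.submit(app, extracted) for app in (reconstruction, detection)]
    results = [f.result() for f in futures]
```

Because the futures are collected in submission order, results stay deterministic even though execution may overlap.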
In at least one embodiment, such as where real-time processing is desired, deployment pipelines 3710 representing virtual instruments may be static (e.g., containers and/or applications may be set), while in other examples, containers and/or applications for virtual instruments may be selected (e.g., on a per-request basis) from a pool of applications or resources (e.g., within a container registry). In at least one embodiment, system 3700 may be instantiated or executed as one or more virtual instruments on-premise at a facility in, for example, a computing system deployed next to or otherwise in communication with a radiology machine, an imaging device, and/or another device type at a facility. In at least one embodiment, however, an on-premise installation may be instantiated or executed within a computing system of a device itself (e.g., a computing system integral to an imaging device), in a local datacenter (e.g., a datacenter on-premise), and/or in a cloud-environment (e.g., in cloud 3726). In at least one embodiment, deployment system 3606, operating as a virtual instrument, may be instantiated by a supercomputer or other HPC system in some examples. In at least one embodiment, on-premise installation may allow for high-bandwidth uses (via, for example, higher throughput local communication interfaces, such as RF over Ethernet) for real-time processing. In at least one embodiment, real-time or near real-time processing may be particularly useful where a virtual instrument supports an ultrasound device or other imaging modality where immediate visualizations are expected or required for accurate diagnoses and analyses. In at least one embodiment, a cloud-computing architecture may be capable of dynamic bursting to a cloud computing service provider, or other compute cluster, when local demand exceeds on-premise capacity or capability.
In at least one embodiment, a cloud architecture, when implemented, may be tuned for training neural networks or other machine learning models, as described herein with respect to training system 3604. In at least one embodiment, with training pipelines in place, machine learning models may continuously learn and improve as they process additional data from devices they support. In at least one embodiment, virtual instruments may be continually improved using additional data, new data, existing machine learning models, and/or new or updated machine learning models. In at least one embodiment, a computing system may include some or all of hardware 3622 described herein, and hardware 3622 may be distributed in any of a number of ways including within a device, as part of a computing device coupled to and located proximate a device, in a local datacenter at a facility, and/or in cloud 3726. In at least one embodiment, because deployment system 3606 and associated applications or containers are created in software (e.g., as discrete containerized instantiations of applications), behavior, operation, and configuration of virtual instruments, as well as outputs generated by virtual instruments, may be modified or customized as desired, without having to change or alter raw output of a device that a virtual instrument supports. In at least one embodiment, such components can be used to perform image segmentation as discussed above, such as with respect to a system of FIG. 1, 2A, or 3, or processes of FIG. 4 or 5. In at least one embodiment, this can include verifying whether a network of computing nodes is properly configured based, at least in part, on one or more expected data strings generated by the network of computing nodes. FIG. 39A includes an example data flow diagram of a virtual instrument supporting an ultrasound device, in accordance with at least one embodiment. In at least one embodiment, deployment pipeline 3710B may leverage one or more of services 3620 of system 3700.
In at least one embodiment, deployment pipeline 3710B and services 3620 may leverage hardware 3622 of a system either locally or in cloud 3726. In at least one embodiment, although not illustrated, process 3900 may be facilitated by pipeline manager 3712, application orchestration system 3728, and/or parallel computing platform 3730. In at least one embodiment, process 3900 may include receipt of imaging data from an ultrasound device 3902. In at least one embodiment, imaging data may be stored on PACS server(s) in a DICOM format (or other format, such as RIS, CIS, REST compliant, RPC, raw, etc.), and may be received by system 3700 for processing through deployment pipeline 3710 selected or customized as a virtual instrument (e.g., a virtual ultrasound) for ultrasound device 3902. In at least one embodiment, imaging data may be received directly from an imaging device (e.g., ultrasound device 3902) and processed by a virtual instrument. In at least one embodiment, a transducer or other signal converter communicatively coupled between an imaging device and a virtual instrument may convert signal data generated by an imaging device to image data that may be processed by a virtual instrument. In at least one embodiment, raw data and/or image data may be applied to DICOM reader 3806 to extract data for use by applications or containers of deployment pipeline 3710B. In at least one embodiment, DICOM reader 3806 may leverage data augmentation library 3914 (e.g., NVIDIA's DALI) as a service 3620 (e.g., as one of compute service(s) 3716) for extracting, resizing, rescaling, and/or otherwise preparing data for use by applications or containers. In at least one embodiment, once data is prepared, a reconstruction 3906 application and/or container may be executed to reconstruct data from ultrasound device 3902 into an image file.
In at least one embodiment, after reconstruction 3906, or at a same time as reconstruction 3906, a detection 3908 application and/or container may be executed for anomaly detection, object detection, feature detection, and/or other detection tasks related to data. In at least one embodiment, an image file generated during reconstruction 3906 may be used during detection 3908 to identify anomalies, objects, features, etc. In at least one embodiment, detection 3908 application may leverage an inference engine 3916 (e.g., as one of AI service(s) 3718) to perform inferencing on data to generate detections. In at least one embodiment, one or more machine learning models (e.g., from training system 3604) may be executed or called by detection 3908 application. In at least one embodiment, once reconstruction 3906 and/or detection 3908 is/are complete, data output from these applications and/or containers may be used to generate visualizations 3910, such as visualization 3912 (e.g., a grayscale output) displayed on a workstation or display terminal. In at least one embodiment, visualization may allow a technician or other user to visualize results of deployment pipeline 3710B with respect to ultrasound device 3902. In at least one embodiment, visualization 3910 may be executed by leveraging a render component 3918 of system 3700 (e.g., one of visualization service(s) 3720). In at least one embodiment, render component 3918 may execute a 2D, OpenGL, or ray-tracing service to generate visualization 3912. In at least one embodiment, such components can be used to perform image segmentation as discussed above, such as with respect to a system of FIG. 1, 2A, or 3, or processes of FIG. 4 or 5. In at least one embodiment, this can include verifying whether a network of computing nodes is properly configured based, at least in part, on one or more expected data strings generated by the network of computing nodes.
FIG. 39B includes an example data flow diagram of a virtual instrument supporting a CT scanner, in accordance with at least one embodiment. In at least one embodiment, deployment pipeline 3710C may leverage one or more of services 3620 of system 3700. In at least one embodiment, deployment pipeline 3710C and services 3620 may leverage hardware 3622 of a system either locally or in cloud 3726. In at least one embodiment, although not illustrated, process 3920 may be facilitated by pipeline manager 3712, application orchestration system 3728, and/or parallel computing platform 3730. In at least one embodiment, process 3920 may include CT scanner 3922 generating raw data that may be received by DICOM reader 3806 (e.g., directly, via a PACS server 3804, after processing, etc.). In at least one embodiment, a Virtual CT (instantiated by deployment pipeline 3710C) may include a first, real-time pipeline for monitoring a patient (e.g., patient movement detection AI 3926) and/or for adjusting or optimizing exposure of CT scanner 3922 (e.g., using exposure control AI 3924). In at least one embodiment, one or more of applications (e.g., 3924 and 3926) may leverage a service 3620, such as AI service(s) 3718. In at least one embodiment, outputs of exposure control AI 3924 application (or container) and/or patient movement detection AI 3926 application (or container) may be used as feedback to CT scanner 3922 and/or a technician for adjusting exposure (or other settings of CT scanner 3922) and/or informing a patient to move less. In at least one embodiment, deployment pipeline 3710C may include a non-real-time pipeline for analyzing data generated by CT scanner 3922.
In at least one embodiment, a second pipeline may include CT reconstruction 3808 application and/or container, a coarse detection AI 3928 application and/or container, a fine detection AI 3932 application and/or container (e.g., where certain results are detected by coarse detection AI 3928), a visualization 3930 application and/or container, and a DICOM writer 3812 (and/or other data type writer, such as RIS, CIS, REST compliant, RPC, raw, etc.) application and/or container. In at least one embodiment, raw data generated by CT scanner 3922 may be passed through pipelines of deployment pipeline 3710C (instantiated as a virtual CT instrument) to generate results. In at least one embodiment, results from DICOM writer 3812 may be transmitted for display and/or may be stored on PACS server(s) 3804 for later retrieval, analysis, or display by a technician, practitioner, or other user. In at least one embodiment, such components can be used to perform image segmentation as discussed above, such as with respect to a system of FIG. 1, 2A, or 3, or processes of FIG. 4 or 5. In at least one embodiment, this can include verifying whether a network of computing nodes is properly configured based, at least in part, on one or more expected data strings generated by the network of computing nodes. FIG. 40A illustrates a data flow diagram for a process 4000 to train, retrain, or update a machine learning model, in accordance with at least one embodiment. In at least one embodiment, process 4000 may be executed using, as a non-limiting example, system 3700 of FIG. 37. In at least one embodiment, process 4000 may leverage services 3620 and/or hardware 3622 of system 3700, as described herein. In at least one embodiment, refined models 4012 generated by process 4000 may be executed by deployment system 3606 for one or more containerized applications in deployment pipelines 3710.
In at least one embodiment, model training 3614 may include retraining or updating an initial model 4004 (e.g., a pre-trained model) using new training data (e.g., new input data, such as customer dataset 4006, and/or new ground truth data associated with input data). In at least one embodiment, to retrain, or update, initial model 4004, output or loss layer(s) of initial model 4004 may be reset, or deleted, and/or replaced with an updated or new output or loss layer(s). In at least one embodiment, initial model 4004 may have previously fine-tuned parameters (e.g., weights and/or biases) that remain from prior training, so training or retraining 3614 may not take as long or require as much processing as training a model from scratch. In at least one embodiment, during model training 3614, by having reset or replaced output or loss layer(s) of initial model 4004, parameters may be updated and re-tuned for a new data set based on loss calculations associated with accuracy of output or loss layer(s) at generating predictions on new, customer dataset 4006 (e.g., image data 3608 of FIG. 36). In at least one embodiment, pre-trained models 3706 may be stored in a data store, or registry (e.g., model registry 3624 of FIG. 36). In at least one embodiment, pre-trained models 3706 may have been trained, at least in part, at one or more facilities other than a facility executing process 4000. In at least one embodiment, to protect privacy and rights of patients, subjects, or clients of different facilities, pre-trained models 3706 may have been trained, on-premise, using customer or patient data generated on-premise. In at least one embodiment, pre-trained models 3706 may be trained using cloud 3726 and/or other hardware 3622, but confidential, privacy protected patient data may not be transferred to, used by, or accessible to any components of cloud 3726 (or other off-premise hardware).
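The retraining pattern described above—keeping previously fine-tuned feature parameters while resetting only the output layer for re-tuning on new data—can be sketched in a toy form. The model structure here (two lists of weights in a dictionary) and the reset rule are purely illustrative assumptions, not the actual model or training code.

```python
import random

# An "initial model" with parameters that remain from prior training.
initial_model = {
    "features": [0.8, -0.2, 0.5],   # previously fine-tuned; retained
    "output":   [0.9, 0.1],         # to be reset and re-tuned on new data
}

def reset_output_layer(model, size=2, seed=0):
    """Replace only the output-layer weights with small random values."""
    rng = random.Random(seed)
    model["output"] = [rng.uniform(-0.1, 0.1) for _ in range(size)]
    return model

# Resetting the output layer leaves the feature parameters untouched,
# which is why retraining converges faster than training from scratch.
refined = reset_output_layer(dict(initial_model))
```

In a real framework this corresponds to replacing the final layer of a pre-trained network and re-tuning it (and optionally the rest of the parameters) on the customer dataset.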
In at least one embodiment, where a pre-trained model 3706 is trained using patient data from more than one facility, pre-trained model 3706 may have been individually trained for each facility prior to being trained on patient or customer data from another facility. In at least one embodiment, such as where customer or patient data has been released from privacy concerns (e.g., by waiver, for experimental use, etc.), or where customer or patient data is included in a public data set, customer or patient data from any number of facilities may be used to train pre-trained model 3706 on-premise and/or off-premise, such as in a datacenter or other cloud computing infrastructure. In at least one embodiment, when selecting applications for use in deployment pipelines 3710, a user may also select machine learning models to be used for specific applications. In at least one embodiment, a user may not have a model for use, so a user may select a pre-trained model 3706 to use with an application. In at least one embodiment, pre-trained model 3706 may not be optimized for generating accurate results on customer dataset 4006 of a facility of a user (e.g., based on patient diversity, demographics, types of medical imaging devices used, etc.). In at least one embodiment, prior to deploying pre-trained model 3706 into deployment pipeline 3710 for use with an application(s), pre-trained model 3706 may be updated, retrained, and/or fine-tuned for use at a respective facility. In at least one embodiment, a user may select pre-trained model 3706 that is to be updated, retrained, and/or fine-tuned, and pre-trained model 3706 may be referred to as initial model 4004 for training system 3604 within process 4000.
In at least one embodiment, customer dataset 4006 (e.g., imaging data, genomics data, sequencing data, or other data types generated by devices at a facility) may be used to perform model training 3614 (which may include, without limitation, transfer learning) on initial model 4004 to generate refined model 4012. In at least one embodiment, ground truth data corresponding to customer dataset 4006 may be generated by training system 3604. In at least one embodiment, ground truth data may be generated, at least in part, by clinicians, scientists, doctors, or practitioners at a facility (e.g., as labeled clinic data 3612 of FIG. 36). In at least one embodiment, AI-assisted annotation 3610 may be used in some examples to generate ground truth data. In at least one embodiment, AI-assisted annotation 3610 (e.g., implemented using an AI-assisted annotation SDK) may leverage machine learning models (e.g., neural networks) to generate suggested or predicted ground truth data for a customer dataset. In at least one embodiment, user 4010 may use annotation tools within a user interface (e.g., a graphical user interface (GUI)) on computing device 4008. In at least one embodiment, user 4010 may interact with a GUI via computing device 4008 to edit or fine-tune annotations or auto-annotations. In at least one embodiment, a polygon editing feature may be used to move vertices of a polygon to more accurate or fine-tuned locations. In at least one embodiment, once customer dataset 4006 has associated ground truth data, ground truth data (e.g., from AI-assisted annotation, manual labeling, etc.) may be used during model training 3614 to generate refined model 4012. In at least one embodiment, customer dataset 4006 may be applied to initial model 4004 any number of times, and ground truth data may be used to update parameters of initial model 4004 until an acceptable level of accuracy is attained for refined model 4012.
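The loop described above—applying the customer dataset repeatedly and updating parameters against ground truth until an acceptable level of accuracy is attained—can be sketched with a single scalar parameter. The update rule, learning rate, tolerance, and data are all illustrative assumptions; a real training run would use a full loss function and optimizer.

```python
def retrain(param, dataset, target, lr=0.1, tol=0.01, max_epochs=100):
    """Nudge `param` toward ground truth until mean error falls below `tol`."""
    for epoch in range(max_epochs):
        # Mean signed error of predictions (param * x) against ground truth y.
        error = sum(param * x - y for x, y in zip(dataset, target)) / len(dataset)
        if abs(error) < tol:        # acceptable accuracy attained
            return param, epoch
        param -= lr * error         # gradient-style parameter update
    return param, max_epochs

# Apply the "customer dataset" any number of times until the refined
# parameter reproduces the ground truth within tolerance.
refined_param, epochs = retrain(param=0.0, dataset=[1.0, 1.0], target=[2.0, 2.0])
```

Each epoch corresponds to one pass of the customer dataset through the model, with ground truth driving the parameter update.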
In at least one embodiment, once refined model 4012 is generated, refined model 4012 may be deployed within one or more deployment pipelines 3710 at a facility for performing one or more processing tasks with respect to medical imaging data. In at least one embodiment, refined model 4012 may be uploaded to pre-trained models 3706 in model registry 3624 to be selected by another facility. In at least one embodiment, this process may be completed at any number of facilities such that refined model 4012 may be further refined on new datasets any number of times to generate a more universal model. In at least one embodiment, such components can be used to perform image segmentation as discussed above, such as with respect to a system of FIG. 1, 2A, or 3, or processes of FIG. 4 or 5. In at least one embodiment, this can include verifying whether a network of computing nodes is properly configured based, at least in part, on one or more expected data strings generated by the network of computing nodes. FIG. 40B is an example illustration of a client-server architecture 4032 to enhance annotation tools with pre-trained annotation models, in accordance with at least one embodiment. In at least one embodiment, AI-assisted annotation tools 4036 may be instantiated based on a client-server architecture 4032. In at least one embodiment, annotation tools 4036 in imaging applications may aid radiologists in, for example, identifying organs and abnormalities. In at least one embodiment, imaging applications may include software tools that help user 4010 to identify, as a non-limiting example, a few extreme points on a particular organ of interest in raw images 4034 (e.g., in a 3D MRI or CT scan) and receive auto-annotated results for all 2D slices of a particular organ. In at least one embodiment, results may be stored in a data store as training data 4038 and used as (for example and without limitation) ground truth data for training.
In at least one embodiment, when computing device 4008 sends extreme points for AI-assisted annotation 3610, a deep learning model, for example, may receive this data as input and return inference results of a segmented organ or abnormality. In at least one embodiment, pre-instantiated annotation tools, such as AI-Assisted Annotation Tool 4036B in FIG. 40B, may be enhanced by making API calls (e.g., API Call 4044) to a server, such as an Annotation Assistant Server 4040 that may include a set of pre-trained models 4042 stored in an annotation model registry, for example. In at least one embodiment, an annotation model registry may store pre-trained models 4042 (e.g., machine learning models, such as deep learning models) that are pre-trained to perform AI-assisted annotation on a particular organ or abnormality. In at least one embodiment, these models may be further updated by using training pipelines 3704. In at least one embodiment, pre-installed annotation tools may be improved over time as new labeled clinic data 3612 is added. Inference and/or training logic 715 are used to perform inferencing and/or training operations associated with one or more embodiments. Details regarding inference and/or training logic 715 are provided herein in conjunction with FIGS. 7A and/or 7B. In at least one embodiment, such components can be used to train a network, or perform inferencing using such a trained network, as discussed above, such as with respect to a system of FIG. 1, 2A, or 3, or processes of FIG. 4 or 5. In at least one embodiment, this can include verifying whether a network of computing nodes is properly configured based, at least in part, on one or more expected data strings generated by the network of computing nodes. At least one embodiment of the disclosure can be described in view of the following clauses: 1.
A processor, comprising: one or more circuits to verify whether a network of computing nodes is properly configured based, at least in part, on one or more expected data strings generated by the network of computing nodes. 2. The processor of claim 1, wherein the network of computing nodes includes two or more computing resources connected by one or more data switches according to a network structure for a task to be performed by the network of computing nodes. 3. The processor of claim 2, wherein the one or more circuits are further to assign a unique data string to each of the computing nodes, the unique data strings for individual nodes of the network of computing nodes being unavailable to the other computing nodes. 4. The processor of claim 3, wherein the one or more circuits are further to cause each of a set of sending nodes, of the network of computing nodes, to send at least their unique data strings to one or more levels of switches of the network of computing nodes according to the network structure, wherein the respective switches are to concatenate the unique data strings received at the respective switches and forward the concatenated strings to a next recipient node in the network structure. 5. The processor of claim 4, wherein the one or more circuits are further to compare one or more received data strings, produced by one or more end nodes of the network of computing nodes and including the concatenated unique data strings, against the one or more expected data strings in order to verify that the network of computing nodes is properly configured. 6. The processor of claim 2, wherein the one or more circuits are further to perform the task in response to verifying that the network of computing nodes is properly configured. 7. A system comprising: one or more processors to verify whether a network of computing nodes is properly configured based, at least in part, on one or more expected data strings generated by the network of computing nodes. 8.
The system of claim 7, wherein the network of computing nodes includes two or more computing resources connected by one or more data switches according to a network structure for a task to be performed by the network of computing nodes.

9. The system of claim 8, wherein the one or more processors are further to assign a unique data string to each of the computing nodes, the unique data strings for individual nodes of the network of computing nodes being unavailable to the other computing nodes.

10. The system of claim 9, wherein the one or more processors are further to cause each of a set of sending nodes, of the network of computing nodes, to send at least their unique data strings to one or more levels of switches of the network of computing nodes according to the network structure, wherein the respective switches are to concatenate the unique data strings received to the respective switches and forward the concatenated strings to a next recipient node in the network structure.

11. The system of claim 10, wherein the one or more processors are further to compare one or more received data strings, produced by one or more end nodes of the network of computing nodes and including the concatenated unique data strings, against the one or more expected data strings in order to verify that the network of computing nodes is properly configured.

12. The system of claim 8, wherein the one or more processors are further to perform the task in response to verifying that the network of computing nodes is properly configured.

13. A method comprising: verifying whether a network of computing nodes is properly configured based, at least in part, on one or more expected data strings generated by the network of computing nodes.

14. The method of claim 13, wherein the network of computing nodes includes two or more computing resources connected by one or more data switches according to a network structure for a task to be performed by the network of computing nodes.

15.
The method of claim 14, further comprising: assigning a unique data string to each of the computing nodes, the unique data strings for individual nodes of the network of computing nodes being unavailable to the other computing nodes.

16. The method of claim 15, further comprising: causing each of a set of sending nodes, of the network of computing nodes, to send at least their unique data strings to one or more levels of switches of the network of computing nodes according to the network structure, wherein the respective switches are to concatenate the unique data strings received to the respective switches and forward the concatenated strings to a next recipient node in the network structure.

17. The method of claim 16, further comprising: comparing one or more received data strings, produced by one or more end nodes of the network of computing nodes and including the concatenated unique data strings, against the one or more expected data strings in order to verify that the network of computing nodes is properly configured.

18. The method of claim 14, further comprising: performing the task in response to verifying that the network of computing nodes is properly configured.

19. A machine-readable medium having stored thereon a set of instructions, which if performed by one or more processors, cause the one or more processors to at least: verify whether a network of computing nodes is properly configured based, at least in part, on one or more expected data strings generated by the network of computing nodes.

20. The machine-readable medium of claim 19, wherein the network of computing nodes includes two or more computing resources connected by one or more data switches according to a network structure for a task to be performed by the network of computing nodes.

21.
The machine-readable medium of claim 20, wherein the instructions if performed further cause the one or more processors to: assign a unique data string to each of the computing nodes, the unique data strings for individual nodes of the network of computing nodes being unavailable to the other computing nodes.

22. The machine-readable medium of claim 21, wherein the instructions if performed further cause the one or more processors to: cause each of a set of sending nodes, of the network of computing nodes, to send at least their unique data strings to one or more levels of switches of the network of computing nodes according to the network structure, wherein the respective switches are to concatenate the unique data strings received to the respective switches and forward the concatenated strings to a next recipient node in the network structure.

23. The machine-readable medium of claim 22, wherein the instructions if performed further cause the one or more processors to: compare one or more received data strings, produced by one or more end nodes of the network of computing nodes and including the concatenated unique data strings, against the one or more expected data strings in order to verify that the network of computing nodes is properly configured.

24. The machine-readable medium of claim 20, wherein the instructions if performed further cause the one or more processors to: perform the task in response to verifying that the network of computing nodes is properly configured.

25. A network verification system, comprising: one or more processors to verify whether a network of computing nodes is properly configured based, at least in part, on one or more expected data strings generated by the network of computing nodes; and memory for storing network parameters for the one or more first neural networks.

26.
The network verification system of claim 25, wherein the network of computing nodes includes two or more computing resources connected by one or more data switches according to a network structure for a task to be performed by the network of computing nodes.

27. The network verification system of claim 26, wherein the one or more processors are further to assign a unique data string to each of the computing nodes, the unique data strings for individual nodes of the network of computing nodes being unavailable to the other computing nodes.

28. The network verification system of claim 27, wherein the one or more processors are further to cause each of a set of sending nodes, of the network of computing nodes, to send at least their unique data strings to one or more levels of switches of the network of computing nodes according to the network structure, wherein the respective switches are to concatenate the unique data strings received to the respective switches and forward the concatenated strings to a next recipient node in the network structure.

29. The network verification system of claim 28, wherein the one or more processors are further to compare one or more received data strings, produced by one or more end nodes of the network of computing nodes and including the concatenated unique data strings, against the one or more expected data strings in order to verify that the network of computing nodes is properly configured.

30. The network verification system of claim 26, wherein the one or more processors are further to perform the task in response to verifying that the network of computing nodes is properly configured.

In at least one embodiment, a single semiconductor platform may refer to a sole unitary semiconductor-based integrated circuit or chip.
In at least one embodiment, multi-chip modules may be used with increased connectivity which simulate on-chip operation, and make substantial improvements over utilizing a conventional central processing unit (“CPU”) and bus implementation. In at least one embodiment, various modules may also be situated separately or in various combinations of semiconductor platforms per desires of user. In at least one embodiment, referring back to FIG. 13, computer programs in form of machine-readable executable code or computer control logic algorithms are stored in main memory 1304 and/or secondary storage. Computer programs, if executed by one or more processors, enable system 1300 to perform various functions in accordance with at least one embodiment. In at least one embodiment, memory 1304, storage, and/or any other storage are possible examples of computer-readable media. In at least one embodiment, secondary storage may refer to any suitable storage device or system such as a hard disk drive and/or a removable storage drive, representing a floppy disk drive, a magnetic tape drive, a compact disk drive, digital versatile disk (“DVD”) drive, recording device, universal serial bus (“USB”) flash memory, etc. In at least one embodiment, architecture and/or functionality of various previous figures are implemented in context of CPU 1302, parallel processing system 1312, an integrated circuit capable of at least a portion of capabilities of both CPU 1302, parallel processing system 1312, a chipset (e.g., a group of integrated circuits designed to work and sold as a unit for performing related functions, etc.), and/or any suitable combination of integrated circuit(s). In at least one embodiment, architecture and/or functionality of various previous figures are implemented in context of a general computer system, a circuit board system, a game console system dedicated for entertainment purposes, an application-specific system, and more.
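Returning to the clauses above, the string-based configuration check they recite can be illustrated with a small, hedged sketch in Python (an illustrative assumption, not the claimed implementation): each sending node is assigned a unique string, each switch concatenates the strings arriving on its ports in port order and forwards the result up one level, and the strings produced at the end nodes are compared against the expected strings for the intended network structure.

```python
# Hedged sketch (not the claimed implementation) of the verification
# scheme: unique strings per sending node, switches concatenate received
# strings in port order and forward them, end results are compared
# against the expectation for the intended topology.

def run_network(leaf_strings, levels):
    """leaf_strings: unique data string assigned to each sending node.
    levels: for each switch level, each switch lists the indices of the
    previous level's outputs wired into its ports."""
    outputs = list(leaf_strings)
    for level in levels:
        # each switch concatenates its port inputs in port order
        outputs = ["".join(outputs[i] for i in ports) for ports in level]
    return outputs

def is_properly_configured(leaf_strings, levels, expected):
    # compare the strings received at the end nodes against expectation
    return run_network(leaf_strings, levels) == expected

strings = ["A", "B", "C", "D"]               # unique per sending node
good = [[[0, 1], [2, 3]], [[0, 1]]]          # intended two-level structure
bad = [[[1, 0], [2, 3]], [[0, 1]]]           # two leaf links swapped
print(is_properly_configured(strings, good, ["ABCD"]))  # True
print(is_properly_configured(strings, bad, ["ABCD"]))   # False
```

A mismatch, such as the swapped leaf links in `bad`, indicates cabling that deviates from the intended network structure before any task is run.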
In at least one embodiment, computer system 1300 may take form of a desktop computer, a laptop computer, a tablet computer, servers, supercomputers, a smart-phone (e.g., a wireless, hand-held device), personal digital assistant (“PDA”), a digital camera, a vehicle, a head mounted display, a hand-held electronic device, a mobile phone device, a television, workstation, game consoles, embedded system, and/or any other type of logic. In at least one embodiment, parallel processing system 1312 includes, without limitation, a plurality of parallel processing units (“PPUs”) 1314 and associated memories 1316. In at least one embodiment, PPUs 1314 are connected to a host processor or other peripheral devices via an interconnect 1318 and a switch 1320 or multiplexer. In at least one embodiment, parallel processing system 1312 distributes computational tasks across PPUs 1314 which can be parallelizable—for example, as part of distribution of computational tasks across multiple graphics processing unit (“GPU”) thread blocks. In at least one embodiment, memory is shared and accessible (e.g., for read and/or write access) across some or all of PPUs 1314, although such shared memory may incur performance penalties relative to use of local memory and registers resident to a PPU 1314. In at least one embodiment, operation of PPUs 1314 is synchronized through use of a command such as syncthreads( ), wherein all threads in a block (e.g., executed across multiple PPUs 1314) are required to reach a certain point of execution of code before proceeding. In at least one embodiment, one or more techniques described herein utilize a oneAPI programming model. In at least one embodiment, a oneAPI programming model refers to a programming model for interacting with various compute accelerator architectures. In at least one embodiment, oneAPI refers to an application programming interface (API) designed to interact with various compute accelerator architectures.
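The syncthreads( )-style synchronization described above can be mimicked on a CPU with a barrier. The following Python sketch is an analogy only (an assumption, not CUDA or PPU code): every worker must reach the barrier before any is allowed to proceed, so work after the barrier observes a fully completed first phase.

```python
import threading

# CPU-side analogy of syncthreads()-style semantics: all workers block
# at the barrier until every worker has arrived, then all proceed.

N = 4
barrier = threading.Barrier(N)
phase1_done = []   # records which workers finished phase 1
results = []       # what each worker observes after the barrier

def worker(i):
    phase1_done.append(i)             # work before the sync point
    barrier.wait()                    # like syncthreads(): wait for all
    results.append(len(phase1_done))  # phase 1 is complete for everyone

threads = [threading.Thread(target=worker, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)  # every entry is 4: phase 1 fully finished before phase 2
```

Because the barrier releases only after all N workers have called `wait()`, each worker is guaranteed to see all N phase-1 entries, regardless of scheduling order.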
In at least one embodiment, a oneAPI programming model utilizes a DPC++ programming language. In at least one embodiment, a DPC++ programming language refers to a high-level language for data parallel programming productivity. In at least one embodiment, a DPC++ programming language is based at least in part on C and/or C++ programming languages. In at least one embodiment, a oneAPI programming model is a programming model such as those developed by Intel Corporation of Santa Clara, CA. In at least one embodiment, oneAPI and/or oneAPI programming model is utilized to interact with various accelerator, GPU, processor, and/or variations thereof, architectures. In at least one embodiment, oneAPI includes a set of libraries that implement various functionalities. In at least one embodiment, oneAPI includes at least a oneAPI DPC++ library, a oneAPI math kernel library, a oneAPI data analytics library, a oneAPI deep neural network library, a oneAPI collective communications library, a oneAPI threading building blocks library, a oneAPI video processing library, and/or variations thereof. In at least one embodiment, a oneAPI DPC++ library, also referred to as oneDPL, is a library that implements algorithms and functions to accelerate DPC++ kernel programming. In at least one embodiment, oneDPL implements one or more standard template library (STL) functions. In at least one embodiment, oneDPL implements one or more parallel STL functions. In at least one embodiment, oneDPL provides a set of library classes and functions such as parallel algorithms, iterators, function object classes, range-based API, and/or variations thereof. In at least one embodiment, oneDPL implements one or more classes and/or functions of a C++ standard library. In at least one embodiment, oneDPL implements one or more random number generator functions. 
In at least one embodiment, a oneAPI math kernel library, also referred to as oneMKL, is a library that implements various optimized and parallelized routines for various mathematical functions and/or operations. In at least one embodiment, oneMKL implements one or more basic linear algebra subprograms (BLAS) and/or linear algebra package (LAPACK) dense linear algebra routines. In at least one embodiment, oneMKL implements one or more sparse BLAS linear algebra routines. In at least one embodiment, oneMKL implements one or more random number generators (RNGs). In at least one embodiment, oneMKL implements one or more vector mathematics (VM) routines for mathematical operations on vectors. In at least one embodiment, oneMKL implements one or more Fast Fourier Transform (FFT) functions. In at least one embodiment, a oneAPI data analytics library, also referred to as oneDAL, is a library that implements various data analysis applications and distributed computations. In at least one embodiment, oneDAL implements various algorithms for preprocessing, transformation, analysis, modeling, validation, and decision making for data analytics, in batch, online, and distributed processing modes of computation. In at least one embodiment, oneDAL implements various C++ and/or Java APIs and various connectors to one or more data sources. In at least one embodiment, oneDAL implements DPC++ API extensions to a traditional C++ interface and enables GPU usage for various algorithms. In at least one embodiment, a oneAPI deep neural network library, also referred to as oneDNN, is a library that implements various deep learning functions. In at least one embodiment, oneDNN implements various neural network, machine learning, and deep learning functions, algorithms, and/or variations thereof. 
In at least one embodiment, a oneAPI collective communications library, also referred to as oneCCL, is a library that implements various applications for deep learning and machine learning workloads. In at least one embodiment, oneCCL is built upon lower-level communication middleware, such as message passing interface (MPI) and libfabrics. In at least one embodiment, oneCCL enables a set of deep learning specific optimizations, such as prioritization, persistent operations, out of order executions, and/or variations thereof. In at least one embodiment, oneCCL implements various CPU and GPU functions. In at least one embodiment, a oneAPI threading building blocks library, also referred to as oneTBB, is a library that implements various parallelized processes for various applications. In at least one embodiment, oneTBB is utilized for task-based, shared parallel programming on a host. In at least one embodiment, oneTBB implements generic parallel algorithms. In at least one embodiment, oneTBB implements concurrent containers. In at least one embodiment, oneTBB implements a scalable memory allocator. In at least one embodiment, oneTBB implements a work-stealing task scheduler. In at least one embodiment, oneTBB implements low-level synchronization primitives. In at least one embodiment, oneTBB is compiler-independent and usable on various processors, such as GPUs, PPUs, CPUs, and/or variations thereof. In at least one embodiment, a oneAPI video processing library, also referred to as oneVPL, is a library that is utilized for accelerating video processing in one or more applications. In at least one embodiment, oneVPL implements various video decoding, encoding, and processing functions. In at least one embodiment, oneVPL implements various functions for media pipelines on CPUs, GPUs, and other accelerators. In at least one embodiment, oneVPL implements device discovery and selection in media centric and video analytics workloads. 
In at least one embodiment, oneVPL implements API primitives for zero-copy buffer sharing. In at least one embodiment, a oneAPI programming model utilizes a DPC++ programming language. In at least one embodiment, a DPC++ programming language is a programming language that includes, without limitation, functionally similar versions of CUDA mechanisms to define device code and distinguish between device code and host code. In at least one embodiment, a DPC++ programming language may include a subset of functionality of a CUDA programming language. In at least one embodiment, one or more CUDA programming model operations are performed using a oneAPI programming model using a DPC++ programming language. In at least one embodiment, any application programming interface (API) described herein is compiled into one or more instructions, operations, or any other signal by a compiler, interpreter, or other software tool. In at least one embodiment, compilation comprises generating one or more machine-executable instructions, operations, or other signals from source code. In at least one embodiment, an API compiled into one or more instructions, operations, or other signals, when performed, causes one or more processors such as graphics processors 2800, graphics cores 1800, parallel processor 2000, processor 2300, processor core 2300, or any other logic circuit further described herein to perform one or more computing operations. It should be noted that, while example embodiments described herein may relate to a CUDA programming model, techniques described herein can be utilized with any suitable programming model, such as HIP, oneAPI, and/or variations thereof. Other variations are within spirit of present disclosure. Thus, while disclosed techniques are susceptible to various modifications and alternative constructions, certain illustrated embodiments thereof are shown in drawings and have been described above in detail.
It should be understood, however, that there is no intention to limit disclosure to specific form or forms disclosed, but on contrary, intention is to cover all modifications, alternative constructions, and equivalents falling within spirit and scope of disclosure, as defined in appended claims. Use of terms “a” and “an” and “the” and similar referents in context of describing disclosed embodiments (especially in context of following claims) are to be construed to cover both singular and plural, unless otherwise indicated herein or clearly contradicted by context, and not as a definition of a term. Terms “comprising,” “having,” “including,” and “containing” are to be construed as open-ended terms (meaning “including, but not limited to,”) unless otherwise noted. “Connected,” when unmodified and referring to physical connections, is to be construed as partly or wholly contained within, attached to, or joined together, even if there is something intervening. Recitation of ranges of values herein are merely intended to serve as a shorthand method of referring individually to each separate value falling within range, unless otherwise indicated herein and each separate value is incorporated into specification as if it were individually recited herein. In at least one embodiment, use of term “set” (e.g., “a set of items”) or “subset” unless otherwise noted or contradicted by context, is to be construed as a nonempty collection comprising one or more members. Further, unless otherwise noted or contradicted by context, term “subset” of a corresponding set does not necessarily denote a proper subset of corresponding set, but subset and corresponding set may be equal. 
Conjunctive language, such as phrases of form “at least one of A, B, and C,” or “at least one of A, B and C,” unless specifically stated otherwise or otherwise clearly contradicted by context, is otherwise understood with context as used in general to present that an item, term, etc., may be either A or B or C, or any nonempty subset of set of A and B and C. For instance, in illustrative example of a set having three members, conjunctive phrases “at least one of A, B, and C” and “at least one of A, B and C” refer to any of following sets: {A}, {B}, {C}, {A, B}, {A, C}, {B, C}, {A, B, C}. Thus, such conjunctive language is not generally intended to imply that certain embodiments require at least one of A, at least one of B and at least one of C each to be present. In addition, unless otherwise noted or contradicted by context, term “plurality” indicates a state of being plural (e.g., “a plurality of items” indicates multiple items). In at least one embodiment, number of items in a plurality is at least two, but can be more when so indicated either explicitly or by context. Further, unless stated otherwise or otherwise clear from context, phrase “based on” means “based at least in part on” and not “based solely on.” Operations of processes described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. In at least one embodiment, a process such as those processes described herein (or variations and/or combinations thereof) is performed under control of one or more computer systems configured with executable instructions and is implemented as code (e.g., executable instructions, one or more computer programs or one or more applications) executing collectively on one or more processors, by hardware or combinations thereof. 
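The seven sets enumerated above for “at least one of A, B, and C” can be generated mechanically. The short Python snippet below (illustrative only) lists every nonempty subset of {A, B, C}, matching that enumeration.

```python
from itertools import combinations

# Enumerate the nonempty subsets of {A, B, C}, i.e., every way the
# phrase "at least one of A, B, and C" can be satisfied.

items = ["A", "B", "C"]
subsets = [set(c) for r in range(1, len(items) + 1)
           for c in combinations(items, r)]
print(len(subsets))  # 7
```

For a set of n members the count is 2^n − 1 nonempty subsets, which is 7 for the three-member example above.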
In at least one embodiment, code is stored on a computer-readable storage medium, for example, in form of a computer program comprising a plurality of instructions executable by one or more processors. In at least one embodiment, a computer-readable storage medium is a non-transitory computer-readable storage medium that excludes transitory signals (e.g., a propagating transient electric or electromagnetic transmission) but includes non-transitory data storage circuitry (e.g., buffers, cache, and queues) within transceivers of transitory signals. In at least one embodiment, code (e.g., executable code or source code) is stored on a set of one or more non-transitory computer-readable storage media having stored thereon executable instructions (or other memory to store executable instructions) that, when executed (i.e., as a result of being executed) by one or more processors of a computer system, cause computer system to perform operations described herein. In at least one embodiment, set of non-transitory computer-readable storage media comprises multiple non-transitory computer-readable storage media and one or more of individual non-transitory storage media of multiple non-transitory computer-readable storage media lack all of code while multiple non-transitory computer-readable storage media collectively store all of code. In at least one embodiment, executable instructions are executed such that different instructions are executed by different processors—for example, a non-transitory computer-readable storage medium stores instructions and a main central processing unit (“CPU”) executes some of instructions while a graphics processing unit (“GPU”) executes other instructions. In at least one embodiment, different components of a computer system have separate processors and different processors execute different subsets of instructions.
In at least one embodiment, an arithmetic logic unit is a set of combinational logic circuitry that takes one or more inputs to produce a result. In at least one embodiment, an arithmetic logic unit is used by a processor to implement mathematical operation such as addition, subtraction, or multiplication. In at least one embodiment, an arithmetic logic unit is used to implement logical operations such as logical AND/OR or XOR. In at least one embodiment, an arithmetic logic unit is stateless, and made from physical switching components such as semiconductor transistors arranged to form logical gates. In at least one embodiment, an arithmetic logic unit may operate internally as a stateful logic circuit with an associated clock. In at least one embodiment, an arithmetic logic unit may be constructed as an asynchronous logic circuit with an internal state not maintained in an associated register set. In at least one embodiment, an arithmetic logic unit is used by a processor to combine operands stored in one or more registers of the processor and produce an output that can be stored by the processor in another register or a memory location. In at least one embodiment, as a result of processing an instruction retrieved by the processor, the processor presents one or more inputs or operands to an arithmetic logic unit, causing the arithmetic logic unit to produce a result based at least in part on an instruction code provided to inputs of the arithmetic logic unit. In at least one embodiment, the instruction codes provided by the processor to the ALU are based at least in part on the instruction executed by the processor. In at least one embodiment combinational logic in the ALU processes the inputs and produces an output which is placed on a bus within the processor. 
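A stateless combinational ALU of the kind described above can be modeled as a pure mapping from an instruction code and operand inputs to a result. The Python toy below is an illustrative assumption, not any particular processor's design; the opcode names are made up for the example.

```python
# Toy combinational ALU model (illustrative assumption): the instruction
# code selects which operation combines the two operand inputs, and the
# output depends only on the current inputs (stateless).

ALU_OPS = {
    "ADD": lambda a, b: a + b,
    "SUB": lambda a, b: a - b,
    "AND": lambda a, b: a & b,
    "XOR": lambda a, b: a ^ b,
}

def alu(opcode, a, b):
    # dispatch on the instruction code provided to the ALU's inputs
    return ALU_OPS[opcode](a, b)

print(alu("ADD", 6, 7))  # 13
print(alu("XOR", 6, 7))  # 1
```

In hardware the dispatch is performed by combinational gating rather than a lookup table, but the input-to-output behavior is the same pure function of opcode and operands.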
In at least one embodiment, the processor selects a destination register, memory location, output device, or output storage location on the output bus so that clocking the processor causes the results produced by the ALU to be sent to the desired location. In the scope of this application, the term arithmetic logic unit, or ALU, is used to refer to any computational logic circuit that processes operands to produce a result. For example, in the present document, the term ALU can refer to a floating point unit, a DSP, a tensor core, a shader core, a coprocessor, or a CPU. Accordingly, in at least one embodiment, computer systems are configured to implement one or more services that singly or collectively perform operations of processes described herein and such computer systems are configured with applicable hardware and/or software that enable performance of operations. Further, a computer system that implements at least one embodiment of present disclosure is a single device and, in another embodiment, is a distributed computer system comprising multiple devices that operate differently such that distributed computer system performs operations described herein and such that a single device does not perform all operations. Use of any and all examples, or exemplary language (e.g., “such as”) provided herein, is intended merely to better illuminate embodiments of disclosure and does not pose a limitation on scope of disclosure unless otherwise claimed. No language in specification should be construed as indicating any non-claimed element as essential to practice of disclosure. All references, including publications, patent applications, and patents, cited herein are hereby incorporated by reference to same extent as if each reference were individually and specifically indicated to be incorporated by reference and were set forth in its entirety herein. In description and claims, terms “coupled” and “connected,” along with their derivatives, may be used. 
It should be understood that these terms may not be intended as synonyms for each other. Rather, in particular examples, “connected” or “coupled” may be used to indicate that two or more elements are in direct or indirect physical or electrical contact with each other. “Coupled” may also mean that two or more elements are not in direct contact with each other, but yet still co-operate or interact with each other. Unless specifically stated otherwise, it may be appreciated that throughout specification terms such as “processing,” “computing,” “calculating,” “determining,” or like, refer to action and/or processes of a computer or computing system, or similar electronic computing device, that manipulate and/or transform data represented as physical, such as electronic, quantities within computing system's registers and/or memories into other data similarly represented as physical quantities within computing system's memories, registers or other such information storage, transmission or display devices. In a similar manner, term “processor” may refer to any device or portion of a device that processes electronic data from registers and/or memory and transforms that electronic data into other electronic data that may be stored in registers and/or memory. As non-limiting examples, “processor” may be a CPU or a GPU. A “computing platform” may comprise one or more processors. As used herein, “software” processes may include, for example, software and/or hardware entities that perform work over time, such as tasks, threads, and intelligent agents. Also, each process may refer to multiple processes, for carrying out instructions in sequence or in parallel, continuously or intermittently. In at least one embodiment, terms “system” and “method” are used herein interchangeably insofar as system may embody one or more methods and methods may be considered a system.
In present document, references may be made to obtaining, acquiring, receiving, or inputting analog or digital data into a subsystem, computer system, or computer-implemented machine. In at least one embodiment, process of obtaining, acquiring, receiving, or inputting analog and digital data can be accomplished in a variety of ways such as by receiving data as a parameter of a function call or a call to an application programming interface. In at least one embodiment, processes of obtaining, acquiring, receiving, or inputting analog or digital data can be accomplished by transferring data via a serial or parallel interface. In at least one embodiment, processes of obtaining, acquiring, receiving, or inputting analog or digital data can be accomplished by transferring data via a computer network from providing entity to acquiring entity. In at least one embodiment, references may also be made to providing, outputting, transmitting, sending, or presenting analog or digital data. In various examples, processes of providing, outputting, transmitting, sending, or presenting analog or digital data can be accomplished by transferring data as an input or output parameter of a function call, a parameter of an application programming interface or interprocess communication mechanism. Although descriptions herein set forth example implementations of described techniques, other architectures may be used to implement described functionality, and are intended to be within scope of this disclosure. Furthermore, although specific distributions of responsibilities may be defined above for purposes of description, various functions and responsibilities might be distributed and divided in different ways, depending on circumstances. 
Furthermore, although subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that subject matter claimed in appended claims is not necessarily limited to specific features or acts described. Rather, specific features and acts are disclosed as exemplary forms of implementing the claims.
DETAILED DESCRIPTION OF THE DISCLOSURE

Again, the present disclosure relates to a highly scalable Representational State Transfer (REST) RESTful framework, to a distributed Telemetry and Policy Gateway (TPG) using the RESTful framework for policy, configuration, and metric publication, to an election mechanism for randomly selecting devices for metric measurement, and to geo tagging of the metrics. The present disclosure provides a modular communication approach that can be used between software modules in a monolith software system. The present disclosure uses HTTP for communication between modules/services, moving towards a RESTful framework for transferring state and data between modules/services. The goal is not to decompose the whole monolith but instead help achieve higher modularity for new requirements and rewrites that are not so time sensitive by creating microservices.

§ 1.0 Example Cloud-Based System Architecture

FIG. 1 is a network diagram of a cloud-based system 100 offering security as a service. Specifically, the cloud-based system 100 can offer a Secure Internet and Web Gateway as a service to various users 102, as well as other cloud services. In this manner, the cloud-based system 100 is located between the users 102 and the Internet as well as any cloud services 106 (or applications) accessed by the users 102. As such, the cloud-based system 100 provides inline monitoring inspecting traffic between the users 102, the Internet 104, and the cloud services 106, including Secure Sockets Layer (SSL) traffic. The cloud-based system 100 can offer access control, threat prevention, data protection, etc. The access control can include a cloud-based firewall, cloud-based intrusion detection, Uniform Resource Locator (URL) filtering, bandwidth control, Domain Name System (DNS) filtering, etc.
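The module-to-module HTTP communication described above can be sketched with standard-library primitives. The following Python example is a hedged illustration, not the TPG implementation; the endpoint path and metric name are made up for the example. One in-process "module" publishes a metric over HTTP and another consumes it, instead of the two calling each other directly.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Minimal sketch (an assumption, not the TPG implementation) of two
# modules in one monolith process exchanging state over HTTP rather
# than through direct function calls.

class MetricsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # the "telemetry module" publishes its state as JSON
        body = json.dumps({"cpu_pct": 12.5}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging for the example

server = HTTPServer(("127.0.0.1", 0), MetricsHandler)  # port 0 = ephemeral
threading.Thread(target=server.serve_forever, daemon=True).start()

# the "consumer module" fetches state over the RESTful interface
url = f"http://127.0.0.1:{server.server_address[1]}/metrics"
with urllib.request.urlopen(url) as resp:
    metrics = json.load(resp)
server.shutdown()
print(metrics)  # {'cpu_pct': 12.5}
```

Because the two sides share only an HTTP contract, the consumer is unchanged whether the publisher later remains in the monolith or is split out as a microservice.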
The threat prevention can include cloud-based intrusion prevention, protection against advanced threats (malware, spam, Cross-Site Scripting (XSS), phishing, etc.), cloud-based sandbox, antivirus, DNS security, etc. The data protection can include Data Loss Prevention (DLP), cloud application security such as via a Cloud Access Security Broker (CASB), file type control, etc. The cloud-based firewall can provide Deep Packet Inspection (DPI) and access controls across various ports and protocols as well as being application and user aware. The URL filtering can block, allow, or limit website access based on policy for a user, group of users, or entire organization, including specific destinations or categories of URLs (e.g., gambling, social media, etc.). The bandwidth control can enforce bandwidth policies and prioritize critical applications such as relative to recreational traffic. DNS filtering can control and block DNS requests against known and malicious destinations. The cloud-based intrusion prevention and advanced threat protection can deliver full threat protection against malicious content such as browser exploits, scripts, identified botnets and malware callbacks, etc. The cloud-based sandbox can block zero-day exploits (just identified) by analyzing unknown files for malicious behavior. Advantageously, the cloud-based system100is multi-tenant and can service a large volume of the users102. As such, newly discovered threats can be promulgated throughout the cloud-based system100for all tenants practically instantaneously. The antivirus protection can include antivirus, antispyware, antimalware, etc. protection for the users102, using signatures sourced and constantly updated. The DNS security can identify and route command-and-control connections to threat detection engines for full content inspection. The DLP can use standard and/or custom dictionaries to continuously monitor the users102, including compressed and/or SSL-encrypted traffic. 
Again, being in a cloud implementation, the cloud-based system100can scale this monitoring with near-zero latency on the users102. The cloud application security can include CASB functionality to discover and control user access to known and unknown cloud services106. The file type controls enable true file type control by the user, location, destination, etc. to determine which files are allowed or not. For illustration purposes, the users102of the cloud-based system100can include a mobile device110, a headquarters (HQ)112which can include or connect to a data center (DC)114, Internet of Things (IoT) devices116, a branch office/remote location118, etc., and each includes one or more user devices (an example user device300is illustrated inFIG.5). The devices110,116, and the locations112,114,118are shown for illustrative purposes, and those skilled in the art will recognize there are various access scenarios and other users102for the cloud-based system100, all of which are contemplated herein. The users102can be associated with a tenant, which may include an enterprise, a corporation, an organization, etc. That is, a tenant is a group of users who share a common access with specific privileges to the cloud-based system100, a cloud service, etc. In an embodiment, the headquarters112can include an enterprise's network with resources in the data center114. The mobile device110can be a so-called road warrior, i.e., users that are off-site, on-the-road, etc. Those skilled in the art will recognize a user102has to use a corresponding user device300for accessing the cloud-based system100and the like, and the description herein may use the user102and/or the user device300interchangeably. Further, the cloud-based system100can be multi-tenant, with each tenant having its own users102and configuration, policy, rules, etc. 
One advantage of the multi-tenancy and a large volume of users is the zero-day/zero-hour protection in that a new vulnerability can be detected and then instantly remediated across the entire cloud-based system100. The same applies to policy, rule, configuration, etc. changes—they are instantly remediated across the entire cloud-based system100. As well, new features in the cloud-based system100can also be rolled out simultaneously across the user base, as opposed to selective and time-consuming upgrades on every device at the locations112,114,118, and the devices110,116. Logically, the cloud-based system100can be viewed as an overlay network between users (at the locations112,114,118, and the devices110,116) and the Internet104and the cloud services106. Previously, the IT deployment model included enterprise resources and applications stored within the data center114(i.e., physical devices) behind a firewall (perimeter), accessible by employees, partners, contractors, etc. on-site or remote via Virtual Private Networks (VPNs), etc. The cloud-based system100is replacing the conventional deployment model. The cloud-based system100can be used to implement these services in the cloud without requiring the physical devices and management thereof by enterprise IT administrators. As an ever-present overlay network, the cloud-based system100can provide the same functions as the physical devices and/or appliances regardless of geography or location of the users102, as well as independent of platform, operating system, network access technique, network access provider, etc. There are various techniques to forward traffic between the users102at the locations112,114,118, and via the devices110,116, and the cloud-based system100. Typically, the locations112,114,118can use tunneling where all traffic is forwarded through the cloud-based system100. For example, various tunneling protocols are contemplated, such as GRE, L2TP, IPsec, customized tunneling protocols, etc.
The devices110,116, when not at one of the locations112,114,118can use a local application that forwards traffic, a proxy such as via a Proxy Auto-Config (PAC) file, and the like. An example of the local application is the application350, described in detail herein as a connector application. A key aspect of the cloud-based system100is that all traffic between the users102and the Internet104or the cloud services106is via the cloud-based system100. As such, the cloud-based system100has visibility to enable various functions, all of which are performed off the user device in the cloud. The cloud-based system100can also include a management system120for tenant access to provide global policy and configuration as well as real-time analytics. This enables IT administrators to have a unified view of user activity, threat intelligence, application usage, etc. For example, IT administrators can drill-down to a per-user level to understand events and correlate threats, to identify compromised devices, to have application visibility, and the like. The cloud-based system100can further include connectivity to an Identity Provider (IDP)122for authentication of the users102and to a Security Information and Event Management (SIEM) system124for event logging. The system124can provide alert and activity logs on a per-user102basis. FIG.2is a network diagram of an example implementation of the cloud-based system100. In an embodiment, the cloud-based system100includes a plurality of enforcement nodes (EN)150, labeled as enforcement nodes150-1,150-2,150-N, interconnected to one another and interconnected to a central authority (CA)152. Note, the nodes150are called “enforcement” nodes150but they can be simply referred to as nodes150in the cloud-based system100. The nodes150and the central authority152, while described as nodes, can include one or more servers, including physical servers, virtual machines (VM) executed on physical hardware, etc.
An example of a server is illustrated inFIG.3. The cloud-based system100further includes a log router154that connects to a storage cluster156for supporting log maintenance from the enforcement nodes150. The central authority152provides centralized policy, real-time threat updates, etc. and coordinates the distribution of this data between the enforcement nodes150. The enforcement nodes150provide an onramp to the users102and are configured to execute policy, based on the central authority152, for each user102. The enforcement nodes150can be geographically distributed, and the policy for each user102follows that user102as he or she connects to the nearest (or other criteria) enforcement node150. Of note, the cloud-based system100is an external system, meaning it is separate from a tenant's private networks (enterprise networks) as well as from networks associated with the devices110,116, and locations112,118. The enforcement nodes150are full-featured secure internet gateways that provide integrated internet security, i.e., proxies. They inspect all web traffic bi-directionally for malware and enforce security, compliance, and firewall policies, as described herein, as well as various additional functionality. In an embodiment, each enforcement node150has two main modules for inspecting traffic and applying policies: a web module and a firewall module. The enforcement nodes150are deployed around the world and can handle hundreds of thousands of concurrent users with millions of concurrent sessions. Because of this, regardless of where the users102are, they can access the Internet104from any device, and the enforcement nodes150protect the traffic and apply corporate policies. The enforcement nodes150can implement various inspection engines therein, and optionally, send sandboxing to another system.
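The nearest-node selection mentioned above (the policy follows the user102as he or she connects to the nearest enforcement node150) can be sketched as a simple distance computation. The node catalog, the coordinates, and the use of great-circle distance rather than measured latency are illustrative assumptions:

```python
import math

# Hypothetical enforcement-node catalog: name -> (latitude, longitude).
NODES = {
    "en-150-1": (37.77, -122.42),  # San Francisco
    "en-150-2": (40.71, -74.01),   # New York
    "en-150-3": (51.51, -0.13),    # London
}

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) points in km."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def nearest_node(user_location):
    """Return the enforcement node geographically closest to the user."""
    return min(NODES, key=lambda n: haversine_km(NODES[n], user_location))

print(nearest_node((34.05, -118.24)))  # a Los Angeles user -> "en-150-1"
```

In practice the "other criteria" the text mentions (load, measured latency, availability) would feed into the same `min` selection in place of pure distance.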
The enforcement nodes150include significant fault tolerance capabilities, such as deployment in active-active mode to ensure availability and redundancy as well as continuous monitoring. In an embodiment, customer traffic is not passed to any other component within the cloud-based system100, and the enforcement nodes150can be configured never to store any data to disk. Packet data is held in memory for inspection and then, based on policy, is either forwarded or dropped. Log data generated for every transaction is compressed, tokenized, and exported over secure Transport Layer Security (TLS) connections to the log routers154that direct the logs to the storage cluster156, hosted in the appropriate geographical region, for each organization. In an embodiment, all data destined for or received from the Internet is processed through one of the enforcement nodes150. In another embodiment, specific data specified by each tenant, e.g., only email, only executable files, etc., is processed through one of the enforcement nodes150. Each of the enforcement nodes150may generate a decision vector D=[d1, d2, . . . , dn] for a content item of one or more parts C=[c1, c2, . . . , cm]. Each decision vector may identify a threat classification, e.g., clean, spyware, malware, undesirable content, innocuous, spam email, unknown, etc. For example, the output of each element of the decision vector D may be based on the output of one or more data inspection engines. In an embodiment, the threat classification may be reduced to a subset of categories, e.g., violating, non-violating, neutral, unknown. Based on the subset classification, the enforcement node150may allow the distribution of the content item, preclude distribution of the content item, allow distribution of the content item after a cleaning process, or perform threat detection on the content item. 
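The decision-vector reduction and the resulting actions described above can be sketched as follows. The mapping from fine-grained classifications to the subset categories, and the severity ordering, are illustrative assumptions rather than the actual engine logic:

```python
# Map fine-grained threat classifications to the reduced subset
# (violating, non-violating, neutral, unknown). Mapping is illustrative.
SUBSET = {
    "clean": "non-violating",
    "innocuous": "non-violating",
    "spyware": "violating",
    "malware": "violating",
    "spam email": "violating",
    "undesirable content": "neutral",
    "unknown": "unknown",
}

# Action per subset class, mirroring the four outcomes named above.
ACTIONS = {
    "non-violating": "allow",
    "violating": "block",
    "neutral": "clean-then-allow",
    "unknown": "threat-detect",
}

def classify(decision_vector):
    """Reduce a decision vector D = [d1..dn] to one subset class.

    If any element maps to "violating" the item is violating;
    otherwise the most severe remaining class wins.
    """
    classes = {SUBSET.get(d, "unknown") for d in decision_vector}
    for severity in ("violating", "unknown", "neutral", "non-violating"):
        if severity in classes:
            return severity
    return "unknown"

def action_for(decision_vector):
    return ACTIONS[classify(decision_vector)]

print(action_for(["clean", "malware"]))    # "block"
print(action_for(["clean", "innocuous"]))  # "allow"
```

This matches the any-engine-violating rule: a single "malware" verdict on any part is enough to block the whole content item.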
In an embodiment, the actions taken by one of the enforcement nodes150may be determined by the threat classification of the content item and by the security policy of the tenant to which the content item is being sent or from which it is being requested. A content item is violating if, for any part C=[c1, c2, . . . , cm] of the content item, at any of the enforcement nodes150, any one of the data inspection engines generates an output that results in a classification of “violating.” The central authority152hosts all customer (tenant) policy and configuration settings. It monitors the cloud and provides a central location for software and database updates and threat intelligence. Given the multi-tenant architecture, the central authority152is redundant and backed up in multiple different data centers. The enforcement nodes150establish persistent connections to the central authority152to download all policy configurations. When a new user connects to an enforcement node150, a policy request is sent to the central authority152through this connection. The central authority152then calculates the policies that apply to that user102and sends the policy to the enforcement node150as a highly compressed bitmap. The policy can be tenant-specific and can include access privileges for users, websites and/or content that is disallowed, restricted domains, DLP dictionaries, etc. Once downloaded, a tenant's policy is cached until a policy change is made in the management system120. When this happens, all of the cached policies are purged, and the enforcement nodes150request the new policy when the user102next makes a request. In an embodiment, the enforcement nodes150exchange “heartbeats” periodically, so all enforcement nodes150are informed when there is a policy change.
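The cache-until-purge flow above (download on first request, cache until a change, purge when a change is signaled, re-fetch on the next request) can be sketched as follows. The class and method names are illustrative, and the compressed-bitmap encoding is omitted:

```python
class PolicyCache:
    """Per-node cache of tenant policies: fetch from a central
    authority on a miss, serve from cache until a policy change is
    signaled, then purge so the next request re-fetches.
    Names here are illustrative, not the product's actual API.
    """

    def __init__(self, fetch_from_central_authority):
        self._fetch = fetch_from_central_authority
        self._cache = {}
        self.fetch_count = 0  # visible for the usage example below

    def get(self, tenant):
        if tenant not in self._cache:
            self._cache[tenant] = self._fetch(tenant)
            self.fetch_count += 1
        return self._cache[tenant]

    def on_policy_change_heartbeat(self):
        """All cached policies are purged when a change is signaled."""
        self._cache.clear()

# Usage: two reads cost one fetch; after a heartbeat, the next read
# pulls the updated policy from the (simulated) central authority.
store = {"tenant-a": {"blocked": ["gambling"]}}
cache = PolicyCache(lambda t: dict(store[t]))
cache.get("tenant-a"); cache.get("tenant-a")
print(cache.fetch_count)      # 1
store["tenant-a"] = {"blocked": ["gambling", "social media"]}
cache.on_policy_change_heartbeat()
print(cache.get("tenant-a"))  # {'blocked': ['gambling', 'social media']}
```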
Any enforcement node150can then pull the change in policy when it sees a new request. The cloud-based system100can be a private cloud, a public cloud, a combination of a private cloud and a public cloud (hybrid cloud), or the like. Cloud computing systems and methods abstract away physical servers, storage, networking, etc., and instead offer these as on-demand and elastic resources. The National Institute of Standards and Technology (NIST) provides a concise and specific definition which states cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. Cloud computing differs from the classic client-server model by providing applications from a server that are executed and managed by a client's web browser or the like, with no installed client version of an application required. Centralization gives cloud service providers complete control over the versions of the browser-based and other applications provided to clients, which removes the need for version upgrades or license management on individual client computing devices. The phrase “Software as a Service” (SaaS) is sometimes used to describe application programs offered through cloud computing. A common shorthand for a provided cloud computing service (or even an aggregation of all existing cloud services) is “the cloud.” The cloud-based system100is illustrated herein as an example embodiment of a cloud-based system, and other implementations are also contemplated. As described herein, the terms cloud services and cloud applications may be used interchangeably. The cloud service106is any service made available to users on-demand via the Internet, as opposed to being provided from a company's on-premises servers. 
A cloud application, or cloud app, is a software program where cloud-based and local components work together. The cloud-based system100can be utilized to provide example cloud services, including Zscaler Internet Access (ZIA), Zscaler Private Access (ZPA), and Zscaler Digital Experience (ZDX), all from Zscaler, Inc. (the assignee and applicant of the present application). Also, there can be multiple different cloud-based systems100, including ones with different architectures and multiple cloud services. The ZIA service can provide the access control, threat prevention, and data protection described above with reference to the cloud-based system100. ZPA can include access control, microservice segmentation, etc. The ZDX service can provide monitoring of user experience, e.g., Quality of Experience (QoE), Quality of Service (QoS), etc., in a manner that can gain insights based on continuous, inline monitoring. For example, the ZIA service can provide a user with Internet Access, and the ZPA service can provide a user with access to enterprise resources instead of traditional Virtual Private Networks (VPNs), namely ZPA provides Zero Trust Network Access (ZTNA). Those of ordinary skill in the art will recognize various other types of cloud services106are also contemplated. Also, other types of cloud architectures are also contemplated, with the cloud-based system100presented for illustration purposes. § 2.0 Example Server Architecture FIG.3is a block diagram of a server200, which may be used in the cloud-based system100, in other systems, or standalone. For example, the enforcement nodes150and the central authority152may be formed as one or more of the servers200. The server200may be a digital computer that, in terms of hardware architecture, generally includes a processor202, input/output (I/O) interfaces204, a network interface206, a data store208, and memory210. 
It should be appreciated by those of ordinary skill in the art thatFIG.3depicts the server200in an oversimplified manner, and a practical embodiment may include additional components and suitably configured processing logic to support known or conventional operating features that are not described in detail herein. The components (202,204,206,208, and210) are communicatively coupled via a local interface212. The local interface212may be, for example, but not limited to, one or more buses or other wired or wireless connections, as is known in the art. The local interface212may have additional elements, which are omitted for simplicity, such as controllers, buffers (caches), drivers, repeaters, and receivers, among many others, to enable communications. Further, the local interface212may include address, control, and/or data connections to enable appropriate communications among the aforementioned components. The processor202is a hardware device for executing software instructions. The processor202may be any custom made or commercially available processor, a Central Processing Unit (CPU), an auxiliary processor among several processors associated with the server200, a semiconductor-based microprocessor (in the form of a microchip or chipset), or generally any device for executing software instructions. When the server200is in operation, the processor202is configured to execute software stored within the memory210, to communicate data to and from the memory210, and to generally control operations of the server200pursuant to the software instructions. The I/O interfaces204may be used to receive user input from and/or for providing system output to one or more devices or components. The network interface206may be used to enable the server200to communicate on a network, such as the Internet104. The network interface206may include, for example, an Ethernet card or adapter or a Wireless Local Area Network (WLAN) card or adapter.
The network interface206may include address, control, and/or data connections to enable appropriate communications on the network. A data store208may be used to store data. The data store208may include any of volatile memory elements (e.g., random access memory (RAM, such as DRAM, SRAM, SDRAM, and the like)), nonvolatile memory elements (e.g., ROM, hard drive, tape, CDROM, and the like), and combinations thereof. Moreover, the data store208may incorporate electronic, magnetic, optical, and/or other types of storage media. In one example, the data store208may be located internal to the server200, such as, for example, an internal hard drive connected to the local interface212in the server200. Additionally, in another embodiment, the data store208may be located external to the server200such as, for example, an external hard drive connected to the I/O interfaces204(e.g., SCSI or USB connection). In a further embodiment, the data store208may be connected to the server200through a network, such as, for example, a network-attached file server. The memory210may include any of volatile memory elements (e.g., random access memory (RAM, such as DRAM, SRAM, SDRAM, etc.)), nonvolatile memory elements (e.g., ROM, hard drive, tape, CDROM, etc.), and combinations thereof. Moreover, the memory210may incorporate electronic, magnetic, optical, and/or other types of storage media. Note that the memory210may have a distributed architecture, where various components are situated remotely from one another but can be accessed by the processor202. The software in memory210may include one or more software programs, each of which includes an ordered listing of executable instructions for implementing logical functions. The software in the memory210includes a suitable Operating System (O/S)214and one or more programs216. 
The operating system214essentially controls the execution of other computer programs, such as the one or more programs216, and provides scheduling, input-output control, file and data management, memory management, and communication control and related services. The one or more programs216may be configured to implement the various processes, algorithms, methods, techniques, etc. described herein. § 3.0 Example User Device Architecture FIG.4is a block diagram of a user device300, which may be used with the cloud-based system100or the like. Specifically, the user device300can form a device used by one of the users102, and this may include common devices such as laptops, smartphones, tablets, netbooks, personal digital assistants, MP3 players, cell phones, e-book readers, IoT devices, servers, desktops, printers, televisions, streaming media devices, and the like. The present disclosure relates to mobile devices, which are one subset of the user device300. The user device300can be a digital device that, in terms of hardware architecture, generally includes a processor302, I/O interfaces304, a network interface306, a data store308, and memory310. It should be appreciated by those of ordinary skill in the art thatFIG.4depicts the user device300in an oversimplified manner, and a practical embodiment may include additional components and suitably configured processing logic to support known or conventional operating features that are not described in detail herein. The components (302,304,306,308, and310) are communicatively coupled via a local interface312. The local interface312can be, for example, but not limited to, one or more buses or other wired or wireless connections, as is known in the art. The local interface312can have additional elements, which are omitted for simplicity, such as controllers, buffers (caches), drivers, repeaters, and receivers, among many others, to enable communications.
Further, the local interface312may include address, control, and/or data connections to enable appropriate communications among the aforementioned components. The processor302is a hardware device for executing software instructions. The processor302can be any custom made or commercially available processor, a CPU, an auxiliary processor among several processors associated with the user device300, a semiconductor-based microprocessor (in the form of a microchip or chipset), or generally any device for executing software instructions. When the user device300is in operation, the processor302is configured to execute software stored within the memory310, to communicate data to and from the memory310, and to generally control operations of the user device300pursuant to the software instructions. In an embodiment, the processor302may include a mobile-optimized processor such as optimized for power consumption and mobile applications. The I/O interfaces304can be used to receive user input from and/or for providing system output. User input can be provided via, for example, a keypad, a touch screen, a scroll ball, a scroll bar, buttons, a barcode scanner, and the like. System output can be provided via a display device such as a Liquid Crystal Display (LCD), touch screen, and the like. The network interface306enables wireless communication to an external access device or network. Any number of suitable wireless data communication protocols, techniques, or methodologies can be supported by the network interface306, including any protocols for wireless communication. The data store308may be used to store data. The data store308may include any of volatile memory elements (e.g., random access memory (RAM, such as DRAM, SRAM, SDRAM, and the like)), nonvolatile memory elements (e.g., ROM, hard drive, tape, CDROM, and the like), and combinations thereof. Moreover, the data store308may incorporate electronic, magnetic, optical, and/or other types of storage media. 
The memory310may include any of volatile memory elements (e.g., random access memory (RAM, such as DRAM, SRAM, SDRAM, etc.)), nonvolatile memory elements (e.g., ROM, hard drive, etc.), and combinations thereof. Moreover, the memory310may incorporate electronic, magnetic, optical, and/or other types of storage media. Note that the memory310may have a distributed architecture, where various components are situated remotely from one another but can be accessed by the processor302. The software in memory310can include one or more software programs, each of which includes an ordered listing of executable instructions for implementing logical functions. In the example ofFIG.4, the software in the memory310includes a suitable operating system314and programs316. The operating system314essentially controls the execution of other computer programs and provides scheduling, input-output control, file and data management, memory management, and communication control and related services. The programs316may include various applications, add-ons, etc. configured to provide end-user functionality with the user device300. For example, the programs316may include, but are not limited to, a web browser, social networking applications, streaming media applications, games, mapping and location applications, electronic mail applications, financial applications, and the like. In a typical example, the end-user typically uses one or more of the programs316along with a network such as the cloud-based system100. § 4.0 User Device Application for Traffic Forwarding and Monitoring FIG.5is a network diagram of the cloud-based system100illustrating an application350on user devices300with users102configured to operate through the cloud-based system100. Different types of user devices300are proliferating, including Bring Your Own Device (BYOD) as well as IT-managed devices.
The conventional approach for a user device300to operate with the cloud-based system100as well as for accessing enterprise resources includes complex policies, VPNs, poor user experience, etc. The application350can automatically forward user traffic with the cloud-based system100as well as ensure that security and access policies are enforced, regardless of device, location, operating system, or application. The application350automatically determines if a user102is looking to access the open Internet104, a SaaS app, or an internal app running in a public cloud, a private cloud, or the data center and routes mobile traffic through the cloud-based system100. The application350can support various cloud services, including ZIA, ZPA, ZDX, etc., allowing best-in-class security with zero trust access to internal apps. As described herein, the application350can also be referred to as a connector application. The application350is configured to auto-route traffic for a seamless user experience. This can be protocol as well as application-specific, and the application350can route traffic with a nearest or best fit enforcement node150. Further, the application350can detect trusted networks, allowed applications, etc. and support secure network access. The application350can also support the enrollment of the user device300prior to accessing applications. The application350can uniquely detect the users102based on fingerprinting the user device300, using criteria like device model, platform, operating system, etc. The application350can support Mobile Device Management (MDM) functions, allowing IT personnel to deploy and manage the user devices300seamlessly. This can also include the automatic installation of client and SSL certificates during enrollment. Finally, the application350provides visibility into device and app usage of the user102of the user device300. The application350supports a secure, lightweight tunnel between the user device300and the cloud-based system100.
For example, the lightweight tunnel can be HTTP-based. With the application350, there is no requirement for PAC files, an IPSec VPN, authentication cookies, or end user102setup. § 5.0 Zero Trust Network Access Using the Cloud-Based System FIG.6is a network diagram of a Zero Trust Network Access (ZTNA) application utilizing the cloud-based system100. For ZTNA, the cloud-based system100can dynamically create a connection through a secure tunnel between endpoints (e.g., users102A,102B) that are remote and an on-premises connector400that is either located in cloud file shares and applications402and/or in an enterprise network420that includes enterprise file shares and applications404. The connection between the cloud-based system100and on-premises connector400is dynamic, on-demand, and orchestrated by the cloud-based system100. A key feature is its security at the edge—there is no need to punch any holes in the existing on-premises firewall. The connector400inside the enterprise (on-premises) “dials out” and connects to the cloud-based system100as if it too were an endpoint. This on-demand dial-out capability and tunneling authenticated traffic back to the enterprise is a key differentiator for ZTNA. Also, this functionality can be implemented in part by the application350on the user device300. The paradigm of virtual private access systems and methods is to give users network access to get to an application and/or file share, not to the entire network. If a user is not authorized to access the application, the user should not be able even to see that it exists, much less access it.
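The per-user, per-application visibility just described (an unauthorized application is not even listed, much less reachable) can be sketched with a simple allow-list check. The policy structure, user names, and application names are illustrative assumptions:

```python
# Illustrative per-user policy: each user sees only an allow-list
# of applications; everything else stays "dark".
POLICY = {
    "alice": {"crm", "file-share"},
    "bob": {"crm"},
}

def visible_apps(user, all_apps):
    """Return only the applications the user's policy allows.

    Apps outside the policy are omitted entirely, so the user
    cannot even see that they exist.
    """
    allowed = POLICY.get(user, set())
    return sorted(a for a in all_apps if a in allowed)

apps = ["crm", "file-share", "payroll"]
print(visible_apps("bob", apps))    # ['crm']  -- payroll is invisible
print(visible_apps("carol", apps))  # []       -- no policy, sees nothing
```

The same check would gate connection setup, not just listing, so an unauthorized user can neither enumerate nor reach the application.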
The virtual private access systems and methods provide an approach to deliver secure access by decoupling applications402,404from the network, instead providing access via a connector400in front of the applications402,404, an application on the user device300, a central authority152to push policy410, and the cloud-based system100to stitch the applications402,404and the software connectors400together, on a per-user, per-application basis. With the virtual private access, users can only see the specific applications402,404allowed by the policy410. Everything else is “invisible” or “dark” to them. Because the virtual private access separates the application from the network, the physical location of the application402,404becomes irrelevant—if applications402,404are located in more than one place, the user is automatically directed to the instance that will give them the best performance. The virtual private access also dramatically reduces configuration complexity, such as policies/firewalls in the data centers. Enterprises can, for example, move applications to Amazon Web Services or Microsoft Azure, and take advantage of the elasticity of the cloud, making private, internal applications behave just like the market-leading enterprise applications. Advantageously, there is no hardware to buy or deploy, because the virtual private access is a service offering to end-users and enterprises.FIG.6can include the ZPA service from Zscaler, Inc. § 6.0 Digital Experience Monitoring FIG.7is a network diagram of the cloud-based system100in an application of digital experience monitoring. Here, the cloud-based system100providing security as a service as well as ZTNA, can also be used to provide real-time, continuous digital experience monitoring, as opposed to conventional approaches (synthetic probes). A key aspect of the architecture of the cloud-based system100is the inline monitoring. This means data is accessible in real-time for individual users from end-to-end.
As described herein, digital experience monitoring can include monitoring, analyzing, and improving the digital user experience. The cloud-based system100connects users102at the locations112,118to the applications402,404, the Internet104, the cloud services106, etc. The inline, end-to-end visibility of all users enables digital experience monitoring. The cloud-based system100can monitor, diagnose, generate alerts, and perform remedial actions with respect to network endpoints, network components, network links, etc. The network endpoints can include servers, virtual machines, containers, storage systems, or anything with an IP address, including the Internet of Things (IoT), cloud, and wireless endpoints. These network endpoints can be monitored directly in combination with a network perspective. Thus, the cloud-based system100provides a unique architecture that can enable digital experience monitoring, network application monitoring, infrastructure component interactions, etc. Of note, these various monitoring aspects require no additional components—the cloud-based system100leverages the existing infrastructure to provide this service. Again, digital experience monitoring includes the capture of data about how end-to-end application availability, latency, and quality appear to the end user from a network perspective. This is limited to network traffic visibility, not visibility within components, which is what application performance monitoring can accomplish. Networked application monitoring provides the speed and overall quality of networked application delivery to the user in support of key business activities. Infrastructure component interactions include a focus on infrastructure components as they interact via the network, as well as the network delivery of services or applications. This includes the ability to provide network path analytics.
The cloud-based system100can enable real-time performance and behaviors for troubleshooting in the current state of the environment, historical performance and behaviors to understand what occurred or what is trending over time, predictive behaviors by leveraging analytics technologies to distill and create actionable items from the large dataset collected across the various data sources, and the like. The cloud-based system100includes the ability to directly ingest any of the following data sources: network device-generated health data; network device-generated traffic data, including flow-based data sources inclusive of NetFlow and IPFIX; raw network packet analysis to identify application types and performance characteristics; HTTP request metrics; etc. The cloud-based system100can operate at 10 gigabits (10G) Ethernet and higher at full line rate and support a rate of 100,000 or more flows per second. The applications402,404can include enterprise applications, Office 365, Salesforce, Skype, Google apps, internal applications, etc. These are critical business applications where user experience is important. The objective here is to collect various data points so that user experience can be quantified for a particular user, at a particular time, for purposes of analyzing the experience as well as improving the experience. In an embodiment, the monitored data can be from different categories, including application-related, network-related, device-related (also can be referred to as endpoint-related), protocol-related, etc. Data can be collected at the application350or the cloud edge to quantify user experience for specific applications, i.e., the application-related and device-related data. The cloud-based system100can further collect the network-related and the protocol-related data (e.g., Domain Name System (DNS) response time).
Application-Related Data: Page Load Time; Page Response Time; Document Object Model (DOM) Load Time; Page error count (#); Page element count by category (#); Redirect count (#); Throughput (bps); Total size (bytes); Total Downloaded bytes; App availability (%).
Network-Related Data: HTTP Request metrics; Server response time; Ping packet loss (%); Ping round trip; Packet loss (%); Latency; Bandwidth; Jitter; Trace Route; DNS lookup trace; GRE/IPSec tunnel monitoring; MTU and bandwidth measurements.
Device-Related Data (Endpoint-Related Data): System details; Central Processing Unit (CPU); Memory (RAM); Network (interfaces); Network (config); Disk; Processes; Applications.
Metrics could be combined. For example, device health can be based on a combination of CPU, memory, etc. Network health could be a combination of Wi-Fi/LAN connection health, latency, etc. Application health could be a combination of response time, page loads, etc. The cloud-based system100can generate service health as a combination of CPU, memory, and the load time of the service while processing a user's request. The network health could be based on the number of network path(s), latency, packet loss, etc. The lightweight connector400can also generate similar metrics for the applications402,404. In an embodiment, the metrics can be collected while a user is accessing specific applications for which user experience monitoring is desired. In another embodiment, the metrics can be enriched by triggering synthetic measurements in the context of an inline transaction by the application350or cloud edge. The metrics can be tagged with metadata (user, time, app, etc.) and sent to a logging and analytics service for aggregation, analysis, and reporting. Further, network administrators can get UEX reports from the cloud-based system100.
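As a minimal illustration of combining such metrics into a health score (a sketch only; the weights below are assumptions for illustration, not values used by the cloud-based system100), device health could be computed as a weighted combination of CPU and memory utilization:

```c
#include <assert.h>

/* Hypothetical device-health score: combines CPU and memory
 * utilization (each 0-100%) into a single value where 100 is
 * fully healthy and 0 is fully loaded. The weights are assumed. */
static double device_health(double cpu_pct, double mem_pct)
{
    double load = 0.6 * cpu_pct + 0.4 * mem_pct; /* assumed weights */
    return 100.0 - load;
}
```

Network health and application health could follow the same pattern over their own inputs (connection health and latency, or response time and page loads).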
Due to the inline nature and the fact the cloud-based system100is an overlay (in-between users and services/applications), the cloud-based system100enables the ability to capture user experience metric data continuously and to log such data historically. As such, a network administrator can have a long-term detailed view of the network and associated user experience. § 7.0 RESTful Framework The present disclosure includes a RESTful framework that is a library for creating a RESTful server, such as in the cloud-based system100. The library provides Application Programming Interfaces (APIs) for app configuration, request handling, and logging. The present disclosure includes an approach using HTTP for communication between modules or services, moving towards a RESTful framework for transferring state and data. That is, the goal is not to decompose a whole monolith but instead to help achieve higher modularity for new requirements and rewrites that are not so time-sensitive by creating microservices, i.e., a hybrid between a monolith and microservices where the monolith is designed for time-sensitive operations while microservices are used for non-time-sensitive operations. The RESTful framework supports microservices and applications alongside a monolith. FIG.8is a diagram of stacks500,502illustrating where a RESTful framework510of the present disclosure operates. Specifically, the stack500illustrates all applications512on top of the RESTful framework510. In the present disclosure, the stack502includes a proxy stack514which can be viewed as the monolith. For example, the proxy stack514can perform some or all of the functions described herein with respect to the enforcement node150. In addition to the proxy stack514, the stack502includes the applications512on top of the RESTful framework510, and both the applications512and the proxy stack514utilize an HTTP parser516, but the proxy stack514is direct without the RESTful framework510.
The HTTP parser516is on top of a network stack518which is on top of an operating system kernel520. FIG.9is a diagram illustrating functionality of the RESTful framework510. The RESTful framework510interacts with the applications512, which can be loaded dynamically as shared libraries, via deserialized queries and bodies as a C structure (or any other language), user configuration, and responses as a C structure. The RESTful framework510can support multiple applications512on the same server200, and uses the path to redirect requests to corresponding applications512. The RESTful framework510supports the following HTTP methods: GET, POST, PUT, and DELETE, and the applications512can choose to run as HTTPS only, HTTP only, or both. A POST request using content length is supported. Chunked encoding is supported; a client needs to specify it in the request header. The RESTful framework510supports decoding gzip-encoded payloads and encoding response payloads. The RESTful framework510supports deserialization of input JSON to C structures and serialization of C structures to JSON output. The RESTful framework510supports IP whitelist-based authentication where the application512can specify the list of IP addresses allowed to access the application endpoints. In an embodiment, the RESTful framework510can be used for the applications512to upload metrics periodically to a central hub. § 7.1 RESTful Framework Modes The RESTful framework510can be deployed in two modes. One is a standalone mode and the other is an embedded mode. In standalone mode, the instance or binary runs the RESTful stack500as the primary stack, whereas in embedded mode the stack502has to co-exist with other stacks, such as the proxy stack514. In the embedded mode, the RESTful framework510can be run along with the functions described herein for the enforcement node150as an embedded service. The RESTful framework510has a port enabled for listening to HTTP or HTTPS data.
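To make the path-based redirection of requests to applications concrete, the following is a sketch of a prefix-dispatch table (the structure names and table layout are our assumptions for illustration, not the framework's actual API):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical routing table entry: the framework picks the
 * application whose registered path prefix matches the request. */
struct app_route {
    const char *prefix;   /* e.g., "/metrics" */
    int app_id;           /* which application handles it */
};

static int route_request(const struct app_route *table, size_t n,
                         const char *path)
{
    for (size_t i = 0; i < n; i++) {
        if (strncmp(path, table[i].prefix,
                    strlen(table[i].prefix)) == 0)
            return table[i].app_id;
    }
    return -1; /* no application registered for this path */
}

/* Demo table, for illustration only. */
static int demo_route(const char *path)
{
    static const struct app_route table[] = {
        { "/metrics", 1 },
        { "/policy", 2 },
    };
    return route_request(table, 2, path);
}
```

First-match wins in this sketch, so more specific prefixes would be listed before shorter ones.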
Each application512implements its own input/output controls (IOCTLs) just like HTTP endpoints. The application512can define IOCTLs and put the config in the config.json file. Application-specific IOCTLs can be sent using HTTP as well. § 7.2 SNI Based Routing FIG.10is a diagram of Server Name Indication (SNI)-based routing with the RESTful framework510. Here, an inbound request530, such as through the cloud-based system100and via a tunnel540, can be routed by the RESTful framework510to the appropriate application512using SNI. This allows new applications512to be seamlessly integrated into existing services in the cloud-based system100. There are no requirements for opening new ports, and traffic is routed to the appropriate applications512during the SSL handshake based on the domain name. § 7.3 Load Balancing FIG.11is a diagram of load balancing multiple nodes150implementing the RESTful framework510with a load balancer550. The load balancer550can be part of the cloud-based system100and the RESTful framework510can be implemented on the enforcement nodes150in the cloud-based system100. The load balancer550can monitor the applications512and the RESTful framework510on the nodes150using Internet Control Message Protocol (ICMP), HTTP, or the like. In an embodiment, HTTP is utilized to directly monitor the applications512for load balancing. § 7.4 Applications in the RESTful Framework The RESTful framework510is developed similarly to the microservices concept where each application512is treated as a single service and APIs are defined. The applications512use an HTTP connection in the RESTful framework510, but the applications512can be configured to switch to different protocols. Each endpoint defined in config.json must be handled by a callback method. A query and JSON request body must have a corresponding deserializer. The JSON response body must have a corresponding serializer.
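As a sketch of the C-structure-to-JSON serialization direction (the structure and field names here are hypothetical; the framework's own serializers would handle its real types), a response body could be produced as follows:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Illustrative metric structure and a hand-written serializer. */
struct counter_metric {
    const char *name;
    long value;
};

static int metric_to_json(const struct counter_metric *m,
                          char *buf, size_t n)
{
    return snprintf(buf, n, "{\"name\":\"%s\",\"value\":%ld}",
                    m->name, m->value);
}

/* Demo helper, for illustration only. */
static const char *demo_json(void)
{
    static char buf[64];
    struct counter_metric m = { "http_errors", 7 };
    metric_to_json(&m, buf, sizeof buf);
    return buf;
}
```

A corresponding deserializer would walk the JSON fields back into the same structure.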
§ 7.5 Metric Collection Using the RESTful Framework FIG.12is a diagram of applications512providing metrics to a cloud metric store600utilizing the RESTful framework510. For example, assume the cloud-based system100is utilized as a security system as described herein, and the cloud-based system100is further configured to perform user experience monitoring also as described herein. The RESTful framework510is useful to interoperate with the cloud-based system for uploading metrics, such as for user experience monitoring. Of course, this can be for any application512. The applications512can define their own metrics such as for monitoring, debugging, etc., and severity levels can also be attached to metrics. Also, the metrics can be grouped by the devices300the application512interacts with. The RESTful framework510automatically sends these metrics periodically to a centralized cloud store600, such as via a metrics uploader602. The cloud store600can be in the storage cluster156. The metrics can be counters that are very useful for the debugging and analysis of an application512. The RESTful framework510provides APIs to send metrics (counters whose values have changed) periodically to a central hub. In an embodiment, the RESTful framework510keeps track of the previous value (the value sent last time) and current value of every counter, metrics can be sent at some interval, and the RESTful framework510attempts to find counters whose values have changed for each severity and device pair. If there are one or more such counters, the RESTful framework510will send a metrics JSON for this pair. § 7.6 CASB and DLP Support With the RESTful Framework In an example embodiment, the cloud-based system100can provide Cloud Access Security Broker (CASB) and/or Data Loss Prevention (DLP) functionality in addition to the other security functions.
Adding CASB and/or DLP functionality to an existing monolith is a challenge; the primary challenge is communication between many components, and creating a new custom protocol or interface would have added complexity. A RESTful interface was desired, but using a standalone server just for the API is not practical. The RESTful framework510can be embedded in the nodes150, i.e., it is a module within existing components and is easier to develop and maintain, providing full JSON-based communication interfaces. § 7.7 RESTful Framework Process FIG.13is a flowchart of a RESTful framework process650. The process650can be implemented in one of the nodes150, as a method, and as instructions in a non-transitory computer-readable medium. Again, the RESTful framework510is a library for creating a RESTful server on the node150. The RESTful framework510leverages fast HTTP parsing logic that provides large performance gains over traditional web servers. The process650includes operating a first cloud service that is implemented as a monolith system (step651); operating a RESTful framework (Representational State Transfer web service) embedded in the cloud node (step652); and operating one or more applications for one or more cloud services utilizing the RESTful framework, wherein the one or more applications are microservices (step653). The RESTful framework can utilize Hypertext Transfer Protocol (HTTP) methods. The first cloud service can include inline monitoring for security, and the one or more cloud services can include user experience monitoring. The first cloud service can require less latency than the one or more cloud services. The RESTful framework can be configured for Server Name Indication (SNI) routing with the one or more applications. The operating the one or more applications can be based on a load balancer monitoring the one or more applications.
The one or more applications can include metric collection where the RESTful framework is configured to update the metrics based on changes. The RESTful framework can utilize a same network and operating stack as the monolith system. § 8.0 Telemetry and Policy Gateway (TPG) FIG.14is a network diagram of a Telemetry and Policy Gateway (TPG)700. As described herein, digital experience monitoring can be a service offered via the cloud-based system100for monitoring user performance. The TPGs700are a primary point of contact for the connector application350for the purposes of downloading policies and pushing collected statistics. The challenge here is to scale both vertically as well as horizontally since there can be millions or more applications350communicating with the TPG700. The TPG700can run as an application in servers200utilizing the RESTful framework510, which simplifies the communication design between the applications350. Of course, the communication with the application350is one example, such as for configuration updates and metric publications; other examples are contemplated. Highlights of the TPG700include optimizing the updates of the latest policies/configuration to the devices300, such as by managing version numbers. This way, the connected devices300will download policies only when there is a change in a policy for that device300. Also, there is a reduction of the load on upstream data/config stores by caching policies and customizing the policy on a per-device basis. The TPG700acts as not just a cache but has the ability to customize the configuration on a per-device basis, thus relieving stress on the upstream data/config stores. The TPG700has the ability to add geo location information to the metrics uploaded by the devices300based on the data transmitted. The TPG700identifies the device location by looking at location-identifiable parameters such as the IP address or latitude/longitude and publishes that data into the data store.
The TPG700can aggregate data to optimize the upload and the storage of the data into any third-party data store. So when multiple devices upload their data into the TPG, it can aggregate data across multiple devices and push that to any data store, helping save compute cycles on expensive data stores. Also, the TPG700can be a stateless and horizontally scaling server—as this is a stateless server, it is possible to seamlessly add and remove an instance into the cluster. The management of a transaction state is managed by the entities talking to the TPG700. The TPG700itself is multi-tenant and has a scope of a single cloud. The connector applications350use RESTful endpoints to push data (metrics) and request policies. FIG.15is a diagram illustrating authentication between a user device300and the TPG700. In an embodiment, the TPG700can use digest authentication to authenticate with a connector application350. The connector application350sends the device credentials (Device ID and password) in the digest authentication HTTP header. The following illustrates an example exchange.
Application350 to TPG700:
GET /tpg/ HTTP/1.1
Host: 10.66.106.19
User-Agent: curl/7.50.3
Accept: */*
TPG700 to application350:
HTTP/1.1 401 Unauthorized
Server: Zscaler
Cache-control: no-cache
Content-Length: 0
WWW-Authenticate: Basic
Application350 to TPG700:
GET /tpg/policy HTTP/1.1
Authorization: Basic ZGlkPTcwMTYxJnVpZD02NzY5NyZjbG91ZD16c2NhbGVydHdvLm5ldDoxODl3NjQ1MjczNDc2MzU5
User-Agent: PostmanRuntime/7.16.3
Accept: */*
Cache-Control: no-cache
Postman-Token: 75c06a97-3802-4d6b-8f4c-25d2004f4e82
Host: 10.66.106.10
Accept-Encoding: gzip, deflate
Content-Length: 0
Connection: keep-alive
TPG700 to application350:
HTTP/1.1 200 OK
Server: Zscaler
Content-Length: 0
FIG.16is a diagram illustrating communication between a user device300and the TPG700. The communication between the application350and the TPG700will be through REST endpoints. The data will be exchanged using JSON format.
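The Authorization header in the exchange above carries base64-encoded credentials. For reference, standard RFC 4648 base64 encoding can be sketched as follows (this is generic encoding logic and a made-up "deviceid:password" credential, not code or credentials from the TPG700):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

static const char b64tab[] =
    "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"
    "0123456789+/";

/* Encodes n input bytes; out must hold 4*((n+2)/3)+1 chars. */
static size_t b64_encode(const unsigned char *in, size_t n, char *out)
{
    size_t o = 0;
    for (size_t i = 0; i < n; i += 3) {
        unsigned v = (unsigned)in[i] << 16;
        if (i + 1 < n) v |= (unsigned)in[i + 1] << 8;
        if (i + 2 < n) v |= in[i + 2];
        out[o++] = b64tab[(v >> 18) & 63];
        out[o++] = b64tab[(v >> 12) & 63];
        out[o++] = (i + 1 < n) ? b64tab[(v >> 6) & 63] : '=';
        out[o++] = (i + 2 < n) ? b64tab[v & 63] : '=';
    }
    out[o] = '\0';
    return o;
}

/* Demo with a hypothetical "did:pw" credential. */
static const char *demo_basic_credential(void)
{
    static char buf[32];
    b64_encode((const unsigned char *)"did:pw", 6, buf);
    return buf;
}
```

The encoded string is then sent as "Authorization: Basic <encoded>", as in the second request above.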
The TPG700can connect to the central authority node152, such as using a proprietary format. The TPG700can include various RESTful endpoints, and they all require basic authentication. The endpoints can include a GET/policy, a POST/metrics, and a POST/updates. The GET/policy endpoint can provide policy downloads for UPM policy from a central authority node152. The POST/metrics endpoint can accept metrics payload from the applications350, populate location info and user info, and then push it to a data store. § 8.1 TPG Request and Caching The TPG700caches objects it fetches from the central authority node152, such as user configuration (User Performance Management (UPM)) and configuration for each tenant. The following provides an example: https://<TPG Service IP>/tpg/policy?version=<version number>&locid=<location id>
Response codes and comments:
200 OK: Policy download is successful.
204 No Content: No new policy is available that is newer than the requested version. Only returned when the policy version requested is non-zero.
429 Too Many Requests: This allows us to do flow control. The server is busy processing requests. ZAPP should try again later after the “Retry-After” seconds sent in the response header.
401 Unauthorized: Authentication is required. Send a valid authentication header. The response contains the realm for the authentication.
403 Forbidden: The credentials didn't match.
400 Bad Request: Invalid request. Make sure the query parameters are correct.
500 Server Error: Error at the server end. Need to raise escalation for such errors.
A version number can be used to control the versioning of configurations. The version number is used to avoid downloading policies when there are no changes. Clients can extract the version number from the downloaded config and send the same version number on the next request. A client can always request version 0 if it is requesting the config for the first time.
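A minimal sketch of that version check, using the status values from the table above (the function shape is our assumption; the real TPG700 also authenticates and consults its cache):

```c
#include <assert.h>

/* Returns the HTTP status for a policy request, given the version
 * the client sent (0 on a first request) and the version of the
 * currently cached policy for that client. */
static int policy_response_code(int requested_version,
                                int cached_version)
{
    if (requested_version == 0)
        return 200; /* first request: always return the policy */
    if (requested_version == cached_version)
        return 204; /* nothing newer than what the client has */
    return 200;     /* a new policy version is available */
}
```

The client would then store the version number from a 200 response and echo it on its next request.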
§ 8.2 Flow For Policy FIG.17is a flowchart illustrating a process720of policy flow associated with the TPG700. A policy request is sent from a connector application (step721), and the TPG checks if the user is in the cache and is valid (step722). If not, the TPG can fetch user info from a central authority and cache it (step723). The TPG then authenticates the user (step724). If the authentication is not successful (step725), and if the device was not found (step726), the process720returns to step723; otherwise, if the device was found (step726), the process720concludes and sends a code403(step727). If the authentication is successful (step725), the process720includes checking if the configuration in the cache is valid (step728), and, if so, checks if the requested version is the same as the cached version (step729), and, if so, terminates and sends a code204(step730). If the configuration in the cache is not valid (step728), the TPG fetches a UPM configuration for the organization from the central authority and caches it (step731). If the download is successful (step732), the process720checks if the configuration has changed (step733), and, if not, the process720terminates and sends a code204(step733). If the download is not successful (step732), the process720checks if there is a configuration in the cache (step734), and, if not, the process720terminates and sends a code500(step735). If there is a configuration in the cache (step734) or if the configuration has changed (step733), the process720includes building a device-specific UPM (step736), and the process720terminates and sends code200along with the configuration (step737). § 8.3 TPG Process FIG.18is a flowchart of a process750implemented by a TPG700. The process750can be a computer-implemented method, implemented as instructions stored in a computer-readable medium and executed by one or more processors, or by an apparatus such as the enforcement node150or the server200or the TPG700.
The process750includes connecting to and authenticating a plurality of user devices (step751); utilizing a plurality of RESTful (Representational State Transfer web service) endpoints to communicate with the plurality of user devices (step752); providing any of policy and configuration to the plurality of user devices utilizing version number via a RESTful endpoint (step753); caching the any of policy and configuration for each device of the plurality of user devices (step754); and receiving metrics based on measurements at the plurality of user devices according to corresponding policy and configuration, via a RESTful endpoint (step755). The process750can further include obtaining the any of policy and configuration from a central authority associated with the cloud-based system. The process750can further include publishing the received metrics to a cloud metric store associated with the cloud-based system. The process750can further include aggregating received metrics from some or all of the plurality of devices; and publishing the aggregated received metrics to a data store. The process750can further include adding geo location information to the received metrics based on location identifiable parameters. The cloud-based system can include the TPG node and one or more additional TPG nodes, each TPG node is stateless with respect to one another. The received metrics can be associated with user experience monitoring. § 9.0 Device Election For Random Population Selection of Remote Devices The user experience monitoring can utilize probe traffic for measuring performance. The probe traffic can go direct to destinations or through the cloud-based system100. For example, the use of such probes is described in commonly-assigned U.S. patent application Ser. No. 17/188,007, filed Mar. 1, 2021, and entitled “Proactively detecting failure points in a network,” and in U.S. patent application Ser. No. 17/235,267, filed Apr. 
20, 2021, and entitled “Cached web probes for monitoring user experience,” the contents of each are incorporated by reference in their entirety. When probes are sent through the cloud-based system100, the nodes150can cache probes and optimize the actual number of probes sent to the destination. Since the cloud-based system100acts as the man in the middle, it is possible to throttle the number of outbound probes. Without the throttling, the probes can flood a given destination, causing the destination to blacklist the sender, such as the cloud-based system100. Similar problems exist even when the user device300does not go through the cloud-based system100. In such cases, there is a need to have the connector applications350throttle the number of probes to avoid flooding a given destination. Here are some example scenarios where probes may go direct to destinations instead of through the cloud-based system100. A user experience monitoring customer does not use the cloud-based system100for inline monitoring, or the customer uses the cloud-based system100but has configured bypass for certain traffic. A tenant (organization) can have thousands of users102, and the cloud-based system100can have millions of users102for the user experience monitoring. The cloud-based system100can cache probes to reduce the load. Further, the present disclosure includes an election protocol where a subset of devices300are used. This reduces the footprint of devices300designated for a task, reduces CPU and memory on end devices300as they do not have to always perform the designated tasks, and the like. This further ensures fairness in the selection of devices300considering various parameters such as location, organization, device-type, department, group, etc., so that the burden of performing the tasks is evenly distributed. As fairness is ensured, this gets a diverse set of devices300giving good data to base decisions on.
This further protects the IP address reputation of devices300as they do not send as many probes. § 9.1 Election Mechanism FIG.19is a network diagram of an example election of user devices300for performing measurements. This example includes 24 user devices300, connected to a load balancer550and three TPGs700. Each TPG700can do an election independently as the TPGs700are stateless, so there is no shared state among TPGs. The TPGs700can be connected to 10, 5, and 9 devices respectively, and elect 2, 1, and 2 devices, respectively. The election can include each monitor (TPG700) defining a selection percentage. This value defines the percentage of devices from a given IP that should run this monitor in direct mode. The election is done at the monitor level. For each unique source IP the TPG700sees, it carries out an election based on the source IP/subnet and monitor ID combination. The TPG700maintains a state for each monitor ID and source IP/subnet combination and uses it to carry out the election. The election can be valid for a time period Tm. This time period Tm should be larger than the frequency of probes that will be carried out. For example, if probes are sent every 5 minutes, then Tm should be greater than 5. The TPGs700can adhere to the same value of Tm, and the start of the time period can be aligned on all the TPGs700. This can be achieved by using GMT epoch time on each TPG to find the start of the Tm boundary. The election is random, which provides a good deal of fairness. The process does not guarantee that the exact desired selection percentage will be achieved. The algorithm makes a best effort to achieve the desired configured selection percentage. The election information is sent as part of the config download to the application350. The election is sent as a separate object in the config.
The following pseudocode describes the probability calculation:
Election State {
    Current_time_period;
    Total_devices_seen;
    Total_devices_selected;
    Current_selection_percentage;
}
Do_adaptive_election(desired_percentage, source_ip, monitor_id)
    state = get_election_state(source_ip, monitor_id)
    If expired(state->current_time_period)
        Total_devices_seen = 0
        Total_devices_selected = 0
        Current_selection_percentage = 0
    Selected = true
    If state->total_devices_seen
        Adaptive_probability = pow(desired_percentage /
            state->current_selection_percentage, 4) *
            desired_percentage
        If random() % 100 > adaptive_probability
            Selected = false
    Return selected
FIG.20is a flowchart of a TPG election process780. A device300connects to a TPG700(step781), which looks up policy in its cache (step782). If the policy is not present (step783), the TPG700gets the configuration from a central authority (step784). If the policy is present (step783), the process780checks if there is a change (step785), and, if not, and if the election time period has not changed (step786), the process780terminates and returns a code204(step787). If the election time period has changed (step786), the process780clears the election data from the device state and updates the election time (step788). After the steps784and788, and if there is a change in the policy (step785), the process780processes the configuration for the device (step789). For each monitor (TPG700) (step790), the process780gets the last election (step791) and checks if the device was present (step792), and, if not, the process780performs an adaptive election and stores the result (step793). The process then writes the election results (step794). If the device was present (step792), and after step794, the loop checks if all monitors are done (step796) and, if so, terminates, returning code200(step797). § 9.2 Election Process FIG.21is a flowchart of a process800for electing devices.
The process800can be a computer-implemented method, implemented as instructions stored in a computer-readable medium and executed by one or more processors, or by an apparatus such as the enforcement node150or the server200or the TPG700. The process800includes connecting to and authenticating a set of user devices of a plurality of user devices (step801); determining an election of a subset of user devices of the set of user devices, wherein the election determines which user devices perform metric collection (step802); providing any of policy and configuration to the plurality of user devices including election information (step803); and receiving metrics based on measurements at the subset of user devices according to corresponding policy and configuration (step804). The election can be for a first time period, and the process800can further include performing a second election for a second time period. The first time period and the second time period are larger than a frequency of the measurements. The election can be based on a combination of source Internet Protocol (IP) address and monitor identifier of each device. The election can be based on a combination of location, organization, device-type, department, and group. The election can be based on a desired percentage of user devices. The cloud-based system can include the node and one or more additional nodes, each node is stateless with respect to one another and performs its election independently, and each node is time synchronized with one another. § 10.0 Geo Location With respect to published metrics from user devices, there is a need to associate a geographical (geo) location therewith. The present disclosure includes a process for determining the nearest city a device is located in to apply location-specific policies. This helps map a city to the device with a minimal set of data points.
For this, the present disclosure includes a method of flattening the Earth and using it as a grid to compute the nearest city for a given location. That is, there is a need to figure out the nearest city for a given geo coordinate, latitude and longitude. This approach should be accurate and scalable. With employees working from anywhere, it is imperative to have policy based on the physical/geo location of the user 102. Security threats are increasingly tailored to users' physical locations to target them with more success. For example, a threat actor might utilize some local event to run a phishing attack. Different geo regions have different service providers, which makes knowing the user's physical location even more important. Service providers publish their outage alerts based on locality/zip code. User experience monitoring will benefit from this, as it can quickly raise an alert if it notices degradation for traffic coming from a particular geo location. Traffic can be redirected to the nearest data center based on the user's physical location. Devices can provide Global Positioning Satellite (GPS) latitude and longitude, but these need to be converted into human-readable addresses. Based on a human-readable address, different policies can be defined for different geographical areas. Based on a human-readable address, user experience monitoring can monitor for any possible networking issues for a given geo location. There are paid databases for such efforts. These approaches include creating a custom function in Structured Query Language (SQL) to calculate the distance between two points, and selecting the entry with the minimum distance from a given point. This approach is O(n) complexity and scales through sharding. The present disclosure includes an efficient model with a highly scalable algorithm. Given GPS longitude and latitude, getting the exact address of a user requires too many resources with database approaches. The present disclosure proposes finding the nearest city from GPS data.
Most policies and monitoring are done at the city level and serve the required purpose. This will make the system very fast, i.e., O(log(n)).

§ 10.1 Cells

The proposed geo location approach flattens the Earth and divides it into cells; call this a geo data source. This data source can be hosted in a database, e.g., PSQL, and indexes can be created based on latitude and longitude. A RESTful application 512, call it geo locate, can act as an API endpoint. When the geo locate app starts, it can load the full database in memory. FIG. 22 is a diagram of a tree 820 structure used to represent the database of Earth as cells. In the tree 820, each leaf node will represent a cell of the flattened Earth. These cells can be equal size or different sizes. For equal-size cells, each cell can represent one or x minutes of latitude and longitude, and each cell will be a square, i.e., an equal delta for latitude and longitude. This approach has easy cell management, but some cells might not have an entry. For variable-sized cells, cells are dynamic and the same number of cities is included in every cell. This approach has no empty cells, but is more complex. Both approaches are efficient (O(log(n))) and scale by adding additional RESTful nodes. The following describes the fixed or equal-size cells approach. The whole Earth is divided into 360 rows of latitude (0.5 degree units) and 360 columns of longitude (1.0 degree units). This will give 360*360=129600 cells. There is a data structure for city information, such as

typedef struct city_info {
    char *name;
    char *state;
    char *country;
    double latitude;
    double longitude;
    struct city_info *next;
} city_info_t;
typedef city_info_t* city_info_ptr_t;

A two-dimensional array of city_info_ptr_t will behave as a hash, geo_hash henceforth, where the first dimension will be latitude and the second will be longitude. A valid latitude will be mapped to a geo_hash index as: (int)((x*2)+180) % 360, i.e., {−90, −75.5, 0, 80, 87} => {0, 29, 180, 340, 354}.
90 degrees latitude must be normalized to 89.9 before getting the index; 90 degrees North is not 90 degrees South. A valid longitude will be mapped to a geo_hash index as: ((int)(x+180)) % 360, i.e., {−180, −150, 0, 175, 179, 180} => {0, 30, 180, 355, 359, 0}. There is no need to normalize 180 degrees East, as 180 degrees East is the same as 180 degrees West. When a RESTful server starts, it will build the hash for all cities from a database: open a connection to the database, and get all entries in the database. For each entry in the database, it will get the indexes for latitude and longitude and insert the city info at that index. If there is more than one city at that index, additional cities will be inserted at the tail of a city list, i.e., the 'next' pointer of geo_city_info.

§ 10.2 Getting Nearest City

When there is an API request to get the nearest city for a given latitude and longitude: if the latitude is not valid or the longitude is not valid, then return; otherwise, get the indexes for latitude and longitude, and set a level to zero. Until at least one city is found, check cities at the current level, updating the return value with any city whose distance is less than the current distance. The level is updated, and the current level is processed for a neighborhood search. FIG. 23 is a visualization of a current level search. The Earth is not flat, so the above approach has the potential to give wrong results for some sparsely populated areas like the Pacific Ocean (somewhere in the middle of nowhere) because of the following. Unit latitudes are separated by roughly the same distance: each degree of latitude is approximately 69 miles apart. This distance is 68.703 miles at the equator, 68.94 miles at the Tropic of Cancer and Tropic of Capricorn, and 69.407 miles at the poles. The max difference is ~0.7 miles. For this calculation, ignore this difference and take 69 miles as the standard distance for a degree of latitude.
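The two index mappings above can be written out directly (a minimal Python sketch of the stated formulas, using C-style truncation toward zero):

```python
def lat_index(lat):
    # 90 N must be normalized to 89.9 before indexing, since
    # 90 degrees North is not 90 degrees South.
    if lat == 90:
        lat = 89.9
    return int(lat * 2 + 180) % 360

def lng_index(lng):
    # 180 E needs no normalization: it is the same meridian as 180 W,
    # and the modulo folds it onto index 0.
    return int(lng + 180) % 360
```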
Unit longitudes are separated by varying distances: each degree of longitude is 69.172 miles apart at the equator. This distance is 53 miles at 40 degrees North or South. This distance is zero at the poles. This distance gradually decreases from the equator to the poles. FIG. 24 is a map visually illustrating a globe and associated cells. Considering the above two points, the search cannot simply move one unit degree in each direction when moving to the next level. The varying distance of longitude must be factored in while moving up a level. Think of it as increasing the radius of a circle to get more cities on the radar. FIG. 25 is a visualization of an accurate traversal. Considering the above accuracy factors, either trace a different number of cells for each row while tracing, or come up with a better solution to avoid these calculations. Consider the following: while building the data structure, it is possible to fix the area of a cell, say unit_cell_area. This will give a fixed number of rows, say num_rows, for latitudes. For every latitude row, it is possible to calculate how many cells should be there to cover all longitudes. That is, calculate the circumference of the Earth for that latitude row and divide that by square_root(unit_cell_area). This will give the number of cells for a particular latitude row. FIG. 26 is a visualization of a modified accuracy approach.

§ 10.3 Data Structure

The data structure can include

typedef struct geo_cell_info {
    int cell_count;
    geo_city_info_ptr_t *city_cell;
} geo_cell_info_t;
typedef geo_cell_info_t* geo_cell_info_ptr_t;

The following describes a process to build the data structure.

1) Define a constant cell_width, i.e., the square cell width.
   Cell area will be (cell_width * cell_width).
2) Set nums_rows = CEIL(180.0 / cell_width)
3) Initialize geo_cell_info_t geo_data[nums_rows]
4) Get all cities with geo coordinates from the database, or read them from a flat file into city_list
5) For every city in city_list:
   a) Set city_row = FLOOR((latitude + 90) / cell_width)
   b) If geo_data[city_row].cell_count is 0:
      i) Set cell_count = CEIL(circumference at latitude / cell_width)
      ii) Set geo_data[city_row].cell_count = cell_count
      iii) Set geo_data[city_row].city_cell = new geo_city_info_ptr_t[cell_count]
   c) Set city_col = ROUND(((longitude + 180) / 360) * cell_count)
   d) Append the city to the list at geo_data[city_row].city_cell[city_col]
6) END

§ 10.4 Find City Process

The following describes a process to find a city for a given latitude and longitude.

1) Set row_num = FLOOR((lat + 90) / cell_width)
2) Set col_num = ROUND(((lng + 180) / 360) * geo_data[row_num].cell_count)
3) Set level = 0, num_cell = 0
4) While at least one city is found:
   a) For row_num − level to row_num + level:
      i) For col_num − num_cell to col_num + num_cell:
         1) Calculate the distance to each city in the cell
         2) Set city to the current city if its distance is less
   b) level = √2 * level
   c) num_cell = √2 * num_cell
5) Return city

An example SQL query to get the nearest city from the database—either one of the following will work.
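The per-row cell count (the circumference of the Earth at that row's latitude divided by the cell width) can be sketched as follows. The circumference constant and the row-center convention are assumptions made for illustration.

```python
import math

EARTH_CIRCUMFERENCE_MI = 24901.0  # equatorial circumference, approximate

def cells_in_row(row, cell_width_deg):
    # Latitude at the center of this row, measured up from -90 degrees.
    lat = -90.0 + (row + 0.5) * cell_width_deg
    # The circle of latitude shrinks with cos(latitude).
    circumference = EARTH_CIRCUMFERENCE_MI * math.cos(math.radians(lat))
    # Miles covered by one cell of cell_width_deg degrees at the equator.
    cell_width_mi = EARTH_CIRCUMFERENCE_MI * cell_width_deg / 360.0
    return max(1, math.ceil(abs(circumference) / cell_width_mi))
```

With 1-degree cells, a row at the equator gets the full 360 cells while rows near the poles get only a handful, which is exactly the "no empty cells" property the variable-size approach is after.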
SELECT city_name, region_name, country_code, latitude, longitude, calculate_distance(latitude, longitude, 37.785144, -122.523189, 'M') AS distance FROM world_cities ORDER BY distance LIMIT 1;

SELECT city_name, region_name, country_code, latitude, longitude, distance FROM find_nearest_city(37.785144, -122.523189, 'M');

An example API endpoint includes

Method: POST
Payload Type: JSON
Payload Example: {"latitude": 37.785144, "longitude": -122.523189}
Response Example:
{
    "name": "San Francisco",
    "state": "California",
    "distance": 5.73326,
    "latitude": 37.7749,
    "exe_time": 1.6e-05,
    "longitude": -122.419,
    "country": "US"
}

Note: exe_time is the code execution time to find the nearest city from the given latitude-longitude; distance is in miles. Geo tagging of devices has taken a central place in enforcing policies, which can enhance security and user experience. Attacks are being generated based on the place a device has moved to. For example, users who visit Las Vegas are more susceptible to seeing gambling ads containing malicious code. Geo tagging helps the cloud-based system 100 detect the location of a user and enforce policy based on that. Geo tagging can also improve user experience. Based on the user's location, it is possible to decide which monitors should be enabled. This can save bandwidth and allows the user to have a seamless browsing experience. This approach has a memory requirement of about ~250 MB and is 1250× faster than the paid database approach.

§ 10.5 Geo Location Process

FIG. 27 is a flowchart of a process 850 for geo location determination. The process 850 can be a computer-implemented method, implemented as instructions stored in a computer-readable medium and executed by one or more processors, or by an apparatus such as the enforcement node 150 or the server 200 or the TPG 700.
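The SQL above relies on a calculate_distance helper. A plausible implementation is the haversine great-circle formula; the sketch below is illustrative and stands in for the unspecified database function.

```python
import math

def calculate_distance(lat1, lng1, lat2, lng2, unit="M"):
    # Haversine great-circle distance: "M" = miles, "K" = kilometers.
    # (Assumed behavior for the SQL helper of the same name.)
    radius = 3958.8 if unit == "M" else 6371.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = p2 - p1
    dlmb = math.radians(lng2 - lng1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * radius * math.asin(math.sqrt(a))
```

Applied to the example payload coordinates and San Francisco's coordinates from the example response, this formula yields a distance of roughly 5.7 miles, consistent with the "distance": 5.73326 field shown above.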
The process 850 includes loading a data structure into memory, wherein the data structure includes cities mapped to cells where the cells cover all of the Earth (step 851); receiving a call with a given latitude and longitude of a user device (step 852); finding a closest city to the given latitude and longitude utilizing the data structure (step 853); and providing the closest city in response to the call (step 854). The process 850 can include utilizing the closest city for policy in the cloud-based system for the user device. The process 850 can include redirecting traffic from the user device to a specific data center based on the closest city. The process 850 can include appending the closest city to metrics from the user device. The cells can be one of fixed size and variable size. The finding can include finding a current cell with cities, determining distances, and setting each city with a minimum distance to the given latitude and longitude as a current city. The finding can include starting at a cell as an input point and tracing surrounding cells until the closest city is found.

§ 11.0 Conclusion

It will be appreciated that some embodiments described herein may include one or more generic or specialized processors (“one or more processors”) such as microprocessors; Central Processing Units (CPUs); Digital Signal Processors (DSPs); customized processors such as Network Processors (NPs) or Network Processing Units (NPUs), Graphics Processing Units (GPUs), or the like; Field Programmable Gate Arrays (FPGAs); and the like, along with unique stored program instructions (including both software and firmware) for control thereof to implement, in conjunction with certain non-processor circuits, some, most, or all of the functions of the methods and/or systems described herein.
Alternatively, some or all functions may be implemented by a state machine that has no stored program instructions, or in one or more Application-Specific Integrated Circuits (ASICs), in which each function or some combinations of certain of the functions are implemented as custom logic or circuitry. Of course, a combination of the aforementioned approaches may be used. For some of the embodiments described herein, a corresponding device in hardware, and optionally with software, firmware, or a combination thereof, can be referred to as “circuitry configured or adapted to,” “logic configured or adapted to,” etc., to perform a set of operations, steps, methods, processes, algorithms, functions, techniques, etc. on digital and/or analog signals as described herein for the various embodiments. Moreover, some embodiments may include a non-transitory computer-readable storage medium having computer-readable code stored thereon for programming a computer, server, appliance, device, processor, circuit, etc., each of which may include a processor to perform functions as described and claimed herein. Examples of such computer-readable storage mediums include, but are not limited to, a hard disk, an optical storage device, a magnetic storage device, a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), Flash memory, and the like. When stored in the non-transitory computer-readable medium, software can include instructions executable by a processor or device (e.g., any type of programmable circuitry or logic) that, in response to such execution, cause the processor or the device to perform a set of operations, steps, methods, processes, algorithms, functions, techniques, etc. as described herein for the various embodiments.
The foregoing sections include headers for various embodiments and those skilled in the art will appreciate these various embodiments may be used in combination with one another as well as individually. Although the present disclosure has been illustrated and described herein with reference to preferred embodiments and specific examples thereof, it will be readily apparent to those of ordinary skill in the art that other embodiments and examples may perform similar functions and/or achieve like results. All such equivalent embodiments and examples are within the spirit and scope of the present disclosure, are contemplated thereby, and are intended to be covered by the following claims. Moreover, it is noted that the various elements described herein can be used in any and all combinations with each other. | 81,725 |
11863392 | DETAILED DESCRIPTION The following description and drawings are illustrative and are not to be construed as limiting. Numerous specific details are described to provide a thorough understanding. However, in certain instances, well known or conventional details are not described in order to avoid obscuring the description. References to one or an embodiment in the present disclosure are not necessarily references to the same embodiment; and, such references mean at least one. Reference in this specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the disclosure. The appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Moreover, various features are described which may be exhibited by some embodiments and not by others. Similarly, various requirements are described which may be requirements for some embodiments but not other embodiments. At least some embodiments below relate to evaluating data to determine expected and/or actual compliance of computing devices with new policies. At least some other embodiments below relate to evaluating data to determine risks associated with operation of computing devices (e.g., prior to deployment of a new policy to the computing devices). Various non-limiting embodiments regarding evaluating data to determine risks are described in the section below titled “Pre-Deployment Evaluation Server Capabilities Based on Risk Assessment.” Determining compliance for computing devices when deploying a new policy presents several technical problems. 
In an enterprise, the large scale of the number of devices used by employees and/or other persons associated with the enterprise presents a significant technical problem in managing the devices. In many cases, numerous differences between various types of devices being managed by the enterprise make it difficult to implement policy changes on the devices. For example, an enterprise administrator that wishes to make a policy change (e.g., deploy a new policy) may not fully appreciate and/or be able to know the impact the policy change will have when deployed/rolled out to thousands or tens of thousands of computing devices. More specifically, there are several technical problems that are presented by the above situation. First, there is a need for a way to emulate/simulate/rehearse such a rollout to see what will happen, without adversely affecting the devices under management. In many cases, there is a need for a way to stage the rollout to groups of devices over time. Finally, there is a need for a way to rollback the policy change. This rollback may be done either on an administrator request, or automatically. Also, the rollback may be done if certain conditions are not met (e.g., limits on an increase in user or device notifications/alerts/etc. are violated during deployment of a policy change). In at least one embodiment, when it is determined that a rollback is required, an analysis can be performed to determine whether the rollback is required for all mobile devices, or only those mobile devices which are associated with a specific context subset. In an embodiment where it is determined that a rollback is required for mobile devices associated with a specific context subset, the rollback can be targeted to only those mobile devices. Various embodiments of the present disclosure associated with determining compliance for computing devices when deploying a new policy as discussed below provide one or more technological solutions to the above technical problems. 
In one embodiment, a trial deployment is initially performed to test the effects of a new policy prior to full active deployment. For example, the trial deployment allows determining whether deployment of a new policy will create an excessive or unacceptable number of violations. In one example, when an administrator deploys a policy, policy violations might generate a large number of alerts on user mobile devices that create significant disruption to user operation of the device and/or activities associated with an enterprise. In one embodiment, a new policy is deployed in a manner that permits determining an expected compliance of managed devices with the new policy (e.g., the trial deployment is implemented as a “read through” rehearsal). For example, if the expected compliance is determined to exceed a threshold number of violations when the new policy is actually deployed, then the policy can be adjusted. In one example, a read through rehearsal checks to determine how a policy change (if it were to be actually deployed) is expected to affect compliance of computing devices during their operation after the policy change. A report is generated to determine how many devices would be affected. The report is based on historical data that has been previously collected and stored regarding the computing devices. For example, this historical data may include information associated with device configuration, device state, device behavior, installed applications, etc. In one example, this historical data has been previously collected by mobile device management software that controls policy on the computing devices. A comparison of this historical data is made to the policy change to estimate how many devices would be in or out of compliance with the policy change. In one example, the number of user and/or device notifications/alerts which would be issued is also determined. 
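A read-through check of this kind could be sketched as follows. The device fields and policy shape are invented for illustration; the point is that the estimate is computed entirely from previously collected data, without contacting or alerting any device.

```python
def read_through_report(historical_device_data, new_policy):
    # Estimate compliance with `new_policy` from stored historical data
    # only -- a "read through" rehearsal of the rollout.
    would_violate = [
        d["id"] for d in historical_device_data
        if d["os_version"] < new_policy["min_os_version"]
        or any(app in new_policy["blocked_apps"] for app in d["apps"])
    ]
    return {
        "devices_checked": len(historical_device_data),
        "would_violate": len(would_violate),
        "violating_ids": would_violate,
    }
```

An administrator could inspect the report and, if the expected violation count is too high, adjust the policy before any actual deployment.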
For example, based on the number of devices in or out of compliance and/or the number of notifications, alerts, or other responsive events, a new policy can be adjusted prior to deployment. In one embodiment, the new policy is deployed to computing devices in a manner that monitors operation of the computing devices as to compliance with the new policy. However, instead of providing actual alerts to a user of the computing device when a violation occurs, the trial deployment only implements a reporting mechanism in which violations are reported to an administrator without alerting or otherwise disrupting the user (e.g., the trial deployment is implemented as a “dress rehearsal”). In this manner, the administrator is able to obtain actual compliance data from the trial deployment without user disruption. Based on the actual compliance data received, the administrator can adjust the policy. In one example, a dress rehearsal mode is used to actually roll the policy out to designated devices in parallel. A policy that is in dress rehearsal mode does all the checking that a real active policy would do, but issues no alerts/notifications to the end user, and does not do actual blocking (this type of policy is sometimes referred to herein as a passive policy, in that it is configured to avoid creating alerts and/or other events intended to interact with a user and/or to change operation of a device in a way that a user can perceive). A report is provided from the designated devices back to a rehearsal coordinator (e.g., an administrator server or other server) in the cloud as to what would have happened on the user devices if an active policy were implemented (an active policy corresponds to the passive policy, but an active policy results in actual alerts and other responsive actions being performed on the user device if there is a policy violation). 
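The active-versus-passive distinction above can be sketched as a single check routine whose only mode-dependent behavior is whether the user is alerted. Field names and the check set are assumptions for illustration.

```python
def check_device(device, policy, notify_user, report_to_coordinator):
    # Run the same checks an active policy would run; only an active
    # policy alerts the user. A dress-rehearsal (passive) policy reports
    # every would-be violation upstream without disrupting the user.
    violations = []
    if device["os_version"] < policy["min_os_version"]:
        violations.append("os_out_of_date")
    violations += [f"blocked_app:{a}" for a in device["apps"]
                   if a in policy["blocked_apps"]]
    for v in violations:
        if policy["mode"] == "active":
            notify_user(device["id"], v)        # alert/block on the device
        report_to_coordinator(device["id"], v)  # always report upstream
    return violations
```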
If the rehearsal coordinator is not an administrator server, then the rehearsal coordinator communicates this information to the administrator server. Based on information received back from the designated devices in the dress rehearsal mode, the new policy can be adjusted. In one embodiment, a staged rollout is used to deploy a new policy (e.g., push out a policy update) to a subset of users and/or computing devices. For example, a staged rollout can be implemented as a dress rehearsal rollout, or as a real active policy rollout. In one embodiment, a rollout can be broken up into a number of stages (e.g., six stages), so that a new policy is rolled out one stage at a time (e.g., deployed to a certain number of mobile devices in a given stage). Each time a stage is rolled out, error alerts (and/or other responsive actions) are monitored (e.g., monitored by an evaluation server and/or an administrator server), and if a threshold number of alerts (and/or other responsive actions) is reached, then the devices are rolled back to the prior stage. In one example, each stage adds a new number of computing devices for deployment of the new policy. In one example, each stage is for the same number of computing devices, but implements additional portions of a new policy. In some cases, based on evaluation by an evaluation server of results received from a policy deployment (e.g., deployment of a passive policy in a dress rehearsal, or deployment of an active policy), a rollback of the deployment can be implemented. In one example, a rollback is a reversion of the policy state on each computing device to its prior policy state (e.g., a policy state prior to a dress rehearsal). In one embodiment, an automated rollback can have one or more conditions which are checked on a per-device or a collection-of-devices basis. 
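The staged rollout with a threshold-triggered rollback could be sketched as below. The callables are assumed hooks into the administrator/evaluation servers, and rolling back only the latest stage is one of the possible rollback scopes described above.

```python
def staged_rollout(stages, deploy, count_alerts, rollback, alert_threshold):
    """Deploy a policy one stage at a time; if the monitored alert count
    reaches the threshold, revert the most recent stage and stop.
    (Sketch; per-device or full-fleet rollback are equally possible.)"""
    deployed = []
    for stage_devices in stages:
        deploy(stage_devices)
        deployed.append(stage_devices)
        if count_alerts() >= alert_threshold:
            rollback(deployed.pop())  # revert only the latest stage
            break
    return deployed
```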
If the condition(s) are reached or satisfied, then an automated rollback can be implemented for each of the affected devices (e.g., devices for which the conditions are satisfied), or the rollback can be implemented for all devices in the rollout. In some cases, an automated rollback can be implemented for a staged rollout. In one embodiment, a new policy to be deployed is a policy of a monitoring service or another third-party service (e.g., as implemented by an evaluation server as described herein). In one example, the policy defines which applications installed on user devices are malware, and which are not, and/or defines other known risks. In one example, the monitoring service communicates the new policy to an administrator server, which will actually deploy the new policy. In one embodiment, a new policy to be deployed is a configurable customer policy. In one example, an administrator is the customer and defines the new policy. For example, an administrator server can define for each computing device a policy to be implemented. In one example, this new policy defines acceptable device and/or software behavior and/or other characteristics of a device. For example, these definitions can include what application behaviors are acceptable, what operating system must be running on the device, what settings should be enabled on the device, etc. In one embodiment, for configurable customer policies, a capability is provided by an evaluation server to integrate initial trial deployment testing of policies that are defined by a customer (e.g., administrator) in a user interface, and then deployed as described above (e.g., using a read through or dress rehearsal). In one embodiment, a new policy of a monitoring or other third-party service is integrated with a new policy configured by a customer to provide a combined new policy. The combined new policy is deployed using a trial deployment (e.g., in a dress rehearsal mode) as described above. 

In some cases, when deploying a new policy, an administrator may face additional technical problems due to a lack of data regarding risks associated with the computing devices to which the new policy is to be deployed. For example, the administrator does not know which devices may be at higher risk, and thus have a more urgent need for new policy deployment. Further, the existing risks associated with the computing devices may affect the type of new policy that is to be deployed. Various embodiments of the present disclosure associated with evaluating data to determine risks associated with operation of computing devices, as discussed below, provide one or more technological solutions to these additional technical problems. In one embodiment, an evaluation server receives data associated with certain computing devices on which a new policy is to be deployed. The evaluation server compares the received data to historical data stored in a data repository. The historical data corresponds to risks identified based on information collected from other computing devices (e.g., these other devices are different from the devices onto which the new policy will be deployed, and the other devices may have been observed for an extended time period (e.g., 1-3 years)). For example, this historical data has been collected prior to the deployment by security clients installed on each of the other devices as part of a security management service provided by the evaluation server. The evaluation server generates, based on comparing the received data for the certain computing devices to the historical data, a risk profile for each of the certain computing devices. The evaluation server uses the risk profiles for each of the computing devices to perform one or more actions. In one example, the risk profiles are used to prioritize a deployment to the certain computing devices in a priority order based on the risk profiles.
For example, those computing devices that are at a higher risk can be part of a first rollout stage to receive a deployment of the new policy. Factors to determine a risk profile can include the user's historical behavior (e.g., downloading of unauthorized applications, susceptibility to phishing attacks, etc.), the operating system of the device, applications or other software downloaded on the device, and security features associated with the device. In one example, prior to deploying a client security application to fleet computing devices, an administrator connects MDM or other software running at an administrator server to risk assessment software running on an evaluation server. In one example, the MDM software is connected to a tenant provided by the risk assessment software. In one example, the tenant is provided using a multi-tenant cloud architecture application that provides a risk assessment service (e.g., using a software-as-a-service model). The evaluation server requests data about the fleet devices (e.g., installed apps) from the MDM or other software. The evaluation server correlates the received device data against a corpus of mobile risks (e.g., risk data stored in a data repository). Based on the correlation results, the evaluation server performs one or more actions. In one example, a report or user interface display or data is provided including details regarding identified risks associated with the fleet devices. In one example, the report is provided to an administrator server and provides guidance regarding a prioritized deployment to the fleet devices. In one example, the report includes pre-deployment remediation suggestions for action by the MDM software. In one example, the report includes suggested enterprise policy settings (e.g., to be enforced by the MDM software). FIG. 1 shows a computing system including an evaluation server 150 used to evaluate a new policy 186 to be deployed on various computing devices, according to one embodiment.
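Prioritizing the rollout by risk profile could be sketched as follows. The risk categories and weights are invented for illustration; in practice they would come from the evaluation server's corpus of mobile risks.

```python
def prioritize_deployment(devices, risk_db):
    # Score each device against its identified risks and sort so the
    # highest-risk devices receive the new policy first.
    weights = {"unauthorized_app": 3, "phishing_history": 2, "old_os": 1}
    def score(device):
        return sum(weights[r] for r in risk_db.get(device, []))
    return sorted(devices, key=score, reverse=True)
```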
For example, each computing device can be a user terminal or a mobile device. In FIG. 1, user terminals (e.g., 141, 143, . . . , 145) and/or mobile devices (e.g., 147, 149) are used to access, communicate, and/or interact with evaluation server 150, an administrator server 180, and/or a service provider 170 over a communication network 121 (e.g., the Internet, a wide area network, a local network, or other wired or wireless communications network). Network 121 may be used to download and remotely install applications selected from an application marketplace (e.g., using Google Play or the Android Market). An application 1013 installed on mobile device 149 may initiate or originate an access request for a service provided by service provider 170. Mobile device 149 may download new application 1013 from an application marketplace, administrator server 180, service provider 170, or a developer server (not shown). New application 1013 has components 104 and 106. Application 1013 may generate an access request (e.g., for access to a service provided by service provider 170) that is transmitted to a server (e.g., transmitted using a series of computing devices originating with mobile device 149). In one embodiment, the access request is sent by mobile device 149 to evaluation server 150, which forwards a communication regarding the request to service provider 170. In one embodiment, component 104 is a software component (e.g., a security component, or client application 2207 of FIG. 2 below) that generates or obtains data regarding a risk configuration of a computing device (e.g., a risk configuration of mobile device 149, on which a user initiates a request for access). For example, a user action in a user interface displayed on mobile device 149 causes component 104 to initiate an access request for a service provided by a computing device of service provider 170.
The access request is transmitted to evaluation server150, which can perform a security evaluation of a configuration of mobile device149based on various factors (e.g., as part of determining a context of mobile device149operation). Mobile device149stores a user policy108. The new application1013may be compared to user policy108during or after installation. In one example, evaluation server150includes a data repository of policies as rules116(e.g., user policies required by an admin server). User policy108of mobile device149may be compared to policies116. Administrator server180may provide some rules116and/or policies in new policy186(e.g., as regards usage of or installation of applications onto mobile device149). In one embodiment, it is determined that user policy108is not in compliance with the current state of rules116when applied to a currently-determined context of the mobile device149. The user policy108is stored locally in a memory of mobile device149. In one embodiment, during operation, user policy108may be used to define the handling of components104and106on mobile device149. In one embodiment, a user policy for mobile device149may alternatively (or in addition to user policy108) be stored as one of policies116on the evaluation server150and/or an identity provider (not shown). A user or administrator policy may be enforced on mobile device149using either a local user policy or a remote user policy, or a combination thereof. In one embodiment, an administrator (e.g., administrator server180) defines and deploys policies for an organization. In some embodiments, the organization may be a family or other social group, and the administrator role may be performed by a parent or guardian, or may be performed by a third party service provider. Such a third party service provider may be a provider of security services, the network operator, and/or a provider of content services. 
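As a rough sketch of the "combination thereof" case above, a device-side enforcer might merge the locally stored user policy with a remotely stored one. The numeric-strictness encoding and the stricter-setting-wins rule below are illustrative assumptions, not the patent's scheme:

```python
# Hypothetical merge of a local and a remote user policy. Each policy maps
# a setting name to a numeric strictness level (higher = stricter); the
# encoding and the tie-breaking rule are assumptions for illustration.

def merge_policies(local_policy, remote_policy):
    """Combine two policies, keeping the stricter level for each setting."""
    settings = set(local_policy) | set(remote_policy)
    return {s: max(local_policy.get(s, 0), remote_policy.get(s, 0))
            for s in settings}
```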
The additional levels of protection and control that organizations such as enterprises desire can also be advantageous for consumers, but consumers are typically not knowledgeable enough to perform administrator roles. Thus, there is often a need for third party service providers to act as technically-oriented admins. The consumer or parent or guardian as an admin may specify preferences corresponding to high-level policy decisions, and a technical admin can configure underlying services to meet these high-level policy decisions. An administrator or admin as used in this disclosure includes, but is not limited to, all such administrators (e.g., technical admin, consumer, parent, guardian, service provider, etc.) as described in this paragraph. In one embodiment, evaluation server150determines new policy186. For example, the new policy may be defined by an administrator for deployment to mobile devices147,149. Evaluation server150compares new policy186to previously-collected data for mobile devices147,149. The previously-collected data is stored in data repository182. For example, the collected data can include device configuration, device state, and/or device behavior for each of mobile devices147,149. Evaluation server150determines compliance for each of mobile devices147and149associated with implementation of new policy186. This compliance is determined based on comparing new policy186to the collected data in data repository182. For example, this comparison may determine that an operating system on mobile device149is inconsistent with a rule of the new policy186. Evaluation server150uses the determination of compliance for each of mobile devices147,149to perform one or more actions. For example, evaluation server150can transmit a message to each mobile device that is not compliant with new policy186, and/or can transmit a message to administrator server180.
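The compliance comparison described above (e.g., an operating system inconsistent with a rule of the new policy) might be sketched as follows; the policy and device-data field names are illustrative assumptions, not the patent's data model:

```python
# Hypothetical policy/device schemas; field names are assumptions for
# illustration, not the patent's data model.

def check_compliance(new_policy, collected_data):
    """Return the rules the device violates; an empty list means compliant."""
    violations = []
    # OS version rule: tuple comparison, e.g. (11, 4) < (12, 0)
    if collected_data.get("os_version", (0, 0)) < new_policy["min_os_version"]:
        violations.append("os_version")
    # Blocked-application rule
    blocked = set(new_policy.get("blocked_apps", ()))
    for app in collected_data.get("installed_apps", []):
        if app in blocked:
            violations.append("blocked_app:" + app)
    # Device configuration rule
    if new_policy.get("require_passcode") and not collected_data.get("passcode_set"):
        violations.append("passcode")
    return violations
```

A non-empty result could trigger the actions described above, such as messaging the non-compliant device or the administrator server.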
In one embodiment, evaluation server150determines a risk profile for each of various computing devices that will be included in a deployment of new policy186. These risk profiles can be stored as risk profiles184on evaluation server150. Based on the risk profile for each computing device, evaluation server150selects a first set of the computing devices for deployment of the new policy186. The new policy186is rolled out to the first set as a first stage of deployment. In one embodiment, evaluation server150receives configuration data associated with mobile devices147,149. For example, the configuration data can be previously collected by administrator server180when managing and/or enforcing policy on mobile devices147,149. Evaluation server150compares the configuration data to historical data stored in data repository182. The historical data includes information regarding risks associated with software components. In one example, the software components are installed on computing devices other than those devices to which new policy186will be deployed. Based on the comparison of the configuration data to the historical data, evaluation server150generates a risk profile for each of mobile devices147,149. These risk profiles are stored as risk profiles184. Based on these generated risk profiles184, evaluation server150causes one or more actions to be performed. For example, the action can be generating a report indicating a prioritized order of deployment of software and/or new policy186to mobile devices147,149. For example, the action can be performing a remediation action for one of mobile devices147,149. For example, the action can be generating an update to new policy186prior to deployment to mobile devices147,149. In one embodiment, the generated report is presented to an administrator in a user interface of administrator server180. The user interface permits the administrator to make changes to the priority order of deployment for a policy.
The user interface also permits the administrator to initiate deployment of software in a suggested priority order by providing a user input in the user interface. In one example, a component is a part of an application (e.g., an application that is installed by a user from an Android or other software application marketplace and then executes on a mobile device). In one example, a component is provided by the application's creator or by a third party. In another example, the component may be code provided by an ad network or an analytics network. In one example, data repository182includes historical data regarding structural and/or behavioral characteristics of components observed by evaluation server150when monitoring various computing devices (e.g., mobile device147). In yet another example, components are linked libraries/SDKs that are packaged within an application. This is code that is within the application, but the code is developed by a third party and provides the ability for an application developer to integrate certain behaviors of that component into the developer's application (e.g., displaying a certain type of ads from a certain ad network such as LeadBolt). In one example, monitoring of context and/or substitution or modification of components based on such monitoring as described herein is integrated as a security component into a developer's or other entity's application. In another example, a set of data (e.g., in a file or a database) that is used by an application may be considered as a component of that application. Also, in some examples, data used by an application can be considered as known or unknown, or trusted or untrusted. In one embodiment, a user policy (e.g., user policy108) based on component behavior may be enforced on the user's computing device. User policy108may be a result of deployment of new policy186. In one example, the user policy may require that there be no applications that send location to an advertising network. 
In another example, the user policy may require that no applications send identifiers to an advertising network. In one embodiment, it is determined in a pre-deployment assessment or trial deployment that the context of the computing device is or will be inconsistent with new policy186and/or rules116. It may also be determined that one or more actions authorized and/or permissions granted by the computing device, such as under the user policy, are inconsistent with the rules116associated with the present context of the computing device. In one embodiment, evaluation server150monitors the context in which one or more computing devices is operating. For example, evaluation server150determines a context in which user terminal141and/or mobile device149is operating. This context can be part of the data collected and used in pre-deployment assessment for new policy186. After determining the context in which, for example, mobile device149is operating, evaluation server150determines one or more rules116associated with the context. For example, evaluation server150determines a geographic location of mobile device149. This location is used to determine rules116that are applicable to operation of mobile device149for that determined location. In at least one embodiment, the contexts associated with multiple mobile devices are analyzed to determine subsets of mobile devices having similar contexts. In another example, evaluation server150determines a network to which mobile device149is connected or accessing. Based on the determined network, evaluation server150determines rules116that are applicable to usage of the network. For example, rules116that apply to the network may be one or more policies associated with use of the service provided by the network. In one example, the policies are provided by service provider170. In one example, the policies are provided by an enterprise that manages mobile device149, which is used by, for example, an employee of the enterprise.
After determining the rules applicable to the present context of the mobile device149, evaluation server150determines whether the computing device is in compliance with the applicable rules. For example, the rules applicable to the present context may include requirements regarding security processing on the mobile device149. Evaluation server150may determine, for example, that encryption and decryption modules on mobile device149do not comply with applicable requirements regarding security processing. In response to determining that the computing device is or will be in violation of one or more applicable rules (e.g., lack of compliance with a new policy to be deployed, or that has already been deployed) above, evaluation server150performs one or more actions. In one example, the actions include one or more actions as described above based on determining compliance and/or risk profiles for computing devices on which a new policy is deployed. In one embodiment, the actions performed by evaluation server150include modifying or substituting a component of software on mobile device149. For example, component106on application1013can be substituted for a new component. The new component can be sent from evaluation server150to mobile device149, or may already be present on mobile device149. In one embodiment, the new component can be sent from another computing device, such as service provider170, or from a developer server. In one embodiment, the new component to be used for substitution is selected from a set of software components. The new component is selected at least based on its being compliant with the applicable rules to the present context. For example, the new component can be selected based on the geographic location, which corresponds to the applicable rules for the present context. In one embodiment, the actions performed by evaluation server150include sending a communication to mobile device149to cause a display of a warning to the user.
In one example, the warning indicates that security software on the mobile device149is in violation of a policy. In one embodiment, mobile device149can perform actions in response to determining a violation using a table without requiring communication with evaluation server150. In another embodiment, mobile device149communicates with evaluation server150after determining the violation. In one embodiment, if evaluation server150authorizes access to a service by mobile device149, server150sends a communication over network121to service provider170regarding authorizing access to the service. In one embodiment, server150determines a risk level for mobile device149and includes this risk level in the communication to service provider170. In one embodiment, determining the risk level is part of determining the context of operation for mobile device149. In one embodiment, when component104makes a request for access to the service, the request is first sent to service provider170. Then, service provider170forwards the access request to evaluation server150. Evaluation server150performs a security evaluation of risk factors associated with mobile device149. For example, these risk factors can be used as collected and/or historical data for comparisons above when doing a pre-deployment policy and/or risk assessment. In one embodiment, the risk factors are used to determine the context of the mobile device149. If the evaluation determines that the configuration is not secure and/or that mobile device149is currently operating or will be in violation of one or more rules116(or new policy186), server150blocks access by mobile device149to the service. In one embodiment, the security evaluation is based on data received from the mobile device149. At least a portion of this data can be sent to service provider170along with a result of the security evaluation. 
In one embodiment, this data is received from component104, or from another software component such as component106that is on mobile device149. The data sent to evaluation server150is obtained from the mobile device using this software component. In one embodiment, the security evaluation by server150includes determining a source of application1013, component104, and/or component106. In one embodiment, the security evaluation includes evaluating authenticity of software on mobile device149and/or analyzing at least one component installed or otherwise stored on mobile device149. In one embodiment, the security evaluation determines an extent of security risk for mobile device149based on a plurality of factors. The extent of access to the service provided to mobile device149is based on this extent of security risk. In one embodiment, the security evaluation determines that a risk configuration of mobile device149passes a security threshold. If the threshold is passed, server150sends a communication to service provider170regarding the passed security threshold. This communication may include data obtained from mobile device149and used in the security evaluation above. In one embodiment, if it is determined by evaluation server150in a security evaluation or as part of a context determination, performed after a user has started receiving a service, that a risk level associated with mobile device149exceeds a threshold or is otherwise un-trusted, then an open session of the user with the service from service provider170can be closed. Also, any token of mobile device149indicating a healthy or safe configuration of the device can be revoked or destroyed. This prevents further access to the service by the device. In one embodiment, if access to a service is terminated as just described, an identity provider can be notified of the change by evaluation server150. 
Also, a level of access to the service can be decreased based on the newly-determined risk level, instead of terminating all access to the service. In one embodiment, this risk level is used as part of determining a priority order for deployment of new policy186. In one embodiment, if it is determined by evaluation server150that mobile device149is not configured correctly or adequately for a present context as determined by a risk level, various actions may be taken. For example, mobile device149may be instructed to take a photo that is uploaded to server150, acquire a device location and upload to server150, and/or erase sensitive data on mobile device149. Other examples include disabling login credentials, instructing the user how to remediate the problem, allowing login by the user, but denying access to certain services, revoking a token already in use by the device, and/or changing a password for the service. In one embodiment, data used in a context determination or security evaluation by evaluation server150is extracted from one or more communications received from mobile device149, and/or from service provider170. In some cases, such communication can be the communication that includes the access request. In other cases, the communication is received prior to or subsequent to receiving the access request. In one embodiment, the access request is generated by application1013, which is executing on mobile device149. Performing the security evaluation includes determining the authenticity of application1013, for example as discussed below. In one embodiment, the security evaluation can include assessing a context of a user of mobile device149. 
This context can be determined by various factors including a location of mobile device149, a device location for at least one prior login made by the user (e.g., a prior login to the service), an event associated with the presence of the user on a computing device other than mobile device149(e.g., this other device may be a tablet, a laptop, or a watch device of the user), or credentials associated with the user that have become unsecure (e.g., credentials that have been identified from monitoring of the dark web). In one embodiment, mobile device149is associated with a domain. Evaluation server150performs an evaluation using data from one or more prior communications received by evaluation server150. These prior communications may be provided from other computing devices associated with the domain. In one embodiment, access to the service from service provider170requires that a software component is installed on mobile device149. In response to determining that the software component is not installed, the communication is sent to the mobile device requesting installation of the software component. After sending this communication, evaluation server150determines whether the software component is properly installed on mobile device149. If so, server150sends a communication to cause service provider170or an identity provider to authorize or grant access to the service. In various embodiments, access to a service provided by service provider170is conditioned on a successful evaluation of various risk-based factors. Mechanisms that may be used to authenticate a device, user, and/or application by evaluation server150include one or more of the following: requiring that an SSL client certificate be supplied for each access request by mobile device149, evaluating authentication factors provided from network connection establishment (e.g., Wi-Fi, VPN, cellular, etc.) 
by mobile device149, or evaluating authentication factors provided from establishment of a network tunnel or proxy connection for mobile device149. In various embodiments, factors used in a context determination or a security evaluation by evaluation server150for a pre-deployment assessment, for collected or historical data for comparisons to a new policy, and/or to allow or deny access to a service are now described below:
1. Various device factors associated with mobile device149include determining whether the device is compromised, such as whether an operating system is compromised, and whether the device is up-to-date, such as whether a vulnerable operating system version is in use. Further factors include determining a presence of malware, or determining whether the device has a secure configuration. For example, determining whether a bad SSL root identified for certificate authorities is installed on the device, whether an anomalous VPN/proxy is identified, whether device encryption is enabled, and/or whether a pin code is enabled. Further factors include evaluating hardware-backed authentication associated with mobile device149. For example, determining whether a device key is stored in a secure enclave, or whether a server provides a nonce which mobile device149signs with hardware to prove presence of a hardware-stored key.
2. Various user factors may be used in the security evaluation. These factors may include biometric factors such as a fingerprint, or knowledge-based factors such as whether a user of mobile device149is able to answer knowledge-based questions (e.g., about the user's background or prior life or work activities).
3. Various application factors may be used in the security evaluation. These factors may include determining whether application1013on mobile device149is an authorized or allowed version of the application. For example, whether the application is the official enterprise application or an unofficial version.
Also, these factors include determining whether the application is up-to-date, such as whether there is a known vulnerability in this particular application.
4. Various context factors may be used in the security evaluation. These factors may include determining a location of device149, other recent user logins and respective devices/locations associated with these logins, and/or other user-present events (e.g., a badge in, CCTV facial recognition, Wi-Fi connections, and Bluetooth beacon detections).
In one embodiment, evaluation server150collects data from the device and sends the data to a cloud back-end server system accessible to server150in order to compare the collected data to other data that evaluation server150has collected. Types of data collected include, for example, an application inventory of all apps installed on the device, version numbers for the apps, and the hashes and unique identifiers associated with those applications. In one example, this collected data is stored in data repository182. Evaluation server150fingerprints the filesystem of the device (e.g., firmware, etc.) and calculates a fingerprint for the device so evaluation server150can determine when a device is running modified firmware or other (improperly) modified software. In one embodiment, evaluation server150collects information regarding how the network is behaving (e.g., the network communication path between evaluation server150and mobile device149, or communications by mobile device149with other computing devices). For example, evaluation server150runs a series of behavioral tests on each network to which mobile device149connects (e.g., whether the device is sending potentially hack-able communications to random or unknown servers; whether there has been any attempt to downgrade the TLS or other secure version of protocol being used for communication; and/or whether the certificates that the device is receiving from these requests are valid, etc.).
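The inventory fingerprinting described above could be sketched as hashing a canonicalized list of installed apps and versions; the fields hashed here are an assumption for illustration, not the patent's actual fingerprint format:

```python
import hashlib

# Sketch of fingerprinting a device's software inventory so the server can
# detect modified software: any change to the inventory changes the digest.
# The (name, version) fields are illustrative assumptions.

def device_fingerprint(app_inventory):
    """Hash a sorted inventory of (app, version) pairs into a stable digest."""
    h = hashlib.sha256()
    for name, version in sorted(app_inventory):  # sort for order-independence
        h.update(f"{name}:{version}\n".encode())
    return h.hexdigest()
```

Comparing a freshly computed digest against a stored one flags devices whose software has changed since the last evaluation.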
In at least one embodiment, evaluation server150can run behavioral tests based on context subgroups. The result of the behavioral test can be used to determine whether the rollout will be performed to the mobile devices associated with the context subgroup. For example, if a deployment of software is of high priority or important (e.g., due to a discovered or analyzed risk), but one or more context subgroups are determined to fail a behavioral test(s), the software can be deployed to the mobile devices that are associated with those context subgroups which pass the behavioral test(s). In one embodiment, at least a portion of data associated with the security evaluation by evaluation server150is sent to service provider170. The service provider can configure a policy regarding the type of data that is sent by evaluation server150(e.g., using a console provided to the service provider by evaluation server150). Use of this policy can group the device based on the evaluated data into a risk class (e.g., high-risk or low-risk). Evaluation server150only communicates to service provider170the class of risk based on the previously-determined or configured policy (e.g., using the console) of the service provider. In one embodiment, all of the functions above are provided, but instead of using a separate client application on the device, the attestation functionality is provided via an SDK that controls the active application in the device directly. In other words, a software component is a part of the active application on the device that makes the request for access to the service. In one embodiment, one or more SDK components are present in an application. Evaluation server150determines that the application is in violation of rules116based on the context determination. In response, evaluation server150causes modification or substitution of the one or more SDK components on mobile device149. 
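The policy-configured risk grouping above, where only the class of risk rather than the underlying evaluated data is communicated to the service provider, might look like the following sketch; the class names and score thresholds are illustrative assumptions:

```python
# Hypothetical provider-configured grouping: provider_policy maps a class
# name to a minimum score. Only the resulting class name is reported to
# the service provider, never the raw device data.

def risk_class(evaluated_score, provider_policy):
    """Map an evaluated risk score to the provider-configured class."""
    for name, floor in sorted(provider_policy.items(),
                              key=lambda kv: kv[1], reverse=True):
        if evaluated_score >= floor:
            return name
    # Fallback: the class with the lowest threshold
    return min(provider_policy, key=provider_policy.get)
```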
In one embodiment, the analysis functions performed by the evaluation server150can be done via an SDK that is injected into a client application that the user is currently using on the user's device. One example is an identity provider (e.g., Okta has an app that facilitates single sign-on using a user device). The Okta app can include an SDK that incorporates the security evaluation functionality above so that the app can make risk decisions itself instead of having to consult another application or computing device. In one embodiment, a use case is a business-to-consumer use case. For example, a bank can decide that before customers are permitted to login to a banking application, or attempt to initiate a large balance transfer, the evaluation server checks the risk level of the device. The bank can require that the user install an application that incorporates or uses the security evaluation discussed above. In one embodiment, there are cases where the evaluation server determines that a device should not be trusted without first requiring installation of a client application on the device. For example, based on headers received by the evaluation server, it is determined that the device is running an older operating system that is deemed as being unacceptably old. So, a security evaluation does not necessarily require consulting a client application on the user device. There are cases where the evaluation server can make a decision not to trust the device (e.g., solely from a SAML request) even though no client application is on the device. In other cases, the untrusted device can be included in a higher priority new policy rollout. In one embodiment, the service request to service.com is made by an application on mobile device149that is associated with service.com. This application is configured to communicate with evaluation server150when an access request is made to the service.com domain. 
Evaluation server150is configured to communicate with the identity provider if server150determines that the device is in a secure state. If server150determines that the device is insecure, server150can request that the user remediate any issue identified. In one embodiment, evaluation server150checks that a device is free of threats and is compliant with a corporate policy corresponding to service provider170. Regarding vulnerabilities and this policy, these can be configured by service provider170based on the service provider's desired risk threshold. For example, for the risk of an operating system version that is too old, the service provider sets the policy as to whether the service provider wants to prevent access to that device. In other cases, regarding behavior and configuration, a determination can be made whether the application running on the device is compliant with policy, whether the way that the device is configured is compliant with policy, whether there is a passcode set, etc. FIG.2shows a computing system for generating risk profiles (e.g., risk profiles184) for various computing devices based on comparing new device data to previously-collected device data, according to one embodiment. For example, evaluation server150generates a risk profile for mobile device2201, similarly as discussed above for mobile device149. In one embodiment, mobile device2201accesses network172over communication network121. For example, mobile device2201accesses a service provided via network172. In one embodiment, an application on mobile device2201is obtained from developer server160. In one example, the application includes an SDK component related to security, which is modified or substituted in response to determining a violation associated with deployment of a new policy to mobile device2201. Mobile device2201includes memory2212that stores a table2213and/or stored data2215. 
Table2213includes a list of geographic locations and corresponding rules associated with each location. Mobile device2201includes security software2207. For example, security software2207communicates with evaluation server150. Security software2207collects data from one or more sensors of mobile device2201as part of determining a context. One or more of the sensors can be related to determining a geographic location of mobile device2201. Security software2207also may determine one or more permissions2217that have been configured on mobile device2201, such as by the user. Security software2207reports one or more of these permissions2217to evaluation server150. Mobile device2201includes applications2209and components2211. Applications2209are an example of application1013. Components2211are an example of components104or106. Components2211can be stored on mobile device2201for use in future modification or substitution into or with one or more applications2209. For example, a component2211can be used to substitute a component of an application2209in response to determining that mobile device2201is in violation of new policy186, a rule116and/or a rule in table2213. In some embodiments, the manner of usage and/or behavior of an application on a computing device can be monitored and this can be part of a context determination for the computing device (e.g., which is part of the collected data used for new policy comparison above). The usage or behavior of components of the application on the device that are inconsistent with a user or administrator-designated policy can be identified. In such event, the source of the application and/or use of the application can be deemed as untrusted or in violation of a rule116. There are various examples of policies that may be used on mobile or other computing devices. For example, a user policy may define the handling of components104and106on mobile device149. 
A policy may be defined by behavioral preferences established by a user and/or an administrator, and this policy is enforced on new applications installed on the mobile device. In another example, a policy may apply to a particular identified application. In other examples, policies may be defined and applied to control or restrict the behavior of applications and their components. This can include the identification of advertising networks and defining policies to permit various opt-out actions for these advertising networks. AlthoughFIG.2illustrates an exemplary system implemented in client-server architecture, embodiments of the disclosure can be implemented in various alternative architectures. For example, the evaluation server150may be implemented via a peer to peer network of user terminals in some embodiments, where applications and data/information from mobile devices are shared via peer to peer communication connections. In some embodiments, a combination of client server architecture and peer to peer architecture can be used, in which one or more centralized servers may be used to provide some of the information and/or services and the peer to peer network is used to provide other information and/or services. Thus, embodiments of the disclosure are not limited to a particular architecture. In one embodiment, an enterprise risk level is determined, for sharing security risk information between enterprises by identifying a security response by a first enterprise and then sharing the security response to a second enterprise when a relationship database profile for the first collection indicates the security response may be shared. Methods are also provided for determining whether to allow a request from an originating device where the request may have been initiated by a remote device. 
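The cross-enterprise sharing idea above might be sketched as a lookup against a relationship database; the profile structure and enterprise names below are assumptions for illustration:

```python
# Illustrative relationship database: each enterprise lists the peers it
# has agreed to receive shared security responses from (assumed schema).

RELATIONSHIPS = {
    "acme": {"accepts_responses_from": {"globex"}},
    "initech": {"accepts_responses_from": {"globex", "acme"}},
    "globex": {"accepts_responses_from": set()},
}

def recipients_for(security_response_origin):
    """Return enterprises whose profile permits sharing from the origin."""
    return sorted(org for org, profile in RELATIONSHIPS.items()
                  if security_response_origin in profile["accepts_responses_from"])
```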
In one embodiment, the security risk information is used in the security evaluation performed (e.g., by the evaluation server 150 of FIG. 1 above, or by another computing device) in response to the access request above. In one embodiment, data obtained from a mobile communications device is evaluated by the evaluation server 150 of FIG. 1 above to determine if granting the device access to a service presents a security threat. In one embodiment, aggregated information is used in the security evaluation above.

In one embodiment, a method is provided for passing aggregated information, such as source information, along with an access request. In this embodiment, aggregated information may be used to determine whether to allow an attempt to access a resource. The aggregated information may include, for example, user authentication information and source information, and source information may include, for example, information about the state of the initiating and originating computing devices, attributes or identities of applications being used in the access attempt, and similar information from any intermediate (“intervening” or “chained”) application or computing device that is part of the access attempt. The aggregated information may be passed with the access request in a number of ways, including, for example: as SAML security assertion extensions, as additional HTTP headers, or via a separate flow. In a further example, a single sign-on (SSO) provider (or Identity Services Provider) may piggyback the aggregated information onto an access request (or responses), and security components on computing devices in the access request chain may add their contributions to the aggregated information in the SSO information flow.

In one embodiment, responses to an access request other than, or in addition to, “allow” and “deny” are allowed.
For example, if the access request related to running an application on the destination computing device and the associated source information indicated that a computing device in the series was untrusted, a security component may allow the request in a limited fashion (e.g., run with output quarantined), or deny the request and initiate or suggest to the user the uninstallation of the target application.

In one embodiment, a secure platform enables mobile devices, such as cell phones, smartphones, or PDAs, to have relationships with services or service providers that are controlled by the state of security on each device. In one example, the platform is comprised of a server that receives data from security software on a mobile device regarding the device's security state. The platform enables access to a service to be granted, denied, or limited based on the security state of the mobile device. The platform may provide two-way communications between a mobile device and a service so that the platform can enforce access security both from the client to the service and from the service to the client. Furthermore, the platform allows services or service providers to evaluate the security state of a device independently of using the platform to communicate with the device.

In one embodiment, a system provides, by a software component on a computing device (e.g., for components on any one or more devices in a series of devices transmitting an access request, as discussed above), a dynamic assessment of a security state of a computing device (e.g., this assessment may be performed by the evaluation server 150 of FIG. 1 above). Here, the user of a mobile communications device may request access to a service provider. This may be where the user attempts to access a banking service or other network-based service using software installed on a handset. This request may be managed by a server, which receives the request from the computing device.
The server may access a database or other memory to determine whether it has updated security state information for the device. If not, then this security state information is obtained from the device. Once obtained, the security state for the device may be assessed. If the security state is acceptable, then the device may have access to the service provider. If the device security state is unacceptable, then access may be limited or denied. The acceptability of a device's security state and the level of access to the mobile communications device may be set, for example, by the service provider. In various embodiments, the access control may be used to control access to the service provided by service provider 170 of FIG. 1 above.

In one embodiment, a system and method are provided for reporting security information relating to a mobile device. In one embodiment, the security evaluation performed above (e.g., by the evaluation server 150 of FIG. 1 above) is a security assessment. This security assessment is displayed in various formats on the mobile device display or on a client computer. A security component identifies security events on the mobile device that are processed on the mobile device or by a server. The security component then determines a security assessment for the mobile device based upon the detected security events. The security assessment display may be persistent in the form of a desktop widget or dashboard on a client computer, or a home-screen item on the mobile device. This allows a user or administrator to verify that security protection on the device is functioning and to be alerted if the device needs attention without having to specifically seek the information, thereby enabling immediate response to potential security problems.

In one embodiment, a method is provided for evaluating security.
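The grant/limit/deny decision described above can be reduced to a comparison of the device's reported security state against an acceptability threshold. The following is a minimal sketch under an invented three-level state model; the state names, scores, and class are illustrative and not part of the disclosure.

```python
# Illustrative ordering of security states from best to worst.
STATE_SCORES = {"secure": 0, "outdated": 1, "compromised": 2}

class AccessController:
    """Hypothetical sketch of security-state-based access control."""

    def __init__(self, state_store):
        # state_store maps device_id -> last reported security state.
        self.state_store = state_store

    def decide(self, device_id, max_acceptable="outdated"):
        """Return 'allow', 'limit', or 'deny' based on the device's state."""
        state = self.state_store.get(device_id)
        if state is None:
            # No state on record: obtain it from the device before granting access.
            return "deny"
        if STATE_SCORES[state] == STATE_SCORES["secure"]:
            return "allow"
        if STATE_SCORES[state] <= STATE_SCORES[max_acceptable]:
            return "limit"
        return "deny"
```

In practice, the acceptability threshold (`max_acceptable` here) would be set by the service provider, as the passage notes.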
This method evaluates security during an interactive service operation by a mobile communications device and includes launching, by a mobile communications device, an interactive service configured to access a server over a network during an interactive service operation, and generating a security evaluation based on a plurality of trust factors related to a current state of the mobile communications device, to a security feature of the application, and/or to a security feature of the network. When the security evaluation is generated, an action is performed based on the security evaluation. In one embodiment, the evaluation server 150 above performs an evaluation, including use of a threshold. In one embodiment, this plurality of trust factors is included in the first data above received by the evaluation server 150 for use in the evaluation.

FIG. 3 shows a computing system for evaluating a new policy 1418 to be deployed by an administrator server 1310 to various mobile devices, according to one embodiment. These mobile devices include mobile device 1405. In one embodiment, evaluation server 1408 monitors mobile device 1405 for expected or actual compliance with new policy 1418 and/or policies 1416. Evaluation server 1408 is an example of evaluation server 150. For example, policy manager 1406 is software on evaluation server 1408 used to monitor and/or evaluate the expected or actual compliance.

In one embodiment, administrator server 1310 is connected to evaluation server 1408 via a tenant 1422. In one example, tenant 1422 is connected to MDM software 1311 so that configuration, context, and/or other data associated with mobile device 1405 that has been collected by MDM software 1311 can be transmitted to evaluation server 1408 for use in pre-deployment comparisons to new policy 1418, such as described above. In one embodiment, user interface 1420 of administrator server 1310 permits an administrator to control deployment of new policy 1418 to mobile device 1405.
In one embodiment, user interface 1420 presents reports and/or other information provided from evaluations performed by evaluation server 1408. In one example, the report shows expected compliance with a deployment of new policy 1418. In one example, a priority order of deployment of new policy 1418 is presented in user interface 1420.

In one embodiment, evaluation server 1408 also optionally can manage permissions associated with one or more computing devices. Evaluation server 1408 executes policy manager 1406 to manage permissions associated with various computing devices, including mobile device 1405. Evaluation server 1408 stores new policy 1418 and policies 1416 in memory (not shown). Policies 1416 are implemented by policy manager 1406 on mobile device 1405. In one embodiment, policies 1416 correspond to an enterprise policy. Permissions 1409 for various software on mobile device 1405 are maintained by policy manager 1406 to be in compliance with policies 1416. In one example, admin server 1310 transmits data regarding policies 1342 to evaluation server 1408, which data is used to update policies 1416 as regards acceptable permissions for mobile device 1405. In one embodiment, mobile device management software 1311 is executed by admin server 1310 and is used to manage mobile device 1405 along with other computing devices.

In one embodiment, evaluation server 1408 determines a change of context for mobile device 1405. For example, evaluation server 1408 may determine that mobile device 1405 is attempting to connect to network 1404. In another example, evaluation server 1408 may determine that mobile device 1405 is attempting to install software from an application marketplace. In response to determining the change of context and/or in response to a pre-deployment assessment of new policy 1418, evaluation server 1408 determines whether mobile device 1405 is or will be in violation of new policy 1418 and/or one or more policies 1416 associated with a new or expected context of mobile device 1405.
In response, evaluation server 1408 can revoke one or more permissions for software on mobile device 1405 based on the change of context or lack of policy compliance. Security component 1412 resides on mobile device 1405 and can be used to revoke or deny permissions on mobile device 1405. In one embodiment, security component 1412 also can implement changes to a configuration 1410 of operating system 1320. In one embodiment, security component 1412 uses one or more application programming interfaces (APIs) 1322 in order to make modifications to operating system 1320. In one embodiment, these APIs permit security component 1412 to, in response to determining that mobile device 1405 is in violation of one or more rules, modify or substitute component 1324 or 1326 of application 1316.

FIG. 4 shows a block diagram of a computing device (e.g., an evaluation server 150, or an administrator server 1310) which can be used in various embodiments. While FIG. 4 illustrates various components, it is not intended to represent any particular architecture or manner of interconnecting the components. Other systems that have fewer or more components may also be used. In an embodiment, an evaluation server, an administrator server, an authenticity server, or an identity provider may each reside on separate computing systems, or one or more may run on the same computing device, in various combinations.

In FIG. 4, computing device 201 includes an inter-connect 202 (e.g., bus and system core logic), which interconnects a microprocessor(s) 203 and memory 208. The microprocessor 203 is coupled to cache memory 204 in the example of FIG. 4. The inter-connect 202 interconnects the microprocessor(s) 203 and the memory 208 together and also interconnects them to a display controller and display device 207 and to peripheral devices such as input/output (I/O) devices 205 through an input/output controller(s) 206.
Typical I/O devices include mice, keyboards, modems, network interfaces, printers, scanners, video cameras, and other devices which are well known in the art. The inter-connect 202 may include one or more buses connected to one another through various bridges, controllers, and/or adapters. In one embodiment, the I/O controller 206 includes a USB (Universal Serial Bus) adapter for controlling USB peripherals, and/or an IEEE-1394 bus adapter for controlling IEEE-1394 peripherals.

The memory 208 may include ROM (Read Only Memory), volatile RAM (Random Access Memory), and non-volatile memory, such as a hard drive, flash memory, etc. Volatile RAM is typically implemented as dynamic RAM (DRAM), which requires power continually in order to refresh or maintain the data in the memory. Non-volatile memory is typically a magnetic hard drive, a magnetic optical drive, or an optical drive (e.g., a DVD RAM), or another type of memory system which maintains data even after power is removed from the system. The non-volatile memory may also be a random access memory. The non-volatile memory can be a local device coupled directly to the rest of the components in the computing device. A non-volatile memory that is remote from the computing device, such as a network storage device coupled to the computing device through a network interface such as a modem or Ethernet interface, can also be used.

In one embodiment, a computing device as illustrated in FIG. 4 is used to implement evaluation server 150, an application marketplace, service provider 170, administrator server 1310, and/or other servers. In another embodiment, a computing device as illustrated in FIG. 4 is used to implement a user terminal or a mobile device on which an application is installed or being installed. A user terminal may be in the form, for example, of a notebook computer or a personal desktop computer.
In some embodiments, one or more servers can be replaced with the service of a peer-to-peer network of a plurality of data processing systems, or a network of distributed computing systems. The peer-to-peer network, or a distributed computing system, can be collectively viewed as a computing device.

Embodiments of the disclosure can be implemented via the microprocessor(s) 203 and/or the memory 208. For example, the functionalities described can be partially implemented via hardware logic in the microprocessor(s) 203 and partially using the instructions stored in the memory 208. Some embodiments are implemented using the microprocessor(s) 203 without additional instructions stored in the memory 208. Some embodiments are implemented using the instructions stored in the memory 208 for execution by one or more general purpose microprocessor(s) 203. Thus, the disclosure is not limited to a specific configuration of hardware and/or software.

FIG. 5 shows a block diagram of a computing device (e.g., a mobile device of a user, or a user terminal), according to one embodiment. In FIG. 5, the computing device includes an inter-connect 221 connecting the presentation device 229, user input device 231, a processor 233, a memory 227, a position identification unit 225, and a communication device 223.

In FIG. 5, the position identification unit 225 is used to identify a geographic location. The position identification unit 225 may include a satellite positioning system receiver, such as a Global Positioning System (GPS) receiver, to automatically identify the current position of the computing device. In FIG. 5, the communication device 223 is configured to communicate with a server to provide data, including application data (e.g., an application identifier and a source identifier for a newly-sourced application). In one embodiment, the user input device 231 is configured to receive or generate user data or content.
The user input device 231 may include a text input device, a still image camera, a video camera, and/or a sound recorder, etc.

FIG. 6 shows a method for determining expected and/or actual compliance for computing devices associated with deployment of a new policy, according to one embodiment. For example, the method of FIG. 6 can be implemented in the system of FIG. 1, 2, or 3. The method of FIG. 6 can be performed by processing logic that can include hardware (e.g., processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, the method of FIG. 6 is performed at least in part by one or more processors of evaluation server 150 of FIGS. 1 and 2, or server 1408 of FIG. 3. In one embodiment, evaluation server 1408 is implemented using the processors and memory of FIG. 4 or 5.

Although shown in a particular sequence or order, unless otherwise specified, the order of the processes can be modified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes can be performed in a different order, and some processes can be performed in parallel. Additionally, one or more processes can be omitted in various embodiments. Thus, not all processes are required in every embodiment. Other process flows are possible.

At block 601, a new policy is determined for deployment to a plurality of computing devices. For example, evaluation server 150 determines that new policy 186 will be deployed to mobile devices 147, 149.

At block 603, the new policy is compared to collected data for the plurality of computing devices. For example, evaluation server 150 compares data collected from mobile devices 147, 149 to one or more rules in new policy 186.

At block 605, based on comparing a new policy to the collected data, a compliance is determined for each of the computing devices.
This compliance is associated with implementation of the new policy. For example, it may be determined that a given device will not be compliant when the new policy is deployed on that device. For example, evaluation server 150 determines that mobile device 149 will not be compliant when new policy 186 is deployed. This determination is based on the comparison of the new policy 186 to the collected data.

At block 607, one or more actions are performed based on determining the compliance for each of the plurality of computing devices. For example, the one or more actions can be performed by evaluation server 150. In one example, the one or more actions can be performed by administrator server 180 in response to receiving a communication from evaluation server 150. In one example, a report is provided to administrator server 180 that indicates a risk profile for each of mobile devices 147, 149.

In one embodiment, a read-through rehearsal is used to automatically generate statistical results regarding expected compliance. These results are compared against a database of collected data regarding devices to which a new policy will be deployed. The comparison generates expected results from actual deployment. For example, an expected result can be an expected number of responses, such as alerts, from an actual deployment. In one embodiment, during a dress rehearsal, if the number of responses exceeds the expected number of responses from the read-through rehearsal, then a deployment can be rolled back to a prior stage.

In one embodiment, the collected data above is collected from a set of devices according to a data collection policy. The data can be, for example, associated with device configuration, device state, and/or device behavior. A historical norm or baseline is established using the collected data. In one embodiment, the historical norm or baseline can be compared to expectations or actual results from deployment of a new policy.
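Blocks 601 through 605 amount to evaluating each device's collected data against the new policy's rules before deployment. The following sketch represents rules as predicates over a device's collected data; the rule names and device records are hypothetical stand-ins for new policy 186 and the collected data, not details from the disclosure.

```python
def expected_compliance(new_policy, collected):
    """For each device, list the policy rules its collected data would violate.

    new_policy maps rule name -> predicate over a device's collected data;
    collected maps device_id -> dict of collected data for that device.
    """
    return {device_id: [name for name, rule_ok in new_policy.items()
                        if not rule_ok(data)]
            for device_id, data in collected.items()}
```

A device with an empty violation list would be expected to be compliant once the new policy is deployed; the full report could back the risk-profile report sent to the administrator server.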
If a lack of compliance is determined based on a deviation, outside of a threshold deviation, between the norm or baseline and the new policy, a message is transmitted to an administrator server and/or another action is performed.

In one embodiment, a method comprises: determining, by a server (e.g., evaluation server 150 or evaluation server 1408), a new policy (e.g., new policy 186) for deployment to a plurality of computing devices (e.g., mobile device 149, mobile device 2201, mobile device 1405); comparing, by the server, the new policy to collected data for the plurality of computing devices, the collected data including information associated with at least one of device configuration, device state, or device behavior for each of the computing devices; determining, by the server and based on comparing the new policy to the collected data, a compliance for each of the plurality of computing devices associated with implementation of the new policy; and based on determining the compliance for each of the plurality of computing devices, causing at least one action (e.g., transmitting a message to administrator server 180 including a report of risk profiles).

In one embodiment, determining the compliance of each of the plurality of computing devices is performed prior to deployment of the new policy to the computing devices (e.g., as part of a read-through rehearsal). In one embodiment, the method further comprises deploying the new policy to the plurality of computing devices, wherein determining the compliance of each of the plurality of computing devices is performed after deploying the new policy. In one embodiment, the at least one action comprises at least one of transmitting a message to at least one of the plurality of computing devices, or transmitting a message to an administrator server that manages policy for the plurality of computing devices.
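The baseline comparison described above can be reduced to a standard-deviation test over historical observations. This sketch assumes numeric observations and a configurable sigma threshold, neither of which is specified in the disclosure.

```python
import statistics

def deviates_from_baseline(history, new_value, max_sigma=2.0):
    """True if new_value deviates from the historical norm by more than
    max_sigma standard deviations -- a lack-of-compliance signal that
    could trigger a message to an administrator server."""
    mean = statistics.fmean(history)
    sigma = statistics.pstdev(history)
    if sigma == 0:
        # Degenerate baseline: any change at all is a deviation.
        return new_value != mean
    return abs(new_value - mean) > max_sigma * sigma
```

The observation could be, for instance, a per-device count of policy alerts; any metric derived from device configuration, state, or behavior would fit the same shape.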
In one embodiment, the at least one action comprises generating a report comprising information for each of the plurality of computing devices indicating whether the computing device complies with the new policy.

In one embodiment, the method further comprises: determining a risk profile for each of the plurality of computing devices (e.g., the determined risk profiles are stored as risk profiles 184); selecting, based on the risk profile for each computing device, first devices of the plurality of computing devices for deployment of the new policy; and deploying the new policy to the first devices.

In one embodiment, the at least one action is at least one first action, and wherein the new policy is a passive policy that includes at least one second action to be performed on a computing device in the event of a policy violation, the method further comprising: deploying the passive policy to the plurality of computing devices (e.g., as part of a dress rehearsal deployment), wherein determining the compliance for each of the plurality of computing devices comprises monitoring compliance of the computing device with the passive policy during operation after deploying the passive policy, and wherein the at least one second action is not implemented on any computing device operating under the passive policy; receiving a report from the plurality of computing devices, the report comprising an indication of those computing devices that exhibit the policy violation; and based on the report, deploying an active policy to the plurality of computing devices, wherein the active policy corresponds to the passive policy, and wherein the at least one second action is performed on the computing devices that exhibit the policy violation.
In one embodiment, determining the compliance for each of the plurality of computing devices is performed prior to deployment of the new policy to the computing devices, and provides an expected compliance from the deployment of the new policy, the method further comprising: deploying the new policy to the plurality of computing devices in stages, each stage corresponding to deployment of the new policy to a portion of the plurality of computing devices; after deploying the new policy to each stage, comparing an actual compliance with the new policy to the expected compliance; and based on comparing the actual compliance to the expected compliance for a first stage of the stages, rolling back deployment from the first stage to a prior stage.

In one embodiment, based on a determination that operation of an application will violate at least one rule of a new policy, mobile device 1405 provides a warning notification by display in a user interface. In one embodiment, this warning notification is provided in response to an attempt by a user to launch an application, or shortly after launching the application. In one embodiment, a notification is provided to the user indicating an alternative application that can be downloaded by the user, or that is already present on mobile device 1405.

Behavioral and/or structural characteristics of a component present in a new application may be identified. This may be, for example, an application 1013 that has been installed on mobile device 149. These characteristics may be inputs to a context determination above. In one embodiment, there are various ways to identify characteristics that are actually present in a component of an application. Information can be gathered from an application on a mobile device for further processing at a server. According to this embodiment, information that has been gathered is then used for component analysis at the identity provider (discussed above) in order to identify characteristics of a component.
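The staged deployment with rollback described above might look like the following sketch. The stage representation, the `deploy`/`observe_alerts` callbacks, and the tolerance factor are assumptions made for illustration, not details from the disclosure.

```python
def staged_rollout(stages, expected_alerts, deploy, observe_alerts, tolerance=1.5):
    """Deploy a policy stage by stage; stop and roll back a stage whose
    observed alert count exceeds the pre-deployment expectation (e.g., from
    a read-through rehearsal) by more than `tolerance` times."""
    deployed = []
    for stage in stages:
        deploy(stage)
        if observe_alerts(stage) > tolerance * expected_alerts[stage]:
            # Actual compliance is worse than expected: revert to the prior stage.
            return {"status": "rolled_back",
                    "failed_stage": stage,
                    "deployed": deployed}
        deployed.append(stage)
    return {"status": "complete", "deployed": deployed}
```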
In another embodiment, behavioral characteristics may be determined or collected using other approaches. For example, behavior may be determined based on network traffic (e.g., SMS, IP) data, or based on the code source of a given behavior (e.g., a class name or a package name responsible for geo-locating, or a fingerprint of a code segment responsible for sending SMS traffic). In various other embodiments, the results from component identification for applications on a device are presented to the user. The user may provide input in a user interface to define or update a user policy based on this component identification. For example, the user may opt-out of an identified component. Also, in particular, U.S. Patent Publication No. 2011/0047594 describes a system for providing advisement about applications on mobile devices such as smartphones, netbooks, and tablets. A server gathers data about mobile applications, analyzes the applications, and produces an assessment that may advise users on a variety of factors, including security, privacy, battery impact, performance impact, and network usage. The disclosure helps users understand the impact of applications to improve the experience in using their mobile device. The disclosure also enables a server to feed information about applications to other protection systems such as application policy systems and network infrastructure. The disclosure also enables advisement about applications to be presented in a variety of forms, such as through a mobile application, as part of a web application, or integrated into other services via an API. The data gathered by the server may be used, for example, as one or more inputs in the plurality of inputs for evaluating the first application as described herein. Also, some of the forms of advisement discussed may be used, for example, in providing notifications to the user and/or to developers or others regarding evaluations of software authenticity. 
In one embodiment, security evaluation and scoring uses a plurality of trust factors. In one example, some of the trust factors may be used as inputs when evaluating application authenticity.

FIG. 7 shows a method for generating a risk profile for computing devices based on comparing device data, according to one embodiment. For example, the method of FIG. 7 can be implemented in the system of FIG. 1, 2, or 3. The method of FIG. 7 can be performed by processing logic that can include hardware (e.g., processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, the method of FIG. 7 is performed at least in part by one or more processors of evaluation server 150 of FIGS. 1 and 2, or server 1408 of FIG. 3. In one embodiment, evaluation server 1408 is implemented using the processors and memory of FIG. 4 or 5.

Although shown in a particular sequence or order, unless otherwise specified, the order of the processes can be modified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes can be performed in a different order, and some processes can be performed in parallel. Additionally, one or more processes can be omitted in various embodiments. Thus, not all processes are required in every embodiment. Other process flows are possible.

At block 701, first data associated with first computing devices is received. For example, evaluation server 150 receives data associated with mobile devices 147, 149. The data is received by evaluation server 150 from administrator server 180.

At block 703, the first data is compared to second data stored in a data repository. For example, the second data is historical risk data stored in data repository 182.
For example, the second data corresponds to risks identified based on information collected from second computing devices prior to receiving the first data. For example, the second data is historical data that has been collected from managing security for computing devices other than those devices associated with a proposed or actual new deployment.

At block 705, a risk profile is generated for each of the first computing devices. The risk profile is based on comparing the first data to the second data. For example, evaluation server 150 generates risk profiles 184 based on comparing the first data to historical data stored in data repository 182. In one example, the historical data includes identified risks associated with particular software components.

At block 707, one or more actions are caused based on the risk profile for each of the first computing devices. For example, evaluation server 150 transmits a report to administrator server 180 that causes a display of information in a user interface 1420 of administrator server 1310.

In one embodiment, a risk response is configured using MDM software 1311 (e.g., the risk response is based on the risk profiles generated at block 705). In one example, a trigger is added to drive a compliance response for one or more policies. When a policy is applied to a device (because the device becomes associated with a corresponding risk label), a compliance action will be executed (as the device will be out of compliance based on the trigger). This allows MDM software 1311 to drive an appropriate response based on the security risk posture of the device (e.g., a risk posture as provided by a report of device risks from evaluation server 150 to administrator server 180).
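Block 705's comparison against historical risk data could be sketched as a per-component score lookup, since the passage notes the historical data includes identified risks associated with particular software components. The scoring scheme, threshold, and profile layout below are invented for illustration.

```python
def generate_risk_profiles(first_data, historical_risks, high_threshold=5):
    """Score each device by the historically identified risks of its components.

    first_data maps device_id -> list of installed component names;
    historical_risks maps component name -> risk score from the data repository.
    """
    profiles = {}
    for device_id, components in first_data.items():
        # Unknown components contribute no historical risk.
        score = sum(historical_risks.get(c, 0) for c in components)
        level = "high" if score >= high_threshold else "low"
        profiles[device_id] = {"score": score, "level": level}
    return profiles
```

The resulting profiles could then back the report sent to the administrator server, or a risk label that an MDM compliance trigger keys on.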
In one embodiment, a method comprises: receiving, by a server (e.g., evaluation server 150 or evaluation server 1408), first data associated with first computing devices (e.g., mobile devices 147, 149); comparing, by the server, the first data to second data stored in a data repository (e.g., data repository 182), wherein the second data corresponds to risks identified based on information collected from second computing devices prior to receiving the first data; generating, by the server and based on comparing the first data to the second data, a risk profile (e.g., risk profiles 184) for each of the first computing devices; and causing, by the server and based on the risk profile for each of the first computing devices, at least one action.

In one embodiment, the at least one action comprises at least one of generating a report regarding prioritized deployment of software to the first computing devices, performing a remediation action for at least one of the first computing devices, or generating a new policy for deployment to the first computing devices (e.g., new policy 186 is updated based on an initial trial deployment).

In one embodiment, the at least one action comprises generating a report regarding prioritized deployment of software to the first computing devices; the software is a client application (e.g., security component 1412, application 1316, or application 1013) for installation on the first computing devices; and the client application is deployed to each computing device in a priority order based on the risk profile for the respective computing device.

In one embodiment, the method further comprises receiving the first data from an administrator server (e.g., administrator server 1310), wherein the administrator server manages policy on the first computing devices, and has collected the first data from the first computing devices.
In one embodiment, causing the at least one action comprises causing the administrator server to deploy software to each of the first computing devices in a priority order based on the risk profile for the respective computing device. In one embodiment, the method further comprises: causing presentation, in a user interface (e.g., user interface 1420 or user interface 2219), of a priority order for deployment of software to the first computing devices, wherein the priority order is based on the risk profile for each computing device; and wherein deployment of the software in the priority order can be initiated by a user input in the user interface. In one embodiment, the method further comprises: tracking deployment of the software to the first computing devices; after deployment of the software to the first computing devices, performing a risk assessment for each of the first computing devices; comparing the risk profile to the risk assessment for each of the first computing devices to provide a comparison result for each computing device; and causing presentation, in the user interface, of the comparison result for each computing device. In one embodiment, the method further comprises causing a label to be added, by an administrator server, to a computing device needing remediation based on the risk profile for the computing device, wherein adding the label causes a remediation action to be performed by the administrator server for the labeled computing device.
In one embodiment, the server is a first server (e.g., evaluation server 1408), an administrator server (e.g., administrator server 1310) manages policy on the first computing devices, and the at least one action comprises generating a new policy for deployment to the first computing devices, the method further comprising: receiving the first data from the administrator server; and sending, by the first server, a communication causing the administrator server to implement the new policy on the first computing devices. In one embodiment, a system comprises: at least one processor; and memory storing instructions configured to instruct the at least one processor to: receive first data associated with first computing devices; compare the first data to second data stored in a data repository, wherein the second data corresponds to risks identified based on information collected from second computing devices prior to receiving the first data; generate, based on comparing the first data to the second data, a risk profile for each of the first computing devices; and cause, based on the risk profile for each of the first computing devices, at least one action. In one embodiment, the at least one action comprises generating policy options for deployment of a new policy to the first computing devices, and wherein the instructions are further configured to instruct the at least one processor to: cause presentation, in a user interface, of the policy options, wherein each policy option includes risk levels and corresponding actions to be performed on the first computing devices in response to a violation of the new policy; wherein the new policy to be deployed is determined based on a user selection from the policy options.
In one embodiment, the instructions are further configured to instruct the at least one processor to: compare a new policy to the first data, wherein the new policy is for deployment to the first computing devices, and wherein the first data includes information associated with at least one of device configuration, device state, or device behavior for each of the first computing devices; determine, based on comparing the new policy to the first data, a compliance for each of the first computing devices; and report the compliance for each of the first computing devices to an administrator server that manages policy for the first computing devices.
FIG. 8 shows a display of suggested policy options presented for a user in a user interface based on a pre-deployment risk assessment, where the display presents classifications for various risks, with each risk including a corresponding risk level and a response, according to one embodiment. For example, this display may be presented on user interface 2219 or user interface 1420 (e.g., after performing a read through rehearsal or a dress rehearsal). The user is able to customize the policy selections prior to initiating a deployment of a new policy. In one embodiment, an administrator may not know what its mobile risk profile is until client security software (e.g., client application 2207, or security component 1412) is deployed to its fleet computing devices, and actual risk detection results are observed (e.g., based on data provided to evaluation server 150 from security monitoring of the device using security component 1412). However, based on the pre-deployment risk assessment (e.g., produced from device data from MDM software 1311 (or data from a third-party service)), guidance on policy settings can be suggested while the administrator is still in the pre-deployment state. As part of a mobile risk assessment, there can be a call-to-action for a “Suggested Policy”.
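The pre-deployment compliance determination described above — comparing a new policy's rules against already-collected device data and reporting per-device results — could look like the following minimal sketch. All names, rule keys, and the equality-based rule check are illustrative assumptions, not the patent's implementation.

```python
def determine_compliance(policy, devices):
    """Compare each policy rule to the observed device data; a device
    violates a rule when its observed value differs from the required one."""
    report = {}
    for dev in devices:
        violations = [rule for rule, required in policy.items()
                      if dev.get(rule) != required]
        report[dev["id"]] = {"compliant": not violations,
                             "violations": violations}
    return report

# Assumed example rules and fleet data for illustration.
new_policy = {"min_patch_level": "2023-06", "jailbroken": False}
fleet = [
    {"id": "d1", "min_patch_level": "2023-06", "jailbroken": False},
    {"id": "d2", "min_patch_level": "2022-01", "jailbroken": True},
]
report = determine_compliance(new_policy, fleet)
```

A report of this shape could then be sent to the administrator server that manages policy for the devices.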
In one example, the policy suggestions present a selection of policy options along with the risk level and response settings that are suggested based on the pre-deployment risk assessment (e.g., as sent in a report to administrator server 1310). In one example, each suggestion includes provenance describing why the suggestion is made. In one example, a policy suggestion can include setting a particular policy item response to “don't alert”. This can be based on a prediction that the policy item may trigger a large percentage of the administrator's managed devices to disrupt user operation. In one example, a root/jailbreak and root enabler item is set to low risk and don't alert. This can be due to a large percentage of enterprise devices being observed to have these types of apps installed. Thus, this suggests this situation is normal for the enterprise.
FIGS. 9A-9B show a report generated for various computing devices using stored historical risk data and that presents risk profiles for the computing devices, according to one embodiment. For example, the report can be generated by evaluation server 150 or evaluation server 1408 based on evaluating device data and/or assessing a potential deployment of a new policy. For example, the report can be sent by evaluation server 150 to administrator server 180. In one example, the report includes potential threats, potential application risk, potential data leaks, geographic risks, and/or device vulnerabilities. In one embodiment, a risk profile can be presented for identified devices, and/or groups of devices. The risk profile can be presented with a corresponding level of risk (e.g., low, medium, or high). In one example, a risk profile can include a risk score based on a context of a computing device.
Pre-Deployment Evaluation Server Capabilities Based on Risk Assessment
Various non-limiting embodiments are now described below that relate to evaluating data to determine risks associated with operation of computing devices (e.g., prior to deployment of a client application to the computing devices that is used to manage security risks on the computing devices). In one embodiment, referring again to FIG. 3, security component 1412 is a client application installed on a mobile device 1405 that is used to manage security. For example, security component 1412 collects data from mobile device 1405 that is used to assess a context of operation for mobile device 1405. For example, security component 1412 collects data from mobile device 1405 that is transmitted to evaluation server 1408 and used to identify risk(s) associated with mobile device 1405. In one example, the data transmitted relates to characteristics of components 1324, 1326 on mobile device 1405. In one example, mobile device 1405 is part of a fleet of devices managed by administrator server 1310. Administrator server 1310 communicates with evaluation server 1408 to learn of and/or receive data regarding new risk(s) that may be identified by evaluation server 1408 for mobile device 1405 and/or other fleet devices. In one example, the risk is identified by evaluation server 1408 by comparing data received from security component 1412 with historical risk data in a data repository (e.g., data repository 182). In one example, a deployment of security component 1412 to fleet devices of an enterprise includes communicating with mobile device 1405 to have a user install security component 1412 on mobile device 1405. In one example, the status of deployment to each device is tracked by administrator server 1310 and/or evaluation server 1408. For example, security component 1412 can report to evaluation server 1408 that it is in an active state. Evaluation server 1408 maintains a tally of states for security components on each fleet device in a deployment.
In one example, this deployment can be a dress rehearsal as discussed above. In one embodiment, evaluation server 1408 performs polling of MDM software 1311 to track a deployment status of security component 1412 for each device. In one example, the deployment states can include pending, active, disconnected, and deactivated. In one embodiment, when a mobile risk assessment is performed by evaluation server 1408 (e.g., prior to deployment of security component 1412), data is collected about devices and apps from MDM software 1311 (and/or is collected from a computing device of a similar or other third-party service). In one example, the collected data can include, but is not limited to, the following:
- Device and device user identifier(s)
- Device make, model, and network type
- Device firmware version, build code, and patch level
- Application metadata: package/bundle identifiers, signer data, name/title, version code/name
- Correlations between devices and apps (i.e., which apps are installed on which devices)
With this data collected, a manifest is created of all distinct devices in the data and their correlated apps. This manifest is correlated with an existing corpus of historical data about mobile devices and applications. In one example, this historical data is stored in data repository 182 as discussed above for FIG. 1. In one example, the historical data includes, but is not limited to, the following:
- Device and firmware geographic prevalence
- Firmware vulnerabilities
- Application geographic prevalence
- Application vulnerabilities
- Application capabilities
- Application malware, riskware, and/or adware
- Correlations to mobile threats (e.g., signers, network activity, etc.)
In one embodiment, the correlation of the manifest with the existing corpus of historical data generates a risk profile for each device listed in the input manifest (including mobile device 1405).
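The manifest-and-correlation step above can be sketched as follows: build a manifest of distinct devices with their correlated apps from collected (device, app) records, then join it against a historical corpus of per-app risk observations. The function names and the additive scoring are illustrative assumptions only.

```python
def build_manifest(records):
    """records: iterable of (device_id, app_id) correlations pulled from
    MDM-collected data. Returns device_id -> set of installed app_ids."""
    manifest = {}
    for device_id, app_id in records:
        manifest.setdefault(device_id, set()).add(app_id)
    return manifest

def correlate_with_corpus(manifest, corpus):
    """corpus: app_id -> risk weight from historical data (vulnerabilities,
    malware observations, etc.); unknown apps contribute no risk."""
    return {dev: sum(corpus.get(app, 0) for app in apps)
            for dev, apps in manifest.items()}

manifest = build_manifest([("d1", "a1"), ("d1", "a2"), ("d2", "a2")])
scores = correlate_with_corpus(manifest, {"a1": 5, "a2": 1})
```

In this sketch the per-device score stands in for the per-device risk profile produced by the correlation.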
In one embodiment, the manifest alternatively and/or additionally can be provided as an input into a machine learning/artificial intelligence model that outputs a risk score for each device listed in the input manifest. In one example, the risk profile is based on a mobile risk model that has been trained using the existing corpus of historical data above. In one embodiment, the generated risk profile and/or risk score for each device are used to rank each device in the manifest in a priority order of overall potential risk (e.g., a priority order in decreasing risk). In one embodiment, after generating a risk profile by evaluation server 1408 for each of various computing devices managed by administrator server 1310, data is presented to a user in user interface 1420. In one example, the data is based on the risk profile generated for each device. In one example, the data is a priority order of potential risk (e.g., as indicated by a risk score) associated with each device. In one embodiment, as part of the mobile risk assessment, user interface 1420 presents a call-to-action for a “Suggested Deployment Priority”. A similar call-to-action can be displayed on other deployment-related web or interface pages once the mobile risk assessment has been performed. Interaction with these calls-to-action can direct users to a separate user interface for displaying deployment priority recommendations. In one embodiment, the user interface contains a list of devices and/or users (depending on what data can be pulled from MDM software and/or a third-party service) in the order in which it is suggested that the administrator/customer should deploy a security component or other application/software. The list may be partitioned into risk levels (e.g., low, medium, high) based on the devices' prioritization assessment results.
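The ranking and partitioning just described — devices ordered in decreasing risk and grouped into low/medium/high levels — can be sketched as follows. The threshold values and names are assumed for illustration, not taken from the source.

```python
def prioritize(risk_scores):
    """Rank devices for deployment in decreasing order of risk score."""
    return sorted(risk_scores, key=risk_scores.get, reverse=True)

def risk_level(score, high=7.0, medium=3.0):
    """Partition a score into a coarse level (thresholds are assumptions)."""
    if score >= high:
        return "high"
    return "medium" if score >= medium else "low"

scores = {"d1": 3.2, "d2": 9.1, "d3": 0.4}
order = [(dev, risk_level(scores[dev])) for dev in prioritize(scores)]
# [('d2', 'high'), ('d1', 'medium'), ('d3', 'low')]
```

A list of this shape is what the prioritized-deployment user interface would present, with the highest-risk devices suggested for deployment first.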
For each item in the list, a provenance can optionally be displayed that was used to make the prioritization assessment (e.g., data from the device's risk profile and risk score). In one embodiment, the user interface may contain interactions which allow a user of services provided by evaluation server 1408 to select devices/users from the prioritized list and initiate deployment through MDM software 1311 (or via another third-party service used for fleet devices). Once the security component has been deployed to those devices/users, the respective devices/users are removed from the prioritized list, and the list continues to display the remaining undeployed devices/users in a priority order. In one embodiment, a user interface can be used that, for devices that have been deployed, compares a pre-deployment risk profile of the device (e.g., generated by evaluation server 1408 using a corpus of historical data) to its actual assessed risk post-deployment (e.g., a risk assessment performed based on data collected by security component 1412 after installation (e.g., data regarding components 1324, 1326), and after such data is sent to and evaluated by evaluation server 1408). In one example, this interface can serve as a validation of a pre-deployment risk assessment, and/or provide a confirmation to an administrator that the appropriate devices have been prioritized for deployment. In one embodiment, the prioritized deployment is initiated using administrator server 1310. This deployment is based on data provided from evaluation server 1408 as a result of generating a risk profile for each of several computing devices, including mobile device 1405. In one embodiment, the prioritized deployment is implemented by a workflow as follows:
1. Pre-deployment, an administrator connects MDM software 1311 (or a similar third-party service) to a tenant (e.g., tenant 1422) associated with evaluation server 1408 and initiates a mobile risk assessment.
Evaluation server 1408 performs a deployment prioritization assessment as part of the mobile risk assessment.
2. Once the mobile risk assessment is complete, the administrator interacts with the prioritized deployment call-to-action which is prominently displayed (e.g., in user interface 1420).
3. The prioritized deployment user interface presents the administrator with a suggested deployment prioritization, for example including provenance, and provides tools to initiate deployment for select groups of devices.
4. Using this prioritization guidance, the administrator formulates a deployment rollout plan and uses tools on the presented prioritization page to initiate prioritized deployment.
5. As deployment rollout progresses, the same user interface can be used to keep track of which devices in the prioritized list have been selected for deployment and their state (pending, deployed, disconnected, etc.).
6. As devices in the prioritized list are successfully deployed and become active, they are removed from the prioritized list.
7. Devices that were once part of the pre-deployment prioritization plan that are now deployed and active can be presented in a separate part of the user interface that compares a pre-deployment risk prediction to the actual post-deployment risk assessment (e.g., risk assessment performed by evaluation server 1408 based on data collected from mobile device 1405 using security component 1412 and/or another installed client application).
8. As new, undeployed devices are added to MDM software 1311 (or similar third-party service), tenant 1422 can be used to automatically collect data for the new devices, perform the risk and prioritization assessments, and update the deployment prioritization list as necessary (e.g., for an updated presentation in user interface 1420).
9. The tenant 1422 can periodically update its risk and prioritization assessment for undeployed devices based on updated data from MDM software 1311 (or similar third-party service).
As a result, the prioritized deployment list can be updated as necessary.
10. Administrator server 1310 can be notified if the prioritized deployment list changes based on new data (e.g., new device and/or app data) so that the administrator can take action as necessary based on the new priorities.
In one embodiment, a deployment prioritization assessment is performed by using an algorithm that takes device data, user data (if available), and the generated device risk profiles above as inputs. Based on these inputs, the algorithm deterministically ranks the devices in order of potential risk, where the potential risk is determined by comparing the data about each device to every other device in the input data. In one embodiment, user interface 1420 is used by an administrator to initiate prioritized deployment. In one example, the user selects which devices to initiate deployment to from a prioritization page. In one example, the user clicks a button on the page to initiate deployment for selected devices. In one example, based on a tenant configuration (e.g., for tenant 1422), evaluation server 1408 uses MDM software 1311 to deploy security component 1412 to the selected devices, and/or sends an enrollment email or other message to those devices. In one embodiment, information is presented in user interface 1420 that shows an accuracy for the risk predictions made for devices pre-deployment (e.g., predicted vs. actual results). After deployment, actual data has been collected from mobile device 1405 and other devices. This actual data is used to prepare an updated risk assessment for comparison to the pre-deployment risk profile. In one example, in the pre-deployment risk assessment, a particular device is prioritized for deployment based on certain potential risks. Once the administrator deploys security component 1412 to that device, evaluation server 1408 can verify whether or not the device actually exhibits those risks.
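The predicted-versus-actual validation described above can be sketched as a set comparison: which predicted risks were confirmed by data actually collected post-deployment, which were not, and which actual risks were missed. Names and the set-based representation are assumptions for illustration.

```python
def compare_prediction(predicted, actual):
    """predicted/actual: device_id -> set of risk identifiers.
    Returns, per device, the confirmed, missed, and unconfirmed risks."""
    results = {}
    for dev, pred in predicted.items():
        act = actual.get(dev, set())
        results[dev] = {"confirmed": sorted(pred & act),
                        "missed": sorted(act - pred),
                        "unconfirmed": sorted(pred - act)}
    return results

comparison = compare_prediction(
    {"d1": {"root_enabler", "old_firmware"}},   # pre-deployment prediction
    {"d1": {"root_enabler"}})                   # post-deployment assessment
```

A comparison of this shape is what the user interface would present per device to show how accurate the pre-deployment prioritization was.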
The user interface presentation can be provided to an administrator or other user that shows these comparison results (e.g., mobile device 1405 was prioritized based on risks identified in a pre-deployment evaluation, and after deployment, actual data collected by security component 1412 was used to identify these same predicted risks). In one embodiment, data is collected by evaluation server 1408 using periodic polling of MDM software 1311. For example, data can be collected every hour or every day. This data is used to update a risk profile for mobile device 1405. In one embodiment, this data includes data provided by security component 1412. In one embodiment, this data includes data received from and/or observed for application 1316. In one embodiment, this data includes a configuration of permissions 1409. In one embodiment, the state includes an operating system configuration 1410. In one embodiment, automatic deployment can be initiated for a new device if the risk assessment exceeds a predetermined threshold. For example, evaluation server 1408 can send a message to MDM software 1311 identifying a new device for which automatic deployment should be initiated. In one embodiment, based on correlation results of device data from MDM software 1311 against a corpus of historical risk, such as discussed above, evaluation server 1408 performs actions indicating suggested remediation actions to be performed pre-deployment (e.g., prior to deployment of security component 1412 or client application 2207). In one example, the suggested remediation actions are sent to administrator server 1310. In one example, the suggested remediation actions are presented in user interface 1420. In one example, the suggested remediation actions are implemented automatically by MDM software 1311.
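The threshold-triggered automatic deployment mentioned above — flagging new devices whose risk assessment exceeds a predetermined threshold so the MDM software can be told to initiate deployment — can be sketched in a few lines. The threshold value and all names are illustrative assumptions.

```python
RISK_THRESHOLD = 7.0  # assumed cutoff, not from the source

def devices_for_auto_deployment(risk_scores, threshold=RISK_THRESHOLD):
    """Return the (sorted) ids of new devices whose assessed risk exceeds
    the threshold, i.e., candidates for automatic deployment."""
    return sorted(dev for dev, score in risk_scores.items()
                  if score > threshold)

flagged = devices_for_auto_deployment({"d1": 9.3, "d2": 2.0, "d3": 8.1})
# ['d1', 'd3']
```

In the described workflow, the evaluation server would send a message to the MDM software identifying each flagged device.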
In one embodiment, when in a pre-deployment state, evaluation server 1408 is not yet able to initiate remediation actions based on actual assessed device risk (e.g., because security component 1412 has not yet been installed on the mobile device 1405 or other new devices). However, based on the pre-deployment risk profile generated for each device, such as discussed above, immediate remediation strategies can be suggested or otherwise indicated (e.g., to administrator server 1310 and/or another computing device) (e.g., indicated by a communication sent by evaluation server 1408). For example, as part of a mobile risk assessment presented by evaluation server 1408, there can be a prominent call-to-action for “Suggested Remediation”. The remediation suggestions may be in the form of guidance for manual implementation via the MDM software 1311 and/or, to an extent possible, automatic implementation via APIs of the MDM software 1311 (or similar third-party service). For example, evaluation server 1408 can implement one or more of these remediation suggestions automatically using an API of MDM software 1311. Immediate remediation suggestions may vary based on a particular fleet of devices, the particular administrator, the particular enterprise, and/or the nature of employees using mobile devices that are managed by administrator server 1310. Optionally, each remediation suggestion can include provenance information describing why the suggestion is being made.
Various examples of suggestions can include, but are not limited to, the following:
- Non-compliance based on device make/model (e.g., device make and/or model is an outlier based on the enterprise's geographic norm).
- Non-compliance based on firmware version (e.g., firmware version is an outlier based on the enterprise's geographic norm, firmware identifiers don't match known-good firmwares, and/or firmware has vulnerabilities with in-the-wild exploits).
- Non-compliance based on installed app(s) (e.g., apps are outliers based on the enterprise's geographic norm, apps have capabilities (or combinations of capabilities) that are particularly risky for enterprises, apps are suspected (e.g., with high confidence) to be malicious, and/or apps have vulnerabilities with in-the-wild exploits).
In one embodiment, in some cases, an administrator may not use MDM software to manage its fleet devices. Instead of data from MDM software (or additionally to such data), the same, similar, and/or other data is received by evaluation server 1408 from an operating system provider (e.g., Apple, Google, and/or the mobile network operators (MNOs) with their specific Android versions). In one case, deployment can be direct (e.g., email/SMS), or can be done through the operating system provider. In some cases, in addition to and/or alternatively to data collected from MDM software, evaluation server 1408 can receive data from various other sources. These can include, for example, an identity provider (IdP), AD (e.g., Microsoft Active Directory), Workday, a firewall, a computing device or network service (e.g., that has information about device lists, app presence, etc.), Samsung Knox, Android for Work, or MAM technology (e.g., a container, etc., information, or remediation action, etc.).
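The non-compliance categories listed above can be mapped to remediation suggestions, each carrying a simple provenance string describing why it is made. The field names and suggestion texts below are assumptions for this sketch.

```python
def suggest_remediations(device):
    """Map pre-deployment findings for one device to (action, provenance)
    remediation suggestions, mirroring the categories listed above."""
    suggestions = []
    if device.get("model_outlier"):
        suggestions.append(("review device make/model",
                            "make/model is an outlier for the geographic norm"))
    if device.get("firmware_vulnerable"):
        suggestions.append(("update firmware",
                            "firmware has known vulnerabilities"))
    for app in device.get("risky_apps", []):
        suggestions.append((f"remove or restrict {app}",
                            "app flagged by historical risk data"))
    return suggestions

remediations = suggest_remediations(
    {"model_outlier": True, "risky_apps": ["com.bad.app"]})
```

In the described system, such suggestions could be shown in the user interface for manual action or, where the MDM APIs allow it, applied automatically.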
In one example, for immediate remediation capability, evaluation server 1408 can query network middlebox server app connections, blackhole an application, and/or ask an identity provider to prevent access to certain network or corporate resources. In various examples, the types of collected data can differ. For example, data may include data received from a firewall (e.g., a next-generation firewall (NGFW)) or an identity provider (e.g., Okta). In one example, the data can include data about a user (e.g., job title/role, such as pulled from Microsoft Active Directory or human resources systems), network traffic (e.g., as a type of data), firewall rules, and/or network access control (NAC)/other as an immediate remediation action (e.g., CISCO Security Connector) (e.g., like a forensics box for everything with respect to a device). In one embodiment, collected data includes DNS level info. In one example, data is collected using the Apple iOS NEDNSProxyProvider (where NE stands for Network Extension, and DNS stands for the Domain Name System). NEDNSProxyProvider is an API that lets, for example, evaluation server 1408 see all DNS traffic from the mobile device 1405 or other devices having an installed security component, and to perform various actions as a result. In one embodiment, data collected from the MDM software 1311 (e.g., including device identifiers) is used to determine if any of the devices have an existing client application that is already communicating with, and/or has previously communicated with, evaluation server 1408 regarding security or otherwise for the respective device. If so, data from the existing client application is used as part of evaluating mobile device 1405 or other new devices and generating a risk profile for each device.
CLOSING
In this description, various functions and operations may be described as being performed by or caused by software code to simplify description.
However, those skilled in the art will recognize what is meant by such expressions is that the functions result from execution of the code by a processor, such as a microprocessor. Alternatively, or in combination, the functions and operations can be implemented using special purpose circuitry, with or without software instructions, such as using an Application-Specific Integrated Circuit (ASIC) or a Field-Programmable Gate Array (FPGA). Embodiments can be implemented using hardwired circuitry without software instructions, or in combination with software instructions. Thus, the techniques are limited neither to any specific combination of hardware circuitry and software, nor to any particular source for the instructions executed by a computing device. While some embodiments can be implemented in fully functioning computers and computer systems, various embodiments are capable of being distributed as a computing product in a variety of forms and are capable of being applied regardless of the particular type of machine or computer-readable media used to actually effect the distribution. At least some aspects disclosed can be embodied, at least in part, in software. That is, the techniques may be carried out in a computing device or other system in response to its processor, such as a microprocessor, executing sequences of instructions contained in a memory, such as ROM, volatile RAM, non-volatile memory, cache or a remote storage device. Routines executed to implement the embodiments may be implemented as part of an operating system, middleware, service delivery platform, SDK (Software Development Kit) component, web services, or other specific application, component, program, object, module or sequence of instructions referred to as “computer programs.” Invocation interfaces to these routines can be exposed to a software development community as an API (Application Programming Interface). 
The computer programs typically comprise one or more instructions set at various times in various memory and storage devices in a computer, and that, when read and executed by one or more processors in a computer, cause the computer to perform operations necessary to execute elements involving the various aspects. A machine readable medium can be used to store software and data which when executed by a computing device causes the device to perform various methods. The executable software and data may be stored in various places including for example ROM, volatile RAM, non-volatile memory and/or cache. Portions of this software and/or data may be stored in any one of these storage devices. Further, the data and instructions can be obtained from centralized servers or peer to peer networks. Different portions of the data and instructions can be obtained from different centralized servers and/or peer to peer networks at different times and in different communication sessions or in a same communication session. The data and instructions can be obtained in entirety prior to the execution of the applications. Alternatively, portions of the data and instructions can be obtained dynamically, just in time, when needed for execution. Thus, it is not required that the data and instructions be on a machine readable medium in entirety at a particular instance of time. Examples of computer-readable media include but are not limited to recordable and non-recordable type media such as volatile and non-volatile memory devices, read only memory (ROM), random access memory (RAM), flash memory devices, removable disks, magnetic disk storage media, optical storage media (e.g., Compact Disk Read-Only Memory (CD ROMS), Digital Versatile Disks (DVDs), etc.), among others. The computer-readable media may store the instructions. 
The instructions may also be embodied in digital and analog communication links for electrical, optical, acoustical or other forms of propagated signals, such as carrier waves, infrared signals, digital signals, etc. However, propagated signals, such as carrier waves, infrared signals, digital signals, etc., are not tangible machine readable media and are not configured to store instructions. In general, a tangible or non-transitory machine readable medium includes any mechanism that provides (e.g., stores) information in a form accessible by a machine (e.g., a computer, network device, personal digital assistant, manufacturing tool, any device with a set of one or more processors, etc.). In various embodiments, hardwired circuitry may be used in combination with software instructions to implement the techniques. Thus, the techniques are neither limited to any specific combination of hardware circuitry and software nor to any particular source for the instructions executed by a computing device. Although some of the drawings illustrate a number of operations in a particular order, operations which are not order dependent may be reordered and other operations may be combined or broken out. While some reordering or other groupings are specifically mentioned, others will be apparent to those of ordinary skill in the art and so do not present an exhaustive list of alternatives. Moreover, it should be recognized that the stages could be implemented in hardware, firmware, software or any combination thereof. In the foregoing specification, the disclosure has been described with reference to specific exemplary embodiments thereof. It will be evident that various modifications may be made thereto without departing from the broader spirit and scope as set forth in the following claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense. | 113,219
11863393 | DETAILED DESCRIPTION OF EMBODIMENTS In the following description, for purposes of explanation, specific details are set forth in order to provide an understanding of the disclosure. It will be apparent, however, to one skilled in the art that the disclosure can be practiced without these details. Furthermore, one skilled in the art will recognize that embodiments of the present disclosure, described below, may be implemented in a variety of ways, such as a process, an apparatus, a system/device, or a method on a tangible computer-readable medium. Components, or modules, shown in diagrams are illustrative of exemplary embodiments of the disclosure and are meant to avoid obscuring the disclosure. It shall also be understood that throughout this discussion, components may be described as separate functional units, which may comprise sub-units, but those skilled in the art will recognize that various components, or portions thereof, may be divided into separate components or may be integrated together, including, for example, being in a single system or component. It should be noted that functions or operations discussed herein may be implemented as components. Components may be implemented in software, hardware, or a combination thereof. Furthermore, connections between components or systems within the figures are not intended to be limited to direct connections. Rather, data between these components may be modified, re-formatted, or otherwise changed by intermediary components. Also, additional or fewer connections may be used. It shall also be noted that the terms “coupled,” “connected,” “communicatively coupled,” “interfacing,” “interface,” or any of their derivatives shall be understood to include direct connections, indirect connections through one or more intermediary devices, and wireless connections. 
It shall also be noted that any communication, such as a signal, response, reply, acknowledgment, message, query, etc., may comprise one or more exchanges of information. Reference in the specification to “one or more embodiments,” “preferred embodiment,” “an embodiment,” “embodiments,” or the like means that a particular feature, structure, characteristic, or function described in connection with the embodiment is included in at least one embodiment of the disclosure and may be in more than one embodiment. Also, the appearances of the above-noted phrases in various places in the specification are not necessarily all referring to the same embodiment or embodiments. The use of certain terms in various places in the specification is for illustration and should not be construed as limiting. The terms “include,” “including,” “comprise,” and “comprising” shall be understood to be open terms and any examples are provided by way of illustration and shall not be used to limit the scope of this disclosure. A service, function, or resource is not limited to a single service, function, or resource; usage of these terms may refer to a grouping of related services, functions, or resources, which may be distributed or aggregated. The use of memory, database, information base, data store, tables, hardware, cache, and the like may be used herein to refer to system component or components into which information may be entered or otherwise recorded. The terms “data,” “information,” along with similar terms, may be replaced by other terminologies referring to a group of one or more bits, and may be used interchangeably. The terms “packet” or “frame” shall be understood to mean a group of one or more bits. The term “frame” or “packet” shall not be interpreted as limiting embodiments of the present invention to 5G networks. 
The terms “packet,” “frame,” “data,” or “data traffic” may be replaced by other terminologies referring to a group of bits, such as “datagram” or “cell.” The words “optimal,” “optimize,” “optimization,” and the like refer to an improvement of an outcome or a process and do not require that the specified outcome or process has achieved an “optimal” or peak state. It shall be noted that: (1) certain steps may optionally be performed; (2) steps may not be limited to the specific order set forth herein; (3) certain steps may be performed in different orders; and (4) certain steps may be done concurrently. A. O-RAN Deployment Scenarios A radio access network (RAN) is part of a telecommunication system. It implements a radio access technology (RAT) to provide a connection between a device, e.g., a mobile phone, and a core network (CN). O-RAN is an approach based on interoperability and standardization of RAN elements, including a unified interconnection standard for white-box hardware and open source software elements from different vendors. The O-RAN Alliance has specified the O-RAN Cloud (O-Cloud), as O-RAN includes the cloudification of the RAN for single or multiple tenants and end-to-end automation of the RAN. An O-Cloud may include an edge cloud as a virtual distribution unit (vDU) and/or a virtual central unit (vCU). FIG. 1 depicts various O-RAN deployment scenarios according to embodiments of the present disclosure. As shown in FIG. 1, an O-RU 105 couples to an O-CU 115 via an O-DU 110. The O-Cloud platform may support RAN functions and involve hardware accelerators required by the RAN functions, as well as software stacks, which may be decoupled from the hardware accelerators. Each O-Cloud uses open interfaces. Different deployment scenarios may be used for an O-RAN. For example, the O-RU may be proprietary and deployed on the cell site (e.g., in Scenarios A-D), while the O-DU and O-CU may be deployed separately as a region cloud and an edge cloud, or jointly deployed in an edge cloud (e.g., scenario A).
Alternatively, the O-DU and the O-RU may be jointly deployed at a cell site, as shown in scenario E. It shall be noted that in scenarios E and F, an O-RAN may be deployed fully on cloud with the O-CU deployed on a region cloud, and the O-RU either deployed on an O-Cloud on a cell site (with the O-DU deployed on an edge cloud in this case) or deployed together with the O-DU on a cell site. A full O-RAN cloud deployment may provide cloud services extending from the O-RU to the O-DU and O-CU. FIG. 2A depicts a block diagram for cloud platform components, according to embodiments of the present disclosure. The cloud platform 200 comprises cloud platform hardware 210 (e.g., hardware accelerations for servers, switches, and storage, etc.) and cloud platform software 220. The cloud platform software 220 may comprise different modules for different functions, e.g., a VM/container management and orchestration module 222, a cloud platform management module 224 for various management functions (e.g., service management, host management, user management, fault management, etc.), and a cloud platform runtime module 226 for various accelerator/network driver running, storage defining, etc. FIG. 2B depicts a block diagram for O-RAN cloud deployment, according to embodiments of the present disclosure. A plurality of O-RUs, e.g., 254a, 254b, are deployed as a cell site O-Cloud 252, which may be configured into multiple instances to serve multiple communication service providers or users. The cell site O-Cloud 252 couples to a management network via a fronthaul network 260. The management network may comprise O-Cloud management (OCM) 280, which comprises one or more controllers, and a plurality of O-Cloud compute nodes 270. The one or more controllers may be synchronized or coordinated for operation via network time protocol (NTP), while the plurality of O-RUs may be synchronized or coordinated for operation via precision time protocol (PTP).
Both the O-RUs and the compute nodes may provide service of high availability with redundant hardware, software, or a combination of both. O-RAN supports the option of placing network functions (NFs) in different places along the signal path. That option, also referred to as a functional split, lets network engineers optimize performance and make tradeoffs. The functional splits involve different 5G protocol stack layers, i.e., layer 1, layer 2, and layer 3. The 5G layer-1 (L1) is the PHYSICAL layer. The 5G layer-2 (L2) includes MAC, radio link control (RLC), and packet data convergence protocol (PDCP) sublayers. The 5G layer-3 (L3) is radio resource control (RRC). FIG. 3 depicts different functional splits of an O-RAN. 3GPP has defined 8 functional split options for fronthaul networks in Technical Report 38.801 V 14.0.0 (2017-03) as below: Option 1 (RRC/PDCP); Option 2 (PDCP/RLC split); Option 3 (High RLC/Low RLC split, or Intra RLC split); Option 4 (RLC-MAC split); Option 5 (Intra MAC split); Option 6 (MAC-PHY split); Option 7 (Intra PHY split); and Option 8 (PHY-RF split). The DU is responsible for high L1 and low L2, which contain the data link layer and scheduling functions. The CU is responsible for high L2 and L3 (network layer) functions. For example, with an option 2 split, some L2 Ethernet functions may reside in the remote radio head (RRH). Also, aggregation and statistical multiplexing may be done before the data is passed across the fronthaul network. This may greatly reduce the amount of data transmitted across the interface. In another example, with an option 7 split, some L1 functions may reside in the baseband unit (BBU) and pooling gains may be realized with centralized processing. A service provider (SP) may adopt more than one Open RAN deployment model based on band, fronthaul bandwidth requirements, or deployment type (macro/small cell), etc.
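The eight 3GPP split options listed above can be restated as a small lookup table; the sketch below is a non-normative restatement of the list from TR 38.801, kept only as an aid to the discussion:

```python
# The eight 3GPP fronthaul functional split options from TR 38.801,
# restated as a lookup table. Names follow the list in the text above.
SPLIT_OPTIONS = {
    1: "RRC/PDCP",
    2: "PDCP/RLC split",
    3: "High RLC/Low RLC (intra-RLC) split",
    4: "RLC-MAC split",
    5: "Intra-MAC split",
    6: "MAC-PHY split",
    7: "Intra-PHY split",
    8: "PHY-RF split",
}

def split_name(option: int) -> str:
    """Return the descriptive name of a split option.

    Higher option numbers move the split point lower in the stack,
    pushing more PHY processing toward the centralized side.
    """
    if option not in SPLIT_OPTIONS:
        raise ValueError(f"unknown split option {option}")
    return SPLIT_OPTIONS[option]
```

For example, `split_name(7)` names the intra-PHY split that the O-RAN fronthaul specifications refine into sub-options 7.1, 7.2, and 7.3.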
Deployment models are influenced or decided based on multiple factors, including fibre availability, real-estate/site/location constraints at pre-aggregation (Pre-Agg) and cell sites, total cost of ownership (TCO), operational preference, etc. With a cloud infrastructure, a Telco cloud may add services more quickly, respond faster to changes in demand, and centrally manage its resources more efficiently. A current approach to address the high availability requirement in Telco RAN is adding redundant resources. However, such an approach adds cost for Telco cloud deployment, especially when the redundant resources are not used efficiently. Described in the following sections are system and method embodiments to meet the high availability requirement in Telco RAN for improving efficiency and performance. B. Embodiments for High Availability in O-RU An RU converts radio signals sent to and from the antenna to a digital signal that can be transmitted over the fronthaul to a DU. An O-RU is a logical node hosting low PHY and RF processing based on a lower layer functional split. Function split option 7 divides into sub-options 7.1, 7.2, and 7.3, which vary in the way of dividing the PHY between the DU and the RU. Split option 7.2 is adopted by O-RAN fronthaul specifications for splitting between high PHY residing in the O-DU and low PHY residing in the O-RU. FIG. 4 depicts a block diagram of an O-RU, according to embodiments of the present disclosure. The O-RU 405 may be deployed on a cell site and comprise one or more RF clusters 410, one or more computation clusters 420, and one or more interface clusters 430. The one or more RF clusters 410 handle the RF front end (RF FE) to establish wireless communications with one or more user equipment (UE) 402 via the O-RU antenna. The one or more computation clusters 420 handle digital front end (DFE) and low PHY baseband processing. The one or more interface clusters 430 handle fronthaul transport, e.g., interfacing to/from an O-DU.
A local high availability (HA) manager 440 couples to all three types of clusters for broadcasting internal state and establishing a low latency path to a centralized HA manager for load balancing decisions across cell sites if required. In one or more embodiments, the internal state broadcasting may be symbol tick-based broadcasting. FIG. 5 depicts a block diagram of one RF cluster in the O-RU, according to embodiments of the present disclosure. The RF cluster 410 provides redundant RF processing components, such as power amplifiers (PAs) 411, low noise amplifiers (LNAs) 412, digital-to-analog converters (DACs) 413, analog-to-digital converters (ADCs) 414, duplexers/circulators 415, smart RF switches/sensors, etc., to establish one or more active RF paths. In one or more embodiments, the local HA manager 440 may monitor the RF cluster 410 and use one or more parameters for RF path management, such as activating a new RF path, adding more resources to an active RF path, removing resources from an active RF path, deactivating an active RF path, etc. The local HA manager 440 may use an artificial intelligence (AI) or machine learning (ML) based algorithm for RF path management. The one or more parameters may comprise temperature, RF power, changing rate of temperature, changing rate of RF output power, voltage variations, current variations, etc. The local HA manager 440 may also establish a low latency path to a centralized HA manager 450, which may connect to a plurality of local HA managers, including local HA manager 445 for other O-RUs, such that the centralized HA manager may implement HA management for O-RUs at a high hierarchical level. In one or more embodiments, HA implementation on the cell site level or O-RU level may provide redundant low PHY, transceiver, and PA, and enable prediction or early detection of RF component failure based on an AI/ML algorithm. New instance(s) may be enabled using smart RF switch(es) in case of an existing instance failure.
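The parameter-driven RF path management described above amounts to checking readings against limits and acting on violations. A minimal sketch follows; the parameter names, threshold values, and data shapes are illustrative assumptions, not values from the disclosure:

```python
from dataclasses import dataclass

# Illustrative limits (hypothetical values, not from the disclosure).
THRESHOLDS = {
    "temperature_c": 85.0,       # maximum component temperature
    "temp_rate_c_per_s": 2.0,    # maximum rate of temperature change
    "rf_power_dbm": 46.0,        # maximum RF output power
}

@dataclass
class RfPathReading:
    """One monitoring sample for an active RF path."""
    temperature_c: float
    temp_rate_c_per_s: float
    rf_power_dbm: float

def evaluate_rf_path(reading: RfPathReading) -> list:
    """Return the parameters that exceed their thresholds.

    A non-empty list would prompt the local HA manager to act, e.g.,
    deactivate the path and activate a redundant one via the smart
    RF switch, or escalate to the centralized HA manager.
    """
    violations = []
    for name, limit in THRESHOLDS.items():
        if getattr(reading, name) > limit:
            violations.append(name)
    return violations
```

An AI/ML-based manager would replace the fixed thresholds with a learned predictor, but the decision interface — sample in, required actions out — stays the same.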
The local HA manager and the central HA manager may form hierarchical and localized redundant resource management for a low latency RF path. FIG. 6 depicts a block diagram of one computation cluster in the O-RU, according to embodiments of the present disclosure. The computation cluster 420 comprises various components or modules for handling digital front end and low PHY baseband processing. In one or more embodiments, the components or modules handling the digital front end may comprise one or more digital up converters (DUCs) 421, one or more digital down converters (DDCs) 422, one or more digital pre-distortion (DPD) modules 423, and one or more crest factor reduction (CFR) modules 424. Low PHY baseband processing may be implemented by using one or more field-programmable gate arrays (FPGAs) or application-specific integrated circuits (ASICs) 425 to handle functions such as fast Fourier transform (FFT)/inverse fast Fourier transform (iFFT), frequency domain physical random access channel (PRACH) filtering, precoding, cyclic prefix (CP) addition/removal, and digital beamforming (BF), etc. In one or more embodiments, for HA implementation within the same hardware (or O-RU), the computation cluster 420 may be configured to provide redundant compute resources with time-synced computation and time stamping within the compute cluster to maintain a sub-symbol level granularity. With this feature, even sub-modules may be run on different compute clusters. The time stamping may be enabled between the computation cluster 420 and the RF cluster 410 per instance. Furthermore, buffer occupancy variation may be used to indicate system issues. For HA implementation across locations, the computation cluster 420 may be configured to provide load sharing across different O-RUs. At the symbol boundary, the computation cluster 420 keeps broadcasting internal states of the O-RU, such as frame, subframe, slot, and symbol ticks, internal states of buffers, DPD coefficients, CFR coefficients, etc.
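The symbol-boundary state broadcast can be pictured as a small message carrying the tick counters and coefficient state; the field set and the JSON encoding below are illustrative assumptions for the sketch, not a wire format from the disclosure:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class ORuStateTick:
    """Internal state broadcast at each symbol boundary (illustrative fields,
    following the list in the text: frame/subframe/slot/symbol ticks,
    buffer state, DPD and CFR coefficients)."""
    frame: int
    subframe: int
    slot: int
    symbol: int
    buffer_occupancy: float   # fraction of buffer in use
    dpd_coefficients: list    # digital pre-distortion state
    cfr_coefficients: list    # crest factor reduction state

def encode_tick(state: ORuStateTick) -> bytes:
    # Serialize the state for broadcast over the low latency path,
    # e.g., as a datagram over UDP/IP.
    return json.dumps(asdict(state)).encode()
```

A peer computation cluster receiving such broadcasts can take over load with sub-symbol alignment, since every message pins the sender's position in the frame structure.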
In one or more embodiments, across location may refer to across different sectors at the same physical site. For example, the first 120 degrees in space may be covered in one sector, while the next 120-degree angle in space is covered by another RU on the same physical site. Essentially, the hardware computational cluster may service any of these sectors (angles). FIG. 7 depicts a block diagram of one interface cluster in the O-RU, according to embodiments of the present disclosure. The interface cluster 430 comprises various components or modules for synchronization 432 (e.g., via GPS clock synchronization 435 and/or the IEEE 1588 precision time protocol 436) and fronthaul transport. In one or more embodiments, the fronthaul connectivity between the O-RU and O-DU may be implemented via an enhanced Common Public Radio Interface (eCPRI) 431, which may be established using fiber or Ethernet 434. Furthermore, the interface cluster 430 may comprise one or more status indicators 437, e.g., LEDs, displaying status for the fronthaul transport interface. In one or more embodiments, the O-RAN fronthaul specifications may also support a protocol stack that transmits the signals over the User Datagram Protocol (UDP)/Internet Protocol (IP) suite 433, which provides a direct way to send and receive datagrams over an IP network. UDP/IP may be used for broadcasting messages over a network. FIG. 8 depicts a flow diagram for high availability management in a cell site O-Cloud comprising multiple O-RUs, according to embodiments of the present disclosure. Each O-RU comprises one or more RF clusters, one or more computation clusters, and one or more interface clusters. In step 805, each of the multiple O-RUs couples to a local HA manager, respectively. In step 810, one or more O-RU instances are instantiated, with redundancy, on the cell site O-Cloud to serve one or more users respectively. The one or more O-RU instances involve one or more O-RUs among the multiple O-RUs.
Each O-RU instance comprises at least one RF cluster, at least one computation cluster, and at least one interface cluster. The redundancy may be an RF cluster redundancy, a computation cluster redundancy, an interface cluster redundancy, or a combined redundancy for RF/computation/interface clusters. In some embodiments, one O-RU may have one or more O-RU instances, and one O-RU instance may involve one or more O-RUs in the cell site O-Cloud. In step 815, the local HA manager for each of the one or more O-RU instances monitors instance performance of the one or more O-RU instances for failure prediction/detection. The local HA manager may use an AI/ML-based algorithm to monitor one or more parameters comprising O-RU temperature, RF power, a change rate of temperature, a change rate of RF output power, a change rate of voltage, a change rate of current, data rate, latency, etc. In step 820, in response to a failure for the at least one O-RU instance being detected or predicted, one or more new O-RU instances are instantiated intra O-RU (in the same O-RU with the detected/predicted O-RU instance failure) or across O-RUs (in another O-RU) for replacement. For example, when the latency for one O-RU instance is beyond a latency threshold, the O-RU instance may need to be replaced by a new O-RU instance. The failure may refer to one or more parameters being above or below a predetermined threshold. The new O-RU instance may refer to an O-RU instance having at least one of a new RF cluster, a new computation cluster, and a new interface cluster as compared to an existing O-RU instance. For example, an existing O-RU instance may be replaced with a new O-RU instance by changing an RF cluster (or a computation cluster, etc.) in the existing O-RU instance into a new RF cluster (or a new computation cluster, etc.).
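The monitoring-and-replacement loop of steps 815-820 can be sketched as follows. The instance representation, the single latency criterion, and the spare-pool model are illustrative assumptions standing in for the full parameter set and the AI/ML-based detector:

```python
# Hypothetical limit; the disclosure only says "a latency threshold".
LATENCY_THRESHOLD_MS = 1.0

def check_and_replace(instances, spare_pool):
    """Detect failed/failing O-RU instances and assign replacements.

    For each instance whose latency exceeds the threshold, pick a spare
    from the same O-RU if one exists (intra-O-RU replacement); otherwise
    take any spare, standing in for cross-O-RU instantiation via the
    centralized HA manager. Returns {failed_id: replacement_id}.
    """
    replacements = {}
    for inst in instances:
        if inst["latency_ms"] > LATENCY_THRESHOLD_MS:
            same_oru = [s for s in spare_pool if s["oru"] == inst["oru"]]
            new = same_oru[0] if same_oru else spare_pool[0]
            spare_pool.remove(new)  # the spare is now in service
            replacements[inst["id"]] = new["id"]
    return replacements
```

A real detector would combine several parameters (temperature, power rates, data rate) and may act on a prediction rather than an observed violation, but the replacement decision keeps this intra-versus-cross-O-RU shape.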
New O-RU instance instantiation in another O-RU may be implemented via a centralized HA manager that couples to the local HA manager of the O-RU and a local HA manager of the other O-RU. In one or more embodiments, the centralized HA manager may implement load balancing across O-RUs or cell sites when the number of O-RU instances in one O-RU is excessive, e.g., above a predetermined number. In one or more embodiments, high availability management for O-RU instances may be implemented independently or in combination with high availability management for O-DU instances, described in detail in Section C below, for O-Cloud services. C. Embodiments for High Availability in O-DU FIG. 9 depicts a schematic diagram of an interaction of an O-DU 920 with an O-RU 910 and an O-CU 930, according to embodiments of the present disclosure. The O-DU 920 couples to the O-RU and the O-CU via a fronthaul 915 and a mid-haul interface 925, respectively. The fronthaul 915 may be an open fronthaul between the O-DU and one or more O-RUs to allow connection between any vendor's DU and any vendor's RU. To enable this multi-vendor DU and RU interconnection, some signaling formats and control messaging are detailed by an open standard, i.e., the O-RAN Alliance, as part of the O-RAN fronthaul specification. O-RAN details synchronization architectures for the 7-2x split in open fronthaul networks. The O-RAN fronthaul defines operations in different planes: Control Plane (C-Plane): defining scheduling and coordination required for data transfer, beam-forming, etc. User Plane (U-Plane): for efficient data transfer within the strict time limits of 5G numerologies. Synchronization Plane (S-Plane): responsible for the timing and sync aspects between the O-DU and O-RU.
For O-RAN cloud deployments, highly accurate synchronization between an O-DU and O-RUs may be necessary to achieve controlled linking for inter-O-RU synchronization for time division duplex (TDD), carrier aggregation using multiple O-RUs, multiple-input and multiple-output (MIMO), and similar processes. In one or more embodiments, the O-DU 920 comprises a transport network interface controller (NIC, also known as a network interface card) 922 for O-RU communication, a transport NIC 924 for O-CU communication, one or more CPU cores and memory blocks 926 coupled to the transport NICs 922 and 924, one or more hardware accelerators 928, etc. The one or more CPU cores and memory blocks 926 may be instantiated into one or more O-DU instances to enable one or more virtual network functions (VNFs). The O-DU 920 may further comprise an O-DU hardware accelerator 928, e.g., an FPGA, for processing various functions at the high PHY, MAC, and RLC layers. Different software kits, e.g., the Data Plane Development Kit (DPDK), single root I/O virtualization (SR-IOV), etc., may be used for O-DU performance enhancement. The O-DU 920 may further comprise a synchronization module 432 to support synchronization between the O-DU and O-CU/O-RU via a GPS clock and/or the IEEE 1588v2 precision time protocol (PTP) and fronthaul transport. A local HA manager 940 couples to the O-DU 920 for monitoring internal states of the O-DU 920 and broadcasting internal state to other servers. In one or more embodiments, the local HA managers (940, 945 . . . ) for O-DUs are separate from the local HA managers (440, 445 . . . ) for O-RUs. The internal states may comprise buffer fullness level, channel state information, frame/subframe/slot/symbol ticks, hybrid automatic repeat request (HARQ) buffer information, etc. The local HA manager 940 may use an AI/ML-based algorithm for O-DU instance monitoring.
The local HA manager 940 may also establish a low latency path to a centralized HA manager 450, which may be deployed on the cloud, such as on a regional cloud (O-CU). The centralized HA manager 450 may connect to a plurality of local HA managers for O-RUs (e.g., local HA managers 440 and 445) and a plurality of local HA managers for O-DUs (e.g., local HA managers 940 and 945), such that the centralized HA manager may implement HA management at a higher hierarchical level across O-RUs and/or O-DUs. FIG. 10 depicts a diagram of O-DU PHY processing blocks for downlink flows, according to embodiments of the present disclosure. Downlink data 1005 from layer 2 or above 1010 may comprise physical downlink shared channel (PDSCH) transport blocks (TBs), PDSCH demodulation reference signals (DMRS), physical downlink control channel (PDCCH) downlink control information (DCI), PDCCH demodulation reference signals (DMRS), Physical Broadcast Channel (PBCH) TBs, primary synchronization signal (PSS)/secondary synchronization signal (SSS), PBCH DMRS, and reference signals, such as channel state information reference signal (CSI-RS), phase tracking reference signal (PT-RS), and/or tracking reference signal (TRS). These different parts of the downlink data 1005 undergo respective data processing processes. For example, PDSCH TBs may have processing steps comprising TB cyclic redundancy check (CRC) attachment, codeblock (CB) segmentation, low-density parity-check (LDPC) encoding, rate matching, CB concatenation, scrambling, modulation, and layer mapping, etc. The different parts of the downlink data 1005, upon respective processing, may be jointly processed together, e.g., for resource element (RE) mapping and in-phase and quadrature (IQ) compression, for downlink transmission via an O-RAN fronthaul interface 1020 to one or more O-RUs. FIG. 11 depicts a diagram of O-DU PHY processing blocks for uplink flow, according to embodiments of the present disclosure.
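Before turning to the uplink direction, the PDSCH transport-block chain listed above can be restated as an ordered pipeline. This is a non-normative restatement of the stage names from the text, with the jointly applied stages kept separate:

```python
# Per-channel PDSCH transport-block processing chain (downlink),
# stage names restated from the text above.
PDSCH_TB_PIPELINE = [
    "TB CRC attachment",
    "CB segmentation",
    "LDPC encoding",
    "rate matching",
    "CB concatenation",
    "scrambling",
    "modulation",
    "layer mapping",
]

# Stages applied jointly across all downlink channels before the
# fronthaul interface.
JOINT_STAGES = ["RE mapping", "IQ compression"]

def full_downlink_chain():
    """Full ordered stage list from transport block to fronthaul."""
    return PDSCH_TB_PIPELINE + JOINT_STAGES
```

The ordering matters: CRC attachment and segmentation must precede LDPC encoding, and only after concatenation, scrambling, modulation, and layer mapping are the channels combined for RE mapping and IQ compression.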
Uplink data 1105 sent from one or more O-RUs via the O-RAN fronthaul interface 1020 is processed at the O-DU for RE mapping and IQ compression, and then decomposed into multiple data components for respective processing. The multiple data components may comprise physical uplink shared channel (PUSCH) data (with or without uplink control information (UCI)), physical uplink control channel (PUCCH) DCI, PRACH, and reference signals, such as sounding reference signal (SRS) or PT-RS, etc. For example, the PUSCH data may undergo processing comprising channel estimation, channel equalization, inverse discrete Fourier transform (IDFT), demodulation, descrambling, rate de-matching, LDPC decoding, and/or CRC checking. The multiple data components, after respective processing, may be transmitted to layer 2 or above at the O-DU for further processing. FIG. 12 depicts a flow diagram for high availability management in an O-Cloud for O-DU, according to embodiments of the present disclosure. The O-Cloud for O-DU comprises multiple O-DUs and may be an Edge O-Cloud or be the same as the cell site O-Cloud comprising multiple O-RUs. Each O-DU comprises one or more cores and memory blocks that may be instantiated into one or more O-DU instances to enable one or more virtual network functions (VNFs). Each O-DU may further comprise one or more O-DU hardware accelerators to process various functions at the high PHY, MAC, and RLC layers. In step 1205, each of the multiple O-DUs couples to a local HA manager, respectively. In step 1210, one or more O-DU instances are instantiated, with redundancy, on the O-Cloud for O-DU to serve one or more users. Each O-DU instance involves at least one core, at least one memory block, and optionally an O-DU hardware accelerator. The redundancy may be a core redundancy, a memory block redundancy, an O-DU hardware accelerator redundancy, or a combination thereof.
In some embodiments, one O-DU may have one or more O-DU instances, and one O-DU instance may involve one or more O-DUs in the O-Cloud for O-DU. In step 1215, the local HA manager for an O-DU involved in at least one O-DU instance monitors internal states for each of the at least one O-DU instance. The monitored internal states may comprise buffer fullness level, frame/subframe/slot/symbol ticks, HARQ buffer information, etc. In step 1220, in response to one or more internal states being above or below corresponding predetermined state thresholds, one or more new O-DU instances are instantiated in the O-Cloud for O-DU, e.g., in the O-DU or in another O-DU, as a replacement for the at least one O-DU instance. For example, when the buffer fullness for one O-DU instance is beyond a fullness threshold, the O-DU instance may need to be replaced by a new O-DU instance with more resources to maintain a desired operation performance. The new O-DU instance may refer to an O-DU instance that uses newly allotted cores and/or memory blocks, or an O-DU instance that has added cores and/or memory blocks in addition to originally allotted resources. For example, an existing O-DU instance may be replaced with a new O-DU instance by adding more resources, e.g., more cores and memory blocks, to the existing O-DU instance. New O-DU instance instantiation in another O-DU may be implemented via a centralized HA manager that couples to the local HA manager of the O-DU and a local HA manager of the other O-DU. In one or more embodiments, the centralized HA manager may implement load balancing across O-DUs when the number of O-DU instances in one O-DU is excessive, e.g., above a predetermined number. In one or more embodiments, high availability management for O-DU instances may be implemented independently or in combination with the aforementioned high availability management for O-RU instances.
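The buffer-fullness criterion of steps 1215-1220 can be sketched as a simple rescaling rule. The instance representation, the threshold value, and the amount of added resources are illustrative assumptions; the disclosure only requires that a replacement instance has newly allotted or additional cores/memory:

```python
# Hypothetical limit; the disclosure only says "a fullness threshold".
FULLNESS_THRESHOLD = 0.9

def rescale_on_fullness(instance):
    """Replace an O-DU instance whose buffer fullness crosses the
    threshold with a spec that adds cores and memory; otherwise
    return the instance unchanged.
    """
    if instance["buffer_fullness"] > FULLNESS_THRESHOLD:
        return {
            **instance,
            "cores": instance["cores"] + 2,          # added cores
            "memory_gb": instance["memory_gb"] * 2,  # added memory
            "buffer_fullness": 0.0,                  # fresh buffers
        }
    return instance
```

Whether the replacement lands in the same O-DU or another one is a separate placement decision, made by the centralized HA manager when the local O-DU is already saturated.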
It will be appreciated by those skilled in the art that the preceding examples and embodiments are exemplary and not limiting to the scope of the present disclosure. It is intended that all permutations, enhancements, equivalents, combinations, and improvements thereto that are apparent to those skilled in the art upon a reading of the specification and a study of the drawings are included within the true spirit and scope of the present disclosure. It shall also be noted that elements of any claims may be arranged differently, including having multiple dependencies, configurations, and combinations.
11863394 | DETAILED DESCRIPTION OF THE EMBODIMENTS The embodiments provide a connectivity detection session creation method, a network device, and a system, to create a connectivity detection session in an EVPN. The terms “first”, “second”, “third”, “fourth”, and the like (if existent) are intended to distinguish between similar objects but do not necessarily indicate a specific order or sequence. It may be understood that the data termed in such a way are interchangeable in proper circumstances, so that the embodiments of the present invention described herein can be implemented in orders other than the order illustrated or described herein. Moreover, the terms “include” and “contain” and any other variants mean to cover the non-exclusive inclusion; for example, a process, method, system, product, or device that includes a list of steps or units is not necessarily limited to those units, but may include other units not expressly listed or inherent to such a process, method, system, product, or device. The embodiments are applied to a network scenario of implementing interconnection between VPN sites by using an EVPN technology. An EVPN is a layer 2 network interconnection technology. In the EVPN, MAC learning between network devices (for example, PE devices) is implemented on a control plane, and BGP is used as the control plane protocol to perform MAC address learning, access topology discovery, and VPN site discovery. The EVPN mainly includes a VPWS network and an E-LAN network. The VPWS network is also referred to as an E-Line network. The E-Line network is an MPLS-based layer 2 VPN service, and is a point-to-point communications service that enables two network devices to communicate with each other bidirectionally. The E-LAN provides a multipoint-to-multipoint layer 2 VPN service. In the E-LAN network, a packet is transparently transmitted, so that a plurality of network devices can communicate with each other in a same local area network.
In the E-LAN network, each network device may send a data packet in a multicast manner. In other words, any network device in the E-LAN network may send a message to all network devices in the E-LAN network in the multicast manner. The multicast manner may be a broadcast or multicast manner. It may be understood that a network device is a device that performs a routing and forwarding function, and may be a device such as a router, a switch, or a forwarder. The router, the switch, or the forwarder may be a physical device, or may be a virtual device (for example, a virtual server, a virtual router, a virtual switch, or a virtualized forwarder) implemented based on a virtualization technology. The network device may alternatively be a PE device or the like based on different deployment locations and roles of the network device in a network. For example, in FIG. 1, an EVPN 100 includes at least two network devices, for example, a PE 1, a PE 2, and a PE 3. Three sites (site 1, site 2, and site 3) of a VPN service 1 (VPN1 for short) separately access the EVPN by using a CE 1, a CE 2, and a CE 3, and the three sites are connected to each other through the EVPN. It may be understood that the EVPN 100 in this embodiment may further include a control management device. The control management device is configured to control and manage the network device in the EVPN 100. It may be further understood that a problem such as device reboot or a link fault may occur on a connection between any two of the at least two connected network devices, and consequently a network disconnection or network drop occurs. In this case, connectivity detection such as CFM detection needs to be performed between the any two network devices. In a CFM detection process, two network devices periodically send CFM detection packets to each other.
If one network device does not receive, in several periods, the CFM detection packet sent by the other network device, it may be determined that the two network devices are disconnected, and therefore an alarm message needs to be reported to the control management device. In an exemplary implementation, the control management device may be a server in the EVPN. The control management device is configured to receive and process alarm messages reported by the at least two network devices. In an exemplary implementation, the server may vary with a configuration or performance. The server may include one or more central processing units (CPUs) (for example, one or more processors), a memory, and one or more storage media (for example, one or more mass storage devices) for storing an application program or data. The memory and the storage medium may perform temporary storage or permanent storage. The program stored in the storage medium may include one or more modules, and each module may include a series of instruction operations for the server. Further, the central processing unit may be configured to communicate with the storage medium, to perform, on the server, a series of instruction operations in the storage medium. There are a large quantity of network devices in the EVPN. If a connectivity detection technology that is the same as that in the E-Line network is used, in other words, if a CFM instance is configured between every two network devices, the workload is heavy and configuration is time-consuming and labor-intensive. In this embodiment, when a first network device is newly added to the EVPN, an inclusive multicast routing table needs to be configured for only the first network device.
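The periodic CFM exchange described above amounts to a watchdog: if no detection packet arrives within some number of periods, the peer is declared unreachable and an alarm is raised. A minimal sketch follows; the miss limit of three periods and the method names are assumptions for illustration (the text only says "several periods"):

```python
class CfmWatchdog:
    """Per-peer watchdog for periodic CFM detection packets (sketch)."""

    def __init__(self, miss_limit: int = 3):
        self.miss_limit = miss_limit  # periods without a packet before alarm
        self.missed = 0

    def on_period_elapsed(self, packet_received: bool) -> bool:
        """Called once per CFM period.

        Returns True when an alarm message should be reported to the
        control management device, i.e., when the peer has been silent
        for miss_limit consecutive periods.
        """
        if packet_received:
            self.missed = 0
            return False
        self.missed += 1
        return self.missed >= self.miss_limit
```

Each end of a connectivity detection session runs such a watchdog independently, so either side can report the disconnection.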
In this case, when a creation message of a connectivity detection session is received from a second network device, because the creation message of the connectivity detection session from the second network device carries an inclusive multicast route of the second network device and session information of the second network device, and the local inclusive multicast routing table includes the inclusive multicast route of the second network device, the first network device may create the connectivity detection session with the second network device based on the session information of the second network device. Therefore, a connectivity detection session may be created between the first network device and each existing network device in the EVPN without a need of manually configuring a connectivity detection session instance between the first network device and each of those network devices. This simplifies the configuration process. In view of this, referring to FIG. 2-1, the embodiments provide a connectivity detection session creation method, and the method is applied to an EVPN. The EVPN includes a first network device and a second network device. In this embodiment, the second network device is a sender of a connectivity detection session, and the first network device is a receiver of the connectivity detection session. In an exemplary implementation, the second network device may alternatively be a receiver of a connectivity detection session, and the first network device may alternatively be a sender of the connectivity detection session. The method includes the following steps. 201: The second network device obtains an inclusive multicast route of the second network device. In this embodiment, when the second network device enters the EVPN, the second network device may obtain the inclusive multicast route of the second network device. 
In an exemplary implementation, the second network device may locally receive the inclusive multicast route of the second network device that is added by a network administrator by using a command line. In an exemplary implementation, the second network device may receive remote control performed by a network administrator through a control management device, to add the inclusive multicast route of the second network device. In an exemplary implementation, the control management device may send an inclusive multicast route to the second network device. After receiving the inclusive multicast route, the second network device uses the inclusive multicast route as the inclusive multicast route of the second network device, to complete configuration of the inclusive multicast route. In an exemplary implementation, the network administrator may send an inclusive multicast route to the second network device through the control management device, or the control management device may automatically send an inclusive multicast route to the second network device. In an exemplary implementation, the control management device may select an available inclusive multicast route from a pre-stored inclusive multicast routing table, and configure the available inclusive multicast route for the second network device. In an exemplary implementation, the control management device may alternatively set a value randomly, configure the value for the second network device, and update an inclusive multicast routing table, so that a value of an entry in the inclusive multicast routing table is the inclusive multicast route of the second network device. It may be understood that the foregoing steps performed by the control management device may be performed by the control management device under control of the network administrator, or may be automatically performed by the control management device. 
It may be understood that the inclusive multicast route may carry a route distinguisher (RD), a route target (RT) value, a source IP such as a local loopback interface address of a local network device, and provider multicast service interface (PMSI) information of an EVPN instance on the local network device. The PMSI is used to carry label information encapsulated during multicast packet transmission. The PMSI and the RT value are carried in attribute information of a route, and the RD and the source IP are carried in network layer reachability information (NLRI) of the route. 202: The first network device obtains the inclusive multicast routing table. In this embodiment, when the first network device newly enters the network, the first network device may obtain the inclusive multicast routing table. The inclusive multicast routing table includes a plurality of entries, and a value of each entry is one inclusive multicast route. It may be understood that the inclusive multicast routing table includes the inclusive multicast route of the second network device. In other words, a value of an entry in the inclusive multicast routing table is a value of the inclusive multicast route of the second network device. In an exemplary implementation, the first network device may locally receive the value of each entry in the inclusive multicast routing table that is added by the network administrator by using a command line. In an exemplary implementation, the first network device may alternatively receive remote control performed by the network administrator, to add the value of each entry in the inclusive multicast routing table. In an exemplary implementation, the control management device may send the inclusive multicast routing table to the first network device, so that the first network device stores the inclusive multicast routing table. 
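The fields of the inclusive multicast route listed in step 201 can be modeled as a simple data structure. The following is a minimal, non-limiting Python sketch; all field names and example values are illustrative assumptions, not the actual wire format:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class InclusiveMulticastRoute:
    # The RD and the source IP are carried in the NLRI of the route.
    rd: str          # route distinguisher (RD)
    source_ip: str   # e.g. local loopback interface address
    # The RT value and the PMSI are carried in the route's attribute information.
    rt: str          # route target (RT) value
    pmsi: str        # PMSI information of the EVPN instance

    def nlri(self):
        """Fields carried in the network layer reachability information."""
        return (self.rd, self.source_ip)

    def attributes(self):
        """Fields carried in the attribute information of the route."""
        return (self.rt, self.pmsi)

# Illustrative values only.
route = InclusiveMulticastRoute(rd="0:32", source_ip="2.2.2.2",
                                rt="100:1", pmsi="pmsi-label-1001")
assert route.nlri() == ("0:32", "2.2.2.2")
assert route.attributes() == ("100:1", "pmsi-label-1001")
```

The split between `nlri()` and `attributes()` mirrors the partition stated above: RD and source IP in the NLRI, RT value and PMSI in the route attributes.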
In an exemplary implementation, the network administrator may send the inclusive multicast routing table to the first network device through the control management device, or the control management device may automatically send the inclusive multicast routing table to the first network device. In an exemplary implementation, the control management device may aggregate values of all possible inclusive multicast routes at a time to obtain one inclusive multicast routing table, and then send the inclusive multicast routing table to all network devices in the EVPN. The control management device may send the inclusive multicast routing table to the first network device when the first network device enters the network, or may periodically send the inclusive multicast routing table to all the network devices in the EVPN in a broadcast or multicast manner. In an exemplary implementation, the control management device may alternatively periodically update the inclusive multicast routing table, and then periodically send an updated inclusive multicast routing table to all network devices in the EVPN in a broadcast or multicast manner. In an exemplary implementation, the control management device may alternatively configure one inclusive multicast route for the second network device when the second network device enters the network, and then send the inclusive multicast route of the second network device to all network devices in the EVPN in a broadcast or multicast manner, so that all the network devices (for example, the first network device) in the EVPN each update a local inclusive multicast routing table, and a value of an entry in the inclusive multicast routing table is the inclusive multicast route of the second network device. 
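When the control management device advertises a newly configured inclusive multicast route, each network device merges it into its local inclusive multicast routing table. A minimal sketch of this update behavior, assuming for illustration that the table is held as a set of route strings:

```python
def update_local_table(local_table, advertised_route):
    """Merge an advertised inclusive multicast route into the local
    inclusive multicast routing table. Returns True if a new entry
    was added, False if the route was already present."""
    if advertised_route in local_table:
        return False
    local_table.add(advertised_route)
    return True

# A local table with one pre-existing entry (values illustrative).
table = {"0:32:1.1.1.1"}
assert update_local_table(table, "0:32:2.2.2.2") is True
assert "0:32:2.2.2.2" in table
# Re-advertising the same route leaves the table unchanged.
assert update_local_table(table, "0:32:2.2.2.2") is False
```

Because the update is idempotent, the control management device may safely re-broadcast the table or individual routes periodically, as described above.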
It may be understood that the foregoing steps performed by the control management device may be performed by the control management device under control of the network administrator, or may be automatically performed by the control management device. In an exemplary implementation, after a BGP neighbor relationship between the first network device and the second network device is successfully established, the first network device and the second network device may transfer respective inclusive multicast routes. Therefore, after obtaining an inclusive multicast route, the second network device may advertise, in a broadcast or multicast manner, the inclusive multicast route to each network device having a BGP neighbor relationship with the second network device in the EVPN. For example, if the first network device has a BGP neighbor relationship with the second network device, the first network device may receive the inclusive multicast route of the second network device that is sent by the second network device, and store the inclusive multicast route of the second network device in the inclusive multicast routing table, to update the inclusive multicast routing table. In an exemplary implementation, the first network device may further receive an inclusive multicast route sent by a network device in the EVPN other than the second network device, and then locally store the inclusive multicast route. It may be understood that the inclusive multicast routing table may be in a spreadsheet format, an electronic file, or a database-like structure stored in a router or a networked computer. In an exemplary implementation, the inclusive multicast routing table stores a path pointing to a specific network address. It may be understood that the inclusive multicast routing table may be fixedly preset by the network administrator, or may be dynamically modified. 
It may be understood that, in a same EVPN, inclusive multicast routing tables configured for different network devices may be the same, but inclusive multicast routes configured for different network devices are different. For example, if the first network device and the second network device obtain a same inclusive multicast routing table, values of entries in the inclusive multicast routing table are respectively as follows: 0:32:1.1.1.1, 0:32:2.2.2.2, 0:32:3.3.3.3, 0:32:4.4.4.4, and 0:32:10.10.10.10. In this case, an inclusive multicast route obtained by the first network device may be 0:32:1.1.1.1, and an inclusive multicast route obtained by the second network device may be 0:32:2.2.2.2. 203: The second network device sends a creation message of a connectivity detection session to the first network device, where the creation message of the connectivity detection session carries the inclusive multicast route of the second network device and session information of the second network device. In an exemplary implementation, after the second network device enters the EVPN, to perform connectivity detection between the second network device and each network device in the EVPN, the second network device may send a creation message of a connectivity detection session to each network device (for example, the first network device) in the EVPN. After entering the EVPN, the second network device may send the creation message of the connectivity detection session to each network device in the EVPN in a broadcast or multicast manner. In an exemplary implementation, after entering the EVPN, the second network device may send the creation message of the connectivity detection session in a broadcast or multicast manner only once, or may periodically send the creation message of the connectivity detection session in a broadcast or multicast manner. In an exemplary implementation, the connectivity detection session may be a CFM session. 
It may be understood that the CFM session is used to implement an operation, administration and maintenance (OAM) function provided in the Institute of Electrical and Electronics Engineers (IEEE) 802.1ag standard, that is, a function of detecting, recovering, and managing, in time, a network exception such as service downgrade or a service failure that occurs on a network device such as a switching device or an optical network device. It may be understood that the CFM session may be created in a three-way handshake manner. After the second network device sends a creation message of the CFM session to the first network device, the first network device may return a response packet of the CFM session, and then the second network device sends an acknowledgment message to the first network device again, so that the CFM session can be created between the second network device and the first network device. In an exemplary implementation, the CFM session may alternatively be created in a two-way or four-way handshake manner. It may be understood that, for detailed descriptions of the CFM session, refer to the IEEE 802.1ag standard, whose content related to the CFM session is incorporated herein by reference in its entirety. For brevity, details are not described herein. In this embodiment, the creation message of the connectivity detection session that is sent by the second network device carries the inclusive multicast route of the second network device. In an exemplary implementation, a type-length-value (TLV) field in the creation message of the connectivity detection session includes the inclusive multicast route of the second network device, and the TLV field includes a type, a length, and a value. 
As shown in FIG. 2-2 (which is a schematic diagram of a TLV), the type may include eight characters and is used to indicate that a type of the TLV field is the inclusive multicast route, the length may include eight characters and is used to indicate a length of the TLV field, and the value may include 16 characters and is used to indicate the inclusive multicast route of the second network device. 204: The first network device determines that the local inclusive multicast routing table includes the inclusive multicast route of the second network device. In this embodiment, when receiving the creation message of the connectivity detection session that is sent by the second network device, the first network device may obtain the inclusive multicast route of the second network device from the creation message of the connectivity detection session, and compare the inclusive multicast route with a value of each entry in the local inclusive multicast routing table. If the first network device finds the inclusive multicast route of the second network device in the inclusive multicast routing table, it may be considered that the first network device and the second network device belong to a same EVPN. Therefore, the first network device may communicate with the second network device, and needs to create the connectivity detection session. In this case, the first network device may return a response packet of the connectivity detection session to the second network device. If the first network device does not find the inclusive multicast route in the inclusive multicast routing table, it may be considered that the first network device and the second network device do not belong to a same EVPN. Therefore, the first network device may not communicate with the second network device, and does not create the connectivity detection session. 
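The comparison in step 204 amounts to a membership check of the received inclusive multicast route against the local table. A minimal sketch, using the example entry values given in this embodiment (the set representation is an illustrative assumption):

```python
# Local inclusive multicast routing table of the first network device,
# with the example entry values used in this embodiment.
local_table = {
    "0:32:1.1.1.1",
    "0:32:2.2.2.2",
    "0:32:3.3.3.3",
    "0:32:4.4.4.4",
    "0:32:10.10.10.10",
}

def same_evpn(received_route, table):
    """Step 204: the peer belongs to the same EVPN if and only if its
    inclusive multicast route appears as an entry in the local table."""
    return received_route in table

# Route carried in the creation message -> create the session.
assert same_evpn("0:32:2.2.2.2", local_table) is True
# Unknown route -> different EVPN, do not create the session.
assert same_evpn("0:32:9.9.9.9", local_table) is False
```

When the check succeeds, the first network device proceeds to return the response packet of the connectivity detection session; otherwise the creation message is ignored.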
It may be assumed that values of entries in the local inclusive multicast routing table of the first network device are respectively as follows: 0:32:1.1.1.1, 0:32:2.2.2.2, 0:32:3.3.3.3, 0:32:4.4.4.4, and 0:32:10.10.10.10. If the inclusive multicast route of the second network device that is received by the first network device is 0:32:2.2.2.2, the inclusive multicast routing table includes the inclusive multicast route of the second network device. Therefore, it is determined that the second network device and the first network device belong to the same EVPN. In this case, the first network device determines to create the connectivity detection session with the second network device. It may be understood that, when the connectivity detection session is created between the first network device and the second network device, negotiation needs to be performed by using session information of the first network device and the second network device, to create the connectivity detection session. In an exemplary implementation, the session information includes a MEP ID or a session ID. The CFM session is used as an example. When the CFM session needs to be created between the first network device and the second network device, the creation message of the CFM session that is sent by the second network device to the first network device carries a MEP ID of the second network device as the session information. Then, the first network device returns the response packet of the CFM session to the second network device, and the response packet carries a MEP ID of the first network device. Therefore, the CFM session is finally created between the first network device and the second network device. In an exemplary implementation, the first network device may preset a MEP ID range, and the MEP ID range includes a plurality of MEP IDs. 
When receiving the creation message of the CFM session that is sent by the second network device, the first network device obtains the session information in the creation message, that is, the MEP ID of the second network device. The first network device determines whether the MEP ID of the second network device is within the MEP ID range. If the MEP ID of the second network device is within the MEP ID range, the first network device performs the step of creating the CFM session with the second network device; otherwise, the first network device does not perform the step. In an exemplary implementation, the second network device may alternatively set a MEP ID range. When receiving a creation response of the CFM session that is returned by the first network device, the second network device may obtain session information in the creation response, that is, the MEP ID of the first network device. The second network device determines whether the MEP ID of the first network device is within the MEP ID range. If the MEP ID of the first network device is within the MEP ID range, the second network device performs the step of creating the CFM session with the first network device; otherwise, the second network device does not perform the step. It may be understood that the MEP ID range that is set by the first network device may be the same as or may be different from the MEP ID range that is set by the second network device. In an exemplary implementation, the first network device or the second network device may alternatively not set a MEP ID range. In an exemplary implementation, the creation message carries the session ID, so that the first network device determines, based on a session ID range, whether to create the connectivity detection session. This step is similar to the foregoing case in which the creation message carries the MEP ID, and is not described herein again. 
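The MEP ID gating described above can be sketched as a simple range check performed before session creation; the range bounds below are illustrative, not values prescribed by this embodiment:

```python
def within_mep_id_range(peer_mep_id, mep_id_range):
    """Return True if the peer's MEP ID falls inside the locally
    preset MEP ID range, in which case the receiving device proceeds
    with the step of creating the CFM session."""
    return peer_mep_id in mep_id_range

# Illustrative preset range: only MEP IDs 1..100 are accepted.
preset_range = range(1, 101)

assert within_mep_id_range(42, preset_range) is True    # create the session
assert within_mep_id_range(500, preset_range) is False  # do not create it
```

The same check applies symmetrically on the second network device against the MEP ID in the creation response, and an equivalent check can be made with session IDs instead of MEP IDs.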
205: The first network device sends the response packet of the connectivity detection session to the second network device, where the response packet of the connectivity detection session includes the session information of the first network device. In this embodiment, after the first network device determines that the inclusive multicast route of the second network device is in the local inclusive multicast routing table, the first network device may create the connectivity detection session with the second network device based on the session information of the second network device. The first network device may send the response packet of the connectivity detection session to the second network device, and the response packet of the connectivity detection session includes the session information of the first network device, so that the second network device creates the connectivity detection session with the first network device based on the session information of the first network device. In an exemplary implementation, if the connectivity detection session is created in a three-way handshake manner, the second network device further sends an acknowledgment message to the first network device, and therefore the connectivity detection session is created between the second network device and the first network device. If the connectivity detection session is created in a two-way handshake manner, the second network device does not need to send an acknowledgment message to the first network device. After the first network device sends the response packet of the connectivity detection session to the second network device, the connectivity detection session is created between the second network device and the first network device. Alternatively, the connectivity detection session may be created in a four-way handshake manner. 
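The two-way and three-way handshake variants described in step 205 can be summarized as an ordered message flow. The message labels below are illustrative shorthand for the packets exchanged, not the CFM packet format:

```python
def handshake_flow(handshake="three-way"):
    """Return the ordered message exchange that creates the
    connectivity detection session for the given handshake variant."""
    flow = [
        # Step 203: carries the second device's inclusive multicast
        # route and session information.
        ("creation message", "second device -> first device"),
        # Step 205: carries the first device's session information.
        ("response packet", "first device -> second device"),
    ]
    if handshake == "three-way":
        # Only the three-way variant ends with an acknowledgment.
        flow.append(("acknowledgment", "second device -> first device"))
    return flow

# Two-way: the session exists once the response packet is sent.
assert len(handshake_flow("two-way")) == 2
# Three-way: the session exists after the final acknowledgment.
assert handshake_flow("three-way")[-1][0] == "acknowledgment"
```

A four-way variant would extend the same flow with one further message; the embodiment leaves its details open.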
In this embodiment, the response packet of the connectivity detection session that is sent by the first network device to the second network device carries an inclusive multicast route of the first network device, so that the second network device can determine whether a local inclusive multicast routing table includes the inclusive multicast route of the first network device. If the local inclusive multicast routing table includes the inclusive multicast route of the first network device, the second network device returns the acknowledgment message to the first network device, to complete creation of the connectivity detection session. In this embodiment, through the foregoing steps, an instance of the connectivity detection session may be established between the first network device and each existing network device in the EVPN without the network administrator needing to configure an instance of the connectivity detection session for each pair of network devices. This simplifies the configuration process. It may be understood that, after the connectivity detection session is created between the first network device and the second network device, the first network device and the second network device may periodically send connectivity detection packets to each other. The first network device periodically sends a connectivity detection packet to the second network device, and/or the first network device receives a connectivity detection packet periodically sent by the second network device. If the first network device does not receive, in several periods, a connectivity detection packet sent by the second network device, or if the second network device does not receive, in several periods, a connectivity detection packet sent by the first network device, the first network device or the second network device reports an alarm message to the control management device. 
If the first network device does not receive, in a preset quantity of periods, the connectivity detection packet sent by the second network device, the first network device reports the alarm message to the control management device, or if the second network device does not receive, in a preset quantity of periods, the connectivity detection packet sent by the first network device, the second network device reports the alarm message to the control management device. In an exemplary implementation, after the connectivity detection session is created between the first network device and the second network device, fault statistics such as Y.1731 statistics need to be collected for a problem such as a fault or a disconnection. During Y.1731 statistics collection, a fault statistics packet may be sent and received between the first network device and the second network device, where one party sends the fault statistics packet, and the other party receives the fault statistics packet. An example in which the session information is the MEP ID is used. The first network device and the second network device may be determined as a sender and a receiver of the fault statistics packet based on the MEP ID of the first network device and the MEP ID of the second network device. The sender is one of the first network device and the second network device, and the receiver is the other of the first network device and the second network device. The sender and the receiver may be determined based on a value of the MEP ID of the first network device and a value of the MEP ID of the second network device. For example, a network device having a larger MEP ID value is used as the receiver, and a network device having a smaller MEP ID value is used as the sender. Alternatively, in an exemplary implementation, a network device having a larger MEP ID value may be used as the sender, and a network device having a smaller MEP ID value may be used as the receiver. 
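The role assignment for fault statistics can be sketched as below, assuming the convention that the device with the smaller MEP ID value is the sender and the device with the larger value is the receiver (the reverse convention is equally possible, as noted above):

```python
def assign_roles(mep_id_a, mep_id_b):
    """Deterministically assign the sender and receiver of the fault
    statistics packet from the two devices' MEP ID values.
    Convention assumed here: smaller MEP ID sends, larger receives."""
    sender = min(mep_id_a, mep_id_b)
    receiver = max(mep_id_a, mep_id_b)
    return sender, receiver

# Both devices compute the same result from the exchanged MEP IDs,
# so no extra negotiation message is needed.
assert assign_roles(10, 20) == (10, 20)  # MEP ID 10 sends, 20 receives
assert assign_roles(7, 3) == (3, 7)
```

Because both ends already hold both MEP IDs after the handshake, each can derive its own role locally; the same scheme works with session IDs or inclusive multicast route values in place of MEP IDs.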
In another exemplary implementation, determining may alternatively be performed in another manner, for example, based on an inclusive multicast route size or a session ID size. The foregoing describes the solutions from the perspective of method steps. The following describes the embodiments from the perspective of a functional apparatus. Referring to FIG. 3, a network device serving as a first network device 300 is further provided, and includes: a transceiver 301, a memory 302, a processor 303, and a bus 304, where the transceiver 301, the memory 302, and the processor 303 are connected by using the bus 304. The processor 303 is configured to execute a computer-readable instruction in the memory 302, to perform the following operations: receiving a creation message of a connectivity detection session from a second network device, where the creation message of the connectivity detection session carries an inclusive multicast route of the second network device and session information of the second network device; determining that a local inclusive multicast routing table includes the inclusive multicast route of the second network device; and creating the connectivity detection session with the second network device based on the session information of the second network device. An inclusive multicast routing table only needs to be locally configured for the first network device. Therefore, a connectivity detection session may be created between the first network device and each existing network device in an EVPN without a need of manually configuring a connectivity detection session instance for each pair of network devices. This simplifies the configuration process. In an exemplary implementation, the first network device may be a PE, to provide a solution of creating a connectivity detection session between PEs in the EVPN. 
In this solution, a connectivity detection session instance does not need to be configured between the first network device and each existing network device in the EVPN. This simplifies a configuration process. In an exemplary implementation, the processor 303 may further receive the creation message of the connectivity detection session that is sent by the second network device in a broadcast or multicast manner. The second network device needs to send only one packet, and does not need to send the packet to each network device in the EVPN. This reduces transmission resource burden and improves transmission efficiency. In an exemplary implementation, the processor 303 may further obtain the inclusive multicast routing table, where the inclusive multicast routing table includes the inclusive multicast route of the second network device. Therefore, the processor 303 may determine that the first network device and the second network device belong to a same EVPN, so as to determine that the connectivity detection session needs to be created between the first network device and the second network device. In an exemplary implementation, a manner in which the processor 303 obtains the inclusive multicast routing table may include the following: The processor 303 obtains the inclusive multicast routing table based on a configuration of a locally received command line. Therefore, a network administrator may locally input the inclusive multicast routing table into the network device. Alternatively, the processor 303 may receive an inclusive multicast routing table sent by a control management device, so that the memory 302 stores the inclusive multicast routing table. Therefore, a network administrator may remotely configure the inclusive multicast routing table through the control management device. 
Alternatively, the processor 303 may receive the inclusive multicast route of the second network device that is advertised by the second network device, and the memory 302 stores the inclusive multicast route of the second network device in the inclusive multicast routing table. The inclusive multicast routing table does not need to be edited in advance, and the inclusive multicast routing table may be dynamically and automatically updated. This meets requirements of different network devices in different periods, reduces a workload of a network administrator, and improves working efficiency. The processor 303 is further configured to obtain a MEP ID range or a session ID range, where the MEP ID range includes a MEP ID of the second network device, or the session ID range includes a session ID of the second network device. Therefore, the first network device may specify that only a network device having a specific MEP ID or session ID can create a connectivity detection session. Before performing the step of creating the connectivity detection session with the second network device based on the session information of the second network device, the processor 303 may further determine that the MEP ID of the second network device is within the MEP ID range, or determine that the session ID of the second network device is within the session ID range, and therefore may create the connectivity detection session. Otherwise, the processor may not create the connectivity detection session. 
In an exemplary implementation, after the first network device creates the connectivity detection session with the second network device, the processor 303 may further determine a sender and a receiver of a fault statistics packet based on a MEP ID of the first network device and the MEP ID of the second network device, or determine a sender and a receiver of a fault statistics packet based on a session ID of the first network device and the session ID of the second network device, where the sender is one of the first network device and the second network device, and the receiver is the other of the first network device and the second network device. It may be understood that the processor 303 may be a CPU, a network processor (NP), or a combination of the CPU and the NP. The processor 303 may alternatively be an application-specific integrated circuit (ASIC), a programmable logic device (PLD), or a combination thereof. The PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), generic array logic (GAL), or any combination thereof. The processor 303 may be one processor, or may include a plurality of processors. The transceiver 301 is configured to receive BGP routing information from the second network device, and send the BGP routing information to the processor 303 for subsequent operation processing. The BGP routing information includes a destination address, and a next-hop address and attribute information of the destination address. The attribute information indicates a manner in which the first network device performs route recursion processing on the next-hop address. The memory 302 may include a volatile memory, such as a random access memory (RAM); or the memory may include a nonvolatile memory, such as a read-only memory (ROM), a flash memory, a hard disk drive (HDD), or a solid-state drive (SSD); or the memory may include a combination of the foregoing types of memories. 
The memory 302 stores a computer-readable instruction, and the computer-readable instruction includes at least one software module. After executing each software module, the processor 303 may perform a corresponding operation according to an instruction of each software module. Referring to FIG. 4, the embodiments may further provide a network device serving as a second network device 400. The second network device 400 includes: a transceiver 401, a memory 402, a processor 403, and a bus 404, where the transceiver 401, the memory 402, and the processor 403 are connected by using the bus 404. The processor 403 is configured to execute a computer-readable instruction in the memory 402, to perform the following operations: obtaining an inclusive multicast route of the second network device 400; and sending a creation message of a connectivity detection session to a first network device 300, where the creation message of the connectivity detection session carries the inclusive multicast route and session information of the second network device 400, and the session information is used to create the connectivity detection session. An inclusive multicast route only needs to be locally configured for the second network device 400. Therefore, a connectivity detection session may be created between the second network device 400 and each existing network device in an EVPN without a need of manually configuring a connectivity detection session instance for each pair of network devices. This simplifies the configuration process. In an exemplary implementation, the second network device 400 includes a PE, to provide a solution of creating a connectivity detection session between PEs in the EVPN. In this solution, a connectivity detection session instance does not need to be configured between the first network device 300 and each existing network device in the EVPN. This simplifies the configuration process. 
In an exemplary implementation, the processor 403 is configured to send the creation message of the connectivity detection session to each network device in the EVPN in a broadcast or multicast manner. Therefore, the second network device 400 needs to send only one packet, rather than a separate packet to each network device in the EVPN. This reduces the transmission resource burden and improves transmission efficiency. In an exemplary implementation, the processor 403 obtains the inclusive multicast route in one of the following ways. The processor 403 obtains the inclusive multicast route of the second network device 400 based on a configuration of a command line; therefore, a network administrator may locally input an inclusive multicast routing table into the network device. Alternatively, the processor 403 receives the inclusive multicast route sent by a control management device, and uses it as the inclusive multicast route of the second network device 400; therefore, a network administrator may remotely configure the inclusive multicast routing table through the control management device. The inclusive multicast route of the second network device 400 may be advertised to each network device in the EVPN. Therefore, the inclusive multicast routing table does not need to be edited in advance, and may be dynamically and automatically updated. This meets the requirements of different network devices in different periods, and reduces the workload of a network administrator.
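The two ways of obtaining the inclusive multicast route (local command-line configuration versus a remote control management device) can be sketched as a simple fallback chain. The function and key names below are hypothetical, chosen only to mirror the two alternatives in the text.

```python
def obtain_inclusive_multicast_route(cli_config, controller_route):
    """Return the device's inclusive multicast route, preferring a
    locally configured value (command line) and falling back to one
    pushed by a control management device. The precedence order is
    an assumption; the text presents the two sources as alternatives."""
    if cli_config and "inclusive_multicast_route" in cli_config:
        return cli_config["inclusive_multicast_route"]  # local CLI config
    if controller_route is not None:
        return controller_route  # remotely configured by the controller
    raise LookupError("no inclusive multicast route configured")

# Locally configured route wins when present.
route = obtain_inclusive_multicast_route(
    {"inclusive_multicast_route": "RD 100:1"}, None)
```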
In an exemplary implementation, after the processor 403 sends the creation message of the connectivity detection session to each network device in the EVPN, the processor 403 is further configured to perform the following operations: receiving a response packet of the connectivity detection session that is sent by the first network device 300, where the response packet of the connectivity detection session includes session information of the first network device 300; and determining a sender and a receiver of a fault statistics packet based on the session information of the first network device 300 and the session information of the second network device 400, where the sender is one of the first network device 300 and the second network device 400, and the receiver is the other of the first network device 300 and the second network device 400. It may be understood that the processor 403 may be a CPU, an NP, or a combination of the CPU and the NP. The processor 403 may alternatively be an ASIC, a PLD, or a combination thereof. The PLD may be a CPLD, an FPGA, GAL, or any combination thereof. The processor 403 may be one processor, or may include a plurality of processors. The transceiver 401 is configured to receive BGP routing information from the first network device, and send the packet to the processor 403 for subsequent processing. The BGP routing information includes a destination address, and a next-hop address and attribute information of the destination address. The attribute information indicates a manner in which the first network device performs route recursion processing on the next-hop address. The memory 402 may include a volatile memory, such as a RAM; a nonvolatile memory, such as a ROM, a flash memory, an HDD, or an SSD; or a combination of the foregoing types of memories. The memory 402 stores a computer-readable instruction, and the computer-readable instruction includes at least one software module.
After executing each software module, the processor 403 may perform a corresponding operation according to an instruction of each software module. As shown in FIG. 5, a system 500 is further provided, which includes a first network device 300 and a second network device 400. The first network device 300 is the first network device in FIG. 3, and the second network device 400 is the second network device in FIG. 4. For detailed descriptions of each device in the system 500, refer to the related embodiments in FIG. 3, FIG. 4, and the like. Details are not described herein again. All or some of the foregoing embodiments may be implemented through software, hardware, firmware, or any combination thereof. When software is used to implement the embodiments, all or some of the embodiments may be implemented in a form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions according to the embodiments are all or partially generated. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or may be transmitted from one computer-readable storage medium to another computer-readable storage medium. For example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center in a wired (for example, a coaxial cable, an optical fiber, or a digital subscriber line) or wireless (for example, infrared, radio, or microwave) manner. The computer-readable storage medium may be any usable medium accessible by the computer, or a data storage device, such as a server or a data center, integrating one or more usable media.
The usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium, or a semiconductor medium. It may be understood by persons of ordinary skill in the art that, for the purpose of convenient and brief description, for a detailed working process of the foregoing system, apparatus, and unit, refer to a corresponding process in the foregoing method embodiments. Details are not described herein again. In the several embodiments provided, it may be understood that the disclosed system, apparatus, and method may be implemented in other manners. For example, the described apparatus embodiment is merely an example. For example, the unit division is merely logical function division and may be other division during actual implementation. For example, a plurality of units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the displayed or discussed mutual couplings or direct couplings or communication connections may be implemented through some interfaces. The indirect couplings or communication connections between the apparatuses or units may be implemented in electrical, mechanical, or other forms. The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one position, or may be distributed on a plurality of network units. Some or all of the units may be selected based on actual requirements to achieve the objectives of the solutions of the embodiments. Function units in the embodiments may be integrated into one processing unit, or each of the units may exist alone physically, or two or more units are integrated into one unit. The integrated unit may be implemented in a form of hardware, or may be implemented in a form of a software function unit. 
When the functions are implemented in a form of a software function unit and sold or used as an independent product, the functions may be stored in a computer-readable storage medium. Based on such an understanding, the solutions essentially, or the part contributing to the prior art, or some of the solutions may be implemented in a form of a software product. The computer software product is stored in a storage medium, and includes several instructions for instructing a computer device (which may be a personal computer, a server, or a network device) to perform all or some of the steps of the methods described in the embodiments. The foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disc. The foregoing embodiments are merely intended for describing the solutions, but are non-limiting. Although the embodiments are described in detail, persons of ordinary skill in the art may understand that they may still make modifications to the solutions described or make equivalent replacements to some features thereof, without departing from the spirit and scope of the solutions of the embodiments.
DETAILED DESCRIPTION The detailed description set forth below in connection with the appended drawings is intended as a description of various configurations and is not intended to represent the only configurations in which the concepts described herein may be practiced. The detailed description includes specific details for the purpose of providing a thorough understanding of various concepts. However, it will be apparent to those skilled in the art that these concepts may be practiced without these specific details. In some instances, well known components are shown in block diagram form in order to avoid obscuring such concepts. Described herein are various examples related to correlating service events, or underlying incident records, to facilitate determining, for a given service event, one or more correlated service events. This can facilitate improved service event analysis, root cause prediction, alert noise reduction, and/or the like. For instance, a multiple-layer relational graph can be generated and employed to define relationships among service events, and the graph can be queried to determine, for a given service or service event, the correlations and/or corresponding patterns at one or more of the layers to determine a set of related services or service events. For example, the multiple-layer relational graph can include a configuration layer that defines relationships between services and/or between service events based on a stored configuration. In addition, for example, the multiple-layer relational graph can include an observation layer that defines relationships between services and/or between service events based on observed network activity and/or usage of a network diagnostic system.
Moreover, for example, the multiple-layer relational graph can include a learned layer that defines relationships between services and/or between service events based on algorithmic determinations about the services and/or service events (e.g., around parameters thereof). In an example, given a query context of a service and/or service event, the multiple-layer relational graph can be queried to determine the correlated services and/or service events, patterns of correlations between the services and/or service events, etc. at each layer to determine other services and/or service events that are possibly of interest (e.g., that have some correlation). In one example, the correlations or related metrics can be weighted at each layer based on the layer itself (e.g., to assign different weights in general to configured, observed, learned, etc. correlations) and/or based on other parameters regarding the correlation. Where the correlation or related metric achieves a threshold, in one example, the corresponding service and/or service event may be indicated for the query context to identify possibly related services and/or service events. This can assist in reducing the number of services and/or service events to be observed in diagnosing the service or service event that is the subject of the query context. Turning now to FIGS. 1-7, examples are depicted with reference to one or more components and one or more methods that may perform the actions or operations described herein, where components and/or actions/operations in dashed line may be optional. Although the operations described below in FIGS. 2-3 are presented in a particular order and/or as being performed by an example component, the ordering of the actions and the components performing the actions may be varied, in some examples, depending on the implementation.
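A toy model of the three-layer graph described above can make the querying concrete: each layer keeps its own adjacency map from an event to the events it is related to, and a query returns the related events layer by layer. Only the layer names come from the text; the storage scheme and event naming are assumptions.

```python
class MultiLayerRelationalGraph:
    """Minimal sketch of the multiple-layer relational graph: one
    adjacency map per layer (configuration, observation, learned)."""

    def __init__(self):
        self.layers = {"configuration": {}, "observation": {}, "learned": {}}

    def add_relationship(self, layer, event, other):
        """Record that `event` is related to `other` at the given layer."""
        self.layers[layer].setdefault(event, set()).add(other)

    def query(self, event):
        """Return, per layer, the events related to `event` (sorted
        for deterministic output)."""
        return {name: sorted(rels.get(event, set()))
                for name, rels in self.layers.items()}

g = MultiLayerRelationalGraph()
g.add_relationship("configuration", "cpu_high@web1", "latency@lb1")
g.add_relationship("observation", "cpu_high@web1", "errors@db1")
hits = g.query("cpu_high@web1")
```

A downstream consumer can then weight or threshold the per-layer hits differently, as the surrounding text describes.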
Moreover, in some examples, one or more of the following actions, functions, and/or described components may be performed by a specially-programmed processor, a processor executing specially-programmed software or computer-readable media, or by any other combination of a hardware component and/or a software component capable of performing the described actions or functions. FIG. 1 is a schematic diagram of an example of a wireless communication system 100 that includes one or more networks, such as network 1 102, having one or more service event loggers 104 for logging service events occurring on resources of the network 1 102. For example, the resources of the network 1 102 may include various types of nodes, such as computing devices, databases, devices with a network-specific functionality, such as routers, bridges, firewalls, web servers, load balancers, etc., and/or the like. Each resource may have an associated service event logger 104 to log service events in a service event repository 106, where the service event logger 104 may operate on the resource or otherwise detect communications from the resource for logging the service events. In an example, the service events in service event repository 106 may include various types of events to notify of a health or status of one or more resources, such as processor or memory utilization on the resource, throughput of traffic on the resource, application-specific events that are definable by applications executing on the resource, etc. The service events may also include or be referred to as incident reports to identify certain incidents occurring on resources. In one example, an incident report can include an incident derived from multiple service events detected with one or more defined parameter values (e.g., a poor or non-existent connection for a network resource based on detecting one or more consecutive service events related to a dropped connection).
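A minimal in-memory stand-in for the service event repository, together with the derivation of an incident from consecutive matching service events (e.g., repeated dropped-connection reports), might look like the following. The run-length rule is an assumed interpretation of "one or more consecutive service events"; all names are illustrative.

```python
import time

class ServiceEventRepository:
    """In-memory stand-in for the service event repository (106)."""

    def __init__(self):
        self.events = []

    def log(self, resource, event_type, value):
        """What a service event logger (104) would record per event."""
        self.events.append({"resource": resource, "type": event_type,
                            "value": value, "ts": time.time()})

    def incident(self, event_type, threshold, consecutive):
        """Derive an incident when `consecutive` events of a type meet
        or exceed `threshold` in a row -- e.g. repeated dropped
        connections. Returns True if such a run exists."""
        run = 0
        for e in self.events:
            if e["type"] == event_type and e["value"] >= threshold:
                run += 1
                if run >= consecutive:
                    return True
            else:
                run = 0
        return False

repo = ServiceEventRepository()
for _ in range(3):
    repo.log("pe1", "dropped_connection", 1)
```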
A computing device 110 can be provided that can execute a network diagnostic application 112 to obtain service events from the service event repository 106 for inspection thereof and/or taking remedial steps to resolve an identified incident. As described, this can result in a vast number of service events being generated and stored in the service event repository 106 over a short period of time, and as such monitoring each service event can become overwhelming and ineffective for diagnosing possible issues in the network. For example, another computing device 120 is provided for exposing a framework to obtain service event information from the service event repository 106 and for generating additional structures to assist in processing the vast number of service events in the service event repository in accordance with aspects described herein. For example, computing device 120 can include or can otherwise be coupled with a processor 124 and/or memory 126, where the processor 124 and/or memory 126 can be configured to execute or store instructions or other parameters related to processing service events, generating a multiple-layer relational graph defining relationships among the service events, responding to queries for service events, etc., as described herein. For example, processor 124 and memory 126 may be separate components communicatively coupled by a bus (e.g., on a motherboard or other portion of a computing device, on an integrated circuit, such as a system on a chip (SoC), etc.), components integrated within one another (e.g., processor 124 can include the memory 126 as an on-board component 121), and/or the like. Memory 126 may store instructions, parameters, data structures, etc., for use/execution by processor 124 to perform functions described herein.
In an example, computing device 120 can execute an operating system 128 (e.g., via processor 124 and/or memory 126) for providing an environment for executing one or more components or applications, such as a network diagnostic component 130 for fulfilling requests for service event data from the service event repository 106, as requested by network diagnostic application(s) 112 on one or more other computing devices 110, a graphing component 132 for generating a multiple-layer relational graph 150 defining multiple layers of relationships between service events in the service event repository 106, and/or a query processing component 134 for processing a query context for a service event based on determining one or more related service events from the multiple-layer relational graph 150. In an example, graphing component 132 may include a layer generating component 140 for generating the multiple layers of the multiple-layer relational graph 150. For example, layer generating component 140 can include a configuration obtaining component 142 for obtaining a configuration (e.g., as stored in memory 126 or other memory, or from another device related to the network, another device for configuring network diagnostic analysis, etc.), where the configuration can specify relationships between service events or corresponding services, and generating the configuration layer 152 of the multiple-layer relational graph 150 to indicate relationships based on the obtained configuration. In another example, layer generating component 140 can include an observing component 144 for observing network traffic, user behavior of the network diagnostic application 112, etc. with respect to the service events and/or corresponding services, and generating the observation layer 154 of the multiple-layer relational graph 150 to indicate relationships based on the observations.
In another example, layer generating component 140 can include a learning component 146 for performing anomaly detection of key services or service events in the service event repository 106, and generating the learned layer 156 of the multiple-layer relational graph 150 to indicate relationships based on the detected anomalies in the service events. In one example, query processing component 134 can process query contexts for service events received by or from the network diagnostic component 130 to provide additional service events that may be of interest based on a set of service events or services in the query context. For example, query processing component 134 can query graphing component 132 to determine the one or more additional service events based on relationships specified in the multiple-layer relational graph 150. The relationships can be identified at each of (or one or more of) the different layers 152, 154, 156. Query processing component 134 can determine whether to include the additional service events based on which layer(s) indicate the relationship and/or different associated metrics, such as an observation count in the observation layer 154, a confidence metric of the relationship in the learned layer 156, etc. Computing device 110 can also similarly include a processor 124, memory 126, operating system 128, etc., for operating the network diagnostic application 112 and/or other features or functions described herein. These components are not shown in the computing device 110 in FIG. 1 for ease of explanation. FIG. 2 is a flowchart of an example of a method 200 for determining related service events in processing a query for a set of one or more service events. For example, method 200 can be performed by the computing device 120, and is accordingly described with reference to FIG. 1, as a non-limiting example of an environment for carrying out method 200. In method 200, at action 202, a query context for service events occurring on a network can be received.
In an example, query processing component 134, e.g., in conjunction with processor 124, memory 126, operating system 128, etc., can receive the query context for service events occurring on the network. For example, query processing component 134 can receive the query context from network diagnostic component 130, where the network diagnostic component 130 can receive a corresponding query from a network diagnostic application 112 executing on another computing device 110. For example, network diagnostic component 130 can facilitate querying of service events in service event repository 106, as described, and can provide various network diagnostic applications 112 with service event information (e.g., incident reports, etc.) based on a request from a network diagnostic application 112, based on a subscription from the network diagnostic application 112 to receive certain service events (e.g., for certain resources and/or for certain types of service events, etc.), and/or the like. In one specific example, a query context can relate to a signal that can represent service events, such as a signal line representing processor utilization at a network resource. In this example, network diagnostic application 112 may request service events related to the processor utilization at the network resource, which may include periodic service events received from the network resource (e.g., via a service event logger 104) that report the processor utilization. Network diagnostic application 112 can utilize the service events to generate a signal on a user interface representing the processor utilization reported in the service events. Examples are shown in FIGS. 4 and 5, which are described in further detail below. In one example, network diagnostic component 130 can also implement security policies that define security contexts for users to access certain service events for certain nodes, certain types of service events, etc.
In this example, network diagnostic component 130 can ensure a network diagnostic application 112 has the security clearance (e.g., a user of the application 112 is in a certain security group) to receive the requested service event information. In any case, network diagnostic component 130 can provide requested service event information to the corresponding network diagnostic application 112. In method 200, at action 204, a set of service events occurring in the network can be determined based on the query context. In an example, query processing component 134, e.g., in conjunction with processor 124, memory 126, operating system 128, etc., can determine, based on the query context, the set of (e.g., one or more) service events occurring in the network. For example, the request can be a request/response request, a subscription request, etc., that can indicate one or more parameters in the query context. The one or more parameters may identify a type of service event, a service, a corresponding network resource, and/or the like. In one example, the query context may indicate a user account for the network diagnostic application 112, a network resource or machine being viewed, and/or a view (or related parameters) of service events for the network resource. Given this information, for example, network diagnostic component 130 can obtain data (e.g., including a collection of one or more service events) from service event repository 106 for providing to the corresponding network diagnostic application 112. For example, network diagnostic component 130 can query the service event repository 106 to obtain the service events as requested (e.g., as related to a certain service and/or network resource).
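Determining the set of service events from query-context parameters (event type, service, network resource) amounts to filtering the repository on whichever parameters are present. A sketch, with assumed key names:

```python
def filter_events(events, query_context):
    """Select service events matching every parameter present in the
    query context (e.g. event type, service, resource). Omitted
    parameters match anything. Key names are assumptions."""
    def matches(event):
        return all(event.get(key) == value
                   for key, value in query_context.items())
    return [e for e in events if matches(e)]

events = [
    {"type": "cpu_utilization", "service": "web", "resource": "web1"},
    {"type": "cpu_utilization", "service": "db", "resource": "db1"},
]
# Query context naming only a service: one parameter, partial match.
hits = filter_events(events, {"service": "web"})
```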
In another example, network diagnostic component 130 can receive (e.g., based on a subscription) service events from the service event repository 106 that correspond to the certain service and/or network resource (e.g., where the service event repository can call back a callback function of the network diagnostic component 130 based on receiving the service events and associating the service events with a subscription, etc.). In addition, e.g., based on the query context, network diagnostic component 130 can additionally query a multiple-layer relational graph 150 for additional service events that may be related to the query context or the set of service events specified in the query context. Query processing component 134 can obtain the query and can determine the set of service events occurring in the network based on the query context (e.g., similarly as network diagnostic component 130, and/or can receive this information directly from network diagnostic component 130). Given the set of one or more service events in the query context, additional services possibly of interest can be identified from the multiple-layer relational graph, as described herein. In one example, a query context can relate to a view of the network diagnostic application 112 that may be associated with multiple service events over a period of time, such as a signal showing resource usage over a period of time, where the resource usage is indicated over time in various service events. In this example, network diagnostic component 130 can query the service event repository to determine the service events indicating the resource usage for the service over the period of time, and network diagnostic application 112 can graphically represent the usage as a signal line over time. Network diagnostic application 112 can be used to generate multiple views in this regard, where each view can have an underlying query context for obtaining corresponding service events used to create the views.
Thus, as an example of correlating service events based on observation, where views are generated for viewing together, a relationship between the underlying queries may be observed as occurring at similar times, for similar users, on a similar network diagnostic application 112 or computing device 110, etc., as described further herein. In method 200, at action 206, multiple layers of a multiple-layer relational graph can be queried to determine one or more other service events having a defined relationship with the set of service events at one or more of the multiple layers. In an example, query processing component 134, e.g., in conjunction with processor 124, memory 126, operating system 128, etc., can query the multiple layers of the multiple-layer relational graph 150 to determine the one or more other service events having a defined relationship with the set of service events at one or more of the multiple layers. For example, query processing component 134 can query the configuration layer 152, the observation layer 154, and/or the learned layer 156 of the multiple-layer relational graph 150 to determine a relationship between the set of services and the one or more other services at least at one or more of the layers 152, 154, 156. The related service events may be determined as related based on a relation between the underlying services, which can be determined from one or more of the layers 152, 154, 156, and/or other considerations, such as a timing of the service events (e.g., service events occurring within a threshold time of one another) of the related services, and/or the like. Though shown as part of the same computing device 120, in an example, query processing component 134 can be at a different computing device than graphing component 132 that generates, manages, and/or stores the multiple-layer relational graph 150.
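One way to derive the observed relationships above is to count co-occurring queries, e.g., the same user issuing two different query contexts within a short time window. The pairing rule (same user, close timestamps) is an assumption used only for illustration of how observation counts might accumulate.

```python
from collections import Counter
from itertools import combinations

def observe_co_queries(query_log, window):
    """Count how often two distinct query contexts are observed
    together (same user, within `window` seconds). The resulting
    counts are the kind of observation-count metric kept in the
    observation layer. Co-occurrence rule is an assumed heuristic."""
    counts = Counter()
    for (u1, t1, q1), (u2, t2, q2) in combinations(query_log, 2):
        if u1 == u2 and abs(t1 - t2) <= window and q1 != q2:
            counts[frozenset((q1, q2))] += 1
    return counts

# (user, timestamp_seconds, query_context) tuples.
log = [
    ("alice", 0, "cpu@web1"),
    ("alice", 5, "latency@lb1"),
    ("bob", 100, "cpu@web1"),
]
counts = observe_co_queries(log, window=10)
```

The same counting could run in real time or over historical logs, matching the two modes mentioned in the text.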
For example, given a service event in the set of one or more service events, query processing component 134 may identify a relationship with one or more other service events in the configuration layer 152, the observation layer 154, and/or the learned layer 156. As described, the configuration layer 152 can indicate (e.g., and/or may store an indication of) a relationship between the service event and one or more other service events as specified in a configuration. For example, the configuration may be generated using a user interface that allows a user to indicate known relationships between service events and/or corresponding services, or by another mechanism. The observation layer 154 can indicate (e.g., and/or may store an indication of) a relationship between the service event and one or more other service events that is based on observing network traffic of requests for the service event (or for a similar type of service event, for other events for the corresponding service, etc.) and similar network traffic (e.g., occurring at a similar time, from a similar computing device 110 or user account, etc.) for the one or more other service events. The observation layer 154 can additionally or alternatively indicate (e.g., and/or may store an indication of) a relationship between the service event and one or more other service events that is based on observing user activity (e.g., of the network diagnostic application 112) in requesting and/or viewing the service event (or similar types of service events, other events for the corresponding service, etc.) and then also requesting and/or viewing the one or more other service events. For each observed relationship, the observation layer 154 may include one or more metrics, in one example, such as an observation count for the number of times the observed relationship criteria are detected.
For example, the observations can be made in real time or near real time as traffic or user activity occurs, or can be made after the fact based on analyzing network traffic logs, logs of user activity on network diagnostic component 130, etc. The learned layer 156 can indicate (e.g., and/or may store an indication of) a relationship between the service event and one or more other service events that is based on algorithmic determinations regarding the service events within the service event repository 106, such as by detecting data anomalies corresponding to the other service events based on keying the service event. For each anomaly, the learned layer 156 may include one or more metrics, in one example, such as a confidence metric for the determined relationship. In querying the multiple layers at action 206, optionally at action 208, a metric based on the results of querying the multiple layers can be determined. In an example, query processing component 134, e.g., in conjunction with processor 124, memory 126, operating system 128, etc., can determine the metric based on the results of querying the multiple layers. For example, query processing component 134 can determine the metric based on whether a relationship is determined from a given layer and/or based on the layers within which the relationship exists. For example, query processing component 134 can determine a first metric where the relationship is determined from the configuration layer 152. In one example, this can be a highest metric and/or can definitively identify a relationship between the service in the set of one or more services and the other services, as the relationship can be explicitly identified by a user.
In addition, for example, the metric can be determined based on one or more other metrics observed or obtained from each layer, such as an observation count in the observation layer for an observed relationship between the service events (and/or types of service events), a confidence score in the learned layer, etc., as described. Moreover, in determining the metric at action 208, optionally at action 210, one or more weights can be applied to a result metric for each layer. In an example, query processing component 134, e.g., in conjunction with processor 124, memory 126, operating system 128, etc., can apply the one or more weights to the result metric for each layer (or one or more of the layers). For example, query processing component 134 can apply higher weights to metrics for the configuration layer 152, as described, and/or can determine any desirable weighting for each layer. In one example, weighting the metrics for the layers 152, 154, 156 may be based on feedback of whether correlations between service events are accurate (e.g., based on being presented via an interface). In any case, the weights and/or metrics can be compared with threshold(s) to determine whether to indicate a correlation between a determined set of service events and the other service events discovered from the multiple-layer relational graph. In addition, in an example, query processing component 134 can further perform pattern mining or other machine-learning algorithms on a more limited set of correlated services and/or service events determined from the multiple-layer relational graph 150. In this example, query processing component 134 can further distill a list of services and/or service events determined as possibly related (e.g., so as to indicate the other service events in reporting the determined service events) from the multiple layers 152, 154, 156 of the graph 150 by performing pattern mining on the list of services and/or service events.
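The per-layer weighting and threshold check described above can be sketched as a weighted sum of per-layer result metrics. The particular weight values, the linear combination, and the threshold are all assumptions; the text only requires that layers be weighted differently and the combined metric be compared with a threshold.

```python
def correlation_score(layer_hits, layer_weights):
    """Combine per-layer result metrics (e.g. 1 for a configured
    relationship, an observation count, a confidence score) into one
    correlation score via per-layer weights. Linear combination is
    an assumed choice."""
    return sum(layer_weights[layer] * metric
               for layer, metric in layer_hits.items())

# Configuration relationships weighted highest, as the text suggests.
weights = {"configuration": 1.0, "observation": 0.5, "learned": 0.3}

# A relationship found in the configuration layer plus an observation
# count of 4 in the observation layer.
score = correlation_score({"configuration": 1, "observation": 4}, weights)
related = score >= 2.0  # threshold check before reporting the other event
```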
In method 200, at action 212, the one or more other service events can be indicated via an interface and in response to the query context. In an example, query processing component 134 and/or network diagnostic component 130, e.g., in conjunction with processor 124, memory 126, operating system 128, etc., can indicate, via the interface (e.g., user interface) and in response to the query context, the one or more other service events. For example, query processing component 134 can indicate the one or more other service events to the network diagnostic component 130 for providing to the corresponding network service application 112 requesting the query. In an example, query processing component 134 can determine whether to indicate the one or more other service events based on the determined metric and/or can indicate any other service events for which a relationship is identified (or determined to have an associated metric that achieves a threshold) in one of the multiple layers of the multiple-layer relational graph 150, in a threshold number of the multiple layers of the multiple-layer relational graph 150, in each of the multiple layers of the multiple-layer relational graph 150, etc. Additionally, for example, query processing component 134 and/or network diagnostic component 130 may indicate the one or more other service events including an indication of a relationship to the set of service events determined for the query context. The indication of relationship may include an identifier for the other service event(s) indicating the relationship and/or a level of relationship (e.g., a metric, weight, and/or the like, as described). For example, network service application 112 can provide an indication of the one or more other service events received from the query processing component 134 or network diagnostic component 130 using various mechanisms.
For example, network service application 112 can provide the indication as another view or signal line representing the one or more other service events presented along with a view that may correlate to the query context. In another example, network service application 112 can provide the indication as a list of the other service events, an indication of the other service events occurring at times corresponding to the set of service events that correlate to the query context, etc. In yet another example, network service application 112 can provide the indication as a pop-up or other notification that there are possibly related service events (e.g., the other service events) to the service events that are the subject of the query context. Moreover, as described, the network service application 112 may also provide a mechanism for indicating feedback for the indication of the other service events (e.g., feedback as to whether the other service events are relevant to the service events that are the subject of the query context).

In method 200, optionally at action 214, feedback indicating whether the one or more other service events are relevant to the set of service events can be received. In an example, query processing component 134 and/or network diagnostic component 130, e.g., in conjunction with processor 124, memory 126, operating system 128, etc., can receive the feedback indicating whether the one or more other service events are relevant to the set of service events. For example, as described, network service application 112 can provide an interface for prompting for feedback of the relevancy, and can provide any indicated feedback to the query processing component 134 and/or network diagnostic component 130. For example, the feedback can indicate whether the one or more other service events are relevant to the set of service events that are the subject of the query context, a level of relevancy, and/or the like.
In method 200, optionally at action 216, one or more layers of the multiple-layer relational graph can be modified based on the feedback. In an example, graphing component 132, e.g., in conjunction with processor 124, memory 126, operating system 128, etc., can modify one or more layers of the multiple-layer relational graph 150 (e.g., the configuration layer 152, the observation layer 154, or other layers) based on the feedback. For example, graphing component 132 may modify metrics associated with observations at the observation layer 154 based on the feedback (e.g., improve a metric where the feedback is positive, decrease the metric or delete an observation association where the feedback is negative, etc.).

FIG. 3 is a flowchart of an example of a method 300 for generating a multiple-layer relational graph indicating relationships between service events and/or corresponding services. For example, method 300 can be performed by the computing device 120, and is accordingly described with reference to FIG. 1, as a non-limiting example of an environment for carrying out method 300. In addition, method 300 can be performed in preparation for fulfilling queries and/or determining related service events, as described in method 200. In another example, method 300 can be performed as a real-time or near real-time process as part of querying the multiple-layer relational graph at action 206 of method 200.

In method 300, at action 302, a configuration layer of a multiple-layer relational graph can be generated based on relationships between services as defined in a stored configuration. In an example, layer generating component 140, e.g., in conjunction with processor 124, memory 126, operating system 128, graphing component 132, etc., can generate the configuration layer of the multiple-layer relational graph based on relationships between services as defined in the stored configuration.
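The feedback-driven modification of the observation layer can be sketched as follows — a minimal illustration under the assumption that observation-layer associations carry integer observation counts, as mentioned elsewhere in the description; the link identifiers are invented:

```python
def apply_feedback(obs_layer, link, positive):
    """Adjust an observation-layer association based on user feedback:
    increment its count on positive feedback, decrement on negative,
    and delete the association when the count falls to zero.
    `obs_layer` maps (service, other_service) -> observation count."""
    if positive:
        obs_layer[link] = obs_layer.get(link, 0) + 1
    else:
        obs_layer[link] = obs_layer.get(link, 0) - 1
        if obs_layer[link] <= 0:
            # Repeated negative feedback removes the observed relationship.
            del obs_layer[link]
    return obs_layer
```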
For example, configuration obtaining component 142 can obtain the stored configuration (e.g., from memory 126 and/or from another computing device, etc.), which can be generated based on user input received via an interface for defining relationships between service events and/or between corresponding services. As described, services may depend on one another, and this dependency can be indicated in the stored configuration. This can allow for determining a relationship between service events occurring on the dependent services (e.g., at a similar time or otherwise). In one example, the configuration can define a relationship between service events based on collating and linking of underlying incident records by on-call engineers with incident management and service observability systems. For example, a user of network diagnostic application(s) 112 executing on various computing devices 110 can indicate the linking of the incident records and/or service events via an interface option on the network diagnostic application 112. In other examples, other applications can be used to indicate the configured associations between service events, service event types, services, incident reports, incident report types, etc. The configuration layer 152 may include an indication of a relationship (or link) between at least a subject service and the one or more other services, such that the query processing component 134 can identify the link and report the other services or service events of the other services (e.g., occurring at a similar time or otherwise indicated as depending on the subject service event) as possibly of interest.

In method 300, at action 304, an observation layer of the multiple-layer relational graph can be generated based on relationships between services based on monitoring network traffic or observing user behavior.
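Building the configuration layer from a stored dependency description can be sketched as below. The configuration format (a mapping from a service to the services it depends on) and the service identifiers are invented for illustration:

```python
def build_config_layer(stored_config):
    """Turn a user-maintained dependency description into a set of directed
    (service, depends_on) edges -- the configuration layer of the graph.
    `stored_config` maps each service to a list of services it depends on."""
    edges = set()
    for service, deps in stored_config.items():
        for dep in deps:
            edges.add((service, dep))
    return edges
```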
In an example, layer generating component 140, e.g., in conjunction with processor 124, memory 126, operating system 128, graphing component 132, etc., can generate the observation layer of the multiple-layer relational graph based on relationships between services based on monitoring network traffic or other topological relationships or observing user behavior. For example, observing component 144 can monitor the network traffic (e.g., coming from network diagnostic application(s) 112 or network diagnostic component 130) to determine correlated requests for services or service events. For example, where observing component 144 observes similar patterns in requests for services and/or service events at different times based on the network traffic, whether from the same network diagnostic application(s) 112 or different network diagnostic application(s), or other topological relationships between signal sources (e.g., the source being the service from which the service event is logged), observing component 144 may infer an observed relationship between the services and/or service events. Similarly, where observing component 144 observes similar patterns in requests for services and/or service events at different times based on user behavior on the network diagnostic application 112 (e.g., as observed from the network diagnostic application 112 itself or requests received at the network diagnostic component 130), observing component 144 may infer an observed relationship between the services and/or service events. In one example, observing component 144 can observe user behavior of the diagnostic application 112 itself, which in one specific example may include a configuration of a user-defined interface of the network diagnostic application 112. For example, a user may define a user interface to analyze health or other metrics of network resources, where the interface may display signals generated based on observed service events (e.g., service events reporting resource utilization).
In one specific example, based on physical proximity of signals on the interface (e.g., as being next to one another, part of the same chart/graph, etc.), observing component 144 can determine a relationship between the corresponding services. The information regarding the user-defined interface layout may be provided to the network diagnostic component 130, from which the observation layer 154 can receive such information. An example is shown in FIG. 4, which illustrates an example of a user interface 400 of a network diagnostic application 112. In user interface 400, a user thereof may have defined the user interface 400 to include signals 402, 404, 406, 408 in the view. The signals 402, 404, 406, 408 may each correspond to a set of service events for different services that the user desires to monitor. The signals 402, 404, 406, 408 may show information of the service events (e.g., reliability, incoming request rate or reliability, etc.) over the same or similar period of time and/or at the same or similar time instances. In this example, observing component 144 can determine that the user interface 400 includes the signals 402, 404, 406, 408 on the same view and/or within a threshold physical proximity within the view, that the user interface 400 processes interactions on the signals 402, 404, 406, 408 at similar points in time, etc., and can accordingly observe a relationship between the corresponding service events and/or underlying services, which can be set in the observation layer 154 for subsequently determining related services or service events. As described, for example, observing component 144 can observe such properties of the user interface 400 based on at least one of determining the user interface 400 defined on the network diagnostic component 130 that facilitates operation of the network diagnostic application 112, receiving, at the network diagnostic component 130, an alert of creation/modification of the user interface 400 on the network diagnostic application 112, and/or the like.
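The proximity-based inference can be sketched as follows. This is an invented illustration, not the patented implementation: it assumes signals carry (x, y) chart positions, and the service names, coordinates, and distance threshold are all assumptions:

```python
import math

def proximate_pairs(signal_positions, threshold=1.5):
    """Infer candidate relationships between services whose signals a user
    placed close together on a dashboard view. `signal_positions` maps a
    service name to an (x, y) position of its signal on the view."""
    names = sorted(signal_positions)
    pairs = set()
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            # Euclidean distance between the two signals on the view.
            if math.dist(signal_positions[a], signal_positions[b]) <= threshold:
                pairs.add((a, b))
    return pairs
```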
In any case, the observation layer 154 may include an indication of a relationship (or link) between at least a subject service and the one or more other services, such that the query processing component 134 can identify the link and report the other services or service events of the other services (e.g., occurring at a similar time or otherwise indicated as depending on the subject service event) as possibly of interest. In one example, observing component 144 may include an observation count, frequency, etc. based on a number of observations of the services and/or service events within a period of time, where the observation count may indicate a likelihood of correlation between the services and/or service events. Thus, for a given service or service event, query processing component 134 can determine related services or service events based on the observations, observation count, etc., to provide in response to a query for the given service. For example, these observations can indicate what services and/or service events on-call engineers are looking at when looking at the given service or service event, as described.

In method 300, at action 306, a learned layer of the multiple-layer relational graph can be generated based on relationships between services based on performing anomaly detection on key services. In an example, layer generating component 140, e.g., in conjunction with processor 124, memory 126, operating system 128, graphing component 132, etc., can generate the learned layer of the multiple-layer relational graph based on relationships between services based on performing anomaly detection on key services (e.g., a subject service where generating the learned layer 156 is performed in real-time or near real-time or otherwise).
For example, learning component 146 can perform correlations, anomaly detection, or other machine-learning algorithms (e.g., pattern mining) on the services and/or service events in the service event repository 106 to identify likely related services and/or service events. The learned layer 156 may include an indication of a relationship (or link) between at least a subject service and the one or more other services, such that the query processing component 134 can identify the link and report the other services or service events of the other services (e.g., occurring at a similar time or otherwise indicated as depending on the subject service event) as possibly of interest. For example, learning component 146 can detect anomalies in certain service event data over a period of time, such as resource utilization of services or related network nodes based on reported service events, and record them in the learned layer 156. For example, anomalies can be detected in similar changes in utilization amounts, the times at which utilization changes (e.g., regardless of whether the amount is similar), etc. In one example, learning component 146 may determine a confidence score or other metric for identified anomalies between services and/or service events, which can be included in the learned layer 156. Thus, for a given service or service event, query processing component 134 can determine related services or service events based on the detected anomalies, the confidence score or other metric, etc., to provide in response to a query for the given service. In one example, the confidence score may be based on a number of correlations observed between the potentially related services or service events.
An example is shown in FIG. 5, which illustrates an example of a graphical depiction of signals 500 related to service events, where signal 502 relates to a set of service events of a service, such as resource utilization, etc., as described, and signal 504 relates to a different set of service events that may be determined as related to the set of service events of signal 502 based on correlation or other machine-learning algorithms. For example, correlation may show events happening at similar time instances, indicated by symbols 506. In an example, learning component 146 may determine a relationship between the underlying service events based on detecting a threshold number of events happening in each signal within a period of time (and/or a confidence score may be computed based on the frequency of correlated events among the signals or underlying service events). In an example, learning component 146 can set a determined relationship and/or related metrics in the learned layer 156 for subsequently determining related services or service events. In the multiple-layer relational graph 150, relational data from the various layers 152, 154, 156 can be combined, as described, and used to build a knowledge graph between the services and their metrics. Traversal of this graph 150 can be useful in various applications, such as root cause analysis, determining most failing metrics, grouping of related metric failures, etc.

FIG. 6 illustrates an example of a relational graph representation 600 of a set of service events. For example, representation 600 can indicate a signal of interest, which can refer to a metric (e.g., processor usage) measured on a specific network resource based on a collection of service events (e.g., events that indicate resource usage at periods of time).
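The coincident-event test just described can be sketched as below — a simplified illustration in which each signal is reduced to a list of anomaly timestamps; the tolerance, threshold, and confidence formula are assumptions, not the patented algorithm:

```python
def correlate_event_times(times_a, times_b, tolerance=1.0, min_matches=3):
    """Count anomalies in signal A that occur within `tolerance` time units
    of some anomaly in signal B. Declare the signals related when at least
    `min_matches` coincide, with a simple confidence score based on the
    fraction of coincident events."""
    matches = sum(
        1 for ta in times_a if any(abs(ta - tb) <= tolerance for tb in times_b)
    )
    if not times_a or not times_b:
        return False, 0.0
    confidence = matches / max(len(times_a), len(times_b))
    return matches >= min_matches, confidence
```

In the FIG. 5 depiction, the symbols marking near-simultaneous events in both signals correspond to the `matches` counted here.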
For example, the signal of interest can be requested and/or defined by a network diagnostic application 112 on a computing device 110 to receive, from the service event repository 106 in real-time, near real-time, or otherwise, processor usage service events for the network resource via network diagnostic component 130. In an example, a query context can include an indication of the user, the service or associated network resource, and a view of the network diagnostic application 112 being requested (which can indicate the desired service events). In determining relationships with other service events, query processing component 134 can query the configuration layer 152 to determine that service 1 on machine A depends on service 2 on machine B, and/or specifically that the processor usage on service 1 on machine A depends on incoming API reliability on service 2 on machine B. Thus, query processing component 134 can provide, in response to a query for processor usage of service 1 on machine A, corresponding API reliability service events for service 2 on machine B. Similarly, query processing component 134 can query the observation layer 154 to determine service events typically viewed by this user (or other users) along with the service event that is the subject of the view to determine additional service events of interest (and/or related views of the additional service events, such as other signals). For example, query processing component 134 can determine a relationship (e.g., frequency looked at) indicated on observation layer 154 between the user looking at service 5 on machine F when also looking at service 1 on machine A. As described, this observation may be determined based on a user-defined interface that includes views of metrics for service 5 on machine F and service 1 on machine A (and specifically for outgoing request rate for service 5 on machine F with the processor usage time for service 1 on machine A).
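The configuration- and observation-layer lookups just described can be sketched as a per-layer traversal. The graph contents mirror the example above (service 1 on machine A depends on service 2 on machine B; the user views service 5 on machine F alongside it); identifiers like "service1@A" are an invented encoding:

```python
# Invented encoding of the example relationships: "serviceN@M" means
# service N on machine M.
LAYERS = {
    "configuration": {"service1@A": ["service2@B"]},
    "observation": {"service1@A": ["service5@F"]},
}

def related_services(subject):
    """Collect, per layer, the other services related to `subject`."""
    return {layer: rel.get(subject, []) for layer, rel in LAYERS.items()}
```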
Thus, query processing component 134 can provide, in response to a query for processor usage of service 1 on machine A, corresponding outgoing request rate events for service 5 on machine F. In another example, query processing component 134 can determine a relationship (e.g., frequency seen with) indicated on observation layer 154 between network traffic for obtaining metrics related to service 4 at machine D around the same time or times as network traffic for obtaining metrics related to service 1 on machine A (and specifically for outgoing request rate for service 4 on machine D with the processor usage time for service 1 on machine A). Thus, query processing component 134 can provide, in response to a query for processor usage of service 1 on machine A, corresponding outgoing request rate events for service 4 on machine D. Similarly, query processing component 134 can query the learned layer 156 to determine service events that are historically anomalous with the service event that is the subject of the view to determine additional service events of interest (and/or related views of the additional service events, such as other signals). For example, query processing component 134 can determine a relationship indicated on learned layer 156 between service 3 on machine E (and specifically outgoing request rate) and service 1 on machine A. Thus, query processing component 134 can provide, in response to a query for processor usage of service 1 on machine A, corresponding outgoing request rate events for service 3 on machine E. In any case, for example, the network diagnostic application 112 can indicate potential relationships between the various service events based on the correlations that are detected/observed at each or one or more (or all) layers in the multiple-layer relational graph.

FIG. 7 illustrates an example of computing device 120 including additional optional component details as those shown in FIG. 1.
In one example, computing device 120 may include processor 124 for carrying out processing functions associated with one or more of components and functions described herein. Processor 124 can include a single or multiple set of processors or multi-core processors. Moreover, processor 124 can be implemented as an integrated processing system and/or a distributed processing system. Computing device 120 may further include memory 126, such as for storing local versions of applications being executed by processor 124, related instructions, parameters, etc. Memory 126 can include a type of memory usable by a computer, such as random access memory (RAM), read only memory (ROM), tapes, magnetic discs, optical discs, volatile memory, non-volatile memory, and any combination thereof. Additionally, processor 124 and memory 126 may include and execute an operating system executing on processor 124, one or more applications, such as a network diagnostic application/component 112/130, graphing component 132, query processing component 134, and/or components thereof, as described herein, and/or other components of the computing device 120.

Further, computing device 120 may include a communications component 702 that provides for establishing and maintaining communications with one or more other devices, parties, entities, etc. utilizing hardware, software, and services as described herein. Communications component 702 may carry communications between components on computing device 120, as well as between computing device 120 and external devices, such as devices located across a communications network and/or devices serially or locally connected to computing device 120. For example, communications component 702 may include one or more buses, and may further include transmit chain components and receive chain components associated with a wireless or wired transmitter and receiver, respectively, operable for interfacing with external devices.
For example, communications component 702 can carry communications between a network diagnostic application/component 112/130, graphing component 132, query processing component 134, etc. executing on another device (or the same device), etc., as described in various examples herein. Additionally, computing device 120 may include a data store 704, which can be any suitable combination of hardware and/or software that provides for mass storage of information, databases, and programs employed in connection with examples described herein. For example, data store 704 may be or may include a data repository for applications and/or related parameters not currently being executed by processor 124, may include the service event repository 106, etc. In addition, data store 704 may be a data repository for an operating system, application, such as a network diagnostic application/component 112/130, graphing component 132, query processing component 134, and/or components thereof, etc. executing on the processor 124, and/or one or more other components of the computing device 120.

Computing device 120 may also include a user interface component 706 operable to receive inputs from a user of computing device 120 and further operable to generate outputs for presentation to the user (e.g., via a display interface to a display device). User interface component 706 may include one or more input devices, including but not limited to a keyboard, a number pad, a mouse, a touch-sensitive display, a navigation key, a function key, a microphone, a voice recognition component, a gesture recognition component, a depth sensor, a gaze tracking sensor, any other mechanism capable of receiving an input from a user, or any combination thereof. Further, user interface component 706 may include one or more output devices, including but not limited to a display interface, a speaker, a haptic feedback mechanism, a printer, any other mechanism capable of presenting an output to a user, or any combination thereof.
Computing device 120 can also include a network diagnostic application/component 112/130 for generating a query context related to one or more service events, a graphing component 132 for generating a multiple-layer relational graph defining relationships between service events, and/or a query processing component 134 for processing queries for service events by providing one or more other service events based on relationships defined in the multiple-layer relational graph, as described herein.

By way of example, an element, or any portion of an element, or any combination of elements may be implemented with a “processing system” that includes one or more processors. Examples of processors include microprocessors, microcontrollers, digital signal processors (DSPs), field programmable gate arrays (FPGAs), programmable logic devices (PLDs), state machines, gated logic, discrete hardware circuits, and other suitable hardware configured to perform the various functionality described throughout this disclosure. One or more processors in the processing system may execute software. Software shall be construed broadly to mean instructions, instruction sets, code, code segments, program code, programs, subprograms, software modules, applications, software applications, software packages, routines, subroutines, objects, executables, threads of execution, procedures, functions, etc., whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise.

Accordingly, in one or more examples, one or more of the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or encoded as one or more instructions or code on a computer-readable medium. Computer-readable media includes computer storage media. Storage media may be any available media that can be accessed by a computer.
By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. Disk and disc, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), and floppy disk, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.

The previous description is provided to enable any person skilled in the art to practice the various examples described herein. Various modifications to these examples will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other examples. Thus, the claims are not intended to be limited to the examples shown herein, but are to be accorded the full scope consistent with the language of the claims, wherein reference to an element in the singular is not intended to mean “one and only one” unless specifically so stated, but rather “one or more.” Unless specifically stated otherwise, the term “some” refers to one or more. All structural and functional equivalents to the elements of the various examples described herein that are known or later come to be known to those of ordinary skill in the art are intended to be encompassed by the claims. Moreover, nothing disclosed herein is intended to be dedicated to the public regardless of whether such disclosure is explicitly recited in the claims. No claim element is to be construed as a means plus function unless the element is expressly recited using the phrase “means for.”
DESCRIPTION OF THE EMBODIMENTS

Reference will now be made in detail to example implementations, illustrated in the accompanying drawings. Wherever convenient, the same reference numbers will be used throughout the drawings to refer to the same or like parts. In the following description, reference is made to the accompanying drawings that form a part thereof, and in which is shown by way of illustration specific exemplary embodiments in which the invention may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the invention, and it is to be understood that other embodiments may be utilized and that changes may be made without departing from the scope of the invention. The following description is, therefore, merely exemplary.

Some embodiments provide a device, e.g., a sensor, that can passively discover, identify, and assess a network. According to some embodiments, the device can be installed and can function where there is limited or zero prior knowledge of the network environment, communication protocols, or addressing schemes currently in use. According to some embodiments, the device can be installed by a complete novice or a person who has limited technical training. According to some embodiments, the device may be installed within an arbitrary communications network and participate in network communications with no prior knowledge of addressing schemes, directionality of the network topology, or prior knowledge of viable upstream routers, restrictive firewalls, or any other condition that might otherwise impede its ability to communicate. Once installed, the device (e.g., sensor) according to some embodiments can work in a continuously adaptive manner that self-heals amidst changing network conditions. According to some embodiments, the device's installation provides zero-configuration network communications capabilities.
According to some embodiments, the device can conditionally participate or intervene in network communications. According to some embodiments, once installed, the device creates a communications link or plurality of communication links with a server or a plurality of servers outside of the network, which can be used for bi-directional communications activities, such as remote management, alerting, firmware/software/configuration updates, or other situations where moving data across a communications network is desired. According to some embodiments, the device or sensor is non-destructive, e.g., it does not damage the network or necessarily restrict others' usage of the same network. According to some embodiments, the device can be installed inside (e.g., in-line with) a communications network without any disruption of upstream or downstream communications whatsoever. According to some embodiments, the device can be installed without reducing or occupying any otherwise statically-assigned or ephemeral address on the network. According to some embodiments, the installed device does not interrupt or otherwise adversely affect the operation of the network, its speed, or its quality of service. According to some embodiments, the device is Network Address Translator (NAT)-friendly, e.g., it is resilient to network address translation that may occur inside the network, modifying or further modifying network traffic at some point or at multiple points in the network. Such NAT translation does not prevent or obstruct the device from communicating. According to some embodiments, the device is non-alerting, and its installation and usage can take place without presenting as a new component of a network. This capability is especially advantageous to, for example, detect and report network anomalies in inhospitable, hostile, or contested network environments, without alerting potential advanced threat actors that their network presence is known or being assessed.
This is also advantageous in protecting a friendly or cooperative network, in that malicious actors are not alerted as to the presence of a device that is searching for or assessing threat risks or communicating for other purposes. Additional advantages of some embodiments include that the device can be installed and used without requiring detailed network knowledge. Existing techniques for providing network access often require prior knowledge of the network in which they are operating, configuration to communicate and be a part of that network, and unique addressability in order to work properly. By contrast, some embodiments operate in a way that eliminates those requirements and instead automatically operate across a variety of communications networks with multiple varying underlying technology differences and characteristics. Embodiments have many use cases. Some embodiments may be used to monitor, assess, and/or participate in a hostile or cooperative network. Some embodiments can be used to improve the reliability and usability of connected (networked) devices and make them simpler to introduce into an existing network without any prior knowledge of the network configuration, and without significant technical expertise. Some embodiments may be used to install an Internet of Things (IOT) device or similar low-profile, low-traffic device that does not necessarily need to be addressable on a Local Area Network (LAN), but that could utilize occasional external (WAN) network access in order to receive firmware or configuration updates, for example. Some embodiments can provide a way for ephemeral access to outside communications without requiring things like a DHCP server or for there to be extra ports available on an Internet-accessible VLAN. Other features and advantages are presented herein in reference toFIGS.1-5. 
FIG.1illustrates a device180for discovering a network190with various alternative installation configurations151,152,153,162,164,172,174according to various embodiments. As shown, the network190includes switches122,124, and126, as well as network clients130,132,134,136, and138. Lines between network elements indicate communicative coupling. The elements and connections of the network190are non-limiting; other network elements and configurations are contemplated and possible. The network190may be an intranet, a private network, a local area network, or generally any network segment or the like. The network may utilize IPv4, IPv6, or any other protocol. In the example shown, the network190is communicatively coupled to an external network102, such as the internet, by way of a router112. More generally, any number of routers may couple the network190to the external network102and fall within the scope of network analysis according to various embodiments. The device180, which may also be referred to herein as a sensor180, may be installed by physically establishing one or more wireline connections between the device180and the network190. According to some embodiments, the installation may include configuring and connecting to a SPAN/Mirror port. According to some embodiments, the device180may be installed by inserting executable code into an existing network node, such as a switch122,124, or126. The executable code configures the network node to perform actions of the device180as disclosed herein. In such embodiments, a separate physical device/sensor180is not used; instead, an existing network device, (e.g., such as the switch122), performs the operations and functions of the device180according to the inserted executable code, as described herein. Despite the lack of an added or extra physical device/sensor180, such embodiments are nonetheless considered to have a device/sensor180“installed” in the network190. 
The device180may be installed in the network190according to any of a variety of installation configurations, not limited to those illustrated inFIG.1. In various embodiments, the device180may be a computerized network device, or the like, that includes a memory and a processor that executes software code. In some embodiments, the device180may include hardware that is the same as, or similar to, the hardware found in a switch, a router, a hub, a gateway, or the like. In various embodiments, the device180may include one or more ports for coupling to the network190. As shown inFIG.1, the device180includes a port for outbound communications and a port for inbound communications. However, according to various embodiments, a single port may be used for bi-directional communications, or more than two ports may be used. In general, the device180may be installed aside all or a part of the network190, or may be installed in-line with all or part of the network190. Further, the device180may be installed or configured to passively participate in network communications, or installed or configured to actively participate in network communications. In general, by way of non-limiting example, the aside installations may facilitate passive network communication participation, whereas the in-line installations may facilitate either passive or active network communication participation; however, either installation may be used for either active or passive network communication participation according to various embodiments. The installation configurations by way of the alternative communicative couplings151,152, and153represent examples of installing the device180aside all or part of the network190. Any of the communicative couplings151,152, or153may be used in the alternative. For aside installations, only one port, e.g., the inbound communication port, may be connected to the network. 
In particular, the communicative couplings151and152represent alternative aside installations with the entire network190, whereas the communicative coupling153represents an alternative aside installation with a portion of the network190, which includes the switch126and the clients134,136. The installation configurations by way of the alternative pairs of couplings162,164and172,174represent examples of installing the device180in-line with all or part of the network190. Either the pair of communicative couplings162,164may be used, or the pair of communicative couplings172,174may be used in the alternative. In general, for in-line installation, both ports may be connected to the network. As shown inFIG.1, the pair of communicative couplings162,164represent an in-line installation with the portion of the network190that includes the switch126and the clients134and136, whereas the pair of communicative couplings172,174represent an in-line installation with the portion of the network190that includes the client138. In various embodiments, the device180functions to accurately and automatically detect underlying network protocols, determine addressing schemes that are being used, and isolate the network address that corresponds to the upstream router112that is providing access to the external network102. Once this information is known, the device180can use it to access the network190and send outbound communications to, and receive inbound communications from, a destination in the external network102via the router112. For example, a connection, (such as a tunneled connection), can be made to a server in the external network102to facilitate additional configurations, code (e.g., software or firmware) updates, command and control, alerting, or reporting, among other things. As a specific non-limiting example, the network190may be an IPv4 network using a private address scheme (e.g., per RFC1918). 
The device180may be used to gain access to the network190, identify the addresses currently being used at each relevant layer of communications (Ethernet MAC, IPv4 address, TCP port numbers, etc.), detect the destination Ethernet MAC address of the upstream router112, and direct traffic to a destination in the external network102via that address. In general, as the device180operates, it may continually, or periodically, re-evaluate the addressing scheme being used, detect router changes, and determine a way to communicate on the network190that does not significantly affect (e.g., slow or obstruct) the other communications on the network190. To communicate in an unaffecting and/or difficult-to-detect manner, the device180may emulate the address of another device on the network190, e.g., the address of one of the clients130,132,134,136, or138. The device180may adaptively determine and select the device it emulates (i.e., the device whose address it imitates or copies) by building a ranked list of devices on the network190, as shown and described presently in reference toFIG.2. FIG.2is a flow chart for a method200of updating a candidate list for address emulation according to various embodiments. The method200may be implemented by a device, such as the device180as shown and described herein in reference toFIG.1, to generate and maintain a candidate list for address emulation, as well as the identity of at least one router for the network. The method200may operate continuously or periodically. Once the list is generated, a candidate device may be selected for address emulation as disclosed herein. At202, the device (e.g.,180) receives a communications packet, e.g., a network packet, or the like. The packet may be received by way of any of a variety of techniques and in any of a variety of installations, e.g., in-line or aside. 
For example, for a device installed in-line on a network of interest (e.g.,190), such as, by way of non-limiting example, a network with a private RFC1918 IPv4 address space, the device may directly intercept packets. As another example, the device may be installed aside the network, with passive access to a copy of the network traffic (e.g., via SPAN/Mirror port or similar) and the ability to generate traffic on the network. At204, the device optionally checks policy rules with respect to the incoming packet. This optional operation includes checking whether the detected packet is a packet of interest at all. For example, the device may function to only detect and/or participate in communications to or from a particular specified device (e.g., client138, switch122, etc.) or address, or in a particular specified addressing scheme or protocol, and the policy rules may specify that packets outside of these parameters may be ignored (and if the device is installed inline, the ignored packet may be forwarded to its destination). As another example, the device may function to filter content sent from the network to a destination in an external network, and policy rules may be set to ignore packets that are not part of such filtering. As another example, the device may function to ignore packets that are in IPv4, as opposed to IPv6 (or vice versa). At210, a set of filtering rules (e.g.,212,214,216,218) is applied. In general, the device can use the received packet and filtering rules to detect, identify, or otherwise determine other network elements (e.g., network devices other than the sensor device180), such as one or more clients (e.g.,130-138) and one or more routers (e.g.,112). The filtering rules may be disjunctive, such that if any of them applies, then control passes to222, and the packet is represented in the candidate list; otherwise, if no filtering rule applies, then control passes to220and the packet is ignored or forwarded as appropriate for the installation. 
FIG.2shows some non-limiting examples of rules. As shown, the first rule212may analyze the received packet to determine whether the packet destination address is in a different /8 CIDR network from that of the source address. In this example, a packet bound for an address in a different /8 network may be interpreted as being directed to a router; therefore, the process200may identify the destination MAC address of the packet as the address of a router (e.g.,112) in the network (e.g.,190). In various embodiments, the rule212allows the process200to identify, discover, or otherwise determine the router(s) in the network. The second illustrated filtering rule214may analyze the received packet to determine whether the packet is from a popular source, e.g., as defined by a programmable list, which may include the source addresses of popular web sites and the like. Such popular sources may be, for example, web sites that have heavy traffic, such as news sources, mail servers, or search engines. In this example, the source address of a packet may be interpreted as being from outside of the network. In various embodiments, the rule214allows the process200to identify, discover, or otherwise determine the client(s) in the network. The third illustrated filtering rule216may analyze the received packet to determine whether the packet is a DHCP response. If so, then the packet may be interpreted as having originated from a DHCP server in the network and as being sent to a client in the network, containing the correct routing information. In various embodiments, the rule216allows the process200to identify, discover, or otherwise determine the DHCP server(s) and the client(s) in the network. Other rules218may be applied in addition or in the alternative. Thus, without having any prior information of a network, the device can use the rules to discover, assess, or determine network topology, client address(es), router address(es), etc. of the network. 
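The disjunctive application of the filtering rules might be sketched as follows. This is an illustrative simplification, not the patented implementation: packets are represented as plain dictionaries, the popular-source list is a stand-in, and the DHCP check uses the well-known UDP server/client ports 67 and 68.

```python
import ipaddress

# Stand-in for the programmable list of popular (heavy-traffic) sources.
POPULAR_SOURCES = {"93.184.216.34"}

def rule_different_slash8(pkt):
    """Rule 212 sketch: destination lies in a different /8 network than the
    source, suggesting the packet's destination MAC belongs to a router."""
    src = ipaddress.ip_address(pkt["src_ip"])
    dst = ipaddress.ip_address(pkt["dst_ip"])
    return src.packed[0] != dst.packed[0]

def rule_popular_source(pkt):
    """Rule 214 sketch: packet originates from a known popular source,
    so it was likely sent from outside the network to a local client."""
    return pkt["src_ip"] in POPULAR_SOURCES

def rule_dhcp_response(pkt):
    """Rule 216 sketch: DHCP responses travel from UDP port 67 to port 68."""
    return (pkt.get("proto") == "udp"
            and pkt.get("sport") == 67 and pkt.get("dport") == 68)

RULES = (rule_different_slash8, rule_popular_source, rule_dhcp_response)

def packet_of_interest(pkt):
    """Disjunctive application: any matching rule sends the packet to the
    candidate list (step 222); otherwise it is ignored/forwarded (step 220)."""
    return any(rule(pkt) for rule in RULES)
```

A packet from 192.168.1.11 to 8.8.8.8 matches the /8 rule, while purely local traffic within 192.168.1.0/24 matches none of the rules and would be ignored or forwarded.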
As a non-limiting example, the rules may be used to determine that the network is using, e.g., addresses in the 192.168.1.0/24 range and that the router is at 192.168.1.1. Based on the amount of traffic, it may also detect the most recent active IP addresses on the network segment are, e.g., 192.168.1.11 and 192.168.1.12. At220, if none of the rules of210apply, then the packet may be ignored. If the device is installed inline, the packet may be forwarded to its destination. At222, if at least one of the rules of210applies, then a determination is made as to whether the packet represents a new candidate—e.g., a network element/device that has not been detected or discovered previously by the process200. The determination may be made, for example, by examining whether an identification regarding the packet is already on the list. If the candidate is not new, then control passes to230; if it is new, then control passes to232. At232, when the packet represents a new candidate (e.g., is sent to or from the candidate), then information regarding the candidate is stored in a candidate list at the device. By way of non-limiting example, any, or any combination, of the following may be stored: a first identifier for the candidate network element, such as its layer two address, e.g., a MAC address; a second identifier for the candidate network element, such as its layer three address, e.g., an IP address; a count of packets sent to and/or sent from the candidate device; a layer four protocol; and/or a timestamp indicating when the most recent respective packet for the candidate was detected. At230, when the packet represents a candidate already represented in the list, then the entry for that candidate is updated in the list. 
By way of non-limiting example, any or both of the following updates may be made: the count of packets sent to and/or sent from the candidate device may be incremented, and/or the timestamp indicating when the most recent respective packet for the candidate was detected may be updated. Thus, in the example ofFIG.2, the method200operates to generate and continuously maintain a list of candidate network elements, (e.g., a list of active clients130-138) on the network whose stored information may be used for address emulation, and the list of candidate network elements typically includes a router (e.g.112) that connects the network to an external network. For example, any time a packet is received that the method200processes according to230or232, the candidate list may be updated accordingly. According to some embodiments, the candidate list may be regularly pruned of inactive network elements. By way of non-limiting example, the list may be pruned periodically, e.g., every ten seconds. At each pruning, network elements represented in the list may be removed if they have not communicated (at all) within a specified time period, or if they have not communicated specifically with a device or devices outside the network within a specified time period, e.g., since the previous pruning check, or for any other interval (e.g., 1, 2, 5, 10, 15, 20, 30, 60, 90, or more seconds). In various embodiments, the device (e.g., sensor180) may select a candidate from the candidate list to emulate its address as follows. In some embodiments, the device180may function to select the most active network element from the candidate list, where “most active” is determined based on the count of packets sent to and/or sent from the candidate network element. 
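The candidate-list bookkeeping (store on first sight, update on each packet, prune when idle) can be sketched as below. The field names and the MAC-keyed dictionary are assumptions for illustration; the source lists the storable fields but not a concrete layout.

```python
import time
from dataclasses import dataclass, field

@dataclass
class Candidate:
    """Per-candidate fields named in the text (layout assumed)."""
    mac: str               # layer-two identifier
    ip: str                # layer-three identifier
    packet_count: int = 0  # packets sent to and/or from the candidate
    last_seen: float = field(default_factory=time.time)

class CandidateList:
    def __init__(self, max_idle_seconds=10.0):
        self.entries = {}                # keyed by MAC address
        self.max_idle = max_idle_seconds

    def observe(self, mac, ip, now=None):
        """Steps 222/230/232: store a new candidate or update an existing one."""
        now = time.time() if now is None else now
        entry = self.entries.get(mac)
        if entry is None:
            entry = self.entries[mac] = Candidate(mac=mac, ip=ip)
        entry.packet_count += 1
        entry.last_seen = now

    def prune(self, now=None):
        """Periodic pruning: drop candidates silent longer than the idle window."""
        now = time.time() if now is None else now
        stale = [m for m, e in self.entries.items()
                 if now - e.last_seen > self.max_idle]
        for mac in stale:
            del self.entries[mac]
```

Passing an explicit `now` keeps the sketch testable; a deployed sensor would simply use the wall clock and run `prune` on a timer.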
This selection technique may be desirable because, among other reasons, the communications sent/received by the device180are less noticeable and less impactful among the large number of communications sent/received by the most active network element, as compared to the less active network elements. As long as the network elements represented on the list remain active, the device180can select to emulate their address, e.g., their associated Ethernet MAC address, to send and receive data, e.g., by including (e.g., imitating or spoofing) such addresses as the origination addresses for packets sent by the device180. If one of the network elements ceases actively communicating or otherwise reduces its level of participation on the network, the device180can then select a better (e.g., more active currently) candidate from the candidate list for address emulation. The selection may be based on a ranking, which may be implemented as follows. According to some embodiments, the network elements represented on the candidate list may be ranked for desirability for emulation based on each candidate's activity according to count-based or other criteria, which may be measured during a sliding temporal window, (e.g., a sliding one, two, three, five, seven, ten, 15, or 30 second window, or the like). 
By way of non-limiting examples, the network elements may be ranked according to any, or a combination, of: total packet count or message count for the candidate device, where a higher count increases the ranking; number of devices in the network in communication with the candidate device, where a higher number increases the ranking; number of devices outside of the network in communication with the candidate device, where a higher number increases the ranking; frequency of communications outside of the network with the candidate device, where a higher frequency increases the ranking; and/or volume of communications outside of the network with the candidate device, where a higher volume increases the ranking. If ranked according to a plurality of criteria, the candidates may be ranked according to a first criterion, then for ties within that ranking, according to a second criterion, then for ties within that ranking, according to a third criterion, etc. For any given communication from the device, the candidate with the current highest ranking may be emulated. For communications between the device180and an external server or the like, e.g., as shown and described herein in reference toFIG.4, the emulated address can dynamically change during such communications, e.g., according to changes in the rankings of the network elements in the list of candidates. Further, for such communications, the device180can provide the updated/current emulated address to the server, e.g., in encrypted or authenticated form. As described herein in reference toFIG.2, the candidate list includes router identifications (e.g., by way of one or more IP and/or MAC addresses). The device (e.g., sensor180) may select a router from the candidate list to use for routing communications as follows. For the router selection, the device may select the most recent router that was detected and represented in the candidate list. 
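The multi-criteria ranking with lexicographic tie-breaking described above maps naturally onto a tuple sort key. The sketch below uses three of the listed criteria in an assumed order (packet count, then external-peer count, then internal-peer count); the criteria chosen and their order are illustrative.

```python
def rank_candidates(candidates):
    """Sort candidates best-first: ties on the first criterion are broken by
    the second, ties on that by the third, per the tuple ordering."""
    return sorted(
        candidates,
        key=lambda c: (c["packet_count"], c["external_peers"], c["internal_peers"]),
        reverse=True,
    )

def select_for_emulation(candidates):
    """The top-ranked candidate's address is the one the sensor emulates;
    returns None when the candidate list is empty."""
    ranked = rank_candidates(candidates)
    return ranked[0] if ranked else None
```

Because the counts are re-measured over a sliding window, repeated calls to `select_for_emulation` naturally track activity changes, giving the dynamic address switching the text describes.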
Further, according to various embodiments, rather than, or in addition to, representing identified routers in the candidate list, a separate list of router candidates may be generated and maintained, e.g., populated and pruned as described herein in reference toFIG.2. According to such embodiments, the device may select the most recent router that was detected and represented in the separate list of router candidates to use to route (e.g., send and/or receive) communications. FIG.3is a flow chart for a method300of sending data according to various embodiments. In particular, the method300may be used by a device, such as the device or sensor180as shown and described in reference toFIG.1, to send data, whether within the network190or to a destination external to the network190. Note that at any given time, the candidate list generated by the method200may be empty. Accordingly, at such times, the device may not have the source address of a network element to emulate. Therefore, in some embodiments, the data to be sent may be stored in a buffer of the device and sending it may be delayed until such time that the candidate list is populated with at least one suitable candidate. At302, the data to be sent is received at an outgoing packet buffer of the device. The outgoing packet buffer may be implemented using volatile memory hardware, for example. In some non-limiting examples, this data may include information that is collected and/or generated by the device180, such as: information describing or regarding the elements/devices of the network190; information describing or regarding the characteristics of the network190; information describing or regarding the communications (e.g., messages, packets, bandwidth, etc.) 
on the network190; information describing or regarding communications associated with malicious actors; information describing or regarding network anomalies, (e.g., which may be caused by malicious actors); and/or information describing or regarding the software or firmware of the device180, among other things. At304, a determination is made as to whether the candidate list is non-empty and contains an entry that represents a valid candidate. If not, then control may pass back to304, possibly after a predetermined time interval, e.g., in the range of 0.1 to 5.0 seconds. In various embodiments, during the predetermined time interval, the device180may be performing the process200ofFIG.2, which may add one or more candidates (e.g., network elements) to the list. If there is at least one entry that represents a valid candidate on the candidate list, then control passes to306. At306, the data is sent or transmitted in a packet or the like, with the source address of the packet emulated to match that of a candidate represented in the candidate list and selected, for example, according to each candidate's ranking as described above. In such embodiments, the transmitted packet therefore appears to come from the selected network element candidate, and not from the device180that actually transmitted it, because the source address of the packet is the same as the source address of the selected network element. For example, with brief reference toFIG.1, the transmitted packet may appear to come from the client138, and not from the sensor180that actually transmitted it, because the sensor180placed the source address of the client138into the “source address” field of the packet before transmitting it. FIG.4is a diagram400of a bi-directional communication channel414according to various embodiments. The communication channel414is shown as being between a server404and a device402, such as the device or sensor180ofFIG.1. 
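The buffer-and-wait logic of method 300 (steps 302 through 306) might be sketched as follows, before turning to the channel ofFIG.4. The packet dictionary and the injected `transmit` callable are illustrative stand-ins for the device's actual packet-construction and transmission primitives.

```python
from collections import deque

class OutgoingBuffer:
    """Sketch of method 300: data waits in a buffer until at least one valid
    candidate exists, then is sent with an emulated source address."""

    def __init__(self, transmit):
        self.queue = deque()
        self.transmit = transmit  # stand-in for the packet-injection primitive

    def enqueue(self, payload):
        """Step 302: data to be sent arrives at the outgoing packet buffer."""
        self.queue.append(payload)

    def flush(self, candidate):
        """Steps 304/306: with no valid candidate, keep buffering; otherwise
        send each queued payload sourced as the selected candidate. Returns
        the number of payloads sent."""
        if candidate is None:
            return 0
        sent = 0
        while self.queue:
            payload = self.queue.popleft()
            self.transmit({"src_mac": candidate["mac"],
                           "src_ip": candidate["ip"],
                           "data": payload})
            sent += 1
        return sent
```

The transmitted packet carries the candidate's addresses in its source fields, so, as in the text's example, it appears to come from the emulated client rather than from the sensor.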
The device402may be in a network such as the network190ofFIG.1, and the server404may be in an external network, such as the external network102. Any number of intermediate devices may be between the server404and the device402, such as the shown network address translation (NAT) device. The communication channel414may be maintained using the address for the server404, which may be known (e.g., stored) by the device402as an IP address or a domain name, e.g., from a list of IP addresses or domain names, (which list may be coded), and using the address of a network element (e.g., a client) on the network of the device402as emulated by the device402. Either or both addresses may be updated dynamically during any particular communication session or between communication sessions. For example, the address of the device402may be updated as shown and described in reference toFIG.2, and the device402may communicate its updated address to the server404upon changing its emulated address. The communication channel414may be used to send data from the server404to the device402, such as instructions, configuration information, code (e.g., software or firmware) updates, and/or command and control data. The communication channel414may be used to send data from the device402to the server404, such as alerting data, reporting data, and/or data about or acquired from the communications within the network of the device402, among others. The communication channel414may be encrypted, so as to provide a tunneled connection412. Any of a variety of encryption protocols may be used at any of a variety of layers. According to some embodiments, secure sockets layer (SSL), transport layer security (TLS), or direct encryption at any protocol layer may be used, e.g., using symmetric encryption, asymmetric encryption, and/or hybrid encryption. 
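One way to realize such a TLS-protected tunnel at the application layer is a TLS-wrapped TCP socket to the external server, sketched below with Python's standard `ssl` module. The host, port, and timeout are illustrative; a real deployment might pin certificates, use mutual authentication, or encrypt at a different protocol layer entirely.

```python
import socket
import ssl

def open_tunnel(server_host, server_port=443, timeout=10.0):
    """Sketch of the tunneled connection 412: open a TCP connection to the
    known server address and wrap it in TLS, verifying the server's
    certificate against the default trust store."""
    context = ssl.create_default_context()  # CERT_REQUIRED by default
    raw = socket.create_connection((server_host, server_port), timeout=timeout)
    return context.wrap_socket(raw, server_hostname=server_host)
```

The returned socket supports ordinary `sendall`/`recv` calls, over which the device could report its current emulated address whenever the ranking changes, as the text describes.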
For any communication sent from the server404to the device402over the communication channel414, if the device402is installed in-line between a router at the network gateway and the network element, e.g., network client, whose address the device402is emulating, the device402may prevent the server404's communication from reaching the network client by intercepting it and not passing it through. However, the device402may pass through communications that are intended for the client rather than for the device402. FIG.5is a flow chart for a method500of discovering a network according to various embodiments. The method may be implemented in a context such as is shown and described herein in reference toFIG.1, for example, by a device such as the device or sensor180. At502, the method500optionally includes installing a sensor, such as the device180ofFIG.1, in a network, such as the network190ofFIG.1. The sensor may be installed aside or in-line as disclosed herein. The network may be communicatively coupled to an external network (e.g.102), such as the internet, by one or more routers (e.g.112). In some embodiments, the functionality and/or operations of the sensor180, as described herein, may be optionally installed in or added to an existing network element (e.g., to a client130-138, to a switch122-126, etc.); for example such functionality may be installed by downloading or otherwise modifying the software and/or firmware that is run by an existing network element. At504, the method500includes determining, e.g., automatically and by the sensor180, a network address of a router (e.g.,112) of the network (e.g.,190). The network address of the router may be determined using the techniques shown and described herein in reference toFIGS.1and2, for example. At506, the method500includes detecting, e.g., automatically and by the sensor, the network address of a client in the network. 
The network address of the client may be detected using the techniques shown and described herein in reference toFIGS.1and2, for example. At508, the method500includes assessing, e.g., automatically and by the sensor, that the client in the network is actively communicating on the network. Whether the client is actively communicating on the network may be assessed using the techniques shown and described herein in reference toFIG.2, for example. At510, the method500includes communicating, by the sensor, with the network address of the router. The sensor may communicate on the network by utilizing the address of the router with any of a variety of protocols, e.g., the MAC sublayer of the data link layer. At512, the method500includes participating, by the sensor, in communications on the network, which may include emulating the network address of a network element, such as a client, and using the network address of the router. For example, the sensor may acquire communications from the router using the network address of the router, e.g., by sending an acknowledgement packet to the router's address using the address of the emulated client as the packet's origination address. According to various embodiments, the sensor may actively or passively participate in communications on the network. An example of passively participating in communications on the network includes monitoring communications on the network. According to this example, the device180may acquire any network communications that it has access to, which may depend upon its installation configuration. For example, the device180may acquire, analyze, and/or store, and also pass through such communications if the device180is installed in-line, which allows the device180to produce, report, and/or send information about such communications, (e.g., to an external server), without disrupting communications on the network. 
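Steps 504 through 512 of method500can be condensed into a single high-level sketch. Every field name here is illustrative, the router test is delegated to a caller-supplied `is_external` predicate, and "most active client" is reduced to a simple packet count for brevity.

```python
def discover_and_participate(packets, is_external):
    """Sketch of method 500: scan observed packets to find a router MAC
    (step 504) and the most active client (steps 506/508), then return the
    addressing plan for participating in communications (steps 510/512)."""
    router_mac = None
    activity = {}
    for pkt in packets:
        if is_external(pkt["dst_ip"]):
            # Externally bound traffic reveals the upstream router's MAC.
            router_mac = pkt["dst_mac"]
        activity[pkt["src_ip"]] = activity.get(pkt["src_ip"], 0) + 1
    if router_mac is None or not activity:
        return None  # nothing observed yet; keep listening
    client_ip = max(activity, key=activity.get)  # most active client
    # Frames are addressed to the router and sourced as the emulated client.
    return {"next_hop_mac": router_mac, "emulated_src_ip": client_ip}
```

In practice this loop would run continuously over live traffic, with the candidate-list and pruning machinery keeping the selection current as network conditions change.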
Another example of passively participating in communications on the network includes the device180receiving communications from a server that is on an external network (e.g.102). Such communications may include, for example, instructions, configuration information, code updates, and/or command and control data. An example of actively participating in communications on the network includes sending data, such as data acquired or produced from passively monitoring network communications, to an external server or the like. Another example of actively participating in communications on the network includes filtering content on the network. According to this example, a device180installed in-line between, for example, a client and a router for the network can intercept, without passing through, communications from the external network to the client and/or communications from the client to the external network. Any or all such communications can be filtered. Communications can be filtered based on origination, destination, and/or content. Yet another example of actively participating in communications on the network includes modifying content on the network. According to this example, a device180installed in-line between a client and a router for the network can intercept, modify, and pass on communications from the external network to the client and/or communications from the client to the external network. Modifications include changing origination identifications, destination identifications, and/or payload data. Any or all such communications can be modified. Communications can be modified based on origination, destination, and/or content. Yet another example of actively participating in communications on the network includes inserting content on the network. According to this example, a device180can send communications to any downstream network element, e.g., client, to a router, or to any upstream element or device, e.g., an element in an external network. 
The communications can utilize an emulated address of a client on the network, as disclosed in detail herein, for an origination address. Any type of content may be inserted, including, for example, excessive content intended to drown out other communications. Certain embodiments can be performed using a computer program or set of programs. The computer programs can exist in a variety of forms both active and inactive. For example, the computer programs can exist as software program(s) comprised of program instructions in source code, object code, executable code or other formats; firmware program(s), or hardware description language (HDL) files. Any of the above can be embodied on a transitory or non-transitory computer readable medium, which include storage devices and signals, in compressed or uncompressed form. Exemplary computer readable storage devices include conventional computer system RAM (random access memory), ROM (read-only memory), EPROM (erasable, programmable ROM), EEPROM (electrically erasable, programmable ROM), and magnetic or optical disks or tapes. As used herein, the terms “A or B” and “A and/or B” are intended to encompass A, B, or {A and B}. Further, the terms “A, B, or C” and “A, B, and/or C” are intended to encompass single items, pairs of items, or all items, that is, all of: A, B, C, {A and B}, {A and C}, {B and C}, and {A and B and C}. The term “or” as used herein means “and/or.” As used herein, language such as “at least one of X, Y, and Z,” “at least one of X, Y, or Z,” “at least one or more of X, Y, and Z,” “at least one or more of X, Y, or Z,” “at least one or more of X, Y, and/or Z,” or “at least one of X, Y, and/or Z,” is intended to be inclusive of both a single item (e.g., just X, or just Y, or just Z) and multiple items (e.g., {X and Y}, {X and Z}, {Y and Z}, or {X, Y, and Z}). 
The phrase “at least one of” and similar phrases are not intended to convey a requirement that each possible item must be present, although each possible item may be present. While the invention has been described with reference to the exemplary embodiments thereof, those skilled in the art will be able to make various modifications to the described embodiments without departing from the true spirit and scope. The terms and descriptions used herein are set forth by way of illustration only and are not meant as limitations. In particular, although the method has been described by examples, the steps of the method can be performed in a different order than illustrated or simultaneously. Those skilled in the art will recognize that these and other variations are possible within the spirit and scope as defined in the following claims and their equivalents.
11863397 | DETAILED DESCRIPTION In order to make the objectives, technical solutions, and advantages of the present disclosure clearer, the embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings. In various embodiments of the present disclosure, numerous technical details are set forth in order to provide the reader with a better understanding of the present disclosure. However, it will be apparent to those skilled in the art that the technical solutions claimed in the present disclosure may be implemented without these technical details, and with various changes and modifications based on the following embodiments. A first embodiment of the present disclosure relates to a traffic prediction method. The core of this embodiment lies in: acquiring historical traffic data and preprocessing the historical traffic data; performing empirical mode decomposition on the preprocessed historical traffic data to obtain a plurality of component series; using a time series prediction model to predict each of the component series respectively to obtain a component prediction result of each of the component series; and accumulating all component prediction results to obtain a traffic prediction result, so that accuracy of the traffic prediction is improved. The implementation details of the traffic prediction method of this embodiment will be specifically described below; the following contents are provided only for convenience of understanding and are not necessary for implementing this solution. The traffic prediction method in this embodiment is shown in FIG. 1. In operation 101, traffic data of a first preset time period in a historical period is acquired, and the traffic data is preprocessed. 
Specifically, in an actual application scenario, before predicting the traffic of a plurality of cells of a same base station for a certain time period in the future, traffic data of the cells in the historical period collected by an operator is first acquired. In an exemplary embodiment, traffic data of each day in the first preset time period in the historical period is acquired, and the traffic data of each day is defined as the traffic value at the time, within the 24 hours of that day, when the utilization rate of physical resource blocks (PRB) is the largest. It should be noted that acquiring the traffic data of each day in the first preset time period in the historical period is not limited to acquiring the traffic value at the time of maximum PRB utilization within each day; other methods may also be used to determine the traffic data of each day. For example, an average traffic within 24 hours of each day may be taken as the traffic data of each day, or a maximum traffic within 24 hours of each day may be taken as the traffic data of each day, etc., which will not be described in detail. Next, since the traffic data is stored in a plurality of files, it is necessary to merge the traffic data into one file, and it is also necessary to process missing values and abnormal values in the traffic data. In an exemplary embodiment, when the traffic data is merged into one file, the traffic data needs to be filtered to obtain the field attribute information required for traffic prediction. In this embodiment, the field attribute information required for traffic prediction includes: a traffic data size ([LTE] DL CELL PDCP SDU Volume (Kbyte)), a traffic collection date (Date), a traffic collection time (Time), a collection area number (Cell), and a base station number (EnodeB), etc. 
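The per-day reduction described above (keep, for each cell and date, the traffic value recorded at the moment of largest PRB utilization) can be sketched as follows. The record field names (`prb_util`, `traffic_kbyte`, etc.) are illustrative, not the operator's actual schema.

```python
# For each (EnodeB, Cell, Date) key, keep the traffic value of the record with
# the largest PRB utilization; that value becomes the day's traffic data.
def daily_peak_traffic(records):
    best = {}
    for rec in records:
        key = (rec["EnodeB"], rec["Cell"], rec["Date"])
        if key not in best or rec["prb_util"] > best[key]["prb_util"]:
            best[key] = rec
    return {key: rec["traffic_kbyte"] for key, rec in best.items()}
```

The (EnodeB, Cell) pair serves as the unique cell identifier, as noted in the text.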
Herein, the two variables EnodeB and Cell may be used to identify a cell (the two variables together determine a uniquely identified cell), and a num column may also be added to number each determined cell. In an exemplary embodiment, processing the missing values in this embodiment includes removing the missing values. Missing values occur in the following situation: numbered cells (usually cells with fields of null values) do not have corresponding collected traffic data size values, traffic collection dates, or traffic collection times after the data is unified into one file. In the preprocessing process, the data of the above-mentioned cells may be directly removed to prevent the missing values from affecting prediction results and to improve accuracy of the prediction. In an exemplary embodiment, the abnormal values are processed by a boxplot method in this embodiment. The abnormal values refer to outliers in the traffic data that deviate greatly in size from normal values. Assuming that a certain region collects traffic data for 211 days in total, the outliers of the traffic data are processed by calculating quartile values of the traffic data. The abnormal data in the historical traffic data is processed by the boxplot method, so that the influence of the abnormal data on the subsequent prediction for the component series by the time series prediction model is reduced, thereby improving accuracy of the traffic prediction. As shown in FIG. 2, processing the abnormal data in the historical traffic data by the boxplot method specifically includes the following operations. In operation 1011, a data series X(t) is input (X(t) is a time series of traffic values, where time t represents a certain day), taking t=0. In operation 1012, an upper quartile Q3, a lower quartile Q1, and an interquartile range IQR are calculated (IQR = Q3 − Q1). 
In operation 1013, a maximum value Top and a minimum value Low are calculated, and the calculation formulas are: Top = Q3 + 1.1 * IQR and Low = Q1 − 1.1 * IQR. In operation 1014, whether the data value X(t) is greater than the maximum value is determined. In response to X(t) being greater than the maximum value, operation 1015 is performed; in response to X(t) being less than or equal to the maximum value, operation 1016 is performed. In operation 1015, X(t) is updated, and the following update formula may be used: X(t) = (Top + X(t)) / 2. In operation 1016, whether the data value X(t) is less than or equal to the minimum value is determined. In response to X(t) being less than or equal to the minimum value, operation 1017 is performed; in response to X(t) being greater than the minimum value, operation 1018 is performed. In operation 1017, X(t) is updated, and the following update formula may be used: X(t) = (Low + X(t)) / 2. In operation 1018, whether t is equal to N (in this example, N = 211) is determined. In response to t not being equal to N, operation 1019 is performed; in response to t being equal to N, the processing of the abnormal values of the data series X(t) is completed. In operation 1019, t = t + 1, and the next traffic value of the X(t) series is acquired for processing. The influence of outliers on the prediction result may be reduced by processing the outliers in the traffic data through the above operations. In operation 102, empirical mode decomposition is performed on the preprocessed traffic data to obtain the plurality of component series. Specifically, empirical mode decomposition (EMD) is performed on the preprocessed historical traffic data to obtain a plurality of intrinsic mode function (IMF) components and a residual component. A single reconstruction is performed on the series corresponding to each component. 
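Operations 1011 to 1019 can be sketched as a single vectorized routine (the quartiles are computed once, as in operation 1012, so the element-wise loop can be replaced by array masks). Note the fence multiplier 1.1 follows the formulas above rather than the conventional 1.5.

```python
import numpy as np

# Boxplot-style outlier handling from operations 1011-1019: values beyond the
# fences Top/Low are pulled halfway back toward the fence, i.e.
# X(t) = (Top + X(t)) / 2 or X(t) = (Low + X(t)) / 2.
def smooth_outliers(x, k=1.1):
    x = np.asarray(x, dtype=float).copy()
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1                 # interquartile range, operation 1012
    top = q3 + k * iqr            # maximum value Top, operation 1013
    low = q1 - k * iqr            # minimum value Low, operation 1013
    high = x > top
    below = x < low
    x[high] = (top + x[high]) / 2.0
    x[below] = (low + x[below]) / 2.0
    return x
```

Pulling outliers toward the fence, rather than clipping them exactly to it, preserves some of the original ordering of extreme values while damping their influence on the fit.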
In an exemplary embodiment, EMD decomposition is performed on each cell by traversing the cells in ascending order according to the num field in the data preprocessed in operation 101 in this embodiment. As shown in FIG. 3, performing empirical mode decomposition on the traffic data of a cell mainly includes the following operations. In operation 1021, the preprocessed traffic data X(t) of the cell is acquired and input (X(t) is a time series of traffic values, where time t represents a certain date). In operation 1022, a maximum value envelope Xmax(t), a minimum value envelope Xmin(t), and an envelope mean value m(t) are calculated. Specifically, the local maximum values and local minimum values of X(t) are marked, and then cubic spline curve fitting is used to obtain the maximum value envelope Xmax(t) through the local maximum values and the minimum value envelope Xmin(t) through the local minimum values, and the mean value of the upper and lower envelopes m(t) = (Xmax(t) + Xmin(t)) / 2 is calculated. In operation 1023, the component series h(t) of a suspected IMF component is determined, where h(t) = X(t) − m(t). In operation 1024, whether h(t) satisfies the IMF conditions is determined. In response to h(t) satisfying the IMF conditions, operation 1026 is performed; in response to h(t) not satisfying the IMF conditions, operation 1025 is performed. Specifically, the IMF conditions include: condition (1): in the entire dataset, the number of extrema and the number of zero crossings are equal, or differ by at most one; condition (2): at any time point, the average value of the upper envelope defined by the local maximum values and the lower envelope defined by the local minimum values is zero. In operation 1025, X(t) is updated, where X(t) = h(t). In operation 1026, h(t) is taken as an IMF component. 
Specifically, if h(t) satisfies the conditions in operation 1024, h(t) may be used as an IMF component and numbered as IMFi, where i ∈ {0, 1, 2, . . . , n}, and the number subscript is updated according to i = i + 1. In operation 1027, a residual component r(t) is calculated, where r(t) = X(t) − h(t). In operation 1028, whether r(t) has a monotonous trend is determined. In response to r(t) having a monotonous trend, the EMD decomposition of the traffic data of the cell is completed; in response to r(t) not having a monotonous trend, operation 1029 is performed. In operation 1029, X(t) is updated according to X(t) = r(t). The EMD decomposition is performed through the above operations, and finally n component series are obtained: X(t) = IMF0 + IMF1 + IMF2 + IMF3 + IMF4 + . . . + IMFn-2 + r(t). In operation 103, a time series prediction model is used to fit the plurality of component series, and the fitted time series prediction model (i.e., the time series prediction model obtained after fitting the plurality of component series) is used to obtain a plurality of component prediction results for the second preset time period. Specifically, the time series prediction model is used to fit each of the component series to obtain the fitted time series prediction model; then, the component prediction result of each of the component series for the second preset time period is determined according to the fitted time series prediction model. Herein, the time series prediction model may be one of a prophet model, an autoregressive model, a moving average model, or an autoregressive moving average model. In an exemplary embodiment, the prophet model is used to fit each of the component series. In operation 104, all the component prediction results are accumulated to obtain a traffic prediction result for the second preset time period. 
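Operations 1021 to 1029 amount to the standard EMD sifting loop. The following is a simplified, self-contained sketch: fixed iteration caps, a minimum-extrema guard, and a small mean-envelope threshold stand in for the exact IMF tests above, and envelope end effects are not handled carefully, so this is an illustration of the structure rather than a production EMD.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def _sift(x, t, max_iter=50):
    # Repeat operations 1022-1025: envelope mean m(t), candidate h(t) = X(t) - m(t).
    h = x.copy()
    for _ in range(max_iter):
        maxima = argrelextrema(h, np.greater)[0]
        minima = argrelextrema(h, np.less)[0]
        if len(maxima) < 4 or len(minima) < 4:
            break
        upper = CubicSpline(t[maxima], h[maxima])(t)   # Xmax(t)
        lower = CubicSpline(t[minima], h[minima])(t)   # Xmin(t)
        m = (upper + lower) / 2.0                      # m(t)
        if np.mean(np.abs(m)) < 1e-3 * np.mean(np.abs(h)):
            break                                      # crude proxy for the IMF conditions
        h = h - m
    return h

def emd(x, t, max_imfs=8):
    # Operations 1026-1029: extract IMFs until the residual is (nearly) monotonous.
    imfs, r = [], x.copy()
    for _ in range(max_imfs):
        maxima = argrelextrema(r, np.greater)[0]
        minima = argrelextrema(r, np.less)[0]
        if len(maxima) + len(minima) < 4:              # residual has a monotonous trend
            break
        imf = _sift(r, t)
        imfs.append(imf)
        r = r - imf                                    # r(t) = X(t) - h(t)
    return imfs, r
```

By construction the components reconstruct the input exactly: the sum of the extracted IMFs plus the residual equals X(t), matching the decomposition formula above.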
Specifically, since the component prediction results are the prediction results of the component series for the second preset time period (where the component series are obtained by performing empirical mode decomposition on the historical traffic data), the result obtained after adding up the component prediction results is the traffic prediction result for the second preset time period. Compared with the prior art, the embodiment of the present disclosure performs empirical mode decomposition on the preprocessed historical traffic data to obtain the plurality of component series; uses the time series prediction model to fit the plurality of component series, and uses the fitted time series prediction model to obtain the plurality of component prediction results for the second preset time period; and accumulates all the component prediction results to obtain the traffic prediction result. The empirical mode decomposition method decomposes signals according to the time scale characteristics of the data itself, which does not require any basis functions to be set in advance and has obvious advantages in processing non-stationary and nonlinear data. Therefore, when the historical data is decomposed into relatively stable component series by means of empirical mode decomposition, the influence of large fluctuations in the data on the prediction accuracy is reduced, and accuracy of the traffic prediction is improved. A second embodiment of the present disclosure relates to a traffic prediction method. 
The second embodiment is substantially the same as the first embodiment, except that, in the operation of using a time series prediction model to fit a plurality of component series and using the fitted time series prediction model to obtain a plurality of component prediction results for a second preset time period, the second embodiment decomposes each of the component series into a sum of a trend term, a seasonal term, and a noise term; determines a fitted trend term and a fitted seasonal term respectively, and then uses the fitted trend term and the fitted seasonal term to obtain a trend term prediction result and a seasonal term prediction result for the second preset time period; and accumulates the trend term prediction result, the seasonal term prediction result, and the noise term to obtain the component prediction result of each of the component series for the second preset time period. The traffic prediction method in this embodiment is shown in FIG. 3, and specifically includes the following operations. In operation 201, traffic data of a first preset time period in a historical period is acquired, and the traffic data is preprocessed. In operation 202, empirical mode decomposition is performed on the preprocessed traffic data to obtain a plurality of component series. Operations 201 to 202 are substantially the same as operations 101 to 102 in the first embodiment, respectively, and will not be described here to avoid repetition. In operation 203, each of the component series is decomposed into the sum of the trend term, the seasonal term, and the noise term. In this embodiment, a prophet model is used to predict each of the component series respectively, and the component prediction results of each of the component series are obtained. 
Specifically, a seasonal and trend decomposition using Loess (STL) is first performed through locally weighted regression to decompose each of the component series into the sum of the trend term, the seasonal term, and the noise term, that is: y(t) = g(t) + s(t) + ϵ_t, where t represents time, y(t) represents the component series, g(t) represents the trend term, s(t) represents the seasonal term, and ϵ_t represents the noise term. It should be noted that the practice of using the prophet model for component series prediction in this operation is not necessary. In other alternative implementations, other time series models may also be used to predict each of the component series, which will not be repeated here. In operation 204, the fitted trend term and the fitted seasonal term are determined respectively, and then the fitted trend term and the fitted seasonal term are used to obtain the trend term prediction result and the seasonal term prediction result for the second preset time period. Specifically, the fitted trend term and the fitted seasonal term are respectively determined according to the known component series and preset fitting functions, and then the fitted trend term and the fitted seasonal term are used to obtain the trend term prediction result and the seasonal term prediction result for the second preset time period. In an exemplary embodiment, the fitting function of the trend term g(t) is: g(t) = (k + a(t)^T δ) t + (m + a(t)^T γ), where t represents time, (k + a(t)^T δ) represents the data traffic growth rate, m represents an offset parameter, δ represents a growth rate change vector, and γ_j is set to −s_j δ_j to make the function continuous; s_j (j = 1, . . . , S) are the times of the S change points of the prophet model; a(t) is an indicator vector with a_j(t) = 1 if t ≥ s_j and a_j(t) = 0 if t < s_j, and T is the transpose operator. 
The fitting function of the trend term incorporates trend changes into the growth model by explicitly defining change points at which the growth rate is allowed to change. Specifically, the fitting function of the trend term is derived by the following method: assume that the component series y(t) has S change points at times s_j, where j = 1, . . . , S. A growth rate change vector δ ∈ R^S is defined, where δ_j is the rate change at time s_j. The growth rate at any time t is the sum of a base rate k and all rate changes up to that time point: k + Σ_{j: t > s_j} δ_j. A vector a(t) ∈ {0, 1}^S is defined by a_j(t) = 1 if t ≥ s_j and a_j(t) = 0 if t < s_j; the growth rate at any time t is then abbreviated as k + a(t)^T δ. When the rate k is adjusted, the offset m must also be adjusted to connect the segment endpoints. The correct adjustment value at change point j is calculated as: γ_j = (s_j − m − Σ_{l<j} γ_l) (1 − (k + Σ_{l<j} δ_l) / (k + Σ_{l≤j} δ_l)). In an exemplary embodiment, the fitting function of the seasonal term s(t) is a Fourier series, specifically: s(t) = [cos(2π(1)t/365.25), . . . , sin(2π(10)t/365.25)] β, where β ~ N(0, σ²), that is, β follows a normal distribution whose mathematical expectation is 0 and whose variance is σ². The noise term ϵ_t represents changes that the model is unable to capture, and it is assumed that the noise term follows a normal distribution. In operation 205, the trend term prediction result, the seasonal term prediction result, and the noise term are accumulated to obtain the component prediction result of each of the component series. Specifically, the trend term prediction result ĝ(t) and the seasonal term prediction result ŝ(t) are obtained through operation 204. 
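Under the decomposition above, a stripped-down fit can be obtained with ordinary least squares over a piecewise-linear trend (hinge terms at the change points s_j, whose coefficients play the role of the rate changes δ_j) plus Fourier seasonal features. This is only a sketch of the model structure: the actual prophet model places priors on δ and β and fits by MAP estimation rather than plain least squares, and the change points, period, and Fourier order below are illustrative assumptions.

```python
import numpy as np

def fourier_features(t, period, order):
    # Columns cos(2*pi*k*t/period), sin(2*pi*k*t/period) for k = 1..order,
    # mirroring the seasonal term s(t) above.
    cols = []
    for k in range(1, order + 1):
        cols.append(np.cos(2 * np.pi * k * t / period))
        cols.append(np.sin(2 * np.pi * k * t / period))
    return np.column_stack(cols)

def design_matrix(t, changepoints, period, order):
    # Trend part: offset, base rate k*t, and hinge terms (t - s_j)_+ that let the
    # slope change at each change point s_j.
    hinges = np.maximum(0.0, t[:, None] - np.asarray(changepoints)[None, :])
    return np.column_stack([np.ones_like(t), t, hinges,
                            fourier_features(t, period, order)])

def fit_predict(y, t_train, t_future, changepoints, period=7.0, order=3):
    X = design_matrix(t_train, changepoints, period, order)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return design_matrix(t_future, changepoints, period, order) @ beta
```

A weekly period (7 days) is used here as an assumed example; the seasonal term in the text uses an annual period of 365.25 days with Fourier order 10.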
A random value following the normal distribution is taken as the noise term ϵ_t, and ĝ(t), ŝ(t), and ϵ_t are accumulated to obtain the component prediction result of a single component series, namely: ŷ(t) = ĝ(t) + ŝ(t) + ϵ_t, where ŷ(t) represents the component prediction result, ĝ(t) represents the trend term prediction result, ŝ(t) represents the seasonal term prediction result, and ϵ_t represents the noise term. In operation 206, all the component prediction results are accumulated to obtain the traffic prediction result for the second preset time period. Operation 206 is substantially the same as operation 104 in the first embodiment and will not be described here to avoid repetition. It should be noted that when the traffic data in the historical period covers fewer holidays (for example, when the number of days of traffic data is less than the number of days in a year, so that the number of covered holidays is less than the number of holidays in a year), a better prediction effect may be obtained by adopting this embodiment. Compared with the first embodiment, when this embodiment uses the time series model to process each of the component series respectively and obtain the component prediction result of each of the component series, a prophet model is specifically used to decompose each of the component series into the sum of the trend term, the seasonal term, and the noise term for fitting, and then the fitted prophet model is used to obtain the traffic prediction result. By predicting the component series using the prophet model, it is possible to predict the traffic according to the trends of both periodic changes and aperiodic changes of the traffic data at the same time, which improves accuracy of the traffic prediction. A third embodiment of the present disclosure relates to a traffic prediction method. 
The third embodiment is substantially the same as the second embodiment, except that, instead of decomposing each of the component series into a sum of a trend term, a seasonal term, and a noise term, the third embodiment decomposes each of the component series into a sum of the trend term, the seasonal term, a holiday term, and the noise term. In the subsequent operations, a fitted trend term, a fitted seasonal term, and a fitted holiday term are determined respectively, and then the fitted trend term, the fitted seasonal term, and the fitted holiday term are used to obtain the prediction results of the trend term, the seasonal term, and the holiday term for a second preset time period. The trend term prediction result, the seasonal term prediction result, the holiday term prediction result, and the noise term are accumulated to obtain the component prediction result of each of the component series for the second preset time period. The traffic prediction method in this embodiment is shown in FIG. 4, and specifically includes the following operations. In operation 301, traffic data of a first preset time period in a historical period is acquired, and the traffic data is preprocessed. In operation 302, empirical mode decomposition is performed on the preprocessed historical traffic data to obtain a plurality of component series. Operations 301 to 302 are substantially the same as operations 101 to 102 in the first embodiment, respectively, and will not be described here to avoid repetition. In operation 303, each of the component series is decomposed into the sum of the trend term, the seasonal term, the holiday term, and the noise term. In this embodiment, the prophet model is used to predict each of the component series respectively to obtain the component prediction result of each of the component series. 
Specifically, each of the component series is decomposed through STL decomposition into the sum of the trend term, the seasonal term, the holiday term, and the noise term, that is: y(t) = g(t) + s(t) + h(t) + ϵ_t, where t represents time, y(t) represents the component series, g(t) represents the trend term, s(t) represents the seasonal term, h(t) represents the holiday term, and ϵ_t represents the noise term. In operation 304, the fitted trend term, the fitted seasonal term, and the fitted holiday term are determined respectively, and then the fitted trend term, the fitted seasonal term, and the fitted holiday term are used to obtain the trend term prediction result, the seasonal term prediction result, and the holiday term prediction result for the second preset time period. Specifically, the fitting functions of the trend term, the seasonal term, and the noise term are similar to those in operation 204 of the second embodiment and are not repeated here. In an exemplary embodiment, the fitting function of the holiday term h(t) is as follows: h(t) = Z(t) κ, where κ follows a normal distribution, Z(t) = [1(t ∈ D_1), . . . , 1(t ∈ D_L)], and, for the L-th holiday, D_L represents the time period during which that holiday has an impact. In operation 305, the trend term prediction result, the seasonal term prediction result, the holiday term prediction result, and the noise term are accumulated to obtain the component prediction result of each of the component series. 
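The indicator vector Z(t) above can be built as a simple 0/1 matrix over each holiday's impact window. Representing the windows as (start, end) day indices is an assumption for illustration; the text only requires that D_l be the period during which holiday l has an impact.

```python
import numpy as np

# Z[i, l] = 1 when day t[i] falls inside the impact window D_l of holiday l,
# matching Z(t) = [1(t in D_1), ..., 1(t in D_L)] above.
def holiday_matrix(t, windows):
    t = np.asarray(t)
    Z = np.zeros((len(t), len(windows)))
    for l, (start, end) in enumerate(windows):
        Z[:, l] = (t >= start) & (t <= end)
    return Z
```

The holiday term is then h(t) = Z(t) κ, i.e. `Z @ kappa` for a per-holiday coefficient vector κ.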
Specifically, the trend term prediction result ĝ(t), the seasonal term prediction result ŝ(t), and the holiday term prediction result ĥ(t) are obtained through operation 304; a random value following a normal distribution may be taken as the noise term ϵ_t, and ĝ(t), ŝ(t), ĥ(t), and ϵ_t are accumulated to obtain the component prediction result of a single component series, namely: ŷ(t) = ĝ(t) + ŝ(t) + ĥ(t) + ϵ_t, where ŷ(t) represents the component prediction result, ĝ(t) represents the trend term prediction result, ŝ(t) represents the seasonal term prediction result, ĥ(t) represents the holiday term prediction result, and ϵ_t represents the noise term. In operation 306, all the component prediction results are accumulated to obtain a traffic prediction result. Operation 306 is substantially the same as operation 104 in the first embodiment and will not be described here to avoid repetition. It should be noted that when the traffic data in the historical period covers more holidays (for example, when the number of days of traffic data is more than the number of days in a year, so that the number of covered holidays is more than the number of holidays in a year), a better prediction effect may be obtained by adopting this embodiment. Compared with the second embodiment, when the prophet model is used to fit the component series, this embodiment decomposes each of the component series into the sum of the trend term, the seasonal term, the holiday term, and the noise term for fitting, and obtains the traffic prediction result according to the fitted prophet model. By retaining the holiday term of the prophet model, the traffic prediction may be carried out according to the trends of periodic and aperiodic changes of the traffic data as well as the impact of holidays on the traffic data, which improves accuracy of the traffic prediction. A fourth embodiment of the present disclosure relates to a traffic prediction method. 
The fourth embodiment is substantially the same as the first embodiment, except that, according to the fourth embodiment, the operation of fitting a plurality of component series by using a time series prediction model includes: dividing all component series into a component training set and a component test set according to a preset step size; using the time series prediction model to fit the component series of the component training set; and determining a prediction error of the fitted time series prediction model according to the component test set. The traffic prediction method in this embodiment is shown in FIG. 5, and specifically includes the following operations. In operation 401, traffic data of a first preset time period in a historical period is acquired, and the traffic data is preprocessed. In operation 402, empirical mode decomposition is performed on the preprocessed traffic data to obtain the plurality of component series. Operations 401 to 402 are substantially the same as operations 101 to 102 in the first embodiment, respectively, and will not be described here to avoid repetition. In operation 403, all the component series are divided into the component training set and the component test set according to the preset step size. In an exemplary embodiment, dividing all the component series into the component training set and the component test set according to the preset step size in this embodiment may specifically include: taking the duration of the second preset time period as the preset step size; taking, as the component training set, the set of data at time points outside the time period of the preset step size before the current time in all the component series; and taking, as the component test set, the set of data at time points within the time period of the preset step size before the current time in all the component series. 
In a practical application scenario, it is assumed that traffic data for the next 30 days of a plurality of cells managed by a base station is to be predicted, and EMD decomposition is performed on the collected historical traffic data of the cells over the past 210 days. In this operation, since the target prediction step size is 30 days, the component series of all the cells in the last 30 days of the past 210 days are used as the component test set, and the component series of all the cells in the first 180 days (the remaining component series) are used as the component training set. In operation 404, the time series prediction model is used to fit the component series of the component training set. This operation is roughly the same as operation 103 in the first embodiment, except that, in operation 103 of the first embodiment, the time series prediction model is used to fit each of the component series, while in this operation, the time series prediction model is used to fit each of the component series in the component training set to obtain a fitted time series prediction model. In operation 405, the prediction error of the fitted time series prediction model is determined according to the component test set. In an exemplary embodiment, operation 403 is used to divide the component series to obtain the component training set and the component test set. At this time, a traffic prediction result for the historical time period immediately before the current time is determined according to the fitted time series prediction model, where the duration of the historical time period equals the preset step size. The prediction error of the fitted time series prediction model is determined according to the component test set and the traffic prediction result for the historical time period. 
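The split in operation 403 is a simple tail holdout, which can be sketched as follows (a 30-day step over a 210-day series, as in the example above):

```python
import numpy as np

def split_by_step(series, step):
    # Train on everything before the final `step` points; test on the tail,
    # whose length equals the target prediction step (e.g. 30 days).
    series = np.asarray(series)
    return series[:-step], series[-step:]
```

In practice this split would be applied to each component series of each cell before fitting.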
In an exemplary embodiment, the prediction performance is evaluated by calculating a mean absolute percentage error (MAPE): MAPE = (1/N) Σ_t |ŷ_t − y_t| / y_t, where ŷ_t is the traffic prediction result for the historical time period, y_t is the corresponding value of the component test set, and N is the number of time points in the test set. In operation 406, a plurality of component prediction results for the second preset time period are obtained by using the fitted time series prediction model, and all the component prediction results are accumulated to obtain a traffic prediction result for the second preset time period. Specifically, the fitted time series prediction model obtained in operation 404 is used to perform prediction for the second preset time period and obtain the plurality of component prediction results. The remaining operations are substantially the same as operation 104 in the first embodiment and will not be described here to avoid repetition. Compared with the first embodiment, this embodiment divides the plurality of component series obtained by EMD decomposition into the component training set and the component test set. The component training set is used to determine the fitted time series prediction model, and the component test set is used to evaluate the prediction accuracy of the fitted time series prediction model. The prediction accuracy of the time series prediction model is evaluated by cross-validation, so that the model may be retrained when its prediction accuracy is low, to ensure that the prediction accuracy reaches an ideal state. The operations of the above various methods are divided only for the purpose of clear description; during implementation, they may be combined into one operation, or some operations may be split into a plurality of operations. As long as the same logical relationship is included, they are all within the protection scope of the present disclosure. 
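The MAPE evaluation in operation 405, averaged over the N test points, is a one-liner:

```python
import numpy as np

def mape(y_true, y_pred):
    # Mean absolute percentage error: mean over t of |yhat_t - y_t| / y_t.
    # Assumes y_true contains no zeros (traffic volumes are positive).
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.mean(np.abs(y_pred - y_true) / y_true))
```

A retraining rule could then compare this value against a chosen accuracy threshold, as the cross-validation discussion above suggests.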
Adding insignificant modifications to an algorithm or process, or introducing insignificant designs, without changing the core design of the algorithm and process, is within the protection scope of the present disclosure. A fifth embodiment of the present disclosure relates to a traffic prediction device. As shown in FIG. 6, the traffic prediction device includes at least one processor 501 and a memory 502 communicatively connected to the at least one processor 501, where the memory 502 stores instructions executable by the at least one processor 501, and the instructions, when executed by the at least one processor 501, cause the at least one processor 501 to perform the foregoing traffic prediction method embodiments. The memory 502 and the processor 501 are connected by a bus, and the bus may include any number of interconnected buses and bridges; the bus connects various circuits of the one or more processors 501 and the memory 502 together. The bus may also connect together various other circuits, such as peripherals, voltage regulators, and power management circuits, which are well known in the art and therefore will not be described further herein. A bus interface provides an interface between the bus and a transceiver. The transceiver may be a single element or multiple elements, such as multiple receivers and transmitters, and provides a unit for communicating with various other devices over a transmission medium. Data processed by the processor 501 is transmitted over a wireless medium through an antenna. In an exemplary embodiment, the antenna also receives data and transmits the data to the processor 501. The processor 501 is configured to manage the bus and general processing, and may also provide various functions, including timing, peripheral interfaces, voltage regulation, power management, and other control functions. The memory 502 may be used to store data used by the processor 501 when performing operations. 
Some embodiments of the present disclosure further provide a computer-readable storage medium storing a computer program that, when executed by a processor, causes the processor to perform the traffic prediction method of the foregoing embodiments. That is, those skilled in the art would understand that all or part of the operations of the methods in the above embodiments may be completed by instructing relevant hardware through a program. The program is stored in a storage medium and includes several instructions to cause a device (which may be a single-chip microcomputer, a chip, etc.) or a processor to perform all or part of the operations of the methods described in the various embodiments of the present disclosure. The aforementioned storage medium includes: a USB flash drive, a mobile hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media that can store program code. Those of ordinary skill in the art can understand that the above-mentioned embodiments are specific examples for realizing the present disclosure, and in practical applications, various changes can be made in form and details without departing from the spirit and scope of the present disclosure.
11863398 | DETAILED DESCRIPTION Overview Aspects of the disclosure are directed to a system for improving the deployment of machine learning models across nodes of a distributed network, by regulating the flow of data used as training data to the system from one or more data sources in communication with each network node. A data management system generates policies for each data stream of a network node of a distributed network. The policy specifies how each network node is to regulate each stream of input data from the node to a central platform implementing the data management system. Regulating a data stream can refer to altering characteristics, such as the type, volume, and rate, of data passed through a distributed network to the central platform. Before regulation, the data stream can be characterized at least by a rate, volume, or type of data present in the data stream. Initially, the transmission of the data stream over the distributed network may be subject to predetermined parameters, defined for example by a network node, by the central platform, and/or by another device configured to control the distributed network. For instance, a network node may be initially configured to transmit as much data as possible, limited by network bandwidth and/or the node's processing capacity to transmit data. Data can be provided to a computing platform for training one or more machine learning models on each of a variety of different network nodes. One problem with transmitting as much data as possible to the computing platform is that incremental increases in the amount of data provided may not result in corresponding improvements to a model trained according to the additional data.
The data management system according to aspects of the disclosure generates a policy in accordance with a plurality of objectives, such as improving the output quality of a model deployed on a network node, and lowering the operational cost, for example measured in network traffic bandwidth or processing cycles, to transmit and process the stream of data from the network node to the data management system. Network nodes can be physically separated over a large distance and across one or more interconnected networks, compounding the cost to transmit and process data streams of input data. The input data can be used to generate new training data, which can be used by a central computing platform for training or retraining a machine learning model deployed on a network node. As training a machine learning model is a computationally expensive task requiring large amounts of time and computational resources to process training data, a computing platform can train models to be deployed or redeployed on a network node. The data management system balances model output quality with operational cost to transmit additional data from the node to the system for training or retraining the deployed machine learning model. In other words, the data management system can provide just enough data to meet predetermined output quality thresholds, for example defined to provide a minimum level of quality of user interaction with a node implementing a deployed model. The system can be trained to identify characteristics of a data stream, such as a type, volume, and rate of data, which, when used for training a machine learning model, causes the system to generate a model with an accuracy at least meeting or exceeding predetermined output quality thresholds. The system balances relative performance gains from added training time to a model, with the operational cost for transmitting and processing the additional data from the network node to the platform. 
As an example, additional input data used to train the model may realize a narrow, but quantifiable, increase in performance in the retrained model. However, the additional input data may place a strain on a distributed network and inhibit the performance of the deployed model in other ways, for example measured in network latency, response time, etc. The system can provide a policy to the network node to regulate, for example, the type, volume, and rate at which data is transmitted to the system, balancing objectives such as the operational cost to transmit additional data to the platform with the model quality improvements that result from training on the additional data. For example, the policy may define certain time periods during which transmitted data is not to exceed a predetermined threshold, as a way to regulate the rate at which data is transmitted to the system from the network node. Other objectives can include objectives related to training the deployed model to perform a particular task. For example, additional objectives can include reducing or mitigating bias in input data received by the system and used in training or retraining the deployed model. Bias can be quantified by one or more statistical measures. Data sources in communication with the network node can be, for example, individual user computing devices, such as mobile phones or personal laptops; one or more servers; or any of a variety of computing devices, including wearable devices and other sensors, embedded systems, or other devices configured to communicate with the network node. The way in which the data source communicates with the network node can vary, for example, over a radio access network, a core network, or as part of an operational support system. As part of generating the policy, the data management system is configured to receive characteristics of the network node, which can include the type or types of networks connecting the system to the node.
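A minimal sketch of such a time-windowed rate rule, assuming hypothetical `RateRule` and `DataStreamPolicy` structures (none of these names come from the disclosure): each rule caps the transmission rate during a stated window of hours, and the node looks up the allowed rate for the current hour:

```python
from dataclasses import dataclass, field


@dataclass
class RateRule:
    """Hypothetical rule: cap transmission rate during an hour-of-day window."""
    start_hour: int
    end_hour: int
    max_bytes_per_sec: int


@dataclass
class DataStreamPolicy:
    stream_id: str
    default_rate: int  # bytes/sec when no rule window applies
    rules: list = field(default_factory=list)

    def allowed_rate(self, hour: int) -> int:
        """Return the transmission rate permitted at the given hour."""
        for rule in self.rules:
            if rule.start_hour <= hour < rule.end_hour:
                return min(self.default_rate, rule.max_bytes_per_sec)
        return self.default_rate


# Throttle stream A during peak hours (9:00-17:00), full rate otherwise.
policy = DataStreamPolicy("stream-A", default_rate=10_000_000,
                          rules=[RateRule(9, 17, 1_000_000)])
print(policy.allowed_rate(12))  # 1000000 (peak window)
print(policy.allowed_rate(3))   # 10000000 (off-peak)
```

In this sketch the node consults `allowed_rate` before each transmission interval; the actual rule encoding and enforcement mechanism would be defined by the data source regulator.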
A data management system as described herein can identify patterns that associate data streams having certain characteristics with higher levels of model performance as a result of training models on those data streams. Using these patterns, which for example can be learned using a machine learning model, the system can generate the policies specifying how the network node is to regulate data streams to the system, for example by adjusting the rate, volume, and/or type of data transmitted to the platform implementing the system. As an example, the data management system can be deployed on a computing platform in communication with various network nodes of a telecommunications network. Different nodes may have different models deployed on each node, for example for analyzing streams of telecommunication network data passing through each node, or for automating some processing task that receives telecommunication network data as input. The deployed models may be subject to minimum output quality thresholds, for example a minimum recall rate or a maximum tolerated false positive rate, such as when a deployed model is trained to generate an output classification based on ingested telecommunication network data. As a telecommunication network can be spread across many physical or virtual computing devices, the various models can be deployed on devices physically proximate to data sources of telecommunication network data. At least some data of each data stream is transmitted by a network node to the data management system, as one or more regulated streams of data. The data management system can be implemented on one or more computing devices, for example computing devices of a computing platform. The computing platform may be connected to the telecommunications network over a separate connection, and/or be part of the telecommunications network itself.
The regulated streams of data are ingested by the data management system and used to train or retrain, for example to update weights of, machine learning models deployed on the network nodes. After a period of time, for example a predetermined period of time and/or in response to a request from a network node, the data management system can provide an updated model to the network node. The data management system can generate policies for regulating data streams across the various telecommunication network nodes. The network nodes can be configured to regulate data transmitted to the data management system, according to a received policy. Data can be transmitted more efficiently, for example less data or during less network-congested periods of time, without substantially reducing the model performance of models trained on the regulated data by the data management system. In the example of a telecommunications network that may have many different nodes receiving data from a variety of different smaller and heterogeneous networks, implementing the data management system as described herein can reduce the burden of the network in transmitting data. Each network node in communication with a data management system according to aspects of the disclosure can include one or more node metric engines and one or more data source regulators implemented on nodes of the distributed network. The data management system can include a central management plane (CMP). The CMP receives node metrics data from the respective metric engine implemented on each of multiple nodes of a distributed network. The CMP uses the received node metrics data to generate a corresponding policy of actions to perform, or conditions to enforce, for each node of the distributed network. In some examples, the CMP implements a machine learning model trained to generate policies for each node. 
The CMP can be trained with labeled metric data, which can include features of a deployed machine learning model characterizing one or more of the inference accuracy, the inference precision, and the inference recall of the deployed machine learning model. The metric data can be labeled, for example, with data characterizing one or more of the rate of the stream of input data, the volume of the stream of input data, and the types of the data in the stream of input data received by the CMP from the network node. The corresponding data source regulator for a network node receives a policy from the CMP, and performs actions defined by the policy to regulate the stream of input data from the node. For example, the policy can specify a maximum rate, for example in bits per second, at which the stream of input data is to be provided to the CMP. Because nodes of the network can be heterogeneous, for example in different geographic locations, with different supporting infrastructure, sources of data, and data traffic patterns for data to and from each node, the system generates different policies for each data stream received from each node. The data management system accounts for deployment-specific characteristics of each node, as well as specific characteristics of different data streams from different sources of data. The system can generate different policies, even when the base model or task performed at each node is the same for each received stream of data. In this way, the system can receive less data for training one or more different models deployed on a respective node, using a policy that specifies regulated characteristics of each of one or more data streams, which may be received by the node from one or more data sources. Data stream characteristics can vary, and training data can affect model performance in different ways.
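One plausible shape for a single labeled metric example as described above, with model quality metrics as the features and data stream characteristics as the label; the field names and values here are assumptions for illustration, not a format defined by the disclosure:

```python
# Hypothetical shape of one labeled training example for the CMP's policy
# model: features describe the deployed model's output quality, while the
# label describes the data-stream characteristics that produced it.
example = {
    "features": {
        "inference_accuracy": 0.97,
        "inference_precision": 0.95,
        "inference_recall": 0.99,
    },
    "label": {
        "rate_bits_per_sec": 2_000_000,
        "volume_gb": 12.5,
        "data_types": ["image", "metadata"],
    },
}

# A trained CMP would map desired quality (features) back to the cheapest
# stream characteristics (label) that still achieve it.
print(example["label"]["rate_bits_per_sec"])
```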
Therefore, providing multiple policies for multiple data streams can reduce or eliminate adverse effects, such as reduced model quality, as compared with approaches in which a single policy is predetermined and applied to all data streams to the system. The data management system allows for rapid scaling of new deployments of nodes in the distributed network, and can provide an initial policy based on similarly-deployed nodes to accelerate integration of the node into a distributed network. New nodes can be deployed faster at least because the initial policy can be provided, versus deploying the node without a policy, and/or versus deploying the node with a uniform predetermined policy which may not be suited to the particular characteristics of the network node, its data sources, or the data transmitted to the node. The data management system can update the initial policy upon receiving metric data characterizing the deployment of the new nodes, for example data related to the geographic location of the node, the output quality of the model currently deployed on the node, and traffic patterns of the node, including characteristics of individual data streams. Faster deployment of nodes can improve the system's capability to add additional computing resources when necessary, and can reduce idle time between receiving a request for additional nodes or resources in the network and the initialization of the requested node. Aspects of the disclosure provide for a number of technical advantages. Machine learning models can be trained at a central platform and distributed to different physically remote nodes, for example at cell, near, and far edges. A platform implementing a data management system as described herein can adjust the deployment of each model across heterogeneous network nodes, including adjusting the rate at which data from the node is transmitted to the platform.
The policy reflects a variety of different factors unique to each node and data streams from data sources in communication with the node. The operational cost, for example measured in network bandwidth and/or in processing cycles, and performance, e.g., model accuracy, of training the node-deployed models at a platform can be balanced to reduce operational cost in transmitting data over a distributed network, without substantially reducing model quality and performance. The distributed network can use additional resources saved as a result of sending regulated data to the system for other purposes, for example in deploying additional models to the network node and increasing its capability to serve user requests. The data management system, through policies generated at a per-data stream level, allows for just enough data to be transmitted from each node to the platform, reducing network traffic while maintaining minimum accuracy or performance benchmarks for each node. Generating per-data stream policies allows for more granular adjustment to data streams transmitted to the system, in turn allowing for more efficient data transmission to the system for training a model deployed on a network node, even when more efficient adjustments to regulating one data stream may adversely affect another data stream of the same network node. Each data stream can be regulated, e.g., characteristics of the data stream can be adjusted, individually, at least by the generation and execution of per-data stream policies as described herein. As the network scales in size, for example as additional network nodes are added, the system can initialize newly added nodes with policies of other nodes identified as similar to the newly added nodes within a threshold. 
In doing so, the system allows for quickly improving the performance of the newly added node at reduced operational cost for data stream transmission, before later fine-tuning the policy to reflect characteristics specific to the node and of data streams received from the node. This added bootstrapping of a previously-generated policy can reduce the time to deploy a new network node, which not only directly impacts the capability of the platform in receiving and serving processing requests, but allows the platform to react faster to adding new computing resources when the need is identified. Example Systems FIG. 1 is a block diagram of an example data management system 100 in communication with network nodes 110A-B, according to aspects of the disclosure. The system 100 can be implemented, for example, as part of a computing platform, communicating with the network nodes 110A-B over a network 120. As described in more detail with reference to FIG. 6, the computing platform can include a number of computing devices, such as server computing devices, which can communicate with a number of other devices, such as devices implementing the network nodes 110A-B. The network nodes 110A-B can be part of a number of network nodes connected over the distributed network 120. Examples of network nodes include user computing devices, such as personal computing devices, wearable devices, or smartphones. In some examples, network nodes can include one or more computing devices in communication with a network of other computing devices. The network node can implement an inferencing engine that is configured to receive input data, and requests to process the input data, from the network of computing devices. Network nodes can be implemented in a variety of different locations, for example across different geographic regions. Network nodes can service a variety of different devices, for example corresponding to different users who may or may not be affiliated with one another.
Example locations in which network nodes may be deployed range from individual buildings to entire cities, and locations at scales in between. The network nodes 110A-B can receive data from a number of data sources 115A-E. As with the network nodes 110A-B, the data sources 115A-E can include any of a variety of different computing devices, including computing devices serving as a proxy between a network node and one or more other devices, for example devices in a local network. The data sources 115A-E in communication with the network nodes 110A-B can be, for example, individual user computing devices, such as mobile phones or personal computers; one or more servers; or any of a variety of computing devices, including wearable devices and sensor devices, embedded systems, or other devices configured to communicate with the network node. The way in which the data source communicates with the network node can vary, such as over a radio access network, a core network, or as part of an operational support system. Each data source sends a respective input data stream 116A-E to the network nodes 110A-B. Each input data stream includes data that is input to the inferencing engine of the receiving network node. In response, the network node can process input data in the received stream and generate output data in response. The output data can be generated by the inferencing engine, for example by processing the input data through a machine learning model trained to process the input data. Some data sources, such as data source 115A, can send multiple input streams to a network node, such as input streams 116A, 117A to the network node 110A. The separate streams can correspond to input received from a larger network of devices behind the data source 115A. For example, the data source 115A can be one or more computing devices acting as a proxy between the network node 110A and one or more other computing devices or networks of devices.
Each device or network of devices can send a respective stream of input data to the network node 110A, either directly or through one or more proxy devices. As described in more detail herein, the data management system 100 is configured to train machine learning models for deployment on the network nodes 110A-B. The system 100 receives a stream of input data regulated according to a policy generated by the system 100, and uses the regulated stream as training data for training one or more machine learning models. The task of the machine learning models deployed as part of the inferencing engines 114A-B can vary depending on the specific requirements the network nodes 110A-B are configured to meet. Examples of machine learning tasks which deployed machine learning models can be trained to perform follow. As an example, the input to an inferencing engine of a network node can be in the form of images or videos. The inferencing engine can be configured to extract, identify, and generate features as part of processing a given input through one or more deployed machine learning models, for example as part of a computer vision task. Machine learning models trained to perform this type of machine learning task can be trained to generate an output classification from a set of different potential classifications. In addition or alternatively, the machine learning model can be trained to output a score corresponding to an estimated probability that an identified subject in the image or video belongs to a certain class. For instance, the network node 110A can be part of a system for monitoring an industrial manufacturing process, in which objects are designed and/or manufactured. The data sources 115A-E can include one or more sensors collecting sensor data at various points in a manufacturing line, including image or video data.
The inferencing engine 114A can process the input data through a machine learning model trained to detect anomalies in manufactured objects, and flag those anomalies for further inspection and/or to take some predetermined action in response to the detection. As another example, the input to an inferencing engine of a network node can include data files corresponding to a particular format, such as HTML files, word processing documents, or formatted metadata obtained from other types of data, such as metadata for image files. Machine learning model(s) deployed as part of the inferencing engine can be trained to classify, score, or otherwise predict some characteristic about the received input. For example, the machine learning model(s) can be trained to predict the probability that the received input includes text relating to a particular subject. Also as part of performing a particular task, the machine learning model can be trained to generate text predictions, for example as part of a tool for auto-completion of text in a document as the document is being composed. A machine learning model can also be trained for predicting a translation of text in an input document to a target language, for example as a message is being composed. Data sources providing data in this example can include user computing devices, which can provide queries to the network node including data files or plain text for processing. The user computing devices can interact with the network node over an interface, such as a web interface accessed through a web browser or an application installed on the user computing device. As another example, the input to the inferencing engine of a network node can be audio input, including streamed audio, pre-recorded audio, and audio as part of a video or other source of media.
Machine learning model(s) deployed as part of a network node inferencing engine can be trained to perform speech recognition, including isolating speech from other identified sources of audio and/or enhancing characteristics of identified speech to be easier to hear. A machine learning model can be trained to predict an accurate translation of input speech to a target language, for example in real-time as part of a translation tool. Data sources can include user computing devices, such as wearable devices, including earbuds, headsets, etc., configured to communicate audio data in real-time for processing by a network node. Other types of input documents can be data relating to characteristics of a network of interconnected devices. These input documents can include activity logs, as well as records concerning access privileges for different computing devices to access different sources of potentially sensitive data. Deployed machine learning model(s) can be trained by the training engine 104 for processing these and other types of documents for predicting on-going and future security breaches to the network. For example, the machine learning model(s) can be trained to predict intrusion into the network by a malicious actor. As another example, a machine learning model can be trained to classify anomalous data from a set of input documents, and flag instances of predicted anomaly for further manual review and/or automatic correction. Data sources in this example can include computing devices in a local network, configured to monitor and record network activity and forward the records to a network node deploying one or more machine learning models for processing the records, as described in this example and others. In addition to data input, including the various types of data described herein, the inferencing engines 114A-B can be configured to preprocess features corresponding to given input.
Features are values, for example, numerical or categorical, which relate to some characteristic of the input. For example, in the context of an image, a feature of the image can relate to the RGB value for each pixel in the image. The inferencing engines 114A-B can be configured to extract and select relevant features for processing to generate an output for a given input, and can also be trained to generate new features based on patterns identified by the deployed models between various characteristics of input data. In some examples, the deployed machine learning model(s) of an inferencing engine can be trained to perform some or all of the feature processing/extraction for given input data. FIG. 2 is a block diagram of the example data management system 100 interacting with a network node 210, according to aspects of the disclosure. FIG. 2 shows a data source 215 transmitting input data stream A 216A and input data stream B 216B to a network node 210. The data streams 216A-B are received by a data source regulator 214 and an inferencing engine 212 of the network node 210. The inferencing engine 212 can include a node metrics engine 213 and a model 217. The model 217 can be one or more of any of a variety of machine learning models trained to perform a machine learning task by processing an input data stream, as described herein with reference to FIG. 1. The model 217 can generate output which can at least partially form a node output 232 that is sent to the data source 215. The node output 232 can be a response to the input data received from the data source 215. For example, the data source 215 can pass a query or request to process some input data as part of an input data stream. In other examples, the node output 232 can be passed to other network nodes sharing a connection with the network node 210 (not shown).
The inferencing engine 212 can receive the input data and request, process the input data according to the request, for example according to any parameters for processing the input data provided as part of the request, and generate a model output in response to the processed input data. The network node 210 can send the model output and optionally any additional information to the data source 215. The data source 215 can receive the node output 232, and send the output 232 to one or more connected computing devices, for example for continued downstream processing. In some examples, instead of receiving a request to process data from a data stream, the network node 210 is configured to automatically process received data, for example as received or according to any of a variety of predetermined parameters. The data source regulator 214 passes regulated input data streams A, B 222A-B to the system 100. A regulated input data stream is an input data stream received from a data source after a respective data stream policy is applied to the input data stream, for example by the data source regulator 214. For example, the data source regulator 214 receives data stream policies A, B 226A-B from a central management plane (CMP) 228 of the system 100. A data stream policy can include one or more rules, which, when applied by the data source regulator 214 to a data stream, adjust the transmission of the data stream to the data management system 100 in one or more ways. For example, the rules can specify the rate of the stream of input data transmitted by the network node; the volume of the stream of input data transmitted by the network node; and/or the types of data in the stream of input data transmitted by the network node. The data source regulator 214 is configured to convert a received policy into one or more instructions executable by the data source regulator 214 to cause the data source regulator 214 to apply the policy in transmitting a data stream.
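A minimal sketch of a regulator step applying two such rules, a type filter and a per-batch volume cap, to a batch of records before forwarding them; the dict-based record format and function name are illustrative assumptions, not the disclosure's implementation:

```python
def apply_policy(records, allowed_types, max_records):
    """Hypothetical regulator step: keep only records whose type a policy
    allows, and cap how many records are forwarded per batch."""
    regulated = [r for r in records if r["type"] in allowed_types]
    return regulated[:max_records]


stream = [
    {"type": "image", "payload": b"\x00"},
    {"type": "log", "payload": b"\x01"},
    {"type": "image", "payload": b"\x02"},
]

# Policy: forward only image records, at most one per batch.
print(apply_policy(stream, allowed_types={"image"}, max_records=1))
```

A real regulator would derive `allowed_types` and the volume cap from the policy received from the CMP 228, and apply rate limits over time rather than per batch.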
The CMP 228 generates the policy to meet a number of objectives, such as objectives for reducing the operational cost of transmitting the regulated stream of input data over a distributed network or using the regulated stream of input data to train the machine learning model 217, and increasing the output quality of the deployed machine learning model, according to one or more quality metrics, after training the model on the regulated stream of input data. The CMP 228 balances at least two objectives in generating a data stream policy: lowering the operational cost to transmit and process input data for training as low as possible, while raising the output quality of models deployed on the network node and trained by the system 100 to meet predetermined output quality thresholds. In other examples, the CMP 228 is configured to raise the output quality of models deployed on the network node and trained by the system 100 as high as possible, while also balancing the operational costs to transmit streams of data for training. In some examples, the CMP 228 may generate the policy according to other objectives, in addition to increasing model output quality and reducing operational cost for transmitting and processing streams of input data. For example, the CMP 228 may generate the policy to reduce bias in the trained machine learning model deployed on the network node. Bias is the difference between outputs generated by a machine learning model and a ground-truth or correct output, for a given input. The machine learning model may become biased for a variety of reasons, which can stem from the training data used to train the model. In addition to balancing operational cost and model output quality, the CMP 228 can be configured to generate policies to regulate data streams received by the data management system 100 to reduce bias in a machine learning model trained using the regulated data streams.
For example, the generated policy can specify one or more filters that, when executed by a network node, cause the network node to filter out certain types of data that have been identified by the data management system 100 as biasing a deployed model. The operational cost for transmitting the regulated stream of data, such as the data streams 222A, 222B, can be measured in processing cycles or in computing resources for transmitting the data between the node 210 and the system 100. The operational cost can be reflected, for example, in processing cycles required by either the node 210 or the system 100 in sending and receiving the data streams, respectively; network bandwidth required to transmit the data; time spent transmitting the data and the latency between sending and receiving the data; and any latency caused in other transactions across the network as a result of transmitting the data stream, for example because other data was queued and delayed while waiting for the data stream to be sent across the network. The operational cost of transporting input data can be based on the cost, for example in time or in number of processing cycles, of transmitting a stream of input data to the system 100 at different rates or volumes. The rate at which a stream is transmitted can be measured as units of data over a period of time, such as megabytes per second. Higher rates generally incur a higher operational cost than lower rates of data transmission. The volume at which a stream is transmitted can be measured as units of data, for example in gigabytes or terabytes. Higher volumes of data transmitted by the network node 210 to the system 100 generally require more computing resources to process, and therefore have a higher operational cost, than lower volumes of data. The operational cost for transmitting a data stream can also be based on when the data stream is sent to the system 100.
For example, some periods of time may correspond with less network activity, making the operational cost to transmit the data stream lower, at least because the chance of delay in transmitting the stream, or other data as a result of transmitting the stream, is lower. On the other hand, transmitting the data stream during other periods of time may conflict with other data transmitted during a period of peak network activity. The operational cost for transmitting a data stream can also be based on the type of data that is being transmitted or processed in the data stream. For example, some types of data, such as tensors or higher-order data structures, are generally more computationally intensive to transmit and process than other types of data, such as bit indicators or data transmitted as un-encoded raw bytes. A data stream may include one or more types of data, and the network node may transmit some, all, or none of a type of data based on a policy received from the CMP 228. The operational cost for processing the regulated stream of data can refer to one or more measures of computing resources used in training a machine learning model using the regulated stream of data as input. For example, the operational cost can be measured in the number of processing cycles and/or the time spent in preparing the stream of data for training, and in training, validating, and testing the model according to the prepared training data. The CMP 228 balances lower operational cost with improving the output quality of a model trained by the system 100 and deployed on a network node. Output quality can be measured in a variety of different ways; for example, output quality can be measured according to one or more of the inference accuracy, the inference precision, and the inference recall of the machine learning model after training or retraining the machine learning model on training data including the regulated stream of input data.
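The three quality metrics named above can be computed from a model's binary outputs and their ground-truth labels; the following is a minimal sketch with a hypothetical function name:

```python
def output_quality(predictions, labels):
    # Accuracy, precision, and recall for binary model outputs,
    # computed from true/false positives and negatives.
    tp = sum(1 for p, y in zip(predictions, labels) if p == 1 and y == 1)
    fp = sum(1 for p, y in zip(predictions, labels) if p == 1 and y == 0)
    fn = sum(1 for p, y in zip(predictions, labels) if p == 0 and y == 1)
    tn = sum(1 for p, y in zip(predictions, labels) if p == 0 and y == 0)
    accuracy = (tp + tn) / len(labels)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return accuracy, precision, recall

acc, prec, rec = output_quality([1, 1, 0, 0], [1, 0, 0, 1])
# acc == 0.5, prec == 0.5, rec == 0.5 for this toy output set.
```

A minimum output quality threshold, such as the 99% recall example given below, can then be checked directly against the computed recall value.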
The node metrics engine of a network node, such as the node metrics engine 213, is configured to generate these metrics for the output of a model deployed on the network node. The node metrics engine 213 can be configured to receive or generate ground-truth labels after the model 217 generates and sends a model output to the data source 215. The data source 215 can be configured to obtain confirmation as to the accuracy of a model output, for example from user input or through independent and automatic mechanisms for verifying the model output. In response, the data source 215 can provide feedback on the model output, which the node metrics engine 213 can use to generate metrics as described herein. The CMP 228 may also receive predetermined thresholds for a minimum output quality of a model deployed on the network node 210. For example, the CMP 228 may receive a minimum output quality threshold specifying 99% recall for the deployed model on received input data. In other examples, the CMP 228 may provide its own minimum threshold, if one is not provided. The CMP 228 can be implemented as a machine learning model trained to generate data stream policies, as described herein. The CMP 228 can receive, as training data, one or more training examples of the output quality of models deployed on various network nodes, labeled with characteristics of a data stream provided by the network node to the CMP. Training examples can include data characterizing one or more of the inference accuracy, the inference precision, and the inference recall of a deployed machine learning model. The labels for the training examples can include data characterizing one or more of the rate of the stream of input data, the volume of the stream of input data, and the types of data in the stream of input data transmitted by the network node to the CMP.
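One way to picture such a training example: features drawn from the quality metrics, labeled with the characteristics of the stream that produced them. The field names and values below are hypothetical, chosen only to mirror the description:

```python
from dataclasses import dataclass

@dataclass
class CMPTrainingExample:
    # Features: output quality observed for a deployed model.
    inference_accuracy: float
    inference_precision: float
    inference_recall: float
    # Labels: characteristics of the stream transmitted to the CMP.
    stream_rate_mbps: float
    stream_volume_gb: float
    data_types: tuple

example = CMPTrainingExample(
    inference_accuracy=0.97,
    inference_precision=0.95,
    inference_recall=0.99,
    stream_rate_mbps=12.5,
    stream_volume_gb=4.2,
    data_types=("tensor", "raw_bytes"),
)
```

Many such examples, collected across network nodes, would form the supervised training set for a policy-generating model.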
The CMP 228 can be trained according to a variety of approaches, for example as a supervised machine learning model trained using stochastic, batch, or mini-batch gradient descent. The training data can be generated by the ingestion engine 208, configured to receive both regulated data streams and metric data 237 from the node metrics engine 213. In addition or alternatively, the training examples include additional data further characterizing a network node with a deployed machine learning model. The node metrics engine of a network node, for example the node metrics engine 213 of the network node 210, can be configured to collect values for different metrics at least partially characterizing the network node itself, including received data from the data source(s), and/or the data source(s) themselves. For example, the node metrics engine 213 can collect data related to the deployment of the network node itself. This data can include physical characteristics of the network or of the network node itself, e.g., the location of one or more computing devices or processors implementing the network node, or the type of hardware or infrastructure the network node is built using. The data at least partially characterizing the network node can include characteristics of one or more streams of data received by the network node, such as the rate, volume, and types of data transmitted to the network node from one or more data sources. These characteristics can also include temporal information, such as how often data is received, and at what rates data is received in a stream by the network node over different periods of time. The ingestion engine 208 can receive data from network nodes and/or other devices in communication with the system 100 over the network 120. The ingestion engine 208 can distribute data to other components of the system 100, such as the CMP 228 and the training engine 204.
The data received can include the regulated data streams A, B 222A-B, as well as data streams that are unregulated, which may be received by a network node for which the CMP 228 has not generated a policy for a corresponding data stream. Data streams received can be labeled by the ingestion engine 208 with identifiers, for example based on the origin of the data stream. An example identifier can be a tuple, for example in the form <node identifier, data source identifier, data stream type>, specifying the network node from which the stream is received, the data source from which the network node received the stream, and the type of data, such as raw bytes or encoded data, respectively. The ingestion engine 208 can also receive the metric data 237 generated by the node metrics engine 213. The regulated data streams A, B 222A-B can be sent to the training engine 204, which can be configured for training models that are deployed on various network nodes. The metric data 237 can be labeled with characteristics of input data streams received by the model 217, and be sent to the CMP 228 for training, as described herein. The training engine 204 can train machine learning models according to any of a variety of training procedures, including supervised, unsupervised, and semi-supervised training approaches. Before the machine learning models are fully trained, tested, and deployed on respective network nodes, the training engine 204 can generate training data from the data streams 216A, B, labeled according to provided ground-truth labels. Similar to the labels of the training data for the CMP 228, the labels provided to the training engine 204 can be provided as feedback to the node output 232 provided to the data source 215. The deployment engine 206 can be configured to send a trained model 227 trained by the training engine 204 to a corresponding network node. The deployment engine 206 can maintain data associating various network nodes with corresponding machine learning models.
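The stream identifiers described above, of the form <node identifier, data source identifier, data stream type>, might be built and used as keys like this (all values are hypothetical):

```python
def stream_identifier(node_id, source_id, stream_type):
    # Identifier tuple in the form <node identifier, data source
    # identifier, data stream type>, as described for the ingestion engine.
    return (node_id, source_id, stream_type)

# Hypothetical registry of received streams keyed by their identifiers.
received = {}
ident = stream_identifier("node-210", "source-215", "encoded")
received[ident] = [b"\x00\x01", b"\x02\x03"]   # chunks of the stream
```

Keying received data this way lets downstream components, such as a training engine, look up a stream by its origin and data type.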
For example, each network node can execute one or more machine learning models as part of its respective inferencing engine, described herein. The one or more machine learning models executed by one network node, such as the network node 210, can at least partially overlap with one or more machine learning models of another network node. In some examples, the one or more machine learning models of one network node can be completely different than the machine learning models implemented by another network node. Telecommunication orchestrators and controllers are often deployed to manage network element configurations. Sometimes those configuration changes may lead to network anomalies. Upon detecting such anomalies, the CMP 228 would recommend rollback of the configuration changes to the orchestrators or controllers. Such recommendations may be specific to some deployments, or applicable system-wide.

Example Methods

FIG. 3 is a flowchart of an example process 300 for regulating a stream of output data for a network node of a distributed network. For example, the data management system 100 as described herein with reference to FIGS. 1-2 can perform the process 300. The system receives node metrics data at least partially characterizing a network node executing a deployed machine learning model on a stream of input data, according to block 310. As described herein with reference to FIGS. 1-2, the node metrics data can at least partially characterize a network node, which may include characterizing different streams of input data to the network node. The process 300 is described herein with reference to a single data stream to a single network node, although it is understood that in other examples, multiple network nodes can be in communication with the data management system performing multiple instances of the process 300, in parallel or in sequence. The system generates a policy for regulating the stream of input data transmitted by the network node, according to block 320.
The stream of input data can be received by the network node from a data source, as described herein with reference to FIGS. 1-2. As described herein with reference to FIG. 2, the system can generate a respective per-data-stream policy for each data stream. The policy can specify how a network node is to control the rate, volume, and types of data, among other things, to be transmitted to a central management plane for the system. The system can train a CMP to generate policies using training data of different quality metrics for different models deployed across network nodes of a distributed system, labeled with data characterizing one or more of the rate of the stream of input data, the volume of the stream of input data, and the types of data in the stream of input data transmitted by the network node to the CMP, among other quantifiable characteristics of the stream of input data. The system sends the policy to the network node, according to block 330. The network node can be configured to execute the policy, for example by transmitting a regulated stream of input data to the CMP of the system with characteristics matching or approximating characteristics specified in the provided policy. As an example, if the policy specified transmitting data only during certain time periods, the network node can be configured to execute the policy by causing data to be transmitted only during those certain time periods.

FIG. 4 is a flowchart of an example process 400 for initializing a new network node of the distributed network, according to aspects of the disclosure. Initialization can refer to a process in which a computing platform connects to a new network node, for example to communicate data and to train a machine learning model for deployment on the network node. The network node can be created by allocating computing resources of the platform, or in some examples, created as one or more computing devices previously not connected to the platform.
In those examples, initialization can include the process by which the platform connects to the new network node and begins communication. As with the network nodes described herein with reference to FIGS. 1-2, the new network node presently described can also receive one or more data streams from one or more different data sources. Initially, the network node can send one or more streams of input data to the CMP for training, and receive a trained machine learning model to deploy on the network node. The system can perform the process 300 described herein to generate policies for each data stream received by the new network node. In some examples, the data management system can perform the process 400 as part of initializing a new network node. The CMP receives metric data from an initialized network node, according to block 410. An initialized network node can be a network node with a deployed machine learning model trained by the data management system. The initialized network node can be configured to send streams of input data to the CMP, but may not do so according to a policy as described herein. The CMP determines whether the metric data of the initialized network node is similar to metric data of a second network node of the distributed network within a similarity threshold, according to diamond 420. The similarity threshold can be predetermined, for example based on empirical or statistical analysis of different data streams and different metrics having statistically significant correlations between the data stream and the policy applied to the stream. The similarity threshold can be multi-dimensional, meaning that several metrics at least partially characterizing the network node can be compared between the initialized network node and other network nodes of a distributed network. If the data management system determines that there is no metric data similar to the metric data of the initialized network node ("NO"), then the process 400 ends.
If the CMP determines that the metric data of the second network node is similar to the metric data of the initialized network node within a similarity threshold ("YES"), then the CMP sends the policy corresponding to the second network node to the initialized network node, according to block 430. The second network node can be any network node for which the system has generated at least one policy. The sent policy can bootstrap the regulation of data at the initialized network node, before the CMP generates a tailored policy based on node metrics data received from the new network node. In this way, the system can manage newly executed nodes to begin to balance output quality and operational cost. At scale, an approximated policy based on similarities to previously generated policies can quickly reduce operational costs when many network nodes are initialized, compared with not providing any form of regulation at all, or providing a uniform policy that may not be individually suited to a deployed node. The data management system updates the policy of the initialized network node based on received metric data, according to block 440. For example, the data management system can generate a policy for a data stream received from the initialized network node, according to the process 300 as described herein with reference to FIG. 3. In some examples, the data management system may not update the policy of a data stream of the initialized network node. In those examples, one reason for not updating the policy is that the policy provided performs better, according to the applied objectives such as model output quality and operational cost, than any other policy generated by the data management system. The time at which the data management system updates the policy can be any length of time after sending the initial policy to the network node, as described according to block 430.
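The multi-dimensional similarity comparison at diamond 420 can be sketched as follows; comparing each shared metric against a single threshold is an illustrative choice, since the text leaves the exact similarity measure open:

```python
def within_similarity(metrics_a, metrics_b, threshold):
    # Every metric shared by the two nodes must differ by at most
    # `threshold`. An illustrative criterion; the exact measure and
    # the metric names are not taken from the text.
    return all(abs(metrics_a[k] - metrics_b[k]) <= threshold
               for k in metrics_a.keys() & metrics_b.keys())

new_node = {"rate_mbps": 10.0, "volume_gb": 2.0}
known_node = {"rate_mbps": 9.5, "volume_gb": 2.2}
similar = within_similarity(new_node, known_node, threshold=1.0)
# similar is True here, so the known node's policy could be sent to
# bootstrap regulation at the newly initialized node.
```

When no existing node falls within the threshold, no bootstrap policy is sent and the process ends, as in the "NO" branch above.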
For example, the data management system can send an updated policy to the network node as soon as one has been generated, or the data management system can generate the updated policy at a predetermined subsequent time, for example after a predetermined amount of time has passed to allow for additional input data to be received by the data management system. The predetermined amount of time can be configured, for example based on user input, or at the time the data management system is implemented on the platform. FIG. 5 is a flowchart of an example process 500 for regulating a stream of output data on a network node of a distributed network, according to aspects of the disclosure. A network node sends node metrics data to a data management system, according to block 510. As described herein with reference to FIGS. 1-2, the network node can implement a node metrics engine configured to generate metrics at least partially characterizing the network node and data streams received by the network node from one or more data sources. The network node receives a policy generated by a data management system, according to block 520. The data management system can generate the policy using the node metrics data sent to the system by the network node, according to block 510. The network node regulates the stream of data according to the received policy, according to block 530. The network node can be configured to convert the received policy into one or more instructions executable by the network node to cause the network node to adjust characteristics of a stream of input data transmitted to the data management system, in accordance with the policy.

Example Computing Environment

FIG. 6 is a block diagram of an example computing environment 600 for implementing the data management system 100. The system 100 can be implemented on multiple devices having one or more processors in one or more locations, such as one or more server computing devices 615 of a computing platform 601.
The system 100 can communicate with multiple network nodes, such as network node 612 and network node 645. For example, the server computing device(s) 615 can make up at least part of the computing platform 101 of FIG. 1, and implement the central management plane 102, as well as other components, such as the training engine 204, deployment engine 206, and the ingestion engine 208 of the system 100. As another example, the network node 612 can implement an inferencing engine 699 and a data source regulator 698. Network node 612 and the server computing device(s) 615 can be communicatively coupled to one or more storage devices 630 over a network 660. The storage device(s) 630 can be a combination of volatile and non-volatile memory, and can be at the same or different physical locations than the computing devices 612, 615. For example, the storage device(s) 630 can include any type of non-transitory computer readable medium capable of storing information, such as a hard drive, solid state drive, tape drive, optical storage, memory card, ROM, RAM, DVD, CD-ROM, and write-capable and read-only memories. The server computing device(s) 615 can include one or more processors 613 and memory 614. The memory 614 can store information accessible by the processor(s) 613, including instructions 621 that can be executed by the processor(s) 613. The memory 614 can also include data 623 that can be retrieved, manipulated, or stored by the processor(s) 613. The memory 614 can be a type of non-transitory computer readable medium capable of storing information accessible by the processor(s) 613, such as volatile and non-volatile memory. The processor(s) 613 can include one or more central processing units (CPUs), graphic processing units (GPUs), field-programmable gate arrays (FPGAs), and/or application-specific integrated circuits (ASICs), such as tensor processing units (TPUs).
The instructions 621 can include one or more instructions that, when executed by the processor(s) 613, cause the one or more processors to perform actions defined by the instructions. The instructions 621 can be stored in object code format for direct processing by the processor(s) 613, or in other formats including interpretable scripts or collections of independent source code modules that are interpreted on demand or compiled in advance. The instructions 621 can include instructions for implementing components of the system 100 consistent with aspects of this disclosure. The system 100 can be executed using the processor(s) 613, and/or using other processors remotely located from the server computing device(s) 615, such as the one or more processors 616 of the network node 612. The data 623 can be retrieved, stored, or modified by the processor(s) 613 in accordance with the instructions 621. The data 623 can be stored in computer registers, in a relational or non-relational database as a table having a plurality of different fields and records, or as JSON, YAML, proto, or XML documents. The data 623 can also be formatted in a computer-readable format such as, but not limited to, binary values, ASCII, or Unicode. Moreover, the data 623 can include information sufficient to identify relevant information, such as numbers, descriptive text, proprietary codes, pointers, references to data stored in other memories, including other network locations, or information that is used by a function to calculate relevant data. The network node 612 can also be configured similarly to the server computing device(s) 615, with one or more processors 616, memory 617, instructions 618, and data 619. In some examples, the network node 612 can be a user computing device, such as a personal computer, a smartphone, a wearable device, or any other computing device configured for receiving user input and/or generating user output. For example, the network node 612 can also include a user output 626 and an input 624.
The user input 624 can include any appropriate mechanism or technique for receiving input from a user, such as a keyboard, mouse, mechanical actuators, soft actuators, touchscreens, microphones, and sensors. The server computing device(s) 615 can be configured to transmit data to the user computing device(s) 612, and the network node 612 can be configured to display at least a portion of the received data on a display implemented as part of the user output 626. The user output 626 can also be used for displaying an interface between the network node 612 and the server computing device(s) 615. The user output 626 can alternatively or additionally include one or more speakers, transducers or other audio outputs, a haptic interface or other tactile feedback that provides non-visual and non-audible information to the platform user of the user computing device 612. Although FIG. 6 illustrates the processors 613, 616 and the memories 614, 617 as being within the server computing device(s) 615 and network node 612, components described in this specification, including the processors 613, 616 and the memories 614, 617, can include multiple processors and memories that can operate in different physical locations and not within the same computing device. For example, some of the instructions 621, 618 and the data 623, 619 can be stored on a removable SD card and others within a read-only computer chip. Some or all of the instructions and data can be stored in a location physically remote from, yet still accessible by, the processors 613, 616. Similarly, the processors 613, 616 can include a collection of processors that can perform concurrent and/or sequential operation. The server computing device(s) 615 and network node 612 can each include one or more internal clocks providing timing information, which can be used for time measurement for operations and programs run by the server computing device(s) 615 and network node 612.
The network node 612 and/or the server computing device(s) 615 can be configured to receive requests to process data or other input from a user computing device 650. The computing device 650 can be another network node connected to the distributed network 620, or a computing device that communicates with one or more nodes and/or the one or more server computing devices of the network 620. The computing platform 601 in which the data management system 100 is implemented can be configured to provide a variety of services to users, through various user interfaces and/or APIs exposing the platform services. One or more services can be a machine learning framework or a set of tools for generating neural networks or other machine learning models according to a specified task and training data. The data management system 100 can be configured to train and deploy one or more machine learning models onto the multiple nodes of the network 620, as described herein. The user computing device 650 may transmit and receive data to and from the system 100, for example sending queries for processing by a deployed model, and receiving a prediction from the model in response. Network node 645 can be configured similarly to network node 612, and be further configured to communicate with the server computing device(s) 615 directly through the network 620, or indirectly through one or more other network nodes, e.g., the network node 612. The devices, including the server computing device(s) 615, network nodes such as network node 612 and network node 645, and the user computing device 650, can be capable of direct and indirect communication over the network 620. The devices of the network 620 can set up listening sockets that may accept an initiating connection for sending and receiving information.
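A minimal sketch of the listening-socket pattern just mentioned, using the loopback interface and an OS-chosen port (both illustrative choices, not details of the network 620):

```python
import socket

# A device sets up a listening socket that accepts an initiating
# connection for sending and receiving information.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
server.listen()

# Another device initiates a connection, which the listener accepts.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(server.getsockname())
conn, addr = server.accept()

conn.sendall(b"node metrics")
received_bytes = client.recv(1024)

conn.close()
client.close()
server.close()
```

In a real deployment, either endpoint could play either role, and the payload would be the metric data or policy messages exchanged between a node and the management plane.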
The network 620 itself can include various configurations and protocols, including the Internet, World Wide Web, intranets, virtual private networks, wide area networks, body area networks, personal area networks, near-me area networks, local area networks, campus area networks, telecommunication networks, including cellular networks, telephone networks, sensor networks, radio access networks (RAN), and backbone or core networks (CN), private networks using communication protocols proprietary to one or more companies, etc. The network 620 can span different physical network infrastructures, maintained by one or more different providers. The network 620 can implement any of a variety of distributed computing architectures or paradigms, such as client-server based architectures, three-tier or multi-tier architectures, peer-to-peer architectures, distributed real-time systems, distributed database systems, systems based on parallel processing techniques, decentralized networks, mesh networks, etc. The network 620 can support a variety of short- and long-range connections. The short- and long-range connections may be made over different bandwidths, such as 2.402 GHz to 2.480 GHz, 2.4 GHz and 5 GHz; 13.56 MHz; or with a variety of communication standards, such as communication standards for wireless broadband communication. The network 620, in addition or alternatively, can also support wired connections between the devices of the network 620, including over various types of Ethernet connection. It is understood that the aspects of the disclosure can be implemented according to a variety of different configurations and quantities of computing devices, including in paradigms for sequential or parallel processing, or over a distributed network of multiple devices. In some implementations, aspects of the disclosure can be performed on a single device, or on any combination of devices.
Aspects of this disclosure can be implemented in digital circuits, computer-readable storage media, as one or more computer programs, or a combination of one or more of the foregoing. The computer-readable storage media can be non-transitory, e.g., as one or more instructions executable by a cloud computing platform and stored on a tangible storage device. In this specification, the phrase "configured to" is used in different contexts related to computer systems, hardware, or part of a computer program, engine, or module. When a system is said to be configured to perform one or more operations, this means that the system has appropriate software, firmware, and/or hardware installed on the system that, when in operation, causes the system to perform the one or more operations. When some hardware is said to be configured to perform one or more operations, this means that the hardware includes one or more circuits that, when in operation, receive input and generate output according to the input and corresponding to the one or more operations. When a computer program, engine, or module is said to be configured to perform one or more operations, this means that the computer program includes one or more program instructions that, when executed by one or more computers, cause the one or more computers to perform the one or more operations. While operations shown in the drawings and recited in the claims are shown in a particular order, it is understood that the operations can be performed in different orders than shown, and that some operations can be omitted, performed more than once, and/or be performed in parallel with other operations. Further, the separation of different system components configured for performing different operations should not be understood as requiring the components to be separated. The components, modules, programs, and engines described can be integrated together as a single system, or be part of multiple systems.
Unless otherwise stated, the foregoing alternative examples are not mutually exclusive, but may be implemented in various combinations to achieve unique advantages. As these and other variations and combinations of the features discussed above can be utilized without departing from the subject matter defined by the claims, the foregoing description of the examples should be taken by way of illustration rather than by way of limitation of the subject matter defined by the claims. In addition, the provision of the examples described herein, as well as clauses phrased as “such as,” “including” and the like, should not be interpreted as limiting the subject matter of the claims to the specific examples; rather, the examples are intended to illustrate only one of many possible implementations. Further, the same reference numbers in different drawings can identify the same or similar elements. | 61,809 |
11863399

DESCRIPTION OF THE EXAMPLE EMBODIMENTS

Hereinafter, example embodiments of the present disclosure will be described in detail with reference to the accompanying drawings. Note that, in the Specification and drawings, elements to which similar descriptions are applicable are denoted by the same reference signs, and overlapping descriptions may hence be omitted. Descriptions will be given in the following order.

1. Related Art
2. First Example Embodiment
2.1. Configuration of System
2.2. Configuration of Control Apparatus
2.3. Operation (Training of Machine Learning Based Controller)
2.4. Example Alterations
3. Second Example Embodiment
3.1. Configuration of System
3.2. Configuration of Control Apparatus
3.3. First Operation (Training of Machine Learning Based Controller)
3.4. Second Operation (Selection of Controller)
3.5. Example Alterations
4. Third Example Embodiment

1. Related Art

With reference to FIG. 1 and FIG. 2, as techniques related to example embodiments of the present disclosure, supervised learning, a type of machine learning, and reinforcement learning, another type of machine learning, will be described.

(1) Supervised Learning

In supervised learning, with the use of training data including input data and output data (specifically, correct answer data) corresponding to the input data, what kind of data is to be output in response to the input data is learned. In other words, in supervised learning, with the use of the training data, a pattern of output data for input data is learned. For supervised learning, for example, an algorithm such as a neural network, a support vector machine, or a decision tree is used.

(2) Reinforcement Learning

FIG. 1 is a diagram for illustrating an overview of reinforcement learning. With reference to FIG. 1, in reinforcement learning, an agent 81 observes a state of an environment 83, and selects an action from the observed state. The agent 81 obtains a reward from the environment 83 through selection of the action under the environment.
Through repetition of such a series of operations, the agent 81 can learn what kind of action brings out the greatest reward according to the state of the environment 83. In other words, the agent 81 can learn an action to be selected according to the environment in order to maximize the reward. An example of reinforcement learning is Q learning. In Q learning, for example, a Q table is used, which indicates how high a value each action has regarding each state of the environment 83. The agent 81 selects an action according to a state of the environment 83 by using the Q table. In addition, the agent 81 updates the Q table, based on the reward obtained according to selection of the action.

FIG. 2 is a diagram for illustrating an example of the Q table. With reference to FIG. 2, the states of the environment 83 include state A and state B, and the actions of the agent 81 include action A and action B. The Q table indicates the value when each action is taken in each state. For example, the value of taking action A in state A is qAA, and the value of taking action B in state A is qAB. The value of taking action A in state B is qBA, and the value of taking action B in state B is qBB. For example, the agent 81 takes the action having the highest value in each state. As an example, when qAA is higher than qAB, the agent 81 takes action A in state A. Note that the values (qAA, qAB, qBA, and qBB) in the Q table are updated based on the reward obtained according to selection of the action. In reinforcement learning, taking the action having the highest value in each state as described above is referred to as "exploitation (use)". When learning is performed only by "exploitation", the learning result may be a local optimal solution instead of an optimal solution because the actions that can be taken in each state are limited. Thus, in reinforcement learning, learning is performed by both "exploitation" and "exploration (search)". "Exploration" means that an action randomly selected in each state is taken.
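The Q table of FIG. 2 and the "exploitation" rule can be sketched as follows. The update shown is the standard Q-learning rule, with illustrative values for the table entries, learning rate, and discount factor, none of which are taken from the text:

```python
# Toy Q table mirroring FIG. 2: the value of each action in each state.
q_table = {
    "A": {"A": 0.7, "B": 0.2},   # qAA, qAB
    "B": {"A": 0.1, "B": 0.9},   # qBA, qBB
}

def exploit(state):
    # "Exploitation": take the action with the highest value in the state.
    actions = q_table[state]
    return max(actions, key=actions.get)

def q_update(state, action, reward, next_state, alpha=0.5, gamma=0.9):
    # Standard Q-learning update from the reward obtained for the action.
    best_next = max(q_table[next_state].values())
    q_table[state][action] += alpha * (
        reward + gamma * best_next - q_table[state][action])

# qAA > qAB, so action A is taken in state A.
print(exploit("A"))  # A
```

"Exploration" would instead pick an action at random in each state, which is what allows values such as qAB to be learned at all.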
For example, in the Epsilon-Greedy method, "exploration" is selected with probability ε, and "exploitation" is selected with probability 1−ε. With "exploration", for example, in a certain state, an action with unknown value is selected, and as a result, the value of the action in that state can be known. Owing to such "exploration", it is more likely that an optimal solution may be obtained as the learning results. 2. First Example Embodiment With reference toFIG.3toFIG.12, a first example embodiment of the present disclosure will be described. 2.1. Configuration of System FIG.3illustrates an example of a schematic configuration of a system1according to the first example embodiment. With reference toFIG.3, the system1includes a communication network10and a control apparatus100. (1) Communication Network10 The communication network10transfers data. For example, the communication network10includes network devices (for example, a proxy server, a gateway, a router, a switch, and/or the like) and a line, and each of the network devices transfers data via the line. The communication network10may be a wired network, or may be a radio network. Alternatively, the communication network10may include both a wired network and a radio network. For example, the radio network may be a mobile communication network using a communication line standard such as Long Term Evolution (LTE) or 5th Generation (5G), or may be a network used in a specific area such as a wireless local area network (LAN) or a local 5G. The wired network may be, for example, a LAN, a wide area network (WAN), the Internet, or the like. (2) Control Apparatus100 The control apparatus100performs control for the communication network10. For example, the control apparatus100includes a machine learning based controller for controlling communication in the communication network10.
For example, the control apparatus100is a network device (for example, a proxy server, a gateway, a router, a switch, and/or the like) that transfers data in the communication network10. Note that the control apparatus100according to the first example embodiment is not limited to the network device that transfers data in the communication network10. This will be described later in detail as a sixth example alteration of the first example embodiment. 2.2. Configuration of Control Apparatus (1) Functional Configuration FIG.4is a block diagram illustrating an example of a schematic functional configuration of the control apparatus100according to the first example embodiment. With reference toFIG.4, the control apparatus100includes an obtaining means110, a training means120, a machine learning based controller130, a configuring means140, and a communication processing means150. The operations of each of the obtaining means110, the training means120, the machine learning based controller130, the configuring means140, and the communication processing means150will be described later. (2) Hardware Configuration FIG.5is a block diagram illustrating an example of a schematic hardware configuration of the control apparatus100according to the first example embodiment. With reference toFIG.5, the control apparatus100includes a processor210, a main memory220, a storage230, a communication interface240, and an input/output interface250. The processor210, the main memory220, the storage230, the communication interface240, and the input/output interface250are connected to each other via a bus260. The processor210executes a program read from the main memory220. As an example, the processor210is a central processing unit (CPU). The main memory220stores a program and various pieces of data. As an example, the main memory220is a random access memory (RAM). The storage230stores a program and various pieces of data. 
As an example, the storage230includes a solid state drive (SSD) and/or a hard disk drive (HDD). The communication interface240is an interface for communication with another apparatus. As an example, the communication interface240is a network adapter or a network interface card. The input/output interface250is an interface for connection with an input apparatus such as a keyboard, and an output apparatus such as a display. Each of the obtaining means110, the training means120, the machine learning based controller130, the configuring means140, and the communication processing means150may be implemented with the processor210and the main memory220, or may be implemented with the processor210, the main memory220, and the communication interface240. As a matter of course, the hardware configuration of the control apparatus100is not limited to the example described above. The control apparatus100may be implemented with another hardware configuration. Alternatively, the control apparatus100may be virtualized. In other words, the control apparatus100may be implemented as a virtual machine. In this case, the control apparatus100(virtual machine) may operate as a physical machine (hardware) including a processor, a memory, and the like, and a virtual machine on a hypervisor. As a matter of course, the control apparatus100(virtual machine) may be distributed into a plurality of physical machines for operation. The control apparatus100may include a memory (main memory220) that stores a program (instructions), and one or more processors (processors210) that can execute the program (instructions). The one or more processors may execute the program to perform the operations of the obtaining means110, the training means120, the machine learning based controller130, the configuring means140, and/or the communication processing means150. 
The program may be a program for causing the processor(s) to execute the operations of the obtaining means110, the training means120, the machine learning based controller130, the configuring means140, and/or the communication processing means150. 2.3. Operation (Training of Machine Learning Based Controller) The control apparatus100(obtaining means110) obtains work-related information related to human work in network operation. The control apparatus100(training means120) trains the machine learning based controller130for controlling communication in the communication network10, based on the work-related information. (1) Work-Related Information As described above, the work-related information is information related to human work in network operation. Human Work in Network Operation The network operation is, for example, network operation of the communication network10. In other words, the human work is human work in network operation of the communication network10. The human work is, for example, a change of a network control parameter. In other words, the human work is a change of a network control parameter in network operation. Note that the network operation and the human work according to the first example embodiment are not limited to the example described above. This will be described later in detail as a first example alteration of the first example embodiment. Information Included in Work-Related Information (Work Information and Network State Information) The work-related information includes, for example, work information indicating the human work and network state information indicating a network state corresponding to the human work. As will be described later, for example, the network state information is used as input data of machine learning, and the work information is used as output data of machine learning corresponding to the input data. 
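The roles just described — the network state information as input data of machine learning and the work information as the corresponding output data — can be pictured with a minimal sketch. The field names and units below are hypothetical; the source names only throughput, packet arrival interval, priority, and band as examples.

```python
from dataclasses import dataclass

@dataclass
class WorkRelatedSet:
    # Network state information (machine learning input data);
    # field names and units are assumptions for illustration.
    throughput_mbps: float
    arrival_interval_ms: float
    # Work information (machine learning output data): increase or
    # decrease of the network control parameters.
    priority_delta: int
    band_delta_mbps: float

def to_training_pair(s):
    """Split one set into (input data, output data) for supervised learning."""
    return ([s.throughput_mbps, s.arrival_interval_ms],
            [s.priority_delta, s.band_delta_mbps])
```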
For example, the work-related information includes a plurality of sets of the work information and the network state information. More specifically, for example, the work-related information includes N sets of the work information and the network state information, and N is a number sufficiently large for machine learning. As described above, the human work is, for example, a change of a network control parameter. In this case, as the change of the network control parameter, for example, the work information indicates increase or decrease of the network control parameter. More specifically, the work information may indicate whether the network control parameter has increased or decreased, or may indicate the amount of increase or decrease of the network control parameter. As an example, a combination of the network state (NW state) and the network control parameter (NW control parameter) is as follows:
[NW State] Throughput and/or packet arrival interval
[NW Control Parameter] Priority and/or band
For example, the network control parameter is a parameter for each flow, and the network state is also a network state for each flow. Each flow is, for example, identified by a transmission address, a reception address, and a port number. As a matter of course, the network state and the network control parameter according to the first example embodiment are not limited to the example described above. This will be described later in detail as a second example alteration of the first example embodiment. As described above, for example, the control apparatus100is a network device (for example, a proxy server, a gateway, a router, a switch, and/or the like) that transfers data in the communication network10.
In this case, for example, the network state is a network state (for example, throughput and/or a packet arrival interval) observed in the control apparatus100, and for example, the network control parameter is a network control parameter (for example, priority and/or a band) configured in the control apparatus100. Note that, as described above, the control apparatus100according to the first example embodiment is not limited to the network device that transfers data in the communication network10. This will be described later in detail as the sixth example alteration of the first example embodiment. The network state is a state of the communication network (for example, the communication network10). It can also be said that the network state is a state of communication in the communication network. Generation of Work-Related Information The work-related information is, for example, generated based on a log of the human work. As described above, the human work is, for example, a change of a network control parameter, and in this case, the work-related information is generated based on the log of the change of the network control parameter. For example, the work information included in the work-related information is directly generated from the log, and the network state information included in the work-related information is generated from packet capture information corresponding to the log. FIG.6illustrates an example of a work log of a change of a network control parameter according to the first example embodiment. With reference toFIG.6, the work log includes times at which a set of a parameter21and a parameter23(for example, a set of priority and a band) being network control parameters is changed and their change values. In this example, the set of the parameter21and the parameter23is changed at time25and time27. For example, at the time27, the parameter23is changed from a to b.
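A minimal sketch of the derivation described next: the work information is read directly from log entries like those of FIG.6, while the network state information is summarized from packet-capture timestamps in a predetermined time period immediately before each change. The log layout, the window length, and the statistics chosen are assumptions for illustration.

```python
import statistics

def derive_sample(log_entry, capture_times, window_s=10.0):
    """log_entry: hypothetical dict with the change time and change values,
    e.g. {"time": 27.0, "change": {"parameter23": ("a", "b")}}.
    capture_times: packet arrival times (seconds) from packet capture.
    Returns (network state information, work information)."""
    t = log_entry["time"]
    # Packets captured in the predetermined time period before the change.
    window = [x for x in capture_times if t - window_s <= x < t]
    intervals = [b - a for a, b in zip(window, window[1:])]
    state_info = {
        "mean_arrival_interval": statistics.mean(intervals),
        "max_arrival_interval": max(intervals),
        "stdev_arrival_interval": statistics.pstdev(intervals),
    }
    work_info = log_entry["change"]   # directly generated from the log
    return state_info, work_info
```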
For example, from the work log as described above, the change of the network control parameters can be known. Thus, the work information indicating the change of the network control parameter can be directly generated from the work log as described above. In addition, from the packet capture information of a predetermined time period immediately before the time at which the network control parameter is changed (for example, the time27at which the parameter23is changed), the network state (for example, throughput and/or a packet arrival interval) corresponding to the change of the network control parameter (for example, the parameter23) can be known. For example, from the packet capture information, statistical value(s) (for example, an average value, a mode, a median, a maximum value, a minimum value, a variance, a standard deviation, and/or the like) of the network state in the predetermined time period can be calculated. Thus, the network state information indicating the network state (for example, the statistical value) corresponding to the change of the network control parameter may be generated from the packet capture information identified from the work log. Note that, for example, probability density distribution of the network state in the predetermined time period as illustrated inFIG.7may be generated for calculation of the statistical value and be used. (2) Obtaining of Work-Related Information As described above, the control apparatus100(obtaining means110) obtains the work-related information. For example, the work-related information is (manually or automatically) generated in an apparatus other than the control apparatus100, and is provided to the control apparatus100. Then, the control apparatus100(obtaining means110) obtains the work-related information. Note that, in the first example embodiment, the method of obtaining the work-related information is not limited to the example described above. 
This will be described later in detail as a fourth example alteration of the first example embodiment. (3) Training As described above, the control apparatus100(training means120) trains the machine learning based controller130for controlling communication in the communication network10, based on the work-related information. For example, the control apparatus100(training means120) trains the machine learning based controller130by using the network state information included in the work-related information as input data and using the work information included in the work-related information as output data corresponding to the input data. Specifically, for example, the control apparatus100(training means120) trains the machine learning based controller130by providing the network state information to the machine learning based controller130as input data and the work information as output data corresponding to the input data. For example, the machine learning based controller130is a supervised learning based controller, and the control apparatus100(training means120) trains the machine learning based controller130by using the work-related information as training data of supervised learning. Specifically, for example, the training data includes input data, and correct answer data (output data) corresponding to the input data. The control apparatus100(training means120) provides the network state information to the machine learning based controller130as the input data, and provides the work information to the machine learning based controller130as the correct answer data (the output data). The training data may be referred to as teaching data. Owing to such training, the machine learning based controller130can learn how the network control parameter is to be changed according to the network state, based on human work (change of the network control parameter) in network operation. 
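The training just described can be sketched as follows. A 1-nearest-neighbour lookup stands in for the supervised learner purely to keep the sketch self-contained (the source names a neural network, support vector machine, or decision tree); the class name and state vectors are hypothetical.

```python
class SupervisedControllerSketch:
    """Stand-in for the machine learning based controller130: it is trained
    on (network state information, work information) pairs and later selects
    a network-control-parameter change for an observed state."""

    def __init__(self):
        self._pairs = []   # [(state vector, parameter change), ...]

    def train(self, state, work):
        # state = input data, work = correct answer (output) data
        self._pairs.append((list(state), work))

    def select_change(self, state):
        # Output the change recorded for the most similar past state.
        dist = lambda p: sum((a - b) ** 2 for a, b in zip(p[0], state))
        return min(self._pairs, key=dist)[1]
```

After training on, say, (throughput, packet arrival interval) states, select_change returns the learned change for a similar observed state.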
As a result, the machine learning based controller130can control communication in the communication network10similarly to human work. Thus, for example, by using the machine learning based controller130for control of communication in the communication network10, control of communication in the communication network10can be stabilized. Note that the machine learning based controller130according to the first example embodiment is not limited to the supervised learning based controller. This will be described later in detail as a fifth example alteration of the first example embodiment. (4) Flow of Processing FIG.8is a flowchart for illustrating an example of a general flow of training processing according to the first example embodiment. The control apparatus100(obtaining means110) obtains work-related information related to human work in network operation (S310). The control apparatus100(training means120) trains the machine learning based controller130for controlling communication in the communication network10, based on the work-related information (S320). (5) Operation after Training The machine learning based controller130is used for control of communication in the communication network10after training based on the work-related information. Specifically, for example, the machine learning based controller130selects a change of the network control parameter (for example, priority and/or a band) from the network state (for example, throughput and/or a packet arrival interval) in the communication network10, and outputs the change. As described above, for example, the control apparatus100is a network device (for example, a proxy server, a gateway, a router, a switch, and/or the like) that transfers data in the communication network10. 
In this case, the network state is a network state (for example, throughput and/or a packet arrival interval) observed in the control apparatus100, and for example, the network control parameter is a network control parameter (for example, priority and/or a band) configured in the control apparatus100. In other words, the machine learning based controller130selects a change of the network control parameter configured in the control apparatus100from the network state observed in the control apparatus100, and outputs the change. The control apparatus100(configuring means140) configures the changed network control parameter in the control apparatus100, according to the selected change of the network control parameter. As a result, the control apparatus100(communication processing means150) transfers data (for example, packets) according to the changed network control parameter. In this manner, for example, the machine learning based controller130controls the communication in the communication network10by selecting a change of the network control parameter. 2.4. Example Alterations First to seventh example alterations of the first example embodiment will be described. Note that two or more example alterations of the first to seventh example alterations of the first example embodiment may be combined. (1) First Example Alteration As described above, the machine learning based controller130is trained based on the work-related information related to human work in network operation. As described above, the network operation is, for example, network operation of the communication network10. However, the first example embodiment is not limited to the example described above. In the first example alteration of the first example embodiment, the network operation may be network operation of another communication network different from the communication network10. 
In other words, the machine learning based controller130may be trained based on the work-related information related to human work in such another communication network. Such another communication network may be a network similar to the communication network10. In this manner, for example, even when there is no past performance of operation of the communication network10, the machine learning based controller that can be used for control of communication in the communication network10can be achieved. (2) Second Example Alteration As described above, the work-related information includes, for example, work information indicating the human work and network state information indicating a network state corresponding to the human work. As described above, the human work is, for example, a change of the network control parameter, and the work information indicates increase or decrease of the network control parameter, for example, as the change of the network control parameter. In addition, as described above, as an example, the network state is throughput and/or a packet arrival interval, and the network control parameter is priority and/or a band. As described above, for example, the network control parameter is a parameter for each flow, and the network state is also a network state for each flow. However, as a matter of course, the first example embodiment is not limited to the example described above. In the second example alteration of the first example embodiment, first, the network control parameter need not be a parameter for each flow, and the network state need not be a network state for each flow either. The network control parameter may be a parameter regarding the entire communication that may include a plurality of flows, and the network state may also be a network state regarding the entire communication. The network state need not be throughput and/or a packet arrival interval, and the network control parameter need not be priority and/or a band. 
A combination of the network state (NW state) and the network control parameter (NW control parameter) may be as follows:
[Example 1 (Example of Control of Transmission Control Protocol (TCP) Flow)]
[NW State] Number of active flows, available band, and/or previous buffer size of Internet Protocol (IP)
[NW Control Parameter] Transmission buffer size
[Example 2 (Example of Control of Flow Rate of Video Traffic)]
[NW State] Quality of experience (QoE) of video (for example, a bit rate of a video and/or resolution of a video)
[NW Control Parameter] Upper limit of throughput
[Example 3 (Example of Robot Control)]
[NW State] Packet arrival interval and/or statistical value of packet size (for example, a maximum value, a minimum value, an average value, a standard deviation, or the like)
[NW Control Parameter] Packet transmission interval
In addition, as the change of the network control parameter, the work information may indicate the changed value itself of the network control parameter, instead of indicating increase or decrease of the network control parameter. For example, with reference toFIG.6again, as the change of the parameter at the time27, the changed value (a, b) of the set of the parameter21and the parameter23may be indicated, instead of indicating increase or decrease (for example, b−a) of the parameter23. (3) Third Example Alteration As described above, the work-related information is, for example, generated based on a log of the human work. However, the first example embodiment is not limited to the example described above. In the third example alteration of the first example embodiment, the work-related information may be generated based on a work standard for the human work. As described above, the human work is, for example, a change of a network control parameter, and in this case, the work-related information may be generated based on a work standard for the change of the network control parameter.
The work standard may be a rule for human work in network operation, or may be know-how or reference information for human work in network operation. For example, the work standard may include a network state and a change (specifically, human work) of a network control parameter corresponding to the network state, and a sample of a set of the network state and the change (specifically, human work) of the network control parameter may be generated based on the work standard as the work-related information (the network state information and the work information). In this manner, even if there is no work log, training data (specifically, the work-related information) can be generated. (4) Fourth Example Alteration As described above, for example, the work-related information is (manually or automatically) generated in an apparatus other than the control apparatus100, and is provided to the control apparatus100. However, the first example embodiment is not limited to the example described above. In the fourth example alteration of the first example embodiment, the work-related information may be generated by the control apparatus100. In this case, the control apparatus100may further include a generating means, and the control apparatus100(generating means) may generate the work-related information. (5) Fifth Example Alteration As described above, for example, the machine learning based controller130is a supervised learning based controller. However, the first example embodiment is not limited to the example described above. In the fifth example alteration of the first example embodiment, the machine learning based controller130may be a reinforcement learning based controller that outputs an action based on an input state. In this case, the control apparatus100(training means120) may train the machine learning based controller130, considering the work-related information as an input state and an output action in reinforcement learning. 
Specifically, for example, the control apparatus100(training means120) may train the machine learning based controller130(reinforcement learning based controller) by using the network state information as the input state and using the work information as the output action. In other words, the control apparatus100(training means120) may provide the network state information to the machine learning based controller130as the input state, and provide the work information to the machine learning based controller130as the output action. The work-related information may further include reward information indicating a reward corresponding to the human work, in addition to the network state information and the work information. The work-related information may include a plurality of sets of the work information, the network state information, and the reward information. The control apparatus100(training means120) may train the machine learning based controller130(reinforcement learning based controller), considering the work-related information as an input state, an output action, and an obtained reward in reinforcement learning. Specifically, for example, the control apparatus100(training means120) may train the machine learning based controller130by using the reward information as the obtained reward. In other words, the control apparatus100(training means120) may provide the reward information to the machine learning based controller130as the obtained reward. The reward indicated by the reward information may be constant regardless of human work (change of the network control parameter) corresponding to the reward. In other words, the human work may be considered worth a certain reward.
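Under the constant-reward reading above, replaying logged human work can advance reinforcement learning before deployment. The sketch below applies a standard Q-learning update per logged (input state, output action, next state) set; the table layout, learning rate, and discount factor are assumptions, not values from this disclosure.

```python
def pretrain_from_work(q_table, sets, reward=1.0, alpha=0.1, gamma=0.9):
    """q_table: state -> {action: value}. sets: logged work-related
    information as (input state, output action, next state) tuples; each
    human work is considered worth the same constant reward."""
    for state, action, next_state in sets:
        best_next = max(q_table[next_state].values())
        q = q_table[state][action]
        # Q-learning update using the logged action and the constant reward.
        q_table[state][action] = q + alpha * (reward + gamma * best_next - q)
    return q_table
```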
Alternatively, the reward indicated by the reward information may be calculated according to a standard of the reward of reinforcement learning from the packet capture information in a predetermined time period after human work (change of the network control parameter) corresponding to the reward. Training the reinforcement learning based controller (in other words, the machine learning based controller130) by using the work-related information related to the human work as described above can cause reinforcement learning to proceed in advance. Thus, learning in the reinforcement learning based controller (in other words, the machine learning based controller130) can converge without requiring a long period of time after starting to use the reinforcement learning based controller (in other words, the machine learning based controller130) for control of communication in the communication network10. Therefore, control of communication in the communication network10can be stabilized. (6) Sixth Example Alteration As described above, for example, the control apparatus100is a network device (for example, a proxy server, a gateway, a router, a switch, and/or the like) that transfers data in the communication network10(seeFIG.9). As described above, for example, when the machine learning based controller130selects a change of a network control parameter, the control apparatus100(configuring means140) configures the changed network control parameter in the control apparatus100(seeFIG.9). However, the control apparatus100according to the first example embodiment is not limited to the example described above. First Example In the sixth example alteration of the first example embodiment, as a first example, as illustrated inFIG.10, the control apparatus100may be an apparatus (for example, a network controller) that controls a network device30that transfers data in the communication network10, instead of a network device itself that transfers data in the communication network10. 
The machine learning based controller130may select a change of the network control parameter (for example, priority and/or a band) configured in the network device30from the network state (for example, throughput and/or a packet arrival interval) observed in the network device30, and output the change. As illustrated inFIG.10, when the machine learning based controller130selects a change of the network control parameter, the control apparatus100(configuring means140) may cause the network device30to configure the changed network control parameter. As an example, the control apparatus100(configuring means140) may transmit parameter information (for example, a command for instructing a change of the network control parameter) indicating a change of the network control parameter to the network device30, and the network device30may configure the changed network control parameter, based on the parameter information. As a result, the network device30may transfer data (for example, packets) according to the changed network control parameter. Second Example As a second example, as illustrated inFIG.11, a network controller50may control a network device40that transfers data in the communication network10, and the control apparatus100may be an apparatus that controls or assists the network controller50. The network device40may observe the network state, without the control apparatus100itself observing the network state of the communication network10. The control apparatus100may obtain information indicating the network state from the network device40or the network controller50. The machine learning based controller130may select a change of the network control parameter (for example, priority and/or a band) configured in the network device40from the network state (for example, throughput and/or a packet arrival interval) observed in the network device40, and output the change. 
As illustrated inFIG.11, when the machine learning based controller130selects a change of the network control parameter, the control apparatus100(configuring means140) may transmit first parameter information (for example, a command for instructing a change of the network control parameter or assist information reporting a change of the network control parameter) indicating a change of the network control parameter to the network controller50. In addition, the network controller50may transmit second parameter information (for example, a network command for instructing a change of the control parameter) indicating a change of the network control parameter to the network device40based on the first parameter information, and the network device40may configure the changed network control parameter, based on the second parameter information. As a result, the network device40may transfer data (for example, packets) according to the changed network control parameter. Third Example As a third example, as illustrated inFIG.12, a network controller70may control a network device60that transfers data in the communication network10, and the control apparatus100may be an apparatus that controls the network controller70. The network device60may observe the network state, without the control apparatus100itself observing the network state of the communication network10. The control apparatus100may obtain information indicating the network state from the network device60or the network controller70. The machine learning based controller130may select a change of the network control parameter configured in the network controller70from the network state observed in the network device60, and output the change. As illustrated inFIG.12, when the machine learning based controller130selects a change of the network control parameter, the control apparatus100(configuring means140) may cause the network controller70to configure the changed network control parameter. 
As an example, the control apparatus100(configuring means140) may transmit parameter information (for example, a command for instructing a change of the network control parameter) indicating a change of the network control parameter to the network controller70, and the network controller70may configure the changed network control parameter, based on the parameter information. As a result, the network controller70may control the network device60according to the changed network control parameter, and the network device60may transfer data (for example, packets) according to control by the network controller70. (7) Seventh Example Alteration As described above, for example, the control apparatus100includes the obtaining means110, the training means120, the machine learning based controller130, the configuring means140, and the communication processing means150. However, the control apparatus100according to the first example embodiment is not limited to the example described above. In the seventh example alteration of the first example embodiment, for example, the machine learning based controller130may be included in another apparatus instead of being included in the control apparatus100. In this case, the control apparatus100(training means120) may train the machine learning based controller130by providing the work-related information to the machine learning based controller130included in such another apparatus. The configuring means140may also be included in such another apparatus instead of being included in the control apparatus100. Note that, when the machine learning based controller130is not included in the control apparatus100, in the description in the sixth example alteration, the “control apparatus100” may be replaced by an “apparatus including the machine learning based controller130”. In the seventh example alteration of the first example embodiment, for example, the configuring means140may be included in the machine learning based controller130. 
In other words, the machine learning based controller130may perform the operation of the configuring means140described above. In the seventh example alteration of the first example embodiment, for example, the communication processing means150that transfers data (for example, packets) may be included in another apparatus instead of being included in the control apparatus100. For example, in a case as in the sixth example alteration, the communication processing means150may be included in a network device instead of being included in the control apparatus100. As described in the fourth example alteration, the control apparatus100may further include a generating means. 3. Second Example Embodiment With reference toFIG.13toFIG.16, a second example embodiment of the present disclosure will be described. 3.1. Configuration of System FIG.13illustrates an example of a schematic configuration of a system2according to the second example embodiment. With reference toFIG.13, the system2includes a communication network10and a control apparatus400. (1) Communication Network10 Description regarding the communication network10is the same as the description regarding the communication network10of the first example embodiment. Thus, overlapping description will be omitted here. (2) Control Apparatus400 The control apparatus400performs control for the communication network10. For example, the control apparatus400includes a machine learning based controller and a reinforcement learning based controller for controlling communication in the communication network10. For example, the machine learning based controller is a supervised learning based controller. In particular, in the second example embodiment, for example, the control apparatus400further includes a reinforcement learning based controller for controlling communication in the communication network10. 
For example, the control apparatus400is a network device (for example, a proxy server, a gateway, a router, a switch, and/or the like) that transfers data in the communication network10. Note that the control apparatus400according to the second example embodiment is not limited to the network device that transfers data in the communication network10. This will be described later in detail as the seventh example alteration of the second example embodiment. 3.2. Configuration of Control Apparatus (1) Functional Configuration FIG.14is a block diagram illustrating an example of a schematic functional configuration of the control apparatus400according to the second example embodiment. With reference toFIG.14, the control apparatus400includes a first obtaining means410, a training means420, a machine learning based controller430, a reinforcement learning based controller440, a configuring means442, a communication processing means444, an observing means450, a determining means460, a second obtaining means470, and a selecting means480. The operation of each of the first obtaining means410, the training means420, the machine learning based controller430, the reinforcement learning based controller440, the configuring means442, the communication processing means444, the observing means450, the determining means460, the second obtaining means470, and the selecting means480will be described later. (2) Hardware Configuration FIG.15is a block diagram illustrating an example of a schematic hardware configuration of the control apparatus400according to the second example embodiment. With reference toFIG.15, the control apparatus400includes a processor510, a main memory520, a storage530, a communication interface540, and an input/output interface550. The processor510, the main memory520, the storage530, the communication interface540, and the input/output interface550are connected to each other via a bus560. The processor510executes a program read from the main memory520. 
As an example, the processor510is a CPU. The main memory520stores programs and various pieces of data. As an example, the main memory520is a RAM. The storage530stores a program and various pieces of data. As an example, the storage530includes an SSD and/or an HDD. The communication interface540is an interface for communication with another apparatus. As an example, the communication interface540is a network adapter or a network interface card. The input/output interface550is an interface for connection with an input apparatus such as a keyboard, and an output apparatus such as a display. Each of the first obtaining means410, the training means420, the machine learning based controller430, the reinforcement learning based controller440, the configuring means442, the communication processing means444, the observing means450, the determining means460, the second obtaining means470, and the selecting means480may be implemented with the processor510and the main memory520, or may be implemented with the processor510, the main memory520, and the communication interface540. As a matter of course, the hardware configuration of the control apparatus400is not limited to the example described above. The control apparatus400may be implemented with another hardware configuration. Alternatively, the control apparatus400may be virtualized. In other words, the control apparatus400may be implemented as a virtual machine. In this case, the control apparatus400(virtual machine) may operate as a physical machine (hardware) including a processor, a memory, and the like, and a virtual machine on a hypervisor. As a matter of course, the control apparatus400(virtual machine) may be distributed into a plurality of physical machines for operation. The control apparatus400may include a memory (main memory520) that stores a program (instructions), and one or more processors (processors510) that can execute the program (instructions). 
The one or more processors may execute the program to perform the operations of the first obtaining means410, the training means420, the machine learning based controller430, the reinforcement learning based controller440, the configuring means442, the communication processing means444, the observing means450, the determining means460, the second obtaining means470, and/or the selecting means480. The program may be a program for causing the processor(s) to execute the operations of the first obtaining means410, the training means420, the machine learning based controller430, the reinforcement learning based controller440, the configuring means442, the communication processing means444, the observing means450, the determining means460, the second obtaining means470, and/or the selecting means480. 3.3. First Operation (Training of Machine Learning Based Controller) The control apparatus400(first obtaining means410) obtains work-related information related to human work in network operation. The control apparatus400(training means420) trains the machine learning based controller430for controlling communication in the communication network10, based on the work-related information. In other words, similarly to training of the machine learning based controller130in the first example embodiment, in the second example embodiment, the machine learning based controller430is trained based on the work-related information. Description regarding “(1) Work-Related Information”, “(2) Obtaining of Work-Related Information”, “(3) Training”, “(4) Flow of Processing”, and “(5) Operation after Training” according to the second example embodiment is the same as the description regarding those according to the first example embodiment except for differences of the reference signs. Thus, overlapping description will be omitted here. 
Note that, regarding the differences of the reference signs, the control apparatus100, the obtaining means110, the training means120, the machine learning based controller130, the configuring means140, and the communication processing means150according to the first example embodiment correspond to the control apparatus400, the first obtaining means410, the training means420, the machine learning based controller430, the configuring means442, and the communication processing means444according to the second example embodiment, respectively. Note that, in the second operation (selection of controller) described below, the machine learning based controller430is a machine learning based controller trained based on the work-related information. Unlike the machine learning based controller130in the fifth example alteration of the first example embodiment, the machine learning based controller430according to the second example embodiment is not a reinforcement learning based controller. The machine learning based controller430according to the second example embodiment is, for example, a supervised learning based controller similarly to the machine learning based controller130in the main example of the first example embodiment. 3.4. Second Operation (Selection of Controller) For example, the control apparatus400(selecting means480) selects one of the reinforcement learning based controller440for controlling communication in the communication network10and the machine learning based controller430for controlling communication in the communication network10, based on information related to the state of the communication network10. In other words, the control apparatus400(selecting means480) selects one controller used for control of communication in the communication network10out of the machine learning based controller430and the reinforcement learning based controller440. 
FIG.16is a flowchart for illustrating an example of a general flow of controller selection processing according to the second example embodiment. In the following, with reference toFIG.16, the operation for selecting the controller will be described. (1) Observation (S610) For example, the control apparatus400(observing means450) observes the communication network10(S610). More specifically, for example, the control apparatus400(observing means450) observes throughput in the communication network10and/or a packet loss rate in the communication network10. For example, the control apparatus400is a network device that transfers data in the communication network10, the throughput to be observed is the throughput in the control apparatus400, and the packet loss rate to be observed is the packet loss rate in the control apparatus400. For example, the control apparatus400(observing means450) generates observation information regarding the communication network10. The observation information indicates results of observation of the communication network10. More specifically, for example, the observation information indicates throughput in the communication network10and/or a packet loss rate in the communication network10. (2) Determination (S620) For example, the control apparatus400(determining means460) determines a state of the communication network10(S620). State of Communication Network10 For example, the state to be determined is a congestion state of the communication network10. In other words, the control apparatus400(determining means460) determines a congestion state of the communication network10. More specifically, for example, the control apparatus400(determining means460) determines whether the communication network10is congested over a certain level. Note that the state determined here (the state of the communication network10) is merely a state determined for selecting the controller, and does not refer to the “state” that serves as input in reinforcement learning.
Determination Method For example, the control apparatus400(determining means460) determines the state of the communication network10, based on the observation information regarding the communication network10. As described above, for example, the observation information indicates throughput in the communication network10and/or a packet loss rate in the communication network10. In this case, the control apparatus400(determining means460) determines the state of the communication network10(for example, whether the communication network10is congested over the certain level), based on the throughput in the communication network10and/or the packet loss rate in the communication network10. As an example, when the throughput in the communication network10is smaller than a predetermined threshold, or when the packet loss rate in the communication network10is larger than a predetermined threshold, the control apparatus400(determining means460) determines that the communication network10is congested over the certain level. Otherwise, the control apparatus400(determining means460) determines that the communication network10is not congested over the certain level. Alternatively, when the throughput in the communication network10is smaller than the predetermined threshold, and the packet loss rate in the communication network10is larger than the predetermined threshold, the control apparatus400(determining means460) may determine that the communication network10is congested over the certain level. Otherwise, the control apparatus400(determining means460) may determine that the communication network10is not congested over the certain level. As a matter of course, the control apparatus400(determining means460) may determine whether the communication network10is congested over the certain level, based on only one of the throughput and the packet loss rate, not based on both of the throughput and the packet loss rate as described above. 
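The threshold-based determination described above might be sketched as follows. The threshold values and the function signature are illustrative assumptions; the disclosure names the comparisons but not concrete thresholds. Either observation may be omitted, covering the case where only one of throughput and packet loss rate is used.

```python
def is_congested(throughput_mbps=None, packet_loss_rate=None,
                 throughput_threshold=10.0, loss_threshold=0.05,
                 require_both=False):
    """Determining-means step: decide whether the network is congested over
    a certain level, based on throughput and/or packet loss rate.
    Pass None for an observation that is not available."""
    low_throughput = (throughput_mbps is not None
                      and throughput_mbps < throughput_threshold)
    high_loss = (packet_loss_rate is not None
                 and packet_loss_rate > loss_threshold)
    if require_both:
        # Alternative described in the text: congested only when both hold.
        return low_throughput and high_loss
    # Main example: congested when either condition holds.
    return low_throughput or high_loss

print(is_congested(throughput_mbps=5.0, packet_loss_rate=0.01))   # True (low throughput)
print(is_congested(throughput_mbps=50.0, packet_loss_rate=0.01))  # False
print(is_congested(throughput_mbps=5.0, packet_loss_rate=0.01,
                   require_both=True))                            # False
```

Calling the function with only one observation, for example `is_congested(packet_loss_rate=0.1)`, corresponds to determining congestion based on only one of the two metrics.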
Information (State Information) Related to State of Communication Network10 For example, the control apparatus400(determining means460) generates information (hereinafter referred to as “state information”) related to the state (specifically, the determined state) of the communication network10. Note that the “state information” here is information different from the “network state information” (specifically, information indicating the network state corresponding to the human work, which is information included in the work-related information) described in the first example embodiment. For example, the state information indicates the state of the communication network10(in other words, the determined state). More specifically, for example, the state information indicates whether the communication network10is congested over the certain level. Note that the state information is not limited to the example described above. This will be described later in detail as the fifth example alteration of the second example embodiment. (3) Selection (S630) The control apparatus400(second obtaining means470) obtains the state information. The control apparatus400(selecting means480) selects one of the machine learning based controller430and the reinforcement learning based controller440, based on the state information (S630). In other words, the control apparatus400(selecting means480) selects one controller used for control of communication in the communication network10out of the machine learning based controller430and the reinforcement learning based controller440, based on the state information. Through the selection as above, the machine learning based controller430and the reinforcement learning based controller440are selectively used for control of communication in the communication network10. 
For example, the control apparatus400(selecting means480) selects the machine learning based controller430when the communication network10is congested over the certain level, and selects the reinforcement learning based controller440when the communication network10is not congested over the certain level. In other words, when the communication network10is congested over the certain level, the machine learning based controller430trained based on the work-related information related to the human work in network operation is used, or otherwise, the reinforcement learning based controller440is used. Note that the selected controller (the machine learning based controller430or the reinforcement learning based controller440) is used for control of communication in the communication network10. Specifically, for example, the selected controller selects a change of the network control parameter (for example, priority and/or a band) from the network state (for example, throughput and/or a packet arrival interval) in the communication network10, and outputs the change. As described above, for example, the control apparatus400is a network device (for example, a proxy server, a gateway, a router, a switch, and/or the like) that transfers data in the communication network10, and the control apparatus400(configuring means442) configures the changed network control parameter in the control apparatus400according to the selected change of the network control parameter. As a result, the control apparatus400(communication processing means444) transfers data (for example, packets) according to the changed network control parameter. In this manner, for example, by selecting a change of the network control parameter, the selected controller (the machine learning based controller430or the reinforcement learning based controller440) controls communication in the communication network10. In the above, selection of the controller according to the second example embodiment is described. 
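The selection rule above (the supervised, work-trained controller under congestion, the reinforcement learning based controller otherwise) can be sketched as a small dispatch function. The state-information dictionary key is an assumption for the sketch; the disclosure does not fix a representation of the state information.

```python
def select_controller(state_info: dict, ml_controller, rl_controller):
    """Selecting-means step: pick the controller used for control of
    communication, based on state information."""
    if state_info["congested_over_certain_level"]:
        # Under heavy congestion, fall back to the controller trained on
        # human work, which behaves similarly to a human operator.
        return ml_controller
    # Otherwise, use the reinforcement learning based controller.
    return rl_controller

ml, rl = object(), object()  # placeholders for the two controllers
chosen = select_controller({"congested_over_certain_level": True}, ml, rl)
print(chosen is ml)  # True
```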
When the communication network10is extremely congested, the network state is unstable, and if the reinforcement learning based controller440is used, erroneous learning may occur in reinforcement learning, and learning may not converge. As a result, using the reinforcement learning based controller440may make control of communication in the communication network10unstable. However, according to the selection of the controller described above, when the communication network10is extremely congested, the machine learning based controller430can be used, and control of communication in the communication network10can be performed similarly to the human work. Therefore, control of communication in the communication network10can be stabilized. In addition, according to the selection of the controller described above, when the communication network10is not extremely congested, the reinforcement learning based controller440can be used, and optimal control of communication in the communication network10can be performed. Therefore, control of communication in the communication network10can be stabilized. 3.5. Example Alterations Description regarding the first to fourth example alterations of the second example embodiment is the same as the description regarding the first to fourth example alterations of the first example embodiment except for differences of the reference signs. Thus, overlapping description will be omitted here. Note that, regarding the differences of the reference signs, the control apparatus100and the machine learning based controller130according to the first to fourth example alterations of the first example embodiment correspond to the control apparatus400and the machine learning based controller430according to the first to fourth example alterations of the second example embodiment, respectively. In the following, the fifth to eighth example alterations of the second example embodiment will be described.
Note that two or more example alterations of the first to eighth example alterations of the second example embodiment may be combined. (1) Fifth Example Alteration As described above, for selection of the controller, the information (specifically, the state information) related to the state of the communication network10is used, and for example, the state information indicates the state of the communication network10(for example, whether the communication network10is congested over the certain level). However, the state information according to the second example embodiment is not limited to the example described above. In the fifth example alteration of the second example embodiment, the state information need not indicate the state itself of the communication network10(for example, whether the communication network10is congested over the certain level). For example, the state information may be information corresponding to the state of the communication network10, although not indicating the state itself of the communication network10. As an example, the state information may be a flag corresponding to whether the communication network10is congested over the certain level, without indicating whether the communication network10is congested over the certain level. (2) Sixth Example Alteration As described above, the machine learning based controller430is trained based on the work-related information. In the sixth example alteration of the second example embodiment, in addition to the machine learning based controller430, the reinforcement learning based controller440may also be trained based on the work-related information. For example, the reinforcement learning based controller440may be trained based on the work-related information, as in the case with the training described in the fifth example alteration of the first example embodiment. 
Training the reinforcement learning based controller440by using the work-related information related to the human work as described above can cause reinforcement learning to proceed in advance. Thus, learning in the reinforcement learning based controller440can converge without requiring a long period of time after starting to use the reinforcement learning based controller440for control of communication in the communication network10. Therefore, control of communication in the communication network10can be further stabilized. (3) Seventh Example Alteration The seventh example alteration will be described with reference toFIG.9toFIG.12again. In the description of the seventh example alteration, the “control apparatus100” is replaced with the “control apparatus400” in those figures. As described above, for example, the control apparatus400is a network device (for example, a proxy server, a gateway, a router, a switch, and/or the like) that transfers data in the communication network10(seeFIG.9). As described above, for example, when the selected controller (the machine learning based controller430or the reinforcement learning based controller440) selects a change of the network control parameter, the control apparatus400(configuring means442) configures the changed network control parameter in the control apparatus400(seeFIG.9). However, the control apparatus400according to the second example embodiment is not limited to the example described above. First Example In the seventh example alteration of the second example embodiment, as a first example, as illustrated inFIG.10, the control apparatus400may be an apparatus (for example, a network controller) that controls a network device30that transfers data in the communication network10, instead of a network device itself that transfers data in the communication network10. 
The network device30may observe the communication network10, without the control apparatus400(observing means450) itself observing the communication network10. The control apparatus400(observing means450) may obtain observation information regarding the communication network10from the network device30. The selected controller (the machine learning based controller430or the reinforcement learning based controller440) may select a change of the network control parameter (for example, priority and/or a band) configured in the network device30from the network state (for example, throughput and/or a packet arrival interval) observed in the network device30, and output the change. As illustrated inFIG.10, when the selected controller selects a change of the network control parameter, the control apparatus400(configuring means442) may cause the network device30to configure the changed network control parameter. As an example, the control apparatus400(configuring means442) may transmit the parameter information (for example, a command for instructing a change of the network control parameter) indicating a change of the network control parameter to the network device30, and the network device30may configure the changed network control parameter, based on the parameter information. As a result, the network device30may transfer data (for example, packets) according to the changed network control parameter. Second Example As the second example, as illustrated inFIG.11, the network controller50may control the network device40that transfers data in the communication network10, and the control apparatus400may be an apparatus that controls or assists the network controller50. The network device40may observe the network state, without the control apparatus400itself observing the network state of the communication network10. The control apparatus400may obtain information indicating the network state from the network device40or the network controller50. 
The selected controller (the machine learning based controller430or the reinforcement learning based controller440) may select a change of the network control parameter (for example, priority and/or a band) configured in the network device40from the network state (for example, throughput and/or a packet arrival interval) observed in the network device40, and output the change. As illustrated inFIG.11, when the selected controller (the machine learning based controller430or the reinforcement learning based controller440) selects a change of the network control parameter, the control apparatus400(configuring means442) may transmit first parameter information (for example, a command for instructing a change of the network control parameter or assist information reporting a change of the network control parameter) indicating a change of the network control parameter to the network controller50. In addition, the network controller50may transmit second parameter information (for example, a network command for instructing a change of the control parameter) indicating a change of the network control parameter to the network device40based on the first parameter information, and the network device40may configure the changed network control parameter, based on the second parameter information. As a result, the network device40may transfer data (for example, packets) according to the changed network control parameter. Third Example As the third example, as illustrated inFIG.12, the network controller70may control the network device60that transfers data in the communication network10, and the control apparatus400may be an apparatus that controls the network controller70. The network device60may observe the network state, without the control apparatus400itself observing the network state of the communication network10. The control apparatus400may obtain information indicating the network state from the network device60or the network controller70. 
The selected controller (the machine learning based controller430or the reinforcement learning based controller440) may select a change of the network control parameter configured in the network controller70from the network state observed in the network device60, and output the change. As illustrated inFIG.12, when the selected controller selects a change of the network control parameter, the control apparatus400(configuring means442) may cause the network controller70to configure the changed network control parameter. As an example, the control apparatus400(configuring means442) may transmit parameter information (for example, a command for instructing a change of the network control parameter) indicating a change of the network control parameter to the network controller70, and the network controller70may configure the changed network control parameter, based on the parameter information. As a result, the network controller70may control the network device60according to the changed network control parameter, and the network device60may transfer data (for example, packets) according to control by the network controller70. (4) Eighth Example Alteration As described above, for example, the control apparatus400includes the first obtaining means410, the training means420, the machine learning based controller430, the reinforcement learning based controller440, the configuring means442, the communication processing means444, the observing means450, the determining means460, the second obtaining means470, and the selecting means480. However, the control apparatus400according to the second example embodiment is not limited to the example described above. In the eighth example alteration of the second example embodiment, for example, the first obtaining means410and the training means420may be included in another apparatus, instead of being included in the control apparatus400. 
In other words, training of the machine learning based controller430may be performed in such another apparatus. In the eighth example alteration of the second example embodiment, for example, the observing means450may be included in another apparatus instead of being included in the control apparatus400. In this case, the control apparatus400may receive observation information regarding the communication network10from such another apparatus. In addition, for example, the determining means460may also be included in such another apparatus instead of being included in the control apparatus400. In this case, the control apparatus400may receive information (specifically, the state information) related to the state of the communication network10from such another apparatus. In the eighth example alteration of the second example embodiment, for example, at least one of the machine learning based controller430and the reinforcement learning based controller440may be included in another apparatus instead of being included in the control apparatus400. In this case, the control apparatus400may notify such another apparatus of the results of selection of the controller. The configuring means442may also be included in such another apparatus instead of being included in the control apparatus400. Note that, when at least one of the machine learning based controller430and the reinforcement learning based controller440is not included in the control apparatus400, in the description of the sixth example alteration, the “control apparatus400” may be replaced with an “apparatus including at least one of the machine learning based controller430and the reinforcement learning based controller440”. In the eighth example alteration of the second example embodiment, for example, the configuring means442may be included in each of the machine learning based controller430and the reinforcement learning based controller440.
In other words, each of the machine learning based controller 430 and the reinforcement learning based controller 440 may perform operation of the configuring means 442 described above. In the eighth example alteration of the second example embodiment, for example, the communication processing means 444 that transfers data (for example, packets) may be included in another apparatus instead of being included in the control apparatus 400. For example, in a case as in the seventh example alteration, the communication processing means 444 may be included in a network device instead of being included in the control apparatus 400. 4. Third Example Embodiment Next, with reference to FIG. 17 and FIG. 18, a third example embodiment of the present disclosure will be described. The first example embodiment described above is a concrete example embodiment, whereas the third example embodiment is a more generalized example embodiment. FIG. 17 illustrates an example of a schematic configuration of a system 3 according to the third example embodiment. With reference to FIG. 17, the system 3 includes an obtaining means 700 and a training means 800. FIG. 18 is a flowchart for illustrating an example of a general flow of training processing according to the third example embodiment. The obtaining means 700 obtains work-related information related to human work in network operation (S910). The training means 800 trains the machine learning based controller for controlling communication in the communication network, based on the work-related information (S920). Description regarding the work-related information, obtaining of the work-related information, and training are, for example, the same as the description regarding those of the first example embodiment except for differences of the reference signs. Thus, overlapping description will be omitted here. Note that, as a matter of course, the third example embodiment is not limited to the example of the first example embodiment.
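The obtaining step (S910) and training step (S920) above can be sketched under stated assumptions: work-related information is taken to be (network state, operator action) pairs mined from a work log, and the machine learning based controller is trained by supervised learning, using network state information as input data and work information (e.g., increase or decrease of a parameter) as output data. A 1-nearest-neighbor rule stands in for any real model; all names here are illustrative.

```python
def obtain_work_related_information(work_log):
    # S910: obtaining means -- extract (network state, work information) pairs
    # from a log of the human work.
    return [(entry["state"], entry["action"]) for entry in work_log]

class MachineLearningBasedController:
    """Trained with network state information as input and work information as output."""
    def __init__(self):
        self.examples = []

    def train(self, pairs):
        # S920: training means -- here, simply memorize the supervised pairs.
        self.examples = list(pairs)

    def predict(self, state):
        # 1-nearest-neighbor over the state features: return the operator action
        # recorded for the most similar logged network state.
        def dist(s):
            return sum((s[k] - state[k]) ** 2 for k in state)
        nearest_state, action = min(self.examples, key=lambda p: dist(p[0]))
        return action

work_log = [
    {"state": {"throughput": 10, "loss": 0.30}, "action": "decrease"},
    {"state": {"throughput": 90, "loss": 0.01}, "action": "increase"},
]
controller = MachineLearningBasedController()
controller.train(obtain_work_related_information(work_log))
```

Given a new network state resembling a congested logged state, the controller reproduces the corresponding human work (here, decreasing the parameter), which is the imitation behavior the training step aims at.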
As described above, the machine learning based controller is trained. In this manner, for example, control of communication in the communication network can be stabilized. Descriptions have been given above of the example embodiments of the present disclosure. However, the present disclosure is not limited to these example embodiments. It should be understood by those of ordinary skill in the art that these example embodiments are merely examples and that various alterations are possible without departing from the scope and the spirit of the present disclosure. For example, the steps in the processing described in the Specification may not necessarily be executed in time series in the order described in the flowcharts. For example, the steps in the processing may be executed in order different from that described in the flowcharts or may be executed in parallel. Some of the steps in the processing may be deleted, or more steps may be added to the processing. Moreover, a method including processing of the constituent elements of the system or the control apparatus described in the Specification may be provided, and programs for causing a processor to execute the processing of the constituent elements may be provided. Moreover, a non-transitory computer readable recording medium (non-transitory computer readable recording media) having recorded thereon the programs may be provided. It is apparent that such methods, programs, and non-transitory computer readable recording media are also included in the present disclosure. The whole or part of the example embodiments disclosed above can be described as, but not limited to, the following supplementary notes. (Supplementary Note 1) A system comprising: an obtaining means for obtaining work-related information related to human work in network operation; and a training means for training a machine learning based controller for controlling communication in a communication network, based on the work-related information.
(Supplementary Note 2) The system according to supplementary note 1, wherein the work-related information includes work information indicating the human work and network state information indicating a network state corresponding to the human work. (Supplementary Note 3) The system according to supplementary note 2, wherein the training means trains the machine learning based controller by using the network state information as input data and using the work information as output data corresponding to the input data. (Supplementary Note 4) The system according to supplementary note 2 or 3, wherein the human work is a change of a network control parameter, and the work information indicates increase or decrease of the network control parameter or a changed value of the network control parameter. (Supplementary Note 5) The system according to any one of supplementary notes 1 to 4, wherein the human work is a change of the network control parameter. (Supplementary Note 6) The system according to any one of supplementary notes 1 to 5, wherein the work-related information is information generated based on a log of the human work or a work standard for the human work. (Supplementary Note 7) The system according to any one of supplementary notes 1 to 6, wherein the network operation is network operation of the communication network. (Supplementary Note 8) The system according to any one of supplementary notes 1 to 7, further comprising a selecting means for selecting one, of a reinforcement learning based controller configured to control communication in the communication network and the machine learning based controller, based on information related to a state of the communication network. (Supplementary Note 9) The system according to supplementary note 8, wherein the state of the communication network is a congestion state of the communication network.
(Supplementary Note 10) The system according to supplementary note 9, wherein the selecting means selects the machine learning based controller when the communication network is congested over a certain level, and the selecting means selects the reinforcement learning based controller when the communication network is not congested over the certain level. (Supplementary Note 11) The system according to any one of supplementary notes 8 to 10, further comprising a determining means for determining the state of the communication network. (Supplementary Note 12) The system according to supplementary note 11, wherein the determining means determines the state of the communication network, based on observation information regarding the communication network. (Supplementary Note 13) The system according to supplementary note 11 or 12, wherein the determining means determines whether the communication network is congested over the certain level. (Supplementary Note 14) The system according to any one of supplementary notes 1 to 13, wherein the machine learning based controller is a supervised learning based controller, and the training means trains the machine learning based controller by using the work-related information as training data of supervised learning. (Supplementary Note 15) The system according to any one of supplementary notes 1 to 7, wherein the machine learning based controller is a reinforcement learning based controller configured to output an action, based on an input state, and the training means trains the machine learning based controller, considering the work-related information as the input state and an output action in reinforcement learning. (Supplementary Note 16) A method comprising: obtaining work-related information related to human work in network operation; and training a machine learning based controller for controlling communication in a communication network, based on the work-related information.
(Supplementary Note 17) The method according to supplementary note 16, wherein the work-related information includes work information indicating the human work and network state information indicating a network state corresponding to the human work. (Supplementary Note 18) The method according to supplementary note 17, wherein the machine learning based controller is trained by using the network state information as input data and using the work information as output data corresponding to the input data. (Supplementary Note 19) The method according to supplementary note 17 or 18, wherein the human work is a change of a network control parameter, and the work information indicates increase or decrease of the network control parameter or a changed value of the network control parameter. (Supplementary Note 20) The method according to any one of supplementary notes 16 to 19, wherein the human work is a change of the network control parameter. (Supplementary Note 21) The method according to any one of supplementary notes 16 to 20, wherein the work-related information is information generated based on a log of the human work or a work standard for the human work. (Supplementary Note 22) The method according to any one of supplementary notes 16 to 21, wherein the network operation is network operation of the communication network. (Supplementary Note 23) The method according to any one of supplementary notes 16 to 22, further comprising selecting one, of a reinforcement learning based controller configured to control communication in the communication network and the machine learning based controller, based on information related to a state of the communication network. (Supplementary Note 24) The method according to supplementary note 23, wherein the state of the communication network is a congestion state of the communication network.
(Supplementary Note 25) The method according to supplementary note 24, wherein the machine learning based controller is selected when the communication network is congested over a certain level, and the reinforcement learning based controller is selected when the communication network is not congested over the certain level. (Supplementary Note 26) The method according to any one of supplementary notes 23 to 25, further comprising: determining the state of the communication network. (Supplementary Note 27) The method according to supplementary note 26, wherein the state of the communication network is determined based on observation information regarding the communication network. (Supplementary Note 28) The method according to supplementary note 26 or 27, wherein the state of the communication network is whether the communication network is congested over the certain level. (Supplementary Note 29) The method according to any one of supplementary notes 16 to 28, wherein the machine learning based controller is a supervised learning based controller, and the machine learning based controller is trained by using the work-related information as training data of supervised learning. (Supplementary Note 30) The method according to any one of supplementary notes 16 to 22, wherein the machine learning based controller is a reinforcement learning based controller configured to output an action, based on an input state, and the machine learning based controller is trained, considering the work-related information as the input state and an output action in reinforcement learning. (Supplementary Note 31) A control apparatus comprising: obtaining means for obtaining work-related information related to human work in network operation; and training means for training a machine learning based controller for controlling communication in a communication network, based on the work-related information.
(Supplementary Note 32) The control apparatus according to supplementary note 31, wherein the work-related information includes work information indicating the human work and network state information indicating a network state corresponding to the human work. (Supplementary Note 33) The control apparatus according to supplementary note 32, wherein the training means trains the machine learning based controller by using the network state information as input data and using the work information as output data corresponding to the input data. (Supplementary Note 34) The control apparatus according to supplementary note 32 or 33, wherein the human work is a change of a network control parameter, and the work information indicates increase or decrease of the network control parameter or a changed value of the network control parameter. (Supplementary Note 35) The control apparatus according to any one of supplementary notes 31 to 34, wherein the human work is a change of the network control parameter. (Supplementary Note 36) The control apparatus according to any one of supplementary notes 31 to 35, wherein the work-related information is information generated based on a log of the human work or a work standard for the human work. (Supplementary Note 37) The control apparatus according to any one of supplementary notes 31 to 36, wherein the network operation is network operation of the communication network. (Supplementary Note 38) The control apparatus according to any one of supplementary notes 31 to 37, further comprising a selecting means for selecting one, of a reinforcement learning based controller configured to control communication in the communication network and the machine learning based controller, based on information related to a state of the communication network. (Supplementary Note 39) The control apparatus according to supplementary note 38, wherein the state of the communication network is a congestion state of the communication network.
(Supplementary Note 40) The control apparatus according to supplementary note 39, wherein the selecting means selects the machine learning based controller when the communication network is congested over a certain level, and the selecting means selects the reinforcement learning based controller when the communication network is not congested over the certain level. (Supplementary Note 41) The control apparatus according to any one of supplementary notes 38 to 40, further comprising: determining means for determining the state of the communication network. (Supplementary Note 42) The control apparatus according to supplementary note 41, wherein the determining means determines the state of the communication network, based on observation information regarding the communication network. (Supplementary Note 43) The control apparatus according to supplementary note 41 or 42, wherein the determining means determines whether the communication network is congested over the certain level. (Supplementary Note 44) The control apparatus according to any one of supplementary notes 31 to 43, wherein the machine learning based controller is a supervised learning based controller, and the training means trains the machine learning based controller by using the work-related information as training data of supervised learning. (Supplementary Note 45) The control apparatus according to any one of supplementary notes 31 to 37, wherein the machine learning based controller is a reinforcement learning based controller configured to output an action, based on an input state, and the training means trains the machine learning based controller, considering the work-related information as the input state and an output action in reinforcement learning.
(Supplementary Note 46) A program that causes a processor to execute: obtaining work-related information related to human work in network operation; and training a machine learning based controller for controlling communication in a communication network, based on the work-related information. (Supplementary Note 47) A non-transitory computer readable recording medium recording a program that causes a processor to execute: obtaining work-related information related to human work in network operation; and training a machine learning based controller for controlling communication in a communication network, based on the work-related information. REFERENCE SIGNS LIST
1, 2, 3 System
10 Communication Network
100, 400 Control Apparatus
110, 700 Obtaining Means
410 First Obtaining Means
120, 420, 900 Training Means
130, 430 Machine Learning Based Controller
440 Reinforcement Learning Based Controller
460 Determining Means
480 Selecting Means
Similar reference numerals may have been used in different figures to denote similar components. DESCRIPTION OF EXAMPLE EMBODIMENTS To assist in understanding the present disclosure, an example wireless communication system is described below. FIG. 1 illustrates an example wireless communication system 100 (also referred to as wireless system 100) in which embodiments of the present disclosure could be implemented. In general, the wireless system 100 enables multiple wireless or wired elements to communicate data and other content. The wireless system 100 may enable content (e.g., voice, data, video, text, etc.) to be communicated (e.g., via broadcast, narrowcast, user device to user device, etc.) among entities of the system 100. The wireless system 100 may operate by sharing resources such as bandwidth. The wireless system 100 may be suitable for wireless communications using 5G technology and/or later generation wireless technology (e.g., 6G or later). In some examples, the wireless system 100 may also accommodate some legacy wireless technology (e.g., 3G or 4G wireless technology). In the example shown, the wireless system 100 includes electronic devices (ED) 110a-110c (generically referred to as ED 110), radio access networks (RANs) 120a-120b (generically referred to as RAN 120), a core network 130, a public switched telephone network (PSTN) 140, the internet 150, and other networks 160. In some examples, one or more of the networks may be omitted or replaced by a different type of network. Other networks may be included in the wireless system 100. Although certain numbers of these components or elements are shown in FIG. 1, any reasonable number of these components or elements may be included in the wireless system 100. The EDs 110 are configured to operate, communicate, or both, in the wireless system 100. For example, the EDs 110 may be configured to transmit, receive, or both via wireless or wired communication channels.
Each ED 110 represents any suitable end user device for wireless operation and may include such devices (or may be referred to) as a user equipment/device (UE), a wireless transmit/receive unit (WTRU), a mobile station, a fixed or mobile subscriber unit, a cellular telephone, a station (STA), a machine type communication (MTC) device, a personal digital assistant (PDA), a smartphone, a laptop, a computer, a tablet, a wireless sensor, or a consumer electronics device, among other possibilities. Future generation EDs 110 may be referred to using other terms. In FIG. 1, the RANs 120 include base stations (BSs) 170a-170b (generically referred to as BS 170), respectively. Each BS 170 is configured to wirelessly interface with one or more of the EDs 110 to enable access to any other BS 170, the core network 130, the PSTN 140, the internet 150, and/or the other networks 160. For example, the BSs 170 may include (or be) one or more of several well-known devices, such as a base transceiver station (BTS), a radio base station, a Node-B (NodeB), an evolved NodeB (eNodeB), a Home eNodeB, a gNodeB (sometimes called a next-generation Node B), a transmission point (TP), a transmit and receive point (TRP), a site controller, an access point (AP), or a wireless router, among other possibilities. Future generation BSs 170 may be referred to using other terms. Any ED 110 may be alternatively or additionally configured to interface, access, or communicate with any other BS 170, the internet 150, the core network 130, the PSTN 140, the other networks 160, or any combination of the preceding. The wireless system 100 may include RANs, such as RAN 120b, wherein the corresponding BS 170b accesses the core network 130 via the internet 150, as shown. The EDs 110 and BSs 170 are examples of communication equipment that can be configured to implement some or all of the functionality and/or embodiments described herein.
In the embodiment shown in FIG. 1, the BS 170a forms part of the RAN 120a, which may include other BSs, base station controller(s) (BSC), radio network controller(s) (RNC), relay nodes, elements, and/or devices. Any BS 170 may be a single element, as shown, or multiple elements, distributed in the corresponding RAN, or otherwise. Also, the BS 170b forms part of the RAN 120b, which may include other BSs, elements, and/or devices. Each BS 170 transmits and/or receives wireless signals within a particular geographic region or area, sometimes referred to as a “cell” or “coverage area”. A cell may be further divided into cell sectors, and a BS 170 may, for example, employ multiple transceivers to provide service to multiple sectors. In some embodiments there may be established pico or femto cells where the radio access technology supports such. A macro cell may encompass one or more smaller cells. In some embodiments, multiple transceivers could be used for each cell, for example using multiple-input multiple-output (MIMO) technology. The number of RANs 120 shown is exemplary only. Any number of RANs may be contemplated when devising the wireless system 100. The BSs 170 communicate with one or more of the EDs 110 over one or more air interfaces 190a using wireless communication links (e.g. radio frequency (RF), microwave, infrared (IR), etc.). The EDs 110 may also communicate directly with one another via one or more sidelink air interfaces 190b. The interfaces 190a and 190b may be generally referred to as air interfaces 190. BS-ED communications over interfaces 190a and ED-ED communications over interfaces 190b may use similar communication technology. The air interfaces 190 may utilize any suitable radio access technology.
For example, the wireless system 100 may implement one or more channel access methods, such as code division multiple access (CDMA), time division multiple access (TDMA), frequency division multiple access (FDMA), orthogonal FDMA (OFDMA), or single-carrier FDMA (SC-FDMA) in the air interfaces 190. The air interfaces 190 may utilize other higher dimension signal spaces, which may involve a combination of orthogonal and/or non-orthogonal dimensions. The RANs 120 are in communication with the core network 130 to provide the EDs 110 with various services such as voice, data, and other services. The RANs 120 and/or the core network 130 may be in direct or indirect communication with one or more other RANs (not shown), which may or may not be directly served by core network 130, and may or may not employ the same radio access technology as RAN 120a, RAN 120b, or both. The core network 130 may also serve as a gateway access between (i) the RANs 120 or EDs 110 or both, and (ii) other networks (such as the PSTN 140, the internet 150, and the other networks 160). In addition, some or all of the EDs 110 may include functionality for communicating with different wireless networks over different wireless links using different wireless technologies and/or protocols. Instead of wireless communication (or in addition thereto), the EDs 110 may communicate via wired communication channels to a service provider or switch (not shown), and to the internet 150. The PSTN 140 may include circuit switched telephone networks for providing plain old telephone service (POTS). The internet 150 may include a network of computers and subnets (intranets) or both, and incorporate protocols such as Internet Protocol (IP), Transmission Control Protocol (TCP), and User Datagram Protocol (UDP). The EDs 110 may be multimode devices capable of operation according to multiple radio access technologies, and incorporate the multiple transceivers necessary to support such operation.
FIGS. 2 and 3 illustrate example devices that may implement the methods and teachings according to this disclosure. In particular, FIG. 2 illustrates an example ED 110, and FIG. 3 illustrates an example base station 170. These components could be used in the communication system 100 or in any other suitable system. As shown in FIG. 2, the ED 110 includes at least one processing unit 200. The processing unit 200 implements various processing operations of the ED 110. For example, the processing unit 200 could perform signal coding, data processing, power control, input/output processing, or any other functionality enabling the ED 110 to operate in the communication system 100. The processing unit 200 may also be configured to implement some or all of the functionality and/or embodiments described in more detail elsewhere herein. Each processing unit 200 includes any suitable processing or computing device configured to perform one or more operations. Each processing unit 200 could, for example, include a microprocessor, microcontroller, digital signal processor, field programmable gate array, or application specific integrated circuit. The ED 110 also includes at least one transceiver 202. The transceiver 202 is configured to modulate data or other content for transmission by at least one antenna or Network Interface Controller (NIC) 204. The transceiver 202 is also configured to demodulate data or other content received by the at least one antenna 204. Each transceiver 202 includes any suitable structure for generating signals for wireless or wired transmission and/or processing signals received wirelessly or by wire. Each antenna 204 includes any suitable structure for transmitting and/or receiving wireless or wired signals. One or multiple transceivers 202 could be used in the ED 110. One or multiple antennas 204 could be used in the ED 110. Although shown as a single functional unit, a transceiver 202 could also be implemented using at least one transmitter and at least one separate receiver.
The ED 110 further includes one or more input/output devices 206 or interfaces (such as a wired interface to the internet 150 in FIG. 1). The input/output devices 206 permit interaction with a user or other devices in the network. Each input/output device 206 includes any suitable structure for providing information to or receiving information from a user, such as a speaker, microphone, keypad, keyboard, display, or touch screen, including network interface communications. In addition, the ED 110 includes at least one memory 208. The memory 208 stores instructions and data used, generated, or collected by the ED 110. For example, the memory 208 could store software instructions or modules configured to implement some or all of the functionality and/or embodiments described herein and that are executed by the processing unit(s) 200. Each memory 208 includes any suitable volatile and/or non-volatile storage and retrieval device(s). Any suitable type of memory may be used, such as random access memory (RAM), read only memory (ROM), hard disk, optical disc, subscriber identity module (SIM) card, memory stick, secure digital (SD) memory card, and the like. As shown in FIG. 3, the base station 170 includes at least one processing unit 250, at least one transmitter 252, at least one receiver 254, one or more antennas 256, at least one memory 258, and one or more input/output devices or interfaces 266. A transceiver, not shown, may be used instead of the transmitter 252 and receiver 254. A scheduler 253 may be coupled to the processing unit 250. The scheduler 253 may be included within or operated separately from the base station 170. The processing unit 250 implements various processing operations of the base station 170, such as signal coding, data processing, power control, input/output processing, or any other functionality. The processing unit 250 can also be configured to implement some or all of the functionality and/or embodiments described in more detail herein.
Each processing unit 250 includes any suitable processing or computing device configured to perform one or more operations. Each processing unit 250 could, for example, include a microprocessor, microcontroller, digital signal processor, field programmable gate array, or application specific integrated circuit. Each transmitter 252 includes any suitable structure for generating signals for wireless or wired transmission to one or more EDs or other devices. Each receiver 254 includes any suitable structure for processing signals received wirelessly or by wire from one or more EDs or other devices. Although shown as separate components, at least one transmitter 252 and at least one receiver 254 could be combined into a transceiver. Each antenna 256 includes any suitable structure for transmitting and/or receiving wireless or wired signals. Although a common antenna 256 is shown here as being coupled to both the transmitter 252 and the receiver 254, one or more antennas 256 could be coupled to the transmitter(s) 252, and one or more separate antennas 256 could be coupled to the receiver(s) 254. Each memory 258 includes any suitable volatile and/or non-volatile storage and retrieval device(s) such as those described above in connection to the ED 110 in FIG. 2. The memory 258 stores instructions and data used, generated, or collected by the base station 170. For example, the memory 258 could store software instructions or modules configured to implement some or all of the functionality and/or embodiments described herein and that are executed by the processing unit(s) 250. Each input/output device 266 permits interaction with a user or other devices in the network. Each input/output device 266 includes any suitable structure for providing information to or receiving/providing information from a user, including network interface communications. It should be appreciated that one or more steps of the embodiment methods provided herein may be performed by corresponding units or modules, according to FIG. 4.
For example, a signal may be transmitted by a transmitting unit or a transmitting module. A signal may be received by a receiving unit or a receiving module. A signal may be processed by a processing unit or a processing module. Other steps may be performed by an artificial intelligence (AI) or machine learning (ML) module. The respective units/modules may be implemented using hardware, one or more components or devices that execute software, or a combination thereof. For instance, one or more of the units/modules may be an integrated circuit, such as field programmable gate arrays (FPGAs) or application-specific integrated circuits (ASICs). It will be appreciated that where the modules are implemented using software for execution by a processor, for example, they may be retrieved by a processor, in whole or part as needed, individually or together for processing, in single or multiple instances, and that the modules themselves may include instructions for further deployment and instantiation. Additional details regarding the EDs such as 110 and base stations such as 170 are known to those of skill in the art. As such, these details are omitted here. Referring back to FIG. 1, different pairs of communicating devices (i.e., a transmission sending device and a transmission receiving device), such as ED 110a communicating with BS 170a or ED 110b communicating with BS 170a, may have different transmission capabilities and/or transmission requirements. The different transmission capabilities and/or transmission requirements typically cannot be met optimally by a single air interface or air interface configuration. As discussed above, a configurable air interface has been proposed to address this issue. FIG. 5 illustrates a diagram of an example of a configurable air interface 300. Air interface 300 comprises a number of building blocks that collectively specify how a transmission is to be made and/or received.
The building blocks of air interface 300 may include a waveform building block 305, a frame structure building block 310, a multiple access scheme building block 315, a protocols building block 320, a coding and modulation building block 325, and an antenna array processing building block 330. Frame structure building block 310 may specify a configuration of a frame or group of frames. Non-limiting examples of frame structure options include a configurable multi-level transmission time interval (TTI), a fixed TTI, a configurable single-level TTI, a co-existence configuration, or a configurable slot, mini slot, or configurable symbol duration block (SDB), and the like. The lengths of a TTI, slot, mini slot, or SDB may also be specified. Frame structure building block 310 may also or instead specify timing parameters for DL and/or UL transmission, such as a transmission period for DL and/or UL, and/or a time switch gap between DL and UL transmissions. The frame structure can be for various duplexing schemes, such as time domain duplexing (TDD), frequency division duplexing (FDD), and full duplex operation. Multiple access scheme building block 315 may specify how access to a channel is scheduled or configured for one or more users. Non-limiting examples of multiple access technique options include scheduled access, grant-free access, dedicated channel resource (no sharing between multiple users), contention based shared channel resource, non-contention based shared channel resource, cognitive radio based access, and the like. Protocols building block 320 may specify how a transmission and/or a re-transmission are to be made. Non-limiting examples of transmission and/or re-transmission mechanism options include those that specify a scheduled data pipe size, a signaling mechanism for transmission and/or re-transmission, a re-transmission mechanism, and the like.
Coding and modulation building block 325 may specify how information being transmitted may be encoded (decoded) and modulated (demodulated) for transmission (reception) purposes. Non-limiting examples of coding and/or modulation technique options include low density parity check (LDPC) codes, polar codes, turbo trellis codes, turbo product codes, fountain codes, rateless codes, network codes, binary phase shift keying (BPSK), π/2-BPSK, quadrature phase shift keying (QPSK), quadrature amplitude modulation (QAM) such as 16QAM, 64QAM, 256QAM, hierarchical modulation, low PAPR modulation, non-linear modulation, non-QAM based modulation, and the like. Waveform building block 305 may specify a shape and form of a signal being transmitted. Non-limiting examples of waveform options include Orthogonal Frequency Division Multiplexing (OFDM) based waveforms such as filtered OFDM (f-OFDM), Wavelet Packet Modulation (WPM), Faster Than Nyquist (FTN) Waveform, low Peak to Average Power Ratio Waveform (low PAPR WF, such as DFT spread OFDM waveform), Filter Bank Multicarrier (FBMC) Waveform, Single Carrier Frequency Division Multiple Access (SC-FDMA), and the like. For OFDM-based waveforms, the waveform building block 305 may specify the associated waveform parameters such as sub-carrier spacings and cyclic prefix (CP) overhead. Antenna array processing building block 330 may specify parameters for antenna array signal processing for channel acquisition and precoding/beamforming generation. In some embodiments, the functionality of the waveform building block 305 and the antenna array processing building block 330 may be combined as a multiple antenna waveform generator block. Since the air interface 300 comprises a plurality of building blocks, and each building block may have a plurality of candidate technologies, it may be possible to configure a large number of different air interface profiles, where each air interface profile defines a respective air interface configuration option.
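To make the building-block structure concrete, assembling an air interface profile can be sketched in Python as a selection from candidate option sets. This is a minimal illustrative sketch: the block names, option strings, and the `build_profile` helper are assumptions for exposition, not part of any specification.

```python
# Hypothetical candidate sets for a few of the building blocks described above.
CANDIDATES = {
    "waveform": ["f-OFDM", "FBMC", "SC-FDMA", "DFT-s-OFDM"],
    "frame_structure": ["fixed-TTI", "multi-level-TTI", "mini-slot"],
    "multiple_access": ["scheduled", "grant-free", "contention-based"],
    "coding": ["LDPC", "polar", "turbo"],
    "modulation": ["QPSK", "16QAM", "64QAM", "256QAM"],
}

def build_profile(**choices):
    """Validate each chosen option against its candidate set and
    return an air-interface profile as a plain dict."""
    profile = {}
    for block, option in choices.items():
        if option not in CANDIDATES.get(block, []):
            raise ValueError(f"{option!r} is not a candidate for {block!r}")
        profile[block] = option
    return profile

profile = build_profile(waveform="f-OFDM", coding="LDPC", modulation="64QAM")
```

Because each block contributes multiplicatively to the number of possible combinations, even this small candidate table yields dozens of distinct profiles, mirroring the "large number of different air interface profiles" noted above.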
For example, the configurable air interface proposed for new radio (NR) networks allows service or slice based optimization, which can be advantageous because the potential application requirements for air interface technologies can be complex and diverse. Similar to the air interface 300 shown in FIG. 5, the configurable air interface proposed for 5G networks supports adaptive waveform, adaptive protocols, adaptive frame structure, adaptive coding and modulation family and adaptive multiple access schemes. With such mechanisms, the air interface can potentially accommodate a wide variety of user services, spectrum bands and traffic levels. FIG. 6 illustrates an example of components in a transmit chain 400 of a base station 170 and components of a receive chain 450 of a UE 110 that may be configurable as part of a configurable air interface to allow the base station 170 and the UE 110 to communicate. The components of the transmit chain 400 of the base station 170 include a source encoder 402, a channel encoder 404 and a modulator 406. Source encoder 402, channel encoder 404 and modulator 406 may each be implemented as a specific hardware block, or may be implemented in part as software modules executing in a processor, such as a microprocessor, a digital signal processor, a custom application specific integrated circuit, or a custom compiled logic array of a field programmable logic array. The components of the receive chain 450 of the UE 110 include a demodulator 452 and a channel decoder 454. Demodulator 452 and channel decoder 454 may each be implemented as a specific hardware block, or may be implemented in part as software modules executing in a processor, such as a microprocessor, a digital signal processor, a custom application specific integrated circuit, or a custom compiled logic array of a field programmable logic array.
In operation, source encoder 402 encodes uncompressed raw data to generate compressed information bits, which are in turn encoded by channel encoder 404 to generate channel coded information bits, which are then modulated by modulator 406 to generate modulated signals. In this example, the modulation performed by modulator 406 includes quadrature amplitude modulation (QAM) mapping and waveform generation. The modulated signals generated by modulator 406 are transmitted from base station 170 to UE 110 over one or more wireless channels. A base station can have multiple transmit antennas, in which case a waveform may be generated for each of the antennas. In such cases, the generated waveforms may contain different contents for each of the multiple transmit antennas, e.g., in a MIMO mode transmission. At UE 110, the received signals from base station 170 are demodulated by demodulator 452 to generate demodulated signals. A UE can have multiple receive antennas, in which case demodulator 452 may be configured to process waveforms received from multiple receive antennas as part of the waveform recovery process. The demodulated signals generated by demodulator 452 are decoded by channel decoder 454 to generate recovered compressed information bits. Source decoder 456 decodes the recovered compressed information bits to generate recovered uncompressed raw data. A waveform, in the embodiment of FIG. 6 or the following embodiments, may specify a shape and form of a signal being transmitted. Non-limiting examples of waveform options include Orthogonal Frequency Division Multiplexing (OFDM) based waveforms such as filtered OFDM (f-OFDM), Wavelet Packet Modulation (WPM), Faster Than Nyquist (FTN) Waveform, low Peak to Average Power Ratio Waveform (low PAPR WF, such as DFT spread OFDM waveform), Filter Bank Multicarrier (FBMC) Waveform, Single Carrier Frequency Division Multiple Access (SC-FDMA), and the like.
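The source coding, channel coding, and modulation pipeline just described, together with its reciprocal receive chain, can be illustrated with a deliberately simple end-to-end sketch. Here zlib stands in for the source codec, a rate-1/3 repetition code for the channel code, and BPSK for the modulator; these choices are illustrative stand-ins only, since real chains use schemes such as LDPC codes and QAM.

```python
import zlib

# Toy end-to-end chain: source coding (zlib), channel coding (repetition-3),
# and BPSK modulation, plus the reciprocal receive chain. Illustrative only.

def to_bits(data: bytes):
    # LSB-first bit expansion of each byte.
    return [(byte >> i) & 1 for byte in data for i in range(8)]

def from_bits(bits):
    return bytes(sum(b << i for i, b in enumerate(bits[k:k + 8]))
                 for k in range(0, len(bits), 8))

def transmit(raw: bytes):
    compressed = zlib.compress(raw)                       # source encoder
    coded = [b for bit in to_bits(compressed)
             for b in (bit, bit, bit)]                    # channel encoder
    return [1.0 if b else -1.0 for b in coded]            # BPSK modulator

def receive(symbols):
    hard = [1 if s > 0 else 0 for s in symbols]           # demodulator
    bits = [int(sum(hard[k:k + 3]) >= 2)
            for k in range(0, len(hard), 3)]              # majority-vote decoder
    return zlib.decompress(from_bits(bits))               # source decoder

recovered = receive(transmit(b"hello air interface"))
```

The repetition code makes the toy chain tolerant of a single flipped symbol per 3-symbol group, which is the same role (in miniature) that the real channel code plays against noisy wireless channels.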
For OFDM-based waveforms, the waveform may specify the associated waveform parameters such as sub-carrier spacings and cyclic prefix (CP) overhead. The coding and modulation performed by the components of the transmit chain 400 and the corresponding demodulation and decoding performed by the components of the receive chain 450 may be configured according to a modulation and coding scheme (MCS) corresponding to a service or slice specific air interface in order to support delivery of a service or application to UE 110 according to the selected code scheme and modulation scheme. If the service and/or network slice over which the service is provided changes, the configurations of the components of the transmit and receive chains of the base station 170 and UE 110 may be changed to match a new predetermined service or slice specific air interface corresponding to the new service or network slice. As noted above, a service or slice specific air interface such as this, which is based on selecting from a predetermined subset of parameters or technologies for a predetermined subset of air interface components, can potentially accommodate a wide variety of user services, spectrum bands and traffic levels. However, for each service, the transmission conditions and requirements can still be quite different for each UE/device, which means, for example, that an air interface configuration that may be optimal for delivering a service to one UE/device may not necessarily be optimal for delivering the same service to another UE. Therefore, it would be desirable to provide further optimization of a UE/device specific air interface configuration. Machine learning (ML) and artificial intelligence (AI) approaches have been used for solving many difficult and complex problems. To assist in understanding the present disclosure, some background discussion of ML and AI is now provided.
AI is an emerging and fast-growing field thanks to the advances made in the field of computer architecture and in particular general purpose graphics processing units (GP-GPUs). A neural network, which is a form of ML, may be considered as a type of fitting function. Deep learning is one realization of a neural network, which contains more than one interconnected layer of artificial neurons. To train a deep neural network to fit a function (e.g., training using a large number of input samples and output samples), the weight and threshold of each neuron are updated iteratively, so as to minimize an overall loss function or maximize an overall reward function. The iteration may be achieved by a gradient-descent or ascent back-propagation algorithm over training samples, which may require that the deep neural network architecture and the loss or reward function be mathematically differentiable. Trainability typically requires: a function set (the neural network architecture) that defines an exploration space boundary within which a gradient-descent algorithm may traverse; and one or more loss (or reward) function(s) being differentiable with respect to each neuron's coefficient (for gradient-ascent or descent training) on that neural network architecture. A deep neural network is often used for performing feature capture and for performing prediction. Feature capture serves to extract useful information from large amounts of complex data, and this may be considered a form of dimension reduction. Prediction involves interpolation or extrapolation, to generate new data (generally referred to as predicted or estimated data) from sample data. Both these tasks may assume that the input data possess an intrinsic autoregression characteristic. For example, a pixel of an image usually has some relationship with its neighboring pixels. A convolutional neural network (CNN) may be developed to use this relationship to reduce the dimension of the data.
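The iterative weight update described above, gradient descent on a differentiable loss, can be shown at its smallest scale with a one-parameter model y = w·x. The sample data, learning rate, and iteration count below are invented purely for illustration; a real deep network applies the same update rule across many weights via back-propagation.

```python
# Gradient descent on a one-parameter "network" y = w * x, fitted to
# samples generated with a true weight of 3.0. Illustrative values only.

samples = [(x, 3.0 * x) for x in (0.5, 1.0, 2.0, 4.0)]

w = 0.0    # initial weight
lr = 0.02  # learning rate
for _ in range(500):
    # Loss is mean squared error; its gradient w.r.t. w is 2*(w*x - y)*x.
    grad = sum(2 * (w * x - y) * x for x, y in samples) / len(samples)
    w -= lr * grad
```

Each step moves w a fraction of the way toward the minimizer, so the error shrinks geometrically; differentiability of the loss is exactly what makes the `grad` expression available.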
The present disclosure describes examples that may be used to implement new air interfaces for wireless communication that are tailored or personalized on a device-specific basis using AI/ML to provide device-specific air interface optimization. For example, embodiments of the present disclosure include new air interfaces that go beyond a network slice/service specific air interface to a personalized tailored air interface that includes a personalized service type and a personalized air interface setting. Examples of such personalized air interface settings may include one or more of the following: customized code scheme and modulation scheme; customized transmission scheme such as MIMO beamforming (BF), including channel acquisition/reconstruction and precoding; customized waveform type and associated parameters such as customized pulse shapes and parameters such as roll-off factors of an RRC pulse; customized frame structure; customized transmission/retransmission scheme and associated parameters such as product-code or inter-codebook or inter-TB 2D joint coding based retransmission and parameters such as incremental parity bit size and interleavers used; UE cooperation based retransmission and/or customized transmit-receive point (TRP) layer/type. In some embodiments, the personalized tailored air interface parameters may be determined using AI/ML based on the physical speed/velocity at which the device is moving, a link budget of the device, the channel conditions of the device, one or more device capabilities and/or a service type that is to be supported. In some embodiments, the service type itself can be customized with UE-specific service parameters, such as quality of service (QoS) requirement(s), traffic pattern, etc. In some embodiments, the personalized tailored air interface parameters may be optimized on the fly with minimal signaling overhead. 
For example, for 5G network implementations, the parameters may be configured from predefined candidate parameter sets. For next generation network implementations, e.g., for sixth generation (6G) networks, the parameters may be adapted in a more flexible manner with real time or near real time optimization. As will be discussed later, the level or type of air interface optimization available to a device may depend on the AI/ML capability of the device. If a user equipment has some AI/ML capability, the UE can work together with network device(s) to optimize its air interface (i.e., both sides of the air interface apply AI/ML to optimize the air interface). A UE that has no AI/ML capability may still help a network device to optimize an air interface during a training phase and/or during a normal operation phase by providing some types of measurement results to the network device for use in training AI/ML component(s) of the network device. For example, a high end AI/ML capable device may be able to benefit from full scale self-optimization of each component of an air interface (e.g., optimization of coding, modulation and waveform generation, MIMO operation optimization). A lower end AI/ML capable device may only be able to benefit from partial self-optimization of less than all components of an air interface. In some cases, a device may be dependent on centralized learning/training (e.g., all learning is done centrally in the network, such as at a base station). In other cases, learning/training may be based on federated learning, which is a machine learning technique that trains an algorithm across multiple decentralized edge devices or servers holding local data samples, without exchanging their data samples. In still other cases, learning/training may also or instead involve device cooperative learning.
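The federated learning idea mentioned above, local training on each device with only model weights (never data samples) shared for central averaging, can be sketched as follows. The one-weight linear model, the client data, and all function names are illustrative assumptions, showing only the weight-averaging structure.

```python
# Federated averaging (FedAvg-style) sketch: clients train locally on
# y = w * x and the server averages the updated weights. Illustrative only.

def local_update(w, local_samples, lr=0.1, steps=20):
    """One client's local gradient-descent pass; its data never leaves it."""
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x
                   for x, y in local_samples) / len(local_samples)
        w -= lr * grad
    return w

def federated_round(global_w, clients):
    """Average the locally updated weights; no samples are exchanged."""
    updates = [local_update(global_w, data) for data in clients]
    return sum(updates) / len(updates)

# Two devices whose local data both follow y = 2x:
clients = [[(1.0, 2.0), (2.0, 4.0)], [(0.5, 1.0), (3.0, 6.0)]]
w = 0.0
for _ in range(10):
    w = federated_round(w, clients)
```

Only the scalar `w` crosses the device boundary in each round, which is the privacy-preserving property that motivates federated training of air interface components.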
As discussed above, an air interface generally includes a number of components and associated parameters that collectively specify how a transmission is to be made and/or received over a wireless communications link between two or more communicating devices. For example, an air interface may include one or more components defining the waveform(s), frame structure(s), multiple access scheme(s), protocol(s), coding scheme(s) and/or modulation scheme(s) for conveying data over a wireless communications link. The methods and devices disclosed herein provide a mechanism of AI/ML enabled/assisted air interface personalized optimization that supports different levels of per-UE/device based optimization. The disclosed examples also provide over the air signaling mechanisms to support per-UE/device based air interface function optimization. FIG. 7A illustrates a first example of a transmit chain 500 of a base station 170 and a receive chain 550 of a UE 110 that each include an AI/ML module 502, 552 that is trainable in order to provide a tailored personalized air interface between the base station 170 and UE 110, in accordance with an embodiment of the present disclosure. AI/ML components as referenced herein are intended to be modules or blocks based on an implementation of ML mechanisms. One example of an ML implementation is a neural network implemented in hardware, one or more components that execute software, or a combination thereof. The AI/ML module 502 of the base station 170 includes a joint source and channel encoder component 504, a modulator component 506 and a waveform generator component 508. The AI/ML module 552 of the UE 110 includes a joint waveform recovery, demodulator and source and channel decoder component 554.
The AI/ML module 502 provides AI/ML based autonomous optimization of all basic baseband signal processing functions, including channel coding (or source coding plus channel coding) via encoding component 504, modulation via modulation component 506 and waveform generation via waveform generator 508. The base station 170 may have multiple transmit antennas, and in such embodiments the waveform generator 508 may be configured to generate a waveform for each of the transmit antennas. The AI/ML module 552 at the UE 110 provides the reciprocal baseband processing functionality in order to recover information bits/raw data from signals received from the base station 170. The UE 110 may have multiple receive antennas, and in such embodiments the AI/ML module 552 may be configured to process waveforms received from multiple receive antennas as part of the waveform recovery process. The coding, modulation and waveform generation may be optimized individually, or two or more may be jointly optimized. Several options are possible for individual optimization of the various components of the AI/ML modules 502, 552. Some non-limiting examples of these options are described below. For example, for individual optimization of channel coding without a predefined coding scheme and parameters, self-learning/training and optimization may be used to determine an optimal coding scheme and parameters. For example, in some embodiments, a forward error correction (FEC) scheme is not predefined and AI/ML is used to determine a UE specific customized FEC scheme. In such embodiments, autoencoder based ML may be used as part of an iterative training process during a training phase in order to train an encoder component at a transmitting device and a decoder component at a receiving device. For example, during such a training process, an encoder at a base station and a decoder at a UE may be iteratively trained by exchanging a training sequence/updated training sequence.
In general, the more cases/scenarios that are covered in training, the better the performance. After training is done, the trained encoder component at the transmitting device and the trained decoder component at the receiving device can work together based on changing channel conditions to provide encoded data that may outperform results generated from a non-AI/ML based FEC scheme. In some embodiments, the AI/ML algorithms for self-learning/training and optimization may be downloaded by the UE from a network/server/other device. For individual optimization of channel coding with predefined coding schemes, such as low density parity check (LDPC) code, Reed-Muller (RM) code, polar code or other coding scheme, the parameters for the coding scheme can be optimized. The parameters for channel coding can be signaled to the UE from time to time (periodically or event triggered), e.g., via radio resource control (RRC) signaling, or dynamically through downlink control information (DCI) in a dynamic downlink control channel, or the combination of the RRC signaling and DCI, or group DCI, or other new physical layer signaling. Training can be done all on the network side, or assisted by UE side training, or mutual training between the network side and the UE side. In the example illustrated in FIG. 7A, the input to AI/ML module 502 is uncompressed raw data and source coding and channel coding are done jointly by AI/ML component 504. An alternative example is illustrated in FIG. 7B, in which source coding is done separately by a source encoder 501 to generate compressed information bits that are then received by AI/ML module 502 where they are channel coded by AI/ML component 504. Similarly, in the example illustrated in FIG. 7B, the output of the AI/ML module 552 at the UE 110 is recovered compressed information bits that are then decoded by a source decoder 555 to generate recovered raw data, whereas the AI/ML module 552 in FIG. 7A outputs recovered raw data.
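The semi-static plus dynamic signaling pattern just described, where an RRC configuration establishes the coding parameters and DCI can override them dynamically, can be mimicked with a simple precedence merge. The field names and values below are hypothetical illustrations, not information elements from any 3GPP specification.

```python
# Hypothetical sketch: coding parameters configured semi-statically via RRC
# and overridden dynamically by DCI. Field names are illustrative only.

def apply_signaling(rrc_config, dci_updates):
    """DCI fields take precedence over the semi-static RRC configuration."""
    effective = dict(rrc_config)   # copy so the RRC baseline is untouched
    effective.update(dci_updates)
    return effective

rrc = {"scheme": "LDPC", "base_graph": 1, "code_rate": 0.5}
dci = {"code_rate": 0.33}          # dynamic adaptation, e.g. worsening channel
cfg = apply_signaling(rrc, dci)
```

Keeping the RRC baseline intact and layering DCI on top mirrors the combination of RRC signaling and DCI mentioned above: the semi-static configuration survives between dynamic updates.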
For individual optimization of modulation without a predefined constellation, modulation may be done by an AI/ML module, the optimization targets and/or algorithms of which (e.g., those of the AI/ML component 506) are understood by both the transmitter and the receiver (e.g., the base station 170 and UE 110, respectively, in the example scenario shown in FIG. 7A). For example, the AI/ML algorithm may be configured to maximize the Euclidean or non-Euclidean distance between constellation points. For individual optimization of modulation with a predefined non-linear modulator, the parameters for the modulation may be determined by self-optimization. For individual optimization of waveform generation without a predefined waveform type, without a predefined pulse shape and without predefined waveform parameters, self-learning/training and optimization may be used to determine an optimal waveform type, pulse shape and waveform parameters. In some embodiments, the AI/ML algorithms for self-learning/training and optimization may be downloaded by the UE from a network/server/other device. In some embodiments, there may be a finite set of predefined waveform types, and selection of a predefined waveform type from the finite set and determination of the pulse shape and other waveform parameters may be done through self-optimization. Several options are also possible for joint optimization of two or more of the components of the AI/ML modules 502, 552. Some non-limiting examples of these options are described below. For example, in some embodiments, the coding via component 504 (channel coding or joint source and channel coding) and the modulation implemented via component 506 may be jointly optimized with AI/ML, and the waveform generation via component 508 may be optimized separately. Multi-dimensional modulation, which is conceptually similar to trellis-coded modulation, is one example of a combined coding and modulation scheme that may be used in some embodiments of the present disclosure.
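The objective just mentioned for constellation design, maximizing the minimum Euclidean distance between constellation points, can be evaluated with a short metric. The function below is a minimal sketch of that figure of merit; the unnormalized QPSK points are standard, but the helper itself is illustrative rather than part of any optimizer.

```python
# Minimum pairwise Euclidean distance of a complex-valued constellation;
# a learned modulator would try to maximize this quantity. Sketch only.

def min_distance(points):
    """Smallest distance between any two distinct constellation points."""
    return min(abs(a - b)
               for i, a in enumerate(points)
               for b in points[i + 1:])

qpsk = [1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]   # unnormalized QPSK constellation
d = min_distance(qpsk)
```

For these QPSK points the nearest neighbors differ by 2 along one axis, so the metric returns 2.0; a learning-based modulator would adjust the point positions (under a power constraint) to push this value up.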
For example, in some embodiments, AI/ML may be used to create a customized multi-dimensional modulation scheme for a pair of communicating devices, e.g., a base station and a UE. In other embodiments, the modulation via component 506 and the waveform generation via component 508 may be jointly optimized with AI/ML, and the coding via component 504 may be optimized separately. In other embodiments, the coding, modulation and waveform generation may all be jointly optimized with AI/ML. FIGS. 8A and 8B illustrate examples of a transmit chain 600 of a base station 170 and a receive chain 650 of a UE 110 that each include an AI/ML module 602, 652 that is trainable in order to realize UE specific optimization and/or provide a tailored or personalized air interface between the base station 170 and UE 110, in accordance with a second embodiment of the present disclosure. In the example shown in FIG. 8A, the transmit chain 600 of base station 170 includes an AI/ML module 602 and a waveform generator 605. AI/ML module 602 of the base station 170 includes a joint source and channel encoder and modulation component 604. Similarly, in this example the receive chain 650 of UE 110 includes a waveform processor 651 and an AI/ML module 652, which includes a joint demodulator and source and channel decoder component 654. Unlike the examples shown in FIGS. 7A and 7B, in which the AI/ML modules 502, 552 provide AI/ML based autonomous optimization of all basic baseband signal processing functions including coding/decoding, modulation/demodulation and waveform generation/processing, in the example shown in FIG. 8A the AI/ML module 602 provides AI/ML based autonomous optimization of coding and modulation via component 604, and non-AI/ML based waveform generation is managed independently via waveform generator 605. The base station 170 may have multiple transmit antennas, and in such embodiments the waveform generator 605 may be configured to generate a waveform for each of the transmit antennas.
The AI/ML module 652 at the UE 110 provides the reciprocal optimized baseband processing functionality on modulated signals recovered by waveform processor 651. The UE 110 may have multiple receive antennas, and in such embodiments the waveform processor 651 may be configured to process waveforms received from multiple receive antennas as part of the waveform recovery process. In the example illustrated in FIG. 8A, the input to AI/ML module 602 is uncompressed raw data and joint source and channel coding and modulation are done by AI/ML component 604. The example illustrated in FIG. 8B differs from the example illustrated in FIG. 8A in that in FIG. 8B source coding is done separately by a source encoder 601 to generate information bits that are then received by AI/ML module 602 where they are jointly channel coded and modulated by AI/ML component 604. Similarly, in the example illustrated in FIG. 8B, the output of the AI/ML module 652 at the UE 110 is recovered compressed information bits that are then decoded by a source decoder 655 to generate recovered raw data, whereas the AI/ML module 652 in FIG. 8A outputs recovered raw data. In the examples shown in FIGS. 8A and 8B, training of the AI/ML modules 602 and 652 may be done by self-learning/training optimization. Coding and modulation may be optimized by AI/ML separately or jointly, as discussed earlier. As mentioned above, in the examples shown in FIGS. 8A and 8B, waveform generation via waveform generator 605 at base station 170 and waveform processing via waveform processor 651 at UE 110 may be managed without AI/ML. For example, waveform types and waveform parameters may be predefined and a waveform may be selected from a predefined set of candidate waveforms according to transmission requirements, such as peak to average power ratio (PAPR), frequency band, frequency localization, and the like.
Alternatively, the waveform type and waveform parameters may be dynamically signaled to a UE via, for example, downlink control information (DCI) or radio resource control (RRC) signaling. In some embodiments, the predefined set of candidate waveforms may include single-carrier waveforms and multi-carrier waveforms. Furthermore, the predefined set of candidate waveforms may include multiple candidate waveforms that differ in terms of one or more parameters. For example, there may be multiple candidate single-carrier waveforms predefined, such as single carrier offset QAM (OQAM) waveforms, with root-raised cosine pulses and predefined roll-off factors. FIG. 9 illustrates an example of a transmit chain 700 of a base station 170 and a receive chain 750 of a UE 110 that each include an AI/ML module 702, 752 that is trainable in order to provide a tailored personalized air interface between the base station 170 and UE 110, in accordance with a third embodiment of the present disclosure. In the example shown in FIG. 9, the transmit chain 700 of base station 170 includes a source encoder 701, a channel encoder 703 and an AI/ML module 702 that includes a modulation component 704 and a waveform generator component 706. In this example the receive chain 750 of UE 110 includes an AI/ML module 752, which includes a waveform processor component 756 and a demodulator component 754, a channel decoder 755 and a source decoder 757. Unlike the previous examples shown in FIGS. 7A, 7B, 8A and 8B, the example shown in FIG. 9 utilizes non-AI/ML based source and channel coding/decoding and AI/ML based modulation/demodulation and waveform generation/processing. At UE 110, the waveform processor component 756 and the demodulator component 754 of the AI/ML module 752 provide the reciprocal optimized modulated signal recovery and demodulation functionality to recover modulated information bits.
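The non-AI/ML selection described above, choosing a waveform from a predefined candidate set according to a transmission requirement such as a PAPR ceiling, can be sketched minimally as follows. The candidate names, PAPR values, and selection rule are invented for illustration and do not reflect measured figures for any real waveform.

```python
# Hypothetical candidate set; PAPR values are illustrative placeholders.
CANDIDATE_WAVEFORMS = [
    {"name": "CP-OFDM",     "papr_db": 9.5, "multi_carrier": True},
    {"name": "DFT-s-OFDM",  "papr_db": 6.5, "multi_carrier": True},
    {"name": "SC-OQAM-RRC", "papr_db": 4.0, "multi_carrier": False},
]

def select_waveform(max_papr_db, require_multi_carrier=False):
    """Return the first candidate meeting the PAPR ceiling (and carrier type)."""
    for wf in CANDIDATE_WAVEFORMS:
        if wf["papr_db"] <= max_papr_db and (
                wf["multi_carrier"] or not require_multi_carrier):
            return wf["name"]
    raise LookupError("no candidate waveform meets the requirement")

choice = select_waveform(max_papr_db=7.0)
```

In a deployed system the selected waveform type (and its parameters, such as roll-off factor) would then be signaled to the UE via DCI or RRC, as noted above, rather than computed independently at both ends.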
The recovered modulated information bits are decoded by channel decoder 755 to generate recovered compressed information bits, which are in turn decoded by source decoder 757 to generate recovered raw data. In the example shown in FIG. 9, training of the AI/ML modules 702 and 752 may be done by self-learning/training optimization. Modulation and waveform generation may be optimized by AI/ML separately or jointly, as discussed earlier. As mentioned above, in the example shown in FIG. 9, source and channel coding via source encoder 701 and channel encoder 703 at base station 170, and channel and source decoding via channel decoder 755 and source decoder 757 at UE 110, may be managed without AI/ML. For example, coding schemes and associated parameters may be predefined and a coding scheme may be selected from a predefined set of coding schemes according to a transmission requirement. Alternatively, the coding scheme and associated parameters may be dynamically signaled to a UE via, for example, downlink control information (DCI) or radio resource control (RRC) signaling. FIG. 10 illustrates an example of a transmit chain 800 of a base station 170 and a receive chain 850 of a UE 110 that each include an AI/ML module 802, 852 that is trainable in order to provide a tailored personalized air interface between the base station 170 and UE 110, in accordance with a fourth embodiment of the present disclosure. In the example shown in FIG. 10, the transmit chain 800 of base station 170 includes a source encoder 801, a channel encoder 803, an AI/ML module 802, which includes a modulation component 804, and a waveform generator 805. In this example the receive chain 850 of UE 110 includes a waveform processor 851, an AI/ML module 852, a channel decoder 855 and a source decoder 857. The AI/ML module 852 includes a demodulator component 854. Unlike the previous examples, the example shown in FIG. 10 utilizes non-AI/ML based channel coding/decoding and waveform generation/processing and AI/ML based modulation/demodulation.
At UE 110, the waveform processor 851, channel decoder 855 and source decoder 857 provide non-AI/ML based signal recovery, channel decoding and source decoding, respectively, and the demodulator component 854 of the AI/ML module 852 provides optimized demodulation functionality that is the reciprocal of the modulation functionality performed by the modulation component 804 at base station 170. For optimization of modulation without a predefined constellation, an AI/ML algorithm implemented by modulation component 804 may be configured to maximize the Euclidean or non-Euclidean distance between constellation points. For optimization of modulation with a predefined non-linear modulator, the parameters for the modulation may be determined by self-optimization, e.g., to optimize the distance of modulated symbols. In some scenarios, non-AI/ML based optimization of modulation may also or instead be utilized. As mentioned above, in the example shown in FIG. 10, source and channel coding via source encoder 801 and channel encoder 803 and waveform generation via waveform generator 805 at base station 170, and waveform processing via waveform processor 851 and channel and source decoding via channel decoder 855 and source decoder 857 at UE 110, may be managed without AI/ML. For example, waveform types and associated parameters as well as coding schemes and associated parameters may be predefined, and a waveform type and a coding scheme may be selected from predefined sets according to a transmission requirement, as discussed previously. Alternatively, the coding scheme and associated parameters and/or the waveform type and waveform parameters may be dynamically signaled to a UE via, for example, downlink control information (DCI) or radio resource control (RRC) signaling.
FIG. 11 illustrates an example of a transmit chain 900 of a base station 170 and a receive chain 950 of a UE 110 that each include an AI/ML module 902, 952 that is trainable in order to provide a tailored personalized air interface between the base station 170 and UE 110, in accordance with a fifth embodiment of the present disclosure. In the example shown in FIG. 11, the transmit chain 900 of base station 170 includes a source encoder 901, a channel encoder 903, a QAM mapping component 905 and an AI/ML module 902 that includes a waveform generation component 904. In this example the receive chain 950 of UE 110 includes an AI/ML module 952, a QAM demapping component 953, a channel decoder 955 and a source decoder 957. The AI/ML module 952 includes a waveform processing component 954. Unlike the previous examples, the example shown in FIG. 11 utilizes non-AI/ML based source and channel coding/decoding and modulation/demodulation and AI/ML based or assisted waveform generation. The AI/ML based or assisted waveform generation may enable per UE based optimization of one or more waveform parameters, such as pulse shape, pulse width, subcarrier spacing (SCS), cyclic prefix, pulse separation, sampling rate, PAPR and the like. For optimization of waveform generation without a predefined waveform type, without a predefined pulse shape and without predefined waveform parameters, self-learning/training and optimization may be used to determine an optimal waveform type, pulse shape and waveform parameters. In some embodiments, the AI/ML algorithms for self-learning/training and optimization may be downloaded by the UE from a network/server/other device. In some embodiments, there may be a finite set of predefined waveform types, and selection of a predefined waveform type from the finite set and determination of the pulse shape and other waveform parameters may be done through self-optimization. In some scenarios, non-AI/ML based optimization of waveform generation may also or instead be utilized.
As mentioned above, in the example shown inFIG.11, source and channel coding via source encoder901and channel encoder903and modulation via QAM mapping component905at base station170and demodulation via QAM demapping component953and channel and source decoding via channel decoder955and source decoder957at UE110, may be managed without AI/ML. For example, a modulation and coding scheme and associated parameters may be selected from a predefined set of modulation and coding schemes according to a transmission requirement, as discussed previously. Alternatively, the modulation and coding scheme and associated parameters may be dynamically signaled to a UE via for example downlink control information (DCI) or radio resource control (RRC) signaling. Examples of over the air information exchange procedures that may facilitate training of ML components of communicating devices, such as various ML components of the base stations170and UEs110of the examples shown inFIGS.7to11will now be described with reference toFIGS.12to14. FIG.12is a signal flow diagram1000of an example of an over the air information exchange procedure for a training phase of machine learning components enabling device-specific tailoring/customization of an air interface, in accordance with an embodiment of this disclosure. In the signal flow diagram1000, a UE and a BS or other network device are involved in an information exchange for an AI/ML training phase1050. Although only one UE and one BS are shown inFIG.12to avoid congestion in the drawing, data collection or information sharing during training, and similarly operation of a communication network, are expected to involve more than one UE and more than one BS. For example, in some embodiments training may be done with the joint efforts of multiple network devices and multiple UEs and air interface optimization may be done on a per UE basis. 
The information exchange procedure begins with the UE sending information indicating an AI/ML capability of the UE to the BS at1010. The information indicating an AI/ML capability of the UE may indicate whether or not the UE supports AI/ML for optimization of an air interface. If the UE is capable of supporting AI/ML optimization, the information may also or instead indicate what type and/or level of complexity of AI/ML the UE is capable of supporting, e.g., for which functions/operations AI/ML can be supported, what kind of AI/ML algorithm can be supported (for example, autoencoder, reinforcement learning, neural network (NN), deep neural network (DNN)), how many layers of NN can be supported, etc. In some embodiments, the information indicating an AI/ML capability of the UE may also or instead include information indicating whether the UE can assist with training. In some embodiments, the information sent at1010may include information indicating an AI/ML capability type of the UE. The AI/ML capability type may identify whether the UE supports AI/ML optimization of one or more components of the air interface of the device. For example, the AI/ML capability type may be one of a plurality of AI/ML capability types, where each AI/ML capability type corresponds to support for a different level of AI/ML capability. For example, the plurality of AI/ML capability types may include an AI/ML capability type that indicates the UE supports deep learning. As another example, the plurality of AI/ML capability types may include different types that indicate different combinations of air interface components that are optimizable by AI/ML. 
For example, the plurality of AI/ML capability types may include one or more of the following types:
a type corresponding to support for AI/ML based optimization of all baseband signal processing components, such as coding (channel coding or joint source and channel coding), modulation and waveform generation (e.g., similar to the examples shown inFIGS.7A and7B);
a type corresponding to support for AI/ML based optimization of coding and modulation, but not waveform generation (e.g., similar to the examples shown inFIGS.8A and8B);
a type corresponding to support for AI/ML based optimization of modulation and waveform generation, but not coding (e.g., similar to the example shown inFIG.9);
a type corresponding to support for AI/ML based optimization of modulation, but not coding and waveform generation (e.g., similar to the example shown inFIG.10);
a type corresponding to support for AI/ML based optimization of waveform generation, but not coding and modulation (e.g., similar to the example shown inFIG.11).
In some embodiments, the information sent by the UE to the BS at1010may be sent by the UE to the BS as part of an initial access procedure to access the network. In other embodiments, the information may also or instead be sent by the UE in response to a capability enquiry from the BS (not shown). After receiving AI/ML capability information from the UE indicating that the UE supports AI/ML and can assist with training, the BS sends a training request to the UE at1012to trigger a training phase1050. In some embodiments, the training request may be sent to the UE through DCI (dynamic signaling) on a downlink control channel or on a data channel. For example, in some embodiments the training request may be sent to the UE as UE specific or UE common DCI. For example, UE common DCI may be used to send a training request to all UEs or a group of UEs. The UE may send a response to the training request to the BS, as indicated at1014. 
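The per-component capability types listed above can be modeled, purely for illustration, as bit flags so that the network can check whether a reported capability covers every air interface component it wishes to optimize. The class and member names below are assumptions for the sketch, not signaling fields defined by this disclosure:

```python
from enum import Flag, auto

class AimlCapability(Flag):
    """Illustrative bit flags for per-component AI/ML optimization support."""
    NONE = 0
    CODING = auto()
    MODULATION = auto()
    WAVEFORM = auto()
    # A FIG. 7A/7B-style type: all baseband components optimizable.
    ALL_BASEBAND = CODING | MODULATION | WAVEFORM

def capability_supports(reported, required):
    """True if the reported capability covers every required component."""
    return (reported & required) == required

# e.g. a UE reporting the FIG. 10-style type: modulation only.
ue_cap = AimlCapability.MODULATION
```

A real capability report would be carried in standardized signaling; the bitmask check simply captures the "combinations of air interface components" idea in executable form.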
This response may confirm that the UE has entered a training mode. However, such a response can be optional and may not be sent by a UE in some embodiments. At1016the BS starts the training phase1050by sending a training signal that includes a training sequence or training data to the UE. In some embodiments, the BS may send a training sequence/training data to the UE after a certain predefined time gap following transmission of the training request at1012. In other embodiments, the BS may immediately transmit a training sequence/training data to the UE after transmitting the training request at1012. In still other embodiments, the BS may wait until it has received a response to the training request from the UE before transmitting the training sequence/training data to the UE. Non-limiting examples of channels that may be used by the BS to send training sequences/training data to UE include:
Dynamic control channel: When the number of bits required to send the training sequence/training data is less than a certain threshold, a dynamic control channel may be used to send the training sequence/training data. In some embodiments, several levels of bit lengths may be defined. The different bit lengths may correspond to different DCI formats or different DCI payloads. The same DCI can be used for carrying training sequences/data for different AI/ML modules. In some embodiments, a DCI field may contain information indicating an AI/ML module the training sequence/training data is to be used to train.
Data channel: In some embodiments, a data channel may be used to carry a training sequence/training data. In such embodiments, the payload of the data channel depends on the training sequence length or the amount of training data that is to be sent. 
The DCI used to schedule such a data channel can carry the information required for decoding the data channel and AI/ML module indicator(s) to indicate which AI/ML module(s) the training sequence/data is for.
RRC channel: In some embodiments, training sequences/training data can be sent to the UE via RRC signaling.
For its part, the UE starts to search for a training signal (e.g., a training sequence or training data) sent by the network after sending back a response to the training request at1014or after receiving the training request at1012with or without a predefined time gap. The channel resource and the transmission parameters for the training signal, such as MCS and demodulation reference signal (DMRS), can be predefined or preconfigured (for example by RRC signaling) or signaled by dynamic control signaling (similar to the detection of DCI for a scheduled data channel). In some embodiments, the training sequence/training data may be carried in a dynamic control channel directly (e.g., certain bits in a dynamic control channel may be reserved for carrying training sequence/training data). At1018the UE sends a training response message to the BS that includes feedback information based on processing of the received training signal. In some embodiments, the training response message may include feedback information indicating an updated training sequence for an iterative training process (e.g., for autoencoder based ML) or certain type(s) of measurement results to help Tx/Rx to further train or refine the training of a NN, e.g., for reinforcement learning. In some embodiments, such measurements may include, for example, the error margin obtained by the UE in receiving the training sequence/data from the BS. For example the measurement results may include information indicating the mean square of errors and/or an error direction (e.g., error increase or decrease). 
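The bit-length threshold rule for routing training sequences described in the channel list above can be sketched as a simple decision function. The threshold value and return labels below are illustrative assumptions; the disclosure does not fix a specific threshold:

```python
def select_training_channel(payload_bits, dci_threshold_bits=64):
    """Route a training sequence/training data payload: short payloads go
    on a dynamic control channel, longer ones on a scheduled data channel
    (threshold of 64 bits is an illustrative assumption)."""
    if payload_bits < dci_threshold_bits:
        return "dynamic_control_channel"
    return "data_channel"
```

With several defined bit-length levels, this function would instead map the payload size to one of multiple DCI formats/payloads before falling back to the data channel.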
In some embodiments, the training response message may also or instead include other feedback information, such as an adjustment step size and direction (e.g., increase or decrease by X amount, where X is the adjustment step size). In some cases, the measurement results or feedback may be provided implicitly. For example the adjustment of beamforming can be indicated by the beam direction of the feedback signal. In some embodiments, the training response message may be sent by the UE through an uplink (UL) control channel. In other embodiments, the training response message may be partially or entirely sent through an UL data channel. An AI/ML module that includes one or more AI/ML components, such as a neural network, is trained in the network based on the received training response message from the UE. InFIG.12, this training is indicated at1019. For example, the parameters of an AI/ML module, such as neural network weights, may be updated/modified based on measurement results returned by the UE. In some embodiments this training may be performed at least in part in the BS, while in other embodiments the training may be performed in part or in whole by another network device, such as a centralized AI/ML server (not shown). At1020, the BS sends information to the UE to update AI/ML parameters, such as neural network weights, in order to optimize one or more aspects of the air interface between the UE and BS. In some embodiments this training process is done iteratively, as indicated at1040, whereby the BS repeatedly transmits training sequence/data and iteratively refines AI/ML parameters based on training response messages from the UE. In some embodiments this iterative process may continue until one or more target criteria is satisfied or until a predefined number of iterations have occurred. It should be noted that not all embodiments involve AI/ML functionality at UEs and therefore AI/ML parameters need not necessarily be signaled to a UE in all embodiments. 
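The iterative refinement at1040 (transmit training signal, receive training response, update parameters, repeat until a target criterion or iteration limit is reached) can be sketched as a toy network-side loop. The update rule, step size, and the `ue_feedback` callable below are all assumptions standing in for the over-the-air exchange at1016/1018, not the disclosed signaling itself:

```python
def iterative_training(init_weights, ue_feedback, target_mse=0.01, max_iters=100):
    """Toy network-side training loop: refine weights from UE-reported
    feedback until the target MSE is met or max_iters is reached.
    ue_feedback(weights) stands in for sending a training signal and
    receiving (mse, gradient) in the training response message."""
    weights = list(init_weights)
    for i in range(max_iters):
        mse, gradient = ue_feedback(weights)
        if mse <= target_mse:          # target criterion satisfied
            return weights, i
        # Illustrative gradient-style update with a fixed step of 0.5.
        weights = [w - 0.5 * g for w, g in zip(weights, gradient)]
    return weights, max_iters
```

In the network-oriented case the loop runs entirely at the BS or an AI/ML server, with only the resulting parameters signaled to the UE at1020.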
At1022, the BS terminates the training process by sending a termination signal to the UE indicating the training phase is finished, in response to which the UE transitions to a normal operation phase1060. In some embodiments, the training termination signal may be transmitted to the UE through dynamic signaling. In the normal operations phase1060the UE and BS may then communicate via the updated air interface. In some embodiments, the information exchange procedure shown inFIG.12occurs at least partially in the Radio Resource Control (RRC) layer. In some embodiments, the information exchange procedure shown inFIG.12occurs at least partially in a Medium Access Control (MAC) layer. For example, the information exchange signaling may be carried by a MAC control element (MAC CE) implemented as a special bit string in a logical channel ID (LCID) field of a MAC header. In the example embodiment shown inFIG.12, the AI/ML training is performed in the network and the results of the training are sent to the UE, which may be referred to as network oriented training. In other embodiments, training may take place jointly at the UE and in the network. FIG.13is a signal flow diagram1100of an example of an over the air information exchange procedure for a training phase of machine learning components enabling device-specific tailoring/customization of an air interface, in accordance with an embodiment of this disclosure in which the training takes place jointly at the UE and BS. In the signal flow diagram1100, a UE and a BS or other network device are involved in an information exchange for an AI/ML training phase1150. The information exchange procedure begins with the UE sending information indicating an AI/ML capability of the UE to the BS at1110. 
The information indicating an AI/ML capability of the UE may include the same or similar information to that described above with reference to the example embodiment shown inFIG.12, but in this example the information also indicates that the UE is capable of joint AI/ML training with the network. In some embodiments, the information sent by the UE to the BS at1110may be sent as part of an initial access procedure to access the network. In other embodiments, the information may also or instead be sent by the UE in response to a capability enquiry from the BS (not shown). After receiving AI/ML capability information from the UE indicating that the UE supports network and UE joint AI/ML training, the BS sends a training request to the UE at1112to trigger a training phase1150. In some embodiments, the training request may be sent to the UE through DCI (dynamic signaling) on a downlink control channel or on a data channel. For example, in some embodiments the training request may be sent to the UE with UE specific or UE common DCI. For example, UE common DCI may be used to send a training request to all UEs or a group of UEs. In some embodiments, the training request may be sent to the UE via RRC signaling. In some embodiments, the training request may include initial training setting(s)/parameter(s), such as initial NN weights. In some embodiments, the BS may also send AI/ML related information to the UE to facilitate joint training such as:
Information indicating which AI/ML module is to be trained if there is more than one AI/ML module that is trainable;
Information about the AI/ML algorithm and initial setting/parameters.
The AI/ML related information may be sent as part of the training request sent at1112or may be sent separately from the training request. 
The AI/ML related information sent to the UE, such as information indicating AI/ML algorithm(s) and setting/parameters, may have been selected by the BS or another network device based at least in part on the AI/ML capability information received from the UE. In some embodiments, the AI/ML related information may include an instruction for the UE to download initial AI/ML algorithm(s) and/or setting(s)/parameter(s), in response to which the UE may then download initial AI/ML algorithms and/or setting(s)/parameter(s) in accordance with the instruction. In some embodiments, after the UE has received the training request and initial training information from the network, the UE may send a response to the training request to the BS, as indicated at1114inFIG.13. This response may confirm that the UE has entered a training mode. However, such a response can be optional and may not be sent by a UE in some embodiments. At1116the BS starts the training phase1150by sending a training signal that includes a training sequence or training data to the UE. In some embodiments, the BS may send a training sequence/training data to the UE after a certain predefined time gap following transmission of the training request at1112. In other embodiments, the BS may immediately transmit a training sequence/training data to the UE after transmitting the training request at1112. In still other embodiments, the BS may wait until it has received a response to the training request from the UE before transmitting the training sequence/training data to the UE. As noted above, in some embodiments the BS notifies the UE which AI/ML module(s)/component(s) is/are to be trained by including information in the training request that identifies one or more AI/ML modules/components or by sending such information to the UE in a separate communication. By doing so, the BS informs the UE which AI/ML module(s)/component(s) is/are to be trained based on the training signal transmitted by the BS at1116. 
Non-limiting examples of channels that may be used by the BS to send training sequences or training data to UE include those discussed above with reference toFIG.12, namely a dynamic control channel, a data channel and/or RRC channel. Similar to the UE in the example embodiment shown inFIG.12, the UE in the example embodiment shown inFIG.13may start to search for a training signal (e.g., a training sequence or training data) after sending back a response to the training request at1114or after receiving the training request at1112with or without a predefined time gap. The channel resource and the transmission parameters for the training signal, such as MCS and DMRS, can be predefined or preconfigured (e.g., by RRC signaling) or signaled by dynamic control signaling. In some embodiments, the training sequence/training data may be carried in a dynamic control channel directly (e.g., certain bits in a dynamic control channel may be reserved for carrying training sequence/training data). At1118the UE sends a training response message to the BS. In some embodiments, the training response message may include feedback information indicating an updated training sequence for an iterative training process (e.g., for autoencoder based ML) or certain type(s) of measurement results to help further train or refine the training of a NN, e.g., for reinforcement learning. In some embodiments, such measurements may include, for example, the error margin obtained by the UE in receiving the training sequence/data from the BS. For example the measurement results may include information indicating the mean square of errors and/or an error direction (e.g., error increase or decrease). In some embodiments, the training response message may also or instead include other feedback information, such as an adjustment step size and direction (e.g., increase or decrease by X amount, where X is the adjustment step size). In some cases, the measurement results or feedback may be provided implicitly. 
For example the adjustment of beamforming can be indicated by the beam direction of the feedback signal. In some embodiments, the training response message may be sent by the UE through an uplink (UL) control channel. In other embodiments, the training response message may be partially or entirely sent through an UL data channel. In this embodiment, training of an AI/ML module that includes one or more AI/ML components takes place jointly in the network and at the UE, as indicated at1119inFIG.13. For example, parameters of an AI/ML module, such as neural network weights, may be updated/modified based on measurement results returned by the UE for the training sequence/data that was transmitted by the BS. In some embodiments, the UE and BS exchange updates of the training setup and parameters, such as neural network weights, in order to optimize one or more aspects of the air interface between the UE and BS, as indicated at1120inFIG.13. In other embodiments, the UE and/or the BS may be able to update the training setup and parameters autonomously based on their own training process at1119without the further information exchange indicated at1120. In some embodiments this training process is done iteratively, as indicated at1140, whereby the BS repeatedly transmits training sequence/data and the UE and BS iteratively refine AI/ML parameters based on training response messages from the UE. In some embodiments this iterative process may continue until one or more target criteria is satisfied or until a predefined number of iterations have occurred. In some embodiments, the training sequence/data may be updated during the iterative training process. At1122, the BS terminates the training process by sending a termination signal to the UE indicating the training phase is finished, in response to which the UE transitions to a normal operation phase1160. 
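The feedback fields carried in the training response message (mean-square error, error direction, and an adjustment step size/direction) might be modeled, for illustration only, as a small message container plus a parameter-adjustment helper. The field names and the sign convention for applying the step are assumptions of this sketch:

```python
from dataclasses import dataclass

@dataclass
class TrainingResponse:
    """Illustrative container for training-response feedback fields."""
    mse: float              # mean square of errors measured by the UE
    error_increased: bool   # error direction (increase vs. decrease)
    step_size: float        # adjustment step size ("X amount")

def apply_adjustment(value, resp):
    """Move a parameter by the reported step, opposing the error trend
    (sign convention is an assumption for this sketch)."""
    return value - resp.step_size if resp.error_increased else value + resp.step_size
```

In the joint-training case of FIG.13, both the UE and the network would apply such adjustments locally, optionally exchanging the resulting parameters at1120.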
In some embodiments, the UE may initiate termination of the training phase by sending a termination recommendation signal to the BS. In the normal operations phase1160the UE and BS may then communicate via the updated air interface. In some embodiments, the AI/ML algorithms and/or parameters may have been pre-downloaded by the UE. In some embodiments, the AI/ML capability information the UE sends at1110may include information indicating pre-downloaded AI/ML algorithms and parameters. In such embodiments, the BS may transmit a download instruction to a UE to instruct the UE to download a selected AI/ML algorithm or parameters if the AI/ML capability information received from the UE indicates the selected AI/ML algorithm or parameters have not been pre-downloaded by the UE. In some embodiments, the information exchange procedure shown inFIG.13occurs at least partially in the RRC layer. In some embodiments, the information exchange procedure shown inFIG.13occurs at least partially in a MAC layer. For example, the information exchange signaling may be carried by a MAC CE implemented as a special bit string in a LCID field of a MAC header. It should be understood that the specific AI/ML component architectures that may be used in embodiments of the present disclosure may be designed based on the particular application. For example, where an AI/ML component is implemented with a deep neural network (DNN), the specific DNN architecture that should be used for a given application (e.g., joint coding and modulation optimization or individual waveform generation optimization) may be standardized (e.g., in agreed upon industry standards). For example, standardization may include a standard definition of the type(s) of neural network to be used, and certain parameters of the neural network (e.g., number of layers, number of neurons in each layer, etc.). Standardization may be application-specific. 
For example, a table may be used to list the standard-defined neural network types and parameters to be used for specific applications. In the context of the wireless system100ofFIG.1, standardized definitions may be stored in the memory of the BS170, to enable the BS170to select the appropriate DNN architecture and parameters to be trained for a particular wireless communication scenario. As discussed above with respect toFIGS.12and13, training of DNN(s) (e.g., a single DNN implementing coding, modulation and/or waveform generation, or separate DNNs for each) or other AI/ML components may be performed at a BS or jointly at a BS and a UE, and may be performed at the time of initial setup and association between the BS and UE. In some examples, it may be sufficient for the BS and/or the UE to train an AI/ML component, e.g., DNN(s), at the time of setup. As well, training or re-training may also be performed on-the-fly, for example in response to significant change in the UE or BS and/or the environment (e.g., addition of new UE(s), disassociation of a UE, significant change in UE mobility, change in UE state or significant change in channel, among other possibilities). In some examples, training of the AI/ML components, such as DNNs, at the BS and/or UE may be performed offline, for example using data collected by the BS or UE. The collected data may represent different wireless communication scenarios, such as different times of day, different days of the week, different traffic levels, etc. Training may be performed for a particular scenario, to generate different sets of DNN weights for different scenarios. The different sets of weights may be stored in association with the different specific scenarios (e.g., in a look-up table), for example in the memory of the BS or UE. The BS or UE may then select and use a particular set of weights for the DNN(s), in accordance with the specific scenario. 
For example, the BS or UE may determine that it is handling communications for a weekend evening (e.g., using information from an internal clock and/or calendar) and use the corresponding set of weights to implement the DNN(s) for coding, modulation and/or waveform generation. This would result in the transmitter of the BS170performing coding, modulation and/or waveform generation suitable for a weekend evening. In some embodiments, offline and on-the-fly training may be applied jointly. For example, on-the-fly re-training may be performed to update training that was previously performed offline. For example, a BS and/or UE may also retrain AI/ML components such as DNN(s) on-the-fly, in response to dynamic changes in the environment and/or in the UE or BS, as discussed above. Thus, the BS or UE may update the table of weights dynamically. In some examples, the table of weights may include sets of weights that are standardized (e.g., defined in standards for very common scenarios) and may also include sets of weights that are generated offline and/or on-the-fly for certain scenarios. The BS may provide an indexed table of weights and associated scenarios to the UE. The BS may instruct the UE which set of weights to use, for example by indicating the corresponding index of a selected set of weights. The BS and/or UE may retrain their AI/ML components and update their tables of weights (e.g., in response to a new scenario) and communicate the updated tables to one another, e.g., on a periodic or aperiodic basis. FIG.14is a signal flow diagram1200of an example of an over the air information exchange procedure for a normal operations phase1260of machine learning components enabling on-the-fly device-specific tailoring/customization of an air interface, in accordance with an embodiment of this disclosure. In this embodiment, the on-the-fly update of AI/ML parameters may be triggered by the network during the normal operation phase1260, as indicated at1210. 
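The scenario-indexed table of DNN weights described above can be sketched as a simple lookup with a fallback entry. The table keys, weight values, and the fallback rule are illustrative assumptions; real entries would be trained offline per scenario and possibly standardized:

```python
def select_weights(weight_table, scenario, default_scenario="weekday_day"):
    """Look up the DNN weight set stored for a communication scenario
    (e.g. 'weekend_evening'), falling back to a default entry when the
    scenario has no trained weights yet."""
    return weight_table.get(scenario, weight_table[default_scenario])

# Hypothetical per-scenario weight sets (real sets would be full DNN weights).
table = {
    "weekday_day": [0.1, 0.2, 0.3],
    "weekend_evening": [0.4, 0.1, 0.9],
}
```

On-the-fly re-training would then overwrite or add entries in `weight_table`, and the BS could signal just the index of the selected entry to the UE rather than the weights themselves.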
The network may trigger the on-the-fly update by sending updated AI/ML parameters, such as DNN weights. In this embodiment the on-the-fly update may also or instead be triggered by the UE during the normal operation phase1260, as indicated at1212. The UE may trigger the on-the-fly update by sending updated AI/ML parameters, such as DNN weights to the BS if the UE is capable of self-training. Otherwise, the trigger that the UE sends the BS at1212may simply comprise a request for an update from the BS. In addition to or instead of being triggered by the BS and/or the UE, an on-the-fly update during the normal operation phase1260may occur on a periodic or aperiodic basis, and may involve a mutual information update exchange, as indicated at1214. FIG.15is a signal flow diagram1300of an example of an over the air information exchange procedure for a re-training phase of machine learning components enabling device-specific tailoring/customization of an air interface, in accordance with an embodiment of this disclosure. In the signal flow diagram1300, a UE and a BS or other network device are involved in an information exchange for an AI/ML re-training phase1350. In this embodiment, the re-training phase may be triggered by the network, as indicated at1310. In some embodiments, the BS may trigger the re-training by sending a training request to the UE, e.g., through DCI, RRC or MAC signaling as discussed earlier with reference toFIGS.12and13. In this embodiment the re-training phase may also or instead be triggered by the UE, as indicated at1312. In either case, during the re-training phase1350the UE and BS exchange re-training signaling as indicated at1314in order to facilitate re-training of AI/ML components in the network and/or at the UE. For example, in some embodiments the re-training signaling may include information exchanges and signaling such as that indicated at1016,1018and1020inFIG.12or at1116,1118and1120inFIG.13. 
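The conditions for triggering on-the-fly re-training (a significant change in UE mobility, a significant channel change, or the addition of new UEs, as discussed with reference toFIG.1) can be sketched as a simple predicate. The metric names and threshold values are assumptions for illustration; the disclosure does not prescribe specific thresholds:

```python
def needs_retraining(mobility_change, channel_change, new_ues,
                     mobility_thresh=0.3, channel_thresh=0.5):
    """Illustrative re-training trigger: re-train when the environment
    or device state changes significantly (thresholds are assumed)."""
    return (mobility_change > mobility_thresh
            or channel_change > channel_thresh
            or new_ues > 0)
```

Either the BS (at1310) or the UE (at1312) could evaluate such a predicate before initiating the re-training phase1350.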
In some embodiments, re-training of an AI/ML module that includes one or more AI/ML components may take place in the network or jointly in the network and at the UE, as indicated at1319inFIG.15. In some embodiments this re-training process is done iteratively, as indicated at1340, whereby the BS repeatedly transmits training sequence/data and the UE and BS iteratively refine AI/ML parameters based on re-training response messages from the UE. In some embodiments this iterative process may continue until one or more target criteria is satisfied or until a predefined number of iterations have occurred. In some embodiments, the re-training sequence/data may be updated during the iterative re-training process. At1316, the BS terminates the re-training process by sending a termination signal to the UE indicating the re-training phase is finished, in response to which the UE transitions to a normal operation phase1360. In some embodiments, the UE may instead initiate termination of the re-training phase by sending a termination recommendation signal to the BS, as indicated at1318. In the normal operations phase1360the UE and BS may then communicate via the updated air interface resulting from the re-training. The above discussion refers to examples where the network side training is performed by the BS. In other examples, AI/ML component training may not be performed by the BS. For example, referring again toFIG.1, training may be performed by the core network130or elsewhere in the wireless system100(e.g., using cloud computing). A BS170may simply collect the relevant data and forward the data to the appropriate network entity (e.g., the core network130) to perform the necessary training. The trained AI/ML component parameters, e.g., weights of trained DNN(s), may then be provided to the BS170and ED(s)110. 
Although the above discussion is in the context of the BS170in the role of a transmitter and the ED110in the role of a receiver, it should be understood that the transmitter and receiver roles may be reversed (e.g., for uplink communications). Further, it should be understood that the transmitter and receiver roles may be at two or more EDs110a,110b,110c(e.g., for sidelink communications). The BS170(or core network130or other network entity) may perform the DNN training and may provide the trained weights to the ED110in order for the ED110to implement the DNN(s) for communicating with the BS170.
EXAMPLE EMBODIMENTS
The following provides a non-limiting list of additional Example Embodiments of the present disclosure:
Example Embodiment 1. A method in a wireless communication network, the method comprising: transmitting, by a first device, information regarding an artificial intelligence or machine learning (AI/ML) capability of the first device to a second device over a single air interface between the first device and the second device, the information regarding an AI/ML capability of the first device identifying whether the first device supports AI/ML for optimization of at least one air interface configuration over the single air interface.
Example Embodiment 2. The method of Example Embodiment 1, wherein the information regarding an AI/ML capability of the first device comprises information indicating the first device is capable of supporting a type and/or level of complexity of AI/ML.
Example Embodiment 3. The method of Example Embodiment 1 or 2, wherein the information regarding an AI/ML capability of the first device comprises information indicating whether the first device assists with an AI/ML training process for optimization of the at least one air interface configuration.
Example Embodiment 4. 
The method of any of Example Embodiments 1 to 3, wherein the information regarding an AI/ML capability of the first device comprises information indicating at least one component of the at least one air interface configuration for which the first device supports AI/ML optimization.

Example Embodiment 5. The method of Example Embodiment 4, wherein the at least one component of the at least one air interface configuration includes at least one of a coding component, a modulation component and a waveform component.

Example Embodiment 6. The method of Example Embodiment 4 or 5, wherein the information indicating at least one component of the at least one air interface configuration for which the first device supports AI/ML optimization further comprises information indicating whether the first device supports joint optimization of two or more components of the at least one air interface configuration.

Example Embodiment 7. The method of any of Example Embodiments 1 to 6, wherein transmitting the information regarding an AI/ML capability of the first device comprises at least one of: transmitting the information in response to receiving an enquiry; and transmitting the information as part of an initial network access procedure.

Example Embodiment 8. The method of any of Example Embodiments 1 to 7, further comprising: receiving an AI/ML training request from the second device; and after receiving the AI/ML training request, transitioning to an AI/ML training mode.

Example Embodiment 9. The method of Example Embodiment 8, wherein receiving the AI/ML training request comprises receiving the AI/ML training request through downlink control information (DCI) on a downlink control channel or RRC signaling or the combination of the DCI and RRC signaling.

Example Embodiment 10.
The method of Example Embodiment 8 or 9, further comprising transmitting a training request response to the second device to confirm that the first device has transitioned to the AI/ML training mode.

Example Embodiment 11. The method of any of Example Embodiments 1 to 10, further comprising receiving a training signal from the second device that includes a training sequence or training data for training at least one AI/ML module responsible for one or more components of the at least one air interface configuration.

Example Embodiment 12. The method of Example Embodiment 11, wherein receiving the training signal comprises receiving the training signal on a dynamic control channel.

Example Embodiment 13. The method of Example Embodiment 12, wherein the dynamic control channel includes a dynamic control information (DCI) field containing information indicating an AI/ML module that is to be trained.

Example Embodiment 14. The method of Example Embodiment 11, wherein receiving the training signal comprises receiving the training signal on a scheduled data channel, the method further comprising receiving scheduling information for the data channel on a dynamic control channel that includes a DCI field containing information indicating an AI/ML module that is to be trained.

Example Embodiment 15. The method of any of Example Embodiments 11 to 14, further comprising, after receiving the training signal, transmitting a training response message to the second device, the training response message including feedback information based on processing of the received training signal at the first device.

Example Embodiment 16. The method of Example Embodiment 15, wherein the feedback information included in the training response message includes an updated training sequence for an iterative training process.

Example Embodiment 17.
The method of Example Embodiment 15 or 16, wherein the feedback information included in the training response message includes measurement results based on the received training signal.

Example Embodiment 18. The method of Example Embodiment 17, wherein the measurement results include an error margin obtained by the first device in receiving the training signal from the second device.

Example Embodiment 19. The method of any of Example Embodiments 15 to 18, further comprising, after transmitting the training response message, receiving AI/ML update information from the second device, the AI/ML update information including information indicating updated AI/ML parameters for an AI/ML module based on the feedback information provided by the first device.

Example Embodiment 20. The method of Example Embodiment 19, further comprising updating the AI/ML module in accordance with the updated AI/ML parameters in order to update the at least one air interface configuration for receiving transmissions from the second device.

Example Embodiment 21. The method of any of Example Embodiments 15 to 18, further comprising: training one or more AI/ML modules at the first device based on the training signal received from the second device; and transmitting AI/ML update information to the second device, the AI/ML update information including information indicating updated AI/ML parameters for at least one of the one or more AI/ML modules based on the training performed by the first device.

Example Embodiment 22. The method of Example Embodiment 21, further comprising receiving AI/ML update information from the second device, the AI/ML update information from the second device including information indicating updated AI/ML parameters for at least one of the one or more AI/ML modules based on training of one or more AI/ML modules at the second device based on feedback information provided in the training response message.

Example Embodiment 23.
The method of Example Embodiment 22, further comprising updating the at least one air interface configuration for receiving transmissions from the second device by updating the one or more AI/ML modules in accordance with the updated AI/ML parameters based on the training performed by the first device and the updated AI/ML parameters received from the second device.

Example Embodiment 24. The method of any of Example Embodiments 1 to 23, further comprising: receiving a training termination signal from the second device; and after receiving the training termination signal, transitioning the first device from the training mode to a normal operations mode.

Example Embodiment 25. The method of any of Example Embodiments 1 to 24, wherein the first device is user equipment and the second device is a network device.

Example Embodiment 26. A method in a wireless communication network, the method comprising: receiving, by a second device, information regarding an artificial intelligence or machine learning (AI/ML) capability of a first device over a single air interface between the first device and the second device, the information regarding an AI/ML capability of the first device identifying whether the first device supports AI/ML for optimization of at least one air interface configuration over the single air interface; and transmitting an AI/ML training request to the first device based at least in part on the information regarding the AI/ML capability of the first device.

Example Embodiment 27. The method of Example Embodiment 26, wherein the information regarding an AI/ML capability of the first device comprises information indicating the first device is capable of supporting a type and/or level of complexity of AI/ML.

Example Embodiment 28.
The method of Example Embodiment 26 or 27, wherein the information regarding an AI/ML capability of the first device comprises information indicating whether the first device assists with an AI/ML training process for optimization of the at least one air interface configuration.

Example Embodiment 29. The method of any of Example Embodiments 26 to 28, wherein the information regarding an AI/ML capability of the first device comprises information indicating at least one component of the at least one air interface configuration for which the first device supports AI/ML optimization.

Example Embodiment 30. The method of Example Embodiment 29, wherein the at least one component of the at least one air interface configuration includes at least one of a coding component, a modulation component and a waveform component.

Example Embodiment 31. The method of Example Embodiment 29 or 30, wherein the information indicating at least one component of the at least one air interface configuration for which the first device supports AI/ML optimization further comprises information indicating whether the first device supports joint optimization of two or more components of the at least one air interface configuration.

Example Embodiment 32. The method of any of Example Embodiments 26 to 31, wherein receiving the information regarding an AI/ML capability of the first device comprises receiving the information as part of an initial network access procedure for the first device.

Example Embodiment 33. The method of any of Example Embodiments 26 to 32, wherein transmitting the AI/ML training request comprises transmitting the AI/ML training request through downlink control information (DCI) on a downlink control channel or RRC signaling or the combination of the DCI and RRC signaling.

Example Embodiment 34.
The method of Example Embodiment 33, further comprising receiving a training request response from the first device confirming that the first device has transitioned to an AI/ML training mode.

Example Embodiment 35. The method of any of Example Embodiments 26 to 34, further comprising transmitting a training signal to the first device, the training signal including a training sequence or training data for training at least one AI/ML module responsible for one or more components of the at least one air interface configuration.

Example Embodiment 36. The method of Example Embodiment 35, wherein transmitting the training signal comprises transmitting the training signal on a dynamic control channel.

Example Embodiment 37. The method of Example Embodiment 36, wherein the dynamic control channel includes a dynamic control information (DCI) field containing information indicating an AI/ML module that is to be trained.

Example Embodiment 38. The method of Example Embodiment 35, wherein transmitting the training signal comprises transmitting the training signal on a scheduled data channel.

Example Embodiment 39. The method of Example Embodiment 38, further comprising transmitting scheduling information for the data channel on a dynamic control channel that includes a DCI field containing information indicating an AI/ML module that is to be trained.

Example Embodiment 40. The method of any of Example Embodiments 35 to 39, further comprising receiving a training response message from the first device, the training response message including feedback information based on processing of the received training signal at the first device.

Example Embodiment 41. The method of Example Embodiment 40, wherein the feedback information included in the training response message includes an updated training sequence for an iterative training process.

Example Embodiment 42.
The method of Example Embodiment 40 or 41, wherein the feedback information included in the training response message includes measurement results based on the received training signal.

Example Embodiment 43. The method of Example Embodiment 42, wherein the measurement results include an error margin obtained by the first device in receiving the training signal.

Example Embodiment 44. The method of any of Example Embodiments 40 to 43, further comprising: training one or more AI/ML modules based on the feedback information provided in the training response message from the first device.

Example Embodiment 45. The method of Example Embodiment 44, further comprising: transmitting AI/ML update information to the first device, the AI/ML update information including information indicating updated AI/ML parameters for at least one of the one or more AI/ML modules based on the training.

Example Embodiment 46. The method of any of Example Embodiments 40 to 45, further comprising: receiving AI/ML update information from the first device, the AI/ML update information from the first device including information indicating updated AI/ML parameters for at least one of the one or more AI/ML modules based on training of one or more AI/ML modules at the first device based on the training signal.

Example Embodiment 47. The method of Example Embodiment 46, further comprising updating the at least one air interface configuration for transmitting to the first device by updating the one or more AI/ML modules in accordance with the updated AI/ML parameters transmitted to the first device and the updated AI/ML parameters received from the first device.

Example Embodiment 48. The method of any of Example Embodiments 26 to 47, further comprising: transmitting a training termination signal to the first device to indicate that a training phase has finished.

Example Embodiment 49.
The method of any of Example Embodiments 26 to 48, wherein the first device is user equipment and the second device is a network device.

Example Embodiment 50. An apparatus comprising: a wireless interface; a processor operatively coupled to the wireless interface; and a computer readable storage medium operatively coupled to the processor, the computer readable storage medium storing programming for execution by the processor, the programming comprising instructions to: transmit, from a first device via the wireless interface, information regarding an artificial intelligence or machine learning (AI/ML) capability of the first device to a second device over a single air interface between the first device and the second device, the information regarding an AI/ML capability of the first device identifying whether the first device supports AI/ML for optimization of at least one air interface configuration over the single air interface.

Example Embodiment 51. The apparatus of Example Embodiment 50, wherein the information regarding an AI/ML capability of the first device comprises information indicating the first device is capable of supporting a type and/or level of complexity of AI/ML.

Example Embodiment 52. The apparatus of Example Embodiment 50 or 51, wherein the information regarding an AI/ML capability of the first device comprises information indicating whether the first device assists with an AI/ML training process for optimization of the at least one air interface configuration.

Example Embodiment 53. The apparatus of any of Example Embodiments 50 to 52, wherein the information regarding an AI/ML capability of the first device comprises information indicating at least one component of the at least one air interface configuration for which the first device supports AI/ML optimization.

Example Embodiment 54.
The apparatus of Example Embodiment 53, wherein the at least one component of the at least one air interface configuration includes at least one of a coding component, a modulation component and a waveform component.

Example Embodiment 55. The apparatus of Example Embodiment 53 or 54, wherein the information indicating at least one component of the at least one air interface configuration for which the first device supports AI/ML optimization further comprises information indicating whether the first device supports joint optimization of two or more components of the at least one air interface configuration.

Example Embodiment 56. The apparatus of any of Example Embodiments 50 to 55, wherein the instructions to transmit the information regarding an AI/ML capability of the first device comprise at least one of: instructions to transmit the information in response to receiving an enquiry; and instructions to transmit the information as part of an initial network access procedure.

Example Embodiment 57. The apparatus of any of Example Embodiments 50 to 56, wherein the programming further comprises instructions to: receive an AI/ML training request from the second device; and after receiving the AI/ML training request, transition to an AI/ML training mode.

Example Embodiment 58. The apparatus of Example Embodiment 57, wherein the instructions to receive the AI/ML training request comprise instructions to receive the AI/ML training request through downlink control information (DCI) on a downlink control channel or RRC signaling or the combination of the DCI and RRC signaling.

Example Embodiment 59. The apparatus of Example Embodiment 57 or 58, wherein the programming further comprises instructions to transmit a training request response to the second device to confirm that the first device has transitioned to the AI/ML training mode.

Example Embodiment 60.
The apparatus of any of Example Embodiments 50 to 59, wherein the programming further comprises instructions to receive a training signal from the second device that includes a training sequence or training data for training at least one AI/ML module responsible for one or more components of the at least one air interface configuration.

Example Embodiment 61. The apparatus of Example Embodiment 60, wherein the instructions to receive the training signal comprise instructions to receive the training signal on a dynamic control channel.

Example Embodiment 62. The apparatus of Example Embodiment 61, wherein the dynamic control channel includes a dynamic control information (DCI) field containing information indicating an AI/ML module that is to be trained.

Example Embodiment 63. The apparatus of Example Embodiment 60, wherein the instructions to receive the training signal comprise instructions to receive the training signal on a scheduled data channel, the programming further comprising instructions to receive scheduling information for the data channel on a dynamic control channel that includes a DCI field containing information indicating an AI/ML module that is to be trained.

Example Embodiment 64. The apparatus of any of Example Embodiments 60 to 63, wherein the programming further comprises instructions to: transmit a training response message to the second device after receiving the training signal, the training response message including feedback information based on processing of the received training signal at the first device.

Example Embodiment 65. The apparatus of Example Embodiment 64, wherein the feedback information included in the training response message includes an updated training sequence for an iterative training process.

Example Embodiment 66. The apparatus of Example Embodiment 64 or 65, wherein the feedback information included in the training response message includes measurement results based on the received training signal.

Example Embodiment 67.
The apparatus of Example Embodiment 66, wherein the measurement results include an error margin obtained by the first device in receiving the training signal from the second device.

Example Embodiment 68. The apparatus of any of Example Embodiments 64 to 67, wherein the programming further comprises instructions to: receive AI/ML update information from the second device after transmitting the training response message, the AI/ML update information including information indicating updated AI/ML parameters for an AI/ML module based on the feedback information provided by the first device.

Example Embodiment 69. The apparatus of Example Embodiment 68, wherein the programming further comprises instructions to update the AI/ML module in accordance with the updated AI/ML parameters in order to update the at least one air interface configuration for receiving transmissions from the second device.

Example Embodiment 70. The apparatus of any of Example Embodiments 64 to 67, wherein the programming further comprises instructions to: train one or more AI/ML modules at the first device based on the training signal received from the second device; and transmit AI/ML update information to the second device, the AI/ML update information including information indicating updated AI/ML parameters for at least one of the one or more AI/ML modules based on the training performed by the first device.

Example Embodiment 71. The apparatus of Example Embodiment 70, wherein the programming further comprises instructions to receive AI/ML update information from the second device, the AI/ML update information from the second device including information indicating updated AI/ML parameters for at least one of the one or more AI/ML modules based on training of one or more AI/ML modules at the second device based on feedback information provided in the training response message.

Example Embodiment 72.
The apparatus of Example Embodiment 71, wherein the programming further comprises instructions to update the at least one air interface configuration for receiving transmissions from the second device by updating the one or more AI/ML modules in accordance with the updated AI/ML parameters based on the training performed by the first device and the updated AI/ML parameters received from the second device.

Example Embodiment 73. The apparatus of any of Example Embodiments 50 to 72, wherein the programming further comprises instructions to: receive a training termination signal from the second device; and after receiving the training termination signal, transition the first device from the training mode to a normal operations mode.

Example Embodiment 74. The apparatus of any of Example Embodiments 50 to 73, wherein the first device is user equipment and the second device is a network device.

Example Embodiment 75. An apparatus comprising: a wireless interface; a processor operatively coupled to the wireless interface; and a computer readable storage medium operatively coupled to the processor, the computer readable storage medium storing programming for execution by the processor, the programming comprising instructions to: receive, by a second device via the wireless interface, information regarding an artificial intelligence or machine learning (AI/ML) capability of a first device over a single air interface between the first device and the second device, the information regarding an AI/ML capability of the first device identifying whether the first device supports AI/ML for optimization of at least one air interface configuration over the single air interface; and transmit an AI/ML training request to the first device based at least in part on the information regarding the AI/ML capability of the first device.

Example Embodiment 76.
The apparatus of Example Embodiment 75, wherein the information regarding an AI/ML capability of the first device comprises information indicating the first device is capable of supporting a type and/or level of complexity of AI/ML.

Example Embodiment 77. The apparatus of Example Embodiment 75 or 76, wherein the information regarding an AI/ML capability of the first device comprises information indicating whether the first device assists with an AI/ML training process for optimization of the at least one air interface configuration.

Example Embodiment 78. The apparatus of any of Example Embodiments 75 to 77, wherein the information regarding an AI/ML capability of the first device comprises information indicating at least one component of the at least one air interface configuration for which the first device supports AI/ML optimization.

Example Embodiment 79. The apparatus of Example Embodiment 78, wherein the at least one component of the at least one air interface configuration includes at least one of a coding component, a modulation component and a waveform component.

Example Embodiment 80. The apparatus of Example Embodiment 78 or 79, wherein the information indicating at least one component of the at least one air interface configuration for which the first device supports AI/ML optimization further comprises information indicating whether the first device supports joint optimization of two or more components of the at least one air interface configuration.

Example Embodiment 81. The apparatus of any of Example Embodiments 75 to 80, wherein receiving the information regarding an AI/ML capability of the first device comprises receiving the information as part of an initial network access procedure for the first device.

Example Embodiment 82.
The apparatus of any of Example Embodiments 75 to 81, wherein transmitting the AI/ML training request comprises transmitting the AI/ML training request through downlink control information (DCI) on a downlink control channel or RRC signaling or the combination of the DCI and RRC signaling.

Example Embodiment 83. The apparatus of Example Embodiment 82, wherein the programming further comprises instructions to receive a training request response from the first device confirming that the first device has transitioned to an AI/ML training mode.

Example Embodiment 84. The apparatus of any of Example Embodiments 75 to 83, wherein the programming further comprises instructions to transmit a training signal to the first device, the training signal including a training sequence or training data for training at least one AI/ML module responsible for one or more components of the at least one air interface configuration.

Example Embodiment 85. The apparatus of Example Embodiment 84, wherein transmitting the training signal comprises transmitting the training signal on a dynamic control channel.

Example Embodiment 86. The apparatus of Example Embodiment 85, wherein the dynamic control channel includes a dynamic control information (DCI) field containing information indicating an AI/ML module that is to be trained.

Example Embodiment 87. The apparatus of Example Embodiment 84, wherein transmitting the training signal comprises transmitting the training signal on a scheduled data channel.

Example Embodiment 88. The apparatus of Example Embodiment 87, wherein the programming further comprises instructions to transmit scheduling information for the data channel on a dynamic control channel that includes a DCI field containing information indicating an AI/ML module that is to be trained.

Example Embodiment 89.
The apparatus of any of Example Embodiments 84 to 88, wherein the programming further comprises instructions to receive a training response message from the first device, the training response message including feedback information based on processing of the received training signal at the first device.

Example Embodiment 90. The apparatus of Example Embodiment 89, wherein the feedback information included in the training response message includes an updated training sequence for an iterative training process.

Example Embodiment 91. The apparatus of Example Embodiment 89 or 90, wherein the feedback information included in the training response message includes measurement results based on the received training signal.

Example Embodiment 92. The apparatus of Example Embodiment 91, wherein the measurement results include an error margin obtained by the first device in receiving the training signal.

Example Embodiment 93. The apparatus of any of Example Embodiments 89 to 92, wherein the programming further comprises instructions to: train one or more AI/ML modules based on the feedback information provided in the training response message from the first device.

Example Embodiment 94. The apparatus of Example Embodiment 93, wherein the programming further comprises instructions to: transmit AI/ML update information to the first device, the AI/ML update information including information indicating updated AI/ML parameters for at least one of the one or more AI/ML modules based on the training.

Example Embodiment 95. The apparatus of any of Example Embodiments 89 to 94, wherein the programming further comprises instructions to: receive AI/ML update information from the first device, the AI/ML update information from the first device including information indicating updated AI/ML parameters for at least one of the one or more AI/ML modules based on training of one or more AI/ML modules at the first device based on the training signal.

Example Embodiment 96.
The apparatus of Example Embodiment 95, wherein the programming further comprises instructions to update the at least one air interface configuration for transmitting to the first device by updating the one or more AI/ML modules in accordance with the updated AI/ML parameters transmitted to the first device and the updated AI/ML parameters received from the first device.

Example Embodiment 97. The apparatus of any of Example Embodiments 75 to 96, wherein the programming further comprises instructions to: transmit a training termination signal to the first device to indicate that a training phase has finished.

Example Embodiment 98. The apparatus of any of Example Embodiments 75 to 97, wherein the first device is user equipment and the second device is a network device.

Example Embodiment 99. An apparatus comprising: a transmitting module configured to transmit, from a first device, information regarding an artificial intelligence or machine learning (AI/ML) capability of the first device to a second device over an air interface between the first device and the second device, the information regarding an AI/ML capability of the first device identifying whether the first device supports AI/ML for optimization of at least one air interface component over the air interface.

Example Embodiment 100. The apparatus of Example Embodiment 99, wherein the information regarding an AI/ML capability of the first device comprises information indicating the first device is capable of supporting a type and/or level of complexity of AI/ML.

Example Embodiment 101. The apparatus of Example Embodiment 99 or 100, wherein the information regarding an AI/ML capability of the first device comprises information indicating whether the first device assists with an AI/ML training process for optimization of the at least one air interface component.

Example Embodiment 102.
The apparatus of any of Example Embodiments 99 to 101, wherein the information regarding an AI/ML capability of the first device comprises information indicating at least one component of the at least one air interface component for which the first device supports AI/ML optimization.

Example Embodiment 103. The apparatus of Example Embodiment 102, wherein the at least one air interface component includes at least one of a coding component, a modulation component and a waveform component.

Example Embodiment 104. The apparatus of Example Embodiment 102 or 103, wherein the information indicating at least one component of the at least one air interface component for which the first device supports AI/ML optimization further comprises information indicating whether the first device supports joint optimization of two or more components of the at least one air interface component.

Example Embodiment 105. The apparatus of any of Example Embodiments 99 to 104, wherein the transmitting module is configured to transmit the information regarding an AI/ML capability of the first device in response to receiving an enquiry or as part of an initial network access procedure.

Example Embodiment 106. The apparatus of any of Example Embodiments 99 to 105, further comprising: a receiving module configured to receive an AI/ML training request from the second device; and a processing module configured to transition to an AI/ML training mode after the AI/ML training request is received.

Example Embodiment 107. The apparatus of Example Embodiment 106, wherein the receiving module is configured to receive the AI/ML training request through downlink control information (DCI) on a downlink control channel or RRC signaling or the combination of the DCI and RRC signaling.

Example Embodiment 108.
The apparatus of Example Embodiment 106 or 107, wherein the transmitting module is configured to transmit a training request response to the second device to confirm that the first device has transitioned to the AI/ML training mode.

Example Embodiment 109. The apparatus of any of Example Embodiments 99 to 108, wherein the receiving module is configured to receive a training signal from the second device that includes a training sequence or training data for training at least one AI/ML module responsible for one or more components of the at least one air interface component.

Example Embodiment 110. The apparatus of Example Embodiment 109, wherein the receiving module is configured to receive the training signal on a dynamic control channel.

Example Embodiment 111. The apparatus of Example Embodiment 110, wherein the dynamic control channel includes a dynamic control information (DCI) field containing information indicating an AI/ML module that is to be trained.

Example Embodiment 112. The apparatus of Example Embodiment 109, wherein the receiving module is configured to: receive the training signal on a scheduled data channel; and receive scheduling information for the data channel on a dynamic control channel that includes a DCI field containing information indicating an AI/ML module that is to be trained.

Example Embodiment 113. The apparatus of any of Example Embodiments 109 to 112, wherein the transmitting module is configured to: transmit a training response message to the second device after receiving the training signal, the training response message including feedback information based on processing of the received training signal at the first device.

Example Embodiment 114. The apparatus of Example Embodiment 113, wherein the feedback information included in the training response message includes an updated training sequence for an iterative training process.

Example Embodiment 115.
The apparatus of Example Embodiment 113 or 114, wherein the feedback information included in the training response message includes measurement results based on the received training signal.

Example Embodiment 116. The apparatus of Example Embodiment 115, wherein the measurement results include an error margin obtained by the first device in receiving the training signal from the second device.

Example Embodiment 117. The apparatus of any of Example Embodiments 113 to 116, wherein the receiving module is configured to: receive AI/ML update information from the second device after transmitting the training response message, the AI/ML update information including information indicating updated AI/ML parameters for an AI/ML module based on the feedback information provided by the first device.

Example Embodiment 118. The apparatus of Example Embodiment 117, further comprising a processing module configured to update the AI/ML module in accordance with the updated AI/ML parameters in order to update the at least one air interface component for receiving transmissions from the second device.

Example Embodiment 119. The apparatus of any of Example Embodiments 113 to 116, further comprising a processing module configured to train one or more AI/ML modules at the first device based on the training signal received from the second device, wherein the transmitting module is configured to transmit AI/ML update information to the second device, the AI/ML update information including information indicating updated AI/ML parameters for at least one of the one or more AI/ML modules based on the training performed by the first device.

Example Embodiment 120.
The apparatus of Example Embodiment 119, wherein the receiving module is configured to receive AI/ML update information from the second device, the AI/ML update information from the second device including information indicating updated AI/ML parameters for at least one of the one or more AI/ML modules based on training of one or more AI/ML modules at the second device based on feedback information provided in the training response message.

Example Embodiment 121. The apparatus of Example Embodiment 120, wherein the processing module is configured to update the at least one air interface component for receiving transmissions from the second device by updating the one or more AI/ML modules in accordance with the updated AI/ML parameters based on the training performed by the first device and the updated AI/ML parameters received from the second device.

Example Embodiment 122. The apparatus of any of Example Embodiments 99 to 121, wherein the receiving module is configured to receive a training termination signal from the second device, and the processing module is configured to transition the first device from the training mode to a normal operations mode after the training termination signal is received.

Example Embodiment 123. The apparatus of any of Example Embodiments 99 to 122, wherein the first device is user equipment and the second device is a network device.

Example Embodiment 124.
An apparatus comprising: a receiving module configured to receive, by a second device, information regarding an artificial intelligence or machine learning (AI/ML) capability of a first device over an air interface between the first device and the second device, the information regarding an AI/ML capability of the first device identifying whether the first device supports AI/ML for optimization of at least one air interface component over the air interface; and a transmitting module configured to transmit an AI/ML training request to the first device based at least in part on the information regarding the AI/ML capability of the first device.

Example Embodiment 125. The apparatus of Example Embodiment 124, wherein the information regarding an AI/ML capability of the first device comprises information indicating the first device is capable of supporting a type and/or level of complexity of AI/ML.

Example Embodiment 126. The apparatus of Example Embodiment 124 or 125, wherein the information regarding an AI/ML capability of the first device comprises information indicating whether the first device assists with an AI/ML training process for optimization of the at least one air interface component.

Example Embodiment 127. The apparatus of any of Example Embodiments 124 to 126, wherein the information regarding an AI/ML capability of the first device comprises information indicating at least one component of the at least one air interface component for which the first device supports AI/ML optimization.

Example Embodiment 128. The apparatus of Example Embodiment 127, wherein the at least one component of the at least one air interface component includes at least one of a coding component, a modulation component and a waveform component.

Example Embodiment 129.
The apparatus of Example Embodiment 127 or 128, wherein the information indicating at least one component of the at least one air interface component for which the first device supports AI/ML optimization further comprises information indicating whether the first device supports joint optimization of two or more components of the at least one air interface component.

Example Embodiment 130. The apparatus of any of Example Embodiments 124 to 129, wherein receiving the information regarding an AI/ML capability of the first device comprises receiving the information as part of an initial network access procedure for the first device.

Example Embodiment 131. The apparatus of any of Example Embodiments 124 to 130, wherein transmitting the AI/ML training request comprises transmitting the AI/ML training request through downlink control information (DCI) on a downlink control channel, RRC signaling, or a combination of the DCI and RRC signaling.

Example Embodiment 132. The apparatus of Example Embodiment 131, wherein the receiving module is configured to receive a training request response from the first device confirming that the first device has transitioned to an AI/ML training mode.

Example Embodiment 133. The apparatus of any of Example Embodiments 124 to 132, wherein the transmitting module is configured to transmit a training signal to the first device, the training signal including a training sequence or training data for training at least one AI/ML module responsible for one or more components of the at least one air interface component.

Example Embodiment 134. The apparatus of Example Embodiment 133, wherein transmitting the training signal comprises transmitting the training signal on a dynamic control channel.

Example Embodiment 135. The apparatus of Example Embodiment 134, wherein the dynamic control channel includes a dynamic control information (DCI) field containing information indicating an AI/ML module that is to be trained.

Example Embodiment 136.
The apparatus of Example Embodiment 133, wherein transmitting the training signal comprises transmitting the training signal on a scheduled data channel.

Example Embodiment 137. The apparatus of Example Embodiment 136, wherein the transmitting module is configured to transmit scheduling information for the data channel on a dynamic control channel that includes a DCI field containing information indicating an AI/ML module that is to be trained.

Example Embodiment 138. The apparatus of any of Example Embodiments 133 to 137, wherein the receiving module is configured to receive a training response message from the first device, the training response message including feedback information based on processing of the received training signal at the first device.

Example Embodiment 139. The apparatus of Example Embodiment 138, wherein the feedback information included in the training response message includes an updated training sequence for an iterative training process.

Example Embodiment 140. The apparatus of Example Embodiment 138 or 139, wherein the feedback information included in the training response message includes measurement results based on the received training signal.

Example Embodiment 141. The apparatus of Example Embodiment 140, wherein the measurement results include an error margin obtained by the first device in receiving the training signal.

Example Embodiment 142. The apparatus of any of Example Embodiments 138 to 141, further comprising a processing module configured to train one or more AI/ML modules based on the feedback information provided in the training response message from the first device.

Example Embodiment 143. The apparatus of Example Embodiment 142, wherein the transmitting module is configured to: transmit AI/ML update information to the first device, the AI/ML update information including information indicating updated AI/ML parameters for at least one of the one or more AI/ML modules based on the training.

Example Embodiment 144.
The apparatus of any of Example Embodiments 133 to 143, wherein the receiving module is configured to: receive AI/ML update information from the first device, the AI/ML update information from the first device including information indicating updated AI/ML parameters for at least one of the one or more AI/ML modules based on training of one or more AI/ML modules at the first device based on the training signal.

Example Embodiment 145. The apparatus of Example Embodiment 144, further comprising a processing module configured to update the at least one air interface component for transmitting to the first device by updating the one or more AI/ML modules in accordance with the updated AI/ML parameters transmitted to the first device and the updated AI/ML parameters received from the first device.

Example Embodiment 146. The apparatus of any of Example Embodiments 124 to 145, wherein the transmitting module is configured to transmit a training termination signal to the first device to indicate that a training phase has finished.

Example Embodiment 147. The apparatus of any of Example Embodiments 124 to 146, wherein the first device is user equipment and the second device is a network device.

Although the present disclosure describes methods and processes with steps in a certain order, one or more steps of the methods and processes may be omitted or altered as appropriate. One or more steps may take place in an order other than that in which they are described, as appropriate. Although the present disclosure is described, at least in part, in terms of methods, a person of ordinary skill in the art will understand that the present disclosure is also directed to the various components for performing at least some of the aspects and features of the described methods, be it by way of hardware components, software or any combination of the two. Accordingly, the technical solution of the present disclosure may be embodied in the form of a software product.
A suitable software product may be stored in a pre-recorded storage device or other similar non-volatile or non-transitory computer readable medium, including DVDs, CD-ROMs, USB flash disks, removable hard disks, or other storage media, for example. The software product includes instructions tangibly stored thereon that enable a processing device (e.g., a personal computer, a server, or a network device) to execute examples of the methods disclosed herein. The machine-executable instructions may be in the form of code sequences, configuration information, or other data, which, when executed, cause a machine (e.g., a processor or other processing device) to perform steps in a method according to examples of the present disclosure. The present disclosure may be embodied in other specific forms without departing from the subject matter of the claims. The described example embodiments are to be considered in all respects as being only illustrative and not restrictive. Selected features from one or more of the above-described embodiments may be combined to create alternative embodiments not explicitly described, features suitable for such combinations being understood within the scope of this disclosure. All values and sub-ranges within disclosed ranges are also disclosed. Also, although the systems, devices and processes disclosed and shown herein may comprise a specific number of elements/components, the systems, devices and assemblies could be modified to include additional or fewer of such elements/components. For example, although any of the elements/components disclosed may be referenced as being singular, the embodiments disclosed herein could be modified to include a plurality of such elements/components. The subject matter described herein is intended to cover and embrace all suitable changes in technology.
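The capability-exchange and training flow enumerated in the example embodiments above (capability report, training request, mode transition, training signal, feedback, parameter update, termination) can be sketched as a simple message trace. This is a minimal illustration only: the `Device` class, message strings, and fields are assumptions for the sketch, not the claimed signaling.

```python
from dataclasses import dataclass, field

@dataclass
class Device:
    name: str
    supports_aiml: bool = True
    mode: str = "normal"          # "normal" or "training"
    params: dict = field(default_factory=dict)

def training_exchange(ue: Device, network: Device) -> list:
    """Simulate the capability/training handshake as an ordered message trace."""
    trace = [f"{ue.name}->network: capability(supports_aiml={ue.supports_aiml})"]
    if not ue.supports_aiml:
        return trace              # no training without AI/ML support
    trace.append("network->ue: training_request (e.g., via DCI and/or RRC)")
    ue.mode = "training"          # first device transitions to training mode
    trace.append(f"{ue.name}->network: training_request_response(mode={ue.mode})")
    trace.append("network->ue: training_signal(sequence)")
    trace.append(f"{ue.name}->network: training_response(feedback=error_margin)")
    ue.params["coding"] = "updated"   # AI/ML update applied at the first device
    trace.append("network->ue: aiml_update(params)")
    trace.append("network->ue: training_termination")
    ue.mode = "normal"            # back to normal operations after termination
    return trace

trace = training_exchange(Device("ue"), Device("gnb"))
```

The trace mirrors the order in which the embodiments introduce each message; a real implementation would of course carry these over the physical-layer and RRC procedures the embodiments reference.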
11863401 | DETAILED DESCRIPTION

In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding. One or more embodiments may be practiced without these specific details. Features described in one embodiment may be combined with features described in a different embodiment. In some examples, well-known structures and devices are described with reference to a block diagram form in order to avoid unnecessarily obscuring the present invention.

1. GENERAL OVERVIEW
2. SYSTEM ARCHITECTURE
3. GENERATING AND DISPLAYING A COMBINED VISUAL REPRESENTATION OF A NETWORK WITH MULTIPLE CONSTITUENT SUB-NETWORKS
4. EXAMPLE EMBODIMENTS
5. COMPUTER NETWORKS AND CLOUD NETWORKS
6. MISCELLANEOUS; EXTENSIONS
7. HARDWARE OVERVIEW

1. General Overview

One or more embodiments generate a combined visual representation of subsets of devices associated with corresponding sub-networks of a private network, where at least two devices in corresponding sub-networks share a same private internet protocol (IP) address. The system generates a separate profile for each device using a combination of elements including at least (a) a private IP address corresponding to the device and (b) a network identifier corresponding to a sub-network associated with the device. The use of the combination of elements results in generating different profiles for devices, associated with different sub-networks, that share the same private IP address. The system may analyze the characteristics of packets transmitted by a device to identify elements for mapping to a corresponding profile. The characteristics may include, for example, a source IP address and a network identifier corresponding to a packet. One or more embodiments described in this Specification and/or recited in the claims may not be included in this General Overview section.

2.
Architectural Overview

FIG. 1 illustrates a merged network environment 100 in accordance with one or more embodiments. As illustrated in FIG. 1, merged network environment 100 includes a first private sub-network 104, a second private sub-network 120, communications systems 136, network communication sensor 140, a tagging system 144, and a network visualization system 148. In one or more embodiments, the system 100 may include more or fewer components than the components illustrated in FIG. 1. The components illustrated in FIG. 1 may be local to or remote from each other. The components illustrated in FIG. 1 may be implemented in software and/or hardware. Each component may be distributed over multiple applications and/or machines. Multiple components may be combined into one application and/or machine. Operations described with respect to one component may instead be performed by another component. As will be appreciated, the merged network environment 100 is provided for convenience of illustration and may represent a variety of different private network configurations in which different devices may share a same private IP address. For example, the first private sub-network 104 and the second private sub-network 120 may be associated with different sub-networks within a same organization. Alternatively, the first private sub-network 104 and the second private sub-network 120 may be associated with networks corresponding to different organizations that have merged and also consolidated their previously distinct networks. The sub-network 104 includes a mobile phone 108 with corresponding private IP address ABC, a desktop computer 112 with corresponding private IP address DEF, and a networked medical device 116 with corresponding private IP address GHI. The sub-network 120 includes a desktop computer 124 with corresponding private IP address ABC, a mobile computing device 128 with corresponding private IP address DEF, and a networked multi-function device 132 with corresponding private IP address GHI.
The merged network environment 100 thus presents the challenge described above, namely the duplication of private IP addresses within a private communication network. As shown, the mobile phone 108 of the sub-network 104 has the same private IP address (ABC) as the desktop computer 124 of the sub-network 120. Similarly, the desktop computer 112 of the sub-network 104 has the same private IP address (DEF) as the mobile computing device 128 of the sub-network 120. The networked medical device 116 of the sub-network 104 has the same private IP address (GHI) as the networked multi-function device 132 of the sub-network 120. The communication systems 136 facilitate the transmission of packets between devices of the merged network environment 100. Elements of the communication systems 136 include, but are not limited to, routers, switches, bridges, hubs, gateways, servers, and/or data repositories. In some examples, the communication systems 136 enable communication between “end devices.” End devices may be either a source and/or a destination of transmissions (e.g., data packets) and may include devices such as computers, printers, servers, smartphones, smart appliances, security cameras, networked medical equipment, networked manufacturing machines, networked sensors, and/or “internet of things” (“IoT”) devices. These elements of the communications systems 136 may apply techniques associated with the transport layer of the TCP/IP protocol to properly transmit and route packets to and from the devices of the private sub-networks 104, 120 despite the duplicative private IP addresses. However, the system may employ one or more of the following techniques to accurately visualize the sub-networks 104, 120 within the context of the merged network environment 100.
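The duplicate-address problem above can be made concrete with a short sketch that keys device profiles on the combination of sub-network identifier and private IP address, as the overview describes. The dictionary layout and the "subnet-104"/"subnet-120" identifiers are illustrative assumptions, not the patent's data model.

```python
# Profiles keyed on (sub-network identifier, private IP address): duplicated
# addresses in different sub-networks map to distinct profiles.
profiles = {}

def profile_for(subnet_id: str, private_ip: str) -> dict:
    """Return (creating if needed) the profile for a device in a sub-network."""
    key = (subnet_id, private_ip)
    if key not in profiles:
        profiles[key] = {"subnet": subnet_id, "ip": private_ip, "attributes": {}}
    return profiles[key]

# The duplicated addresses from FIG. 1: same private IP, different sub-networks.
phone = profile_for("subnet-104", "ABC")    # mobile phone 108
desktop = profile_for("subnet-120", "ABC")  # desktop computer 124
assert phone is not desktop                 # distinct profiles despite equal IPs
```

Keying on the pair rather than the IP alone is the entire trick: a lookup by IP address would conflate the two devices, while the composite key keeps them separate.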
Additional embodiments and/or examples relating to computer networks are described below in Section 5, titled “Computer Networks and Cloud Networks.” The network communication sensor 140 includes systems that may observe and/or copy packets transmitted via the communications systems 136. In some examples, the network communication sensor 140 may be configured as a Test Access Point (TAP) or a Switched Port Analyzer (SPAN). In examples in which a TAP (or other passive packet duplication systems) is used, the network communication sensor 140 may passively split transmission signals received through a particular port. The network communication sensor 140 then forwards the traffic to at least two ports: one port associated with the intended destination of the traffic, and a monitoring port. Data packets received at the monitoring port of the sensor may be analyzed. In examples in which a SPAN is used, the network communication sensor 140 transmits the packets to a SPAN port (also known as a mirror port), which duplicates the packets and forwards one set of the duplicated packets for further analysis. Regardless of the technique or system used, packets copied by the network communication sensor 140 are forwarded to the tagging system 144 and network visualization system 148 for further processing and analysis. In one or more embodiments, the network communication sensor 140 may be placed in communication with a distribution layer of the merged network environment 100. Since the distribution layer processes traffic between sub-networks (e.g., sub-networks 104, 120), virtual local area networks (VLANs), and/or broadcast domains of the network, the network communication sensor 140 that is in communication with the distribution layer may be able to capture a significant portion of all traffic in the network. In other embodiments, the network communication sensor 140 may be attached to additional or alternative layers of the network hierarchy.
For example, the network communication sensor 140 may be in communication with one or more core network devices (e.g., a switch, a router of the communication systems 136), and/or one or more access network devices (e.g., a hub or access server of the communication systems 136). Packets forwarded by the network communication sensor 140 to the tagging system 144 are analyzed by the tagging system to determine the sub-network from which the packets originate. The tagging system 144 may then apply a label to the packets indicating the origin sub-network. In some examples, the tagging system 144 may determine a source sub-network of a packet by analyzing one or more attributes associated with the packet. In some examples, this may include an IP address. However, as indicated above, some IP addresses may be duplicated between different sub-networks 104, 120. This duplication of IP addresses may render the use of an IP address alone insufficient for identifying a source sub-network. In some cases, the tagging system 144 may use additional packet attributes to identify a source (e.g., a source device, a source sub-network, or both) of a transmission.
Example attributes that may be used include, but are not limited to:

(a) Flow attributes: attributes associated with a flow of a communication session, including attributes associated with an Internet Protocol (such as, Internet Protocol version 4 (IPv4), Internet Protocol version 6 (IPv6)) used by a communication session;
(b) DNS attributes: attributes associated with a Domain Name System (DNS) protocol used by a communication session;
(c) DHCP attributes: attributes associated with a Dynamic Host Configuration Protocol (DHCP) used by a communication session;
(d) DICOM attributes: attributes associated with a Digital Imaging and Communications in Medicine (DICOM) protocol used by a communication session;
(e) POCT attributes: attributes associated with a Point of Care Testing (POCT) protocol used by a communication session;
(f) CIP attributes: attributes associated with a Common Industrial Protocol (CIP) used by a communication session;
(g) SIP attributes: attributes associated with a Session Initiation Protocol (SIP) used by a communication session;
(h) RTSP attributes: attributes associated with a Real Time Streaming Protocol (RTSP) used by a communication session; and/or
(i) BACnet attributes: attributes associated with a Building Automation and Control network (BACnet) protocol used by a communication session.

Attributes associated with a flow of a communication session may also include any of: a source address (such as an IP address and/or a Media Access Control (MAC) address); a destination address; a source port; a destination port; a number of transmitted bytes; a number of received bytes; a source subnet; and a destination subnet. Attributes associated with a particular protocol (such as, IPv4, IPv6, DNS, DICOM, POCT, CIP, SIP, RTSP, DHCP, and BACnet) include values for standard fields specified and/or defined by a corresponding protocol specification. The standard fields may be included in a header, tail, and/or other portion of a data packet.
As an example, standard fields in an IPv4 data packet include any of: Internet Protocol Version; Internet Header Length; Differentiated Services Code Point (DSCP); Explicit Congestion Notification (ECN); Total Length; Identification (for example, for identifying the group of fragments of a single IP datagram); Flags; Fragment Offset; Time to Live (TTL); Protocol (for example, for defining the protocol used in the data portion of the IP datagram); Header Checksum; Source Address; Destination Address; and Options. Additional and/or alternative standard fields may be used. A value for a standard field in an IPv4 data packet may be a value for an attribute of a communication session. As another example, standard fields in a DNS query or response include any of: Identification; Flags; Number of Questions; Number of Answers; Number of Authority Resource Records (RRs); Number of Additional RRs; Request Type. Additional and/or alternative standard fields may be used. A value for a standard field in a DNS query or response may be a value for an attribute of a communication session. As another example, standard fields in a DHCP packet include any of: MAC address; IP address; subnet; host name; DHCP Options; DHCP Class Identifier; Manufacturer; DHCP Parameter List; and DHCP Vendor Class. Additional and/or alternative standard fields may be used. A value for a standard field in a DHCP data packet may be a value for an attribute of a communication session. As another example, DICOM is a protocol for the communication and management of medical imaging information and related data. Standard fields in a DICOM data packet include any of: Creation Time; Manufacturer; Institution Name; Referring Physician's Name; Consulting Physician's Name; Operator's Name; Warning Reason; Failure Reason; Patient's Name; Patient Identifier; Patient's Birth Date; Patient's Sex; Image Size. Additional and/or alternative standard fields may be used. 
A value for a standard field in a DICOM data packet may be a value for an attribute of a communication session. Additionally or alternatively, an attribute of a communication session may include statistics and/or characteristics of the communication session. For example, attributes may include any of: a number of data packets in the communication session; a number of communication sessions that share a common set of attribute values; a frequency of communication sessions that share a common set of attribute values; a duration of the communication session; and whether or not the communication session is secure. Any one or more of the attributes above may be used by the tagging system 144 to identify a source sub-network of a packet. Once identified, the tagging system 144 may associate a particular set of attributes with a source sub-network in a profile. The tagging system may then apply a label or tag to a packet indicating its source sub-network, thereby providing an abbreviated, concise, and easily analyzed indication of the sub-network associated with the packet to be detected by the network visualization system 148. The network visualization system 148 may receive the packets tagged with a source sub-network identifier and generate a visualization of a network environment as a whole, including any constituent sub-networks, and corresponding devices within the constituent sub-networks. This visualization of the various components and devices of the network as a whole may improve various network administration functions, some of which are described above. Upon receiving the tagged packets, the network visualization system 148 may detect, within the tagged data packets, a private IP address associated with a source device of the data packets and a tag or label indicating the source sub-network.
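The attribute-based tagging step can be sketched as a lookup from a small fingerprint of packet attributes to a sub-network label. The fingerprint choice (MAC prefix plus VLAN observed at the sensor), the mapping table, and the `subnet_tag` field name are all illustrative assumptions; the patent leaves the exact attribute combination open.

```python
# Hypothetical mapping from attribute fingerprints to source sub-networks.
KNOWN_FINGERPRINTS = {
    # (source MAC prefix, VLAN seen at the sensor) -> source sub-network
    ("aa:bb:cc", "vlan-10"): "subnet-104",
    ("dd:ee:ff", "vlan-20"): "subnet-120",
}

def tag_packet(packet: dict) -> dict:
    """Return a copy of a captured packet labeled with its source sub-network."""
    key = (packet["src_mac"][:8], packet["vlan"])
    tagged = dict(packet)  # leave the captured packet itself untouched
    tagged["subnet_tag"] = KNOWN_FINGERPRINTS.get(key, "unknown")
    return tagged

pkt = {"src_ip": "ABC", "src_mac": "aa:bb:cc:01:02:03", "vlan": "vlan-10"}
tagged = tag_packet(pkt)  # carries both the private IP and the sub-network tag
```

Downstream, a consumer that reads both `src_ip` and `subnet_tag` from the tagged packet has exactly the (private IP, sub-network) pair needed to disambiguate duplicated addresses.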
Detecting both the private IP address and the identity of the source sub-network enables the network visualization system 148 to distinguish between devices communicating from distinct sub-networks. In some embodiments, the network visualization system 148 may detect other management data associated with the packets that may be used for network administration. These other management data may include various attributes, such as those described above, that may be correlated with a device profile. The attributes and profile may, together, be used to uniquely identify a device within the network environment 100 regardless of the private IP address. The network visualization system 148 may display a visual representation of the network environment 100 as a whole. This visual representation includes distinct interface elements that correspond to distinct sub-networks, in this case sub-network 104 and sub-network 120. Because the visual representation generated by the network visualization system 148 displays distinct interface elements for constituent sub-networks in a single display, the single visual representation may be referred to as a combined visual representation for convenience. In some examples, the network visualization system 148 may display representations corresponding to constituent devices within each sub-network interface element. For example, the network visualization system 148 may generate a distinct interface for the sub-network 104 that identifies the devices 108, 112, 116 with respective private IP addresses ABC, DEF, and GHI. The separate interface elements for each sub-network prevent ambiguity regarding duplicated private IP addresses. The network visualization system 148 is illustrated in more detail in FIG. 2. In the example illustrated, the network visualization system 148 includes a monitoring data analyzer 204, a visualization engine 208, a tagging user interface 212, and a backend interface 216.
The monitoring data analyzer 204 may receive data packets from the tagging system 144. Upon receiving the data packets, the monitoring data analyzer 204 may analyze the packets to determine the sub-network from which the packets originated (e.g., via the identifying tag applied by the tagging system 144), the private IP address of the source device, among other attributes. In some examples, it is the monitoring data analyzer 204 that analyzes data packets to identify various attributes. For example, the monitoring data analyzer 204 may detect a communication protocol associated with packets, a MAC address associated with a source device, among other attributes. In some examples, the monitoring data analyzer 204 may map, for a particular set of one or more packets, identifying attributes to a profile associated with a source device from which the set of packets originated. For example, the monitoring data analyzer 204 may identify within a set of packets a private IP address and a sub-network identifier corresponding to the source of the packets. These may be associated with (or “mapped” to) a profile associated with a source device, where the source device is identified by a combination of its corresponding private IP address and sub-network identifier. In other examples, the monitoring data analyzer 204 may detect additional attributes and associate the additional attributes with the source profile. These attributes include MAC address, communication protocol, as well as other attributes, such as behavioral patterns associated with the source device. Behavioral patterns include average packet size, average payload size, times of day when packets are transmitted, data consumption rates, average inactive (e.g., sleep) times, among others. The visualization engine 208 may generate a combined visual representation that depicts the network environment 100 as a whole.
That is, the visual representation may include distinct interface elements, each of which corresponds to a distinct constituent sub-network (e.g., sub-networks 104, 120) of a network environment (e.g., network environment 100). Each of the distinct interface elements may further identify (optionally in response to user selection) devices, device attributes, and communication patterns associated with the various devices. Example user interface elements are described below in the context of FIGS. 4-9. The tagging user interface 212 provides graphical functions in one or more of the distinct interface elements to apply a sub-network identification tag to a device, a packet, a set of packets, a communication session, and combinations thereof. In one or more embodiments, tagging user interface 212 includes more general functions of a frontend interface that, whether embodied as hardware and/or software, is configured to facilitate communications between a user and network visualization system 148. That is, the tagging user interface 212 renders user interface elements and receives input via user interface elements. Examples of interfaces include a graphical user interface (GUI), a command line interface (CLI), a haptic interface, and a voice command interface. Examples of user interface elements include checkboxes, radio buttons, dropdown lists, list boxes, buttons, toggles, text fields, date and time selectors, command lines, sliders, pages, and forms. In an embodiment, different components of tagging user interface 212 are specified in different languages. The behavior of user interface elements is specified in a dynamic programming language, such as JavaScript. The content of user interface elements is specified in a markup language, such as hypertext markup language (HTML) or XML User Interface Language (XUL). The layout of user interface elements is specified in a style sheet language, such as Cascading Style Sheets (CSS).
Alternatively, the tagging user interface212is specified in one or more other languages, such as Java, C, or C++. Backend interface216may include an API, CLI, or other interfaces for invoking functions to execute actions. One or more of these functions may be provided through cloud services or other applications, which may be external to the system148. For example, one or more components of system148may invoke an API to access information stored in data repository220, such as device profiles. As another example, an API in the backend interface216may enable communication with other elements of the network environment100, such as communication systems136. It will be appreciated from these examples that the actions performed may vary from implementation to implementation. In some embodiments, the system148may access external resources, such as cloud services. Example cloud services may include, but are not limited to, social media platforms, email services, short messaging services, enterprise management systems, data storage systems, virtualized communication interfaces, and other cloud applications. Backend interface216may serve as an API endpoint for invoking a cloud service. For example, backend interface216may generate outbound requests that conform to protocols ingestible by external resources. Backend interface216may process and translate inbound requests to allow for further processing by other components of the system148. The backend interface216may store, negotiate, and/or otherwise manage authentication information for accessing external resources. Example authentication information may include, but is not limited to, digital certificates, cryptographic keys, usernames, and passwords. Backend interface216may include authentication information in the requests to invoke functions provided through external resources. In one or more embodiments, the network visualization system148may be in communication with a data repository220. 
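By way of a non-limiting illustration, a backend interface that attaches stored authentication information to an outbound request may be sketched as follows. The AuthStore and build_outbound_request names, the use of HTTP basic authentication, and the request-dict shape are all illustrative assumptions; the text above leaves the concrete protocol and credential types open.

```python
import base64

class AuthStore:
    """Holds credentials per external resource; here only username/password
    pairs, though certificates or keys could be stored similarly."""
    def __init__(self):
        self._creds = {}

    def register(self, resource, username, password):
        self._creds[resource] = (username, password)

    def header_for(self, resource):
        user, pw = self._creds[resource]
        token = base64.b64encode(f"{user}:{pw}".encode()).decode()
        return {"Authorization": f"Basic {token}"}

def build_outbound_request(auth, resource, method, path, body):
    # Translate an internal invocation into an HTTP-like request dict that
    # conforms to a protocol the external resource can ingest.
    return {
        "method": method,
        "url": f"https://{resource}{path}",
        "headers": {"Content-Type": "application/json", **auth.header_for(resource)},
        "body": body,
    }
```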
The data repository220is any type of storage unit and/or device (e.g., a file system, database, collection of tables, or any other storage mechanism) for storing data. Further, a data repository220may include multiple different storage units and/or devices. The multiple different storage units and/or devices may or may not be of the same type or located at the same physical site. Further, the data repository220may be implemented or may execute on the same computing system as the network visualization system148. Alternatively or additionally, the data repository220may be implemented or executed on a computing system separate from the network visualization system148. The data repository220may be communicatively coupled to the network visualization system148via a direct connection or via a network. Information describing the data repository220may be implemented across any of the components within the network environment100. In one or more embodiments, the network visualization system148refers to hardware and/or software configured to perform operations described herein for generating one or more combined visual representations of private sub-networks in communication with a network environment, in which multiple devices share a same IP address. Examples of operations and example visual representations (e.g., user interfaces and user interface elements) for visualizing sub-networks and their corresponding devices are described below with reference toFIGS.3-9. In an embodiment, the network visualization system148is implemented on one or more digital devices. The term “digital device” generally refers to any hardware device that includes a processor. A digital device may refer to a physical device executing an application or a virtual machine. 
Examples of digital devices include a computer, a tablet, a laptop, a desktop, a netbook, a server, a web server, a network policy server, a proxy server, a generic machine, a function-specific hardware device, a hardware router, a hardware switch, a hardware firewall, a hardware network address translator (NAT), a hardware load balancer, a mainframe, a television, a content receiver, a set-top box, a printer, a mobile handset, a smartphone, a personal digital assistant (“PDA”), a wireless receiver and/or transmitter, a base station, a communication management device, a router, a switch, a controller, an access point, and/or a client device. 3. Generating and Displaying a Combined Visual Representation of a Network with Multiple Constituent Sub-Networks FIG.3illustrates an example set of operations, referred to collectively as a method300, for analyzing packets transmitted through a private network, in accordance with one or more embodiments. The method300may generate a combined visual representation of multiple sub-networks within the private network, in accordance with one or more embodiments. As described herein, the example set of operations also enables devices in sub-networks with duplicative private IP addresses to be distinguished from one another in one or more visual displays. One or more operations illustrated inFIG.3may be modified, rearranged, or omitted altogether. Accordingly, the particular sequence of operations illustrated inFIG.3should not be construed as limiting the scope of one or more embodiments. The method300may begin by detecting a first set of data packets transmitted through a first sub-network of a private network (operation304). The mechanisms for detecting the first set of data packets are described above in the context ofFIG.1and may include using a TAP, a SPAN, or other similar network traffic monitoring technique. 
As also described above, the operation304includes detecting and/or identifying various attributes associated with the data packets of the first set of data packets. These attributes may be used to uniquely identify not only the sub-network from which the data packets originated, but also the device from which the data packets originated. While many possible attributes detected in operation304are described above in the context ofFIG.1, three example attributes are specifically identified inFIG.3for convenience of illustration. The attributes may include a first network identifier308, a device IP address312, and a first device identifier316. These three attributes illustrate only one combination of many different combinations of attributes that may be detected in the operation304. The first network identifier308includes a tag, label, or other indicator that is applied to or otherwise associated with the packets of the first set. The first network identifier308is used to identify the origin sub-network in preparation for generating a combined visual display of sub-networks within a network environment. This label enables duplicative IP addresses for different devices to be used in distinct sub-networks of a common private network so that an accurate visualization of the network and constituent sub-networks may be generated. The first network identifier (“tag” or “label”) may be associated with an IP address and not with a device. In this way, a device with a tagged IP address that is moved to another network may still be associated with a profile (described below) for the device, thereby maintaining continuity of identification for the method300(and other embodiments described herein). In some examples, an administrator may use the first network identifier308to designate the sub-network and the devices connected thereto. In other words, the first network that is associated with the first network identifier308need not be a physical network or a logical network. 
In some cases, the first network identifier308may simply correspond to a group of devices that an administrator associates with one another. An administrator may select devices as components of these “convenience” or “constructive” networks based on criteria beyond physical and/or topological reasons. The IP address312is the private internet protocol (IP) address associated with the packet and may be used to identify the device from which the packet originates. As described above, the IP address312may be unique to a particular sub-network but is not globally unique. Therefore, the IP address312, associated with a device in a first private sub-network within a network environment, may be duplicative of the IP addresses of one or more additional devices in one or more corresponding private sub-networks in communication with the same network environment as the first private sub-network. In some examples, the first device identifier316may include a unique identifier, such as a MAC address. In other examples, a combination of device attributes may be used to form a first device identifier. For example, a combination of a device manufacturer serial number and an operating system version number may be used collectively as the first device identifier316. In other examples, a pattern of behavior exhibited by the device may be associated with a system (or administrator) assigned unique device identifier that is used to uniquely identify the device. For example, patterns in data usage over time, computing application usage, computing application versions, server requests, among others, may be characteristic of a device. Once the first set of packets is detected, the system may associate, store, or otherwise “map” the identifiers of the first set of data packets to a first profile corresponding to the first device that originated the data packets (operation320). In one example, the network identifier308and the IP address312are stored in the profile associated with the first device. 
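By way of a non-limiting illustration, the attribute-combination device identifier described above (e.g., a manufacturer serial number combined with an operating system version number) may be sketched as a stable hash. The function name, the separator, and the truncation to 16 hexadecimal characters are illustrative assumptions.

```python
import hashlib

def composite_device_id(serial_number, os_version):
    """Combine two device attributes into a single stable identifier.
    The separator guards against ambiguous concatenations."""
    digest = hashlib.sha256(f"{serial_number}|{os_version}".encode()).hexdigest()
    return digest[:16]
```

The same inputs always yield the same identifier, so the value can stand in for a unique device identifier when no MAC address is available.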
These two data may be used, at a minimum, to distinguish between devices in separate sub-networks of a private network having duplicative IP addresses. A device identifier, such as the device identifier316, may also be associated with the profile for convenience. The system may then display a first interface element that represents, and visually presents, (1) the sub-network from which the first set of data packets originated and (2) the first device within the sub-network (operation324). At a high level, presenting the first interface element that includes a representation of the first sub-network and representations of the devices associated with the first sub-network (e.g., the first device) improves the efficiency and convenience of a variety of network administration functions. For example, representing sub-networks and associated devices in separate interface elements enables a network administrator to more conveniently perform device inventory operations, device security auditing, and device service agreement compliance auditing. Absent an embodiment of the combined visual representation described herein, executing inventory, auditing, and other administrative functions for networks that include duplicative IP addresses is time consuming and prone to error. Continuing with the method300, the system continues monitoring network traffic until a second set of packets originating from a second sub-network within the same network environment as the first sub-network is detected (operation328). Similar to the preceding operations304to324, the packets of the second set of packets may originate from a second device and be identified using various attributes. These attributes may include any of those indicated above. In particular, the system may detect packet attributes that include, but are not limited to, a second network identifier332, an IP address336, and a second device identifier340. 
In some examples, the IP address336for the second device in the second sub-network is the same as the IP address312for the first device in the first sub-network. As indicated above, while network infrastructure devices like routers employ techniques to properly route packets from different devices having the same (duplicated) IP addresses, no analogous techniques exist for network administration functions. This complicates the work of maintaining device inventories and performing other network administration functions. These attributes332,336, and340detected in the second set of data packets originating from the second sub-network are used by the system in ways analogous to the attributes detected in the first set of data packets. Namely, the attributes332,336, and340are used to identify the second packets as originating from a second sub-network within the same private network as the first sub-network. Furthermore, the attributes332,336, and340may be used to uniquely identify a device generating the second set of packets. As described above in the context of operation320, the attributes used to identify the origin second sub-network and device of the second set of data packets are mapped to a profile corresponding to the second device (operation344). The system may then display a second interface element for the second device and second sub-network along with the first interface element for the first device and first sub-network in the combined visual representation (operation348). In this way, the system visually represents the constituent sub-networks and their corresponding devices. As described herein, embodiments of this visual representation may be used to administer and maintain a network. 
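By way of a non-limiting illustration, the end-to-end flow of the method300 (map packet attributes to per-device profiles, then emit one interface element per sub-network) may be sketched as follows. The dictionary shapes and function names are illustrative assumptions; the essential behavior is that two devices sharing a private IP in different sub-networks remain distinguishable.

```python
def map_to_profiles(packets):
    """Analog of operations 320/344: key profiles by (network identifier, IP)
    so duplicative IPs in different sub-networks yield distinct profiles."""
    profiles = {}
    for p in packets:
        key = (p["network_id"], p["ip"])
        prof = profiles.setdefault(key, {"network_id": p["network_id"],
                                         "ip": p["ip"],
                                         "device_ids": set()})
        prof["device_ids"].add(p["device_id"])
    return profiles

def interface_elements(profiles):
    """Analog of operations 324/348: one element per sub-network, listing
    the devices observed within it."""
    elements = {}
    for (net, ip), prof in sorted(profiles.items()):
        elements.setdefault(net, []).append(
            {"ip": ip, "device_ids": sorted(prof["device_ids"])})
    return elements
```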
Even though the first device and the second device in this example have a same private IP address, the system-generated visual representation using the network identifier and IP address (optionally among others) allows these devices to be visually distinguishable from one another in the combined visual representation. 4. Example Embodiments Detailed examples are described below for purposes of clarity. Components and/or operations described below should be understood as one specific example which may not be applicable to certain embodiments. Accordingly, components and/or operations described below should not be construed as limiting the scope of any of the claims. FIGS.4-9illustrate various example embodiments of visual representations generated using the techniques described above. In some examples, one or more embodiments display information related to multiple sub-networks within a private network environment in which different devices in different sub-networks have a same IP address. FIG.4illustrates an example user interface400. This example user interface400presents a list of sub-networks that are components (or “constituents”) of a same network (also referred to as a “network environment”). The example user interface400includes an index number404, a sub-network identifier408, a sub-network short name412, and a sub-network long name416. The index number404simply identifies a list position for each sub-network. An index number may improve the readability and communication of some embodiments of the user interface400, particularly those that include many different sub-networks. The sub-network identifier408, labeled inFIG.4as the “domain ID,” is a system-recognized identifier associated with a corresponding sub-network. In some examples, the sub-network identifier408may correspond to a label used by the system inFIG.3that is applied to packets to indicate a source sub-network (i.e., network identifiers308,332). 
The sub-network short name412and sub-network long name416may include readable labels that are provided for the convenience of administrators and/or users. This user interface400provides information to an administrator regarding the identities of constituent sub-networks, their corresponding system-applied labels, and colloquial names more conveniently used and remembered by administrators. FIG.5illustrates an example user interface500that lists network sensors associated with different sub-networks. More specifically, the example user interface500identifies the IP address of the network sensor504, a sub-network identifier508, and a sub-network short name512. At a high level, the example user interface500coordinates an identity of a network sensor (e.g., the IP address of the network sensor504) and one or more labels used to identify a sub-network to which an identified network sensor is connected (e.g., a sub-network ID508and/or a sub-network name512). This user interface500enables an administrator to identify particular network sensors (e.g., devices associated with TAPs and/or SPANs) that are connected to particular corresponding sub-networks within a network environment. This in turn may improve the distinction between devices that share a same IP address. For example, a network administrator may identify a source network of data packets having a duplicative IP address by understanding (1) which network sensor is detecting a particular set of data packets and (2) the sub-network to which the network sensor is connected. In this way, the example user interface500provides data regarding network sensors and network topology that may be used by an administrator to identify a source sub-network with which a device communicates. FIG.6illustrates an example sensor tagging user interface600that enables an administrator to apply a sub-network identifier to a particular set of data packets. 
The system may subsequently apply a tag to packets according to the designated rule, thereby identifying a sub-network source of the data packets despite a duplicative device IP address. That is, an administrator may create a rule in the example user interface600that applies a sub-network identifying label to data packets based on one or more attributes specified in the rule. The example interface600includes sensor information604, a tag selector608, a packet attribute selector612, and a packet attribute selection interface616. Using the example interface600, an administrator may use the sensor information604to uniquely identify a sensor in a network environment. For example, an administrator may use the sensor information604(e.g., a sensor name or “label,” a sensor MAC address, an IP address associated with a gateway device with which the sensor is in communication) to identify the sub-network with which the sensor is in communication. In one example, an administrator may identify the sub-network associated with the sensor by using information displayed in the example user interface500. An administrator may then establish a rule using the interface elements608,612, and616. The rule, when established, may subsequently apply a tag to packets identifying the origin sub-network based on the one or more attributes specified in the rule by interface element616. In one example, the “apply tag” field608may be used to apply a tag to data packets indicating a particular sub-network for packets with one or more particular attributes selected using the attribute selection interface616. Example attributes include an IP address associated with an origin device of the packet(s), which may be duplicative, and a sub-network identifier. Other attributes may optionally be selected in the attribute selection interface616, such as those described above. 
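By way of a non-limiting illustration, an administrator-defined tagging rule of the kind configured via the interface600 may be sketched as a closure over the required attribute values. The make_rule name, the attribute keys, and the use of a sensor MAC address as the matching attribute are illustrative assumptions.

```python
def make_rule(tag, **required_attrs):
    """Build a rule: packets whose attributes match every required value
    receive the sub-network identifying tag; others pass through unchanged."""
    def apply(packet):
        if all(packet.get(k) == v for k, v in required_attrs.items()):
            return {**packet, "subnet_tag": tag}
        return packet
    return apply
```

A rule built this way can then be run over a packet stream so that packets captured by a known sensor are labeled with that sensor's sub-network, even when their source IPs are duplicated elsewhere.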
Example user interfaces700inFIG.7A and704inFIG.7Billustrate full network environment and individual sub-network device inventories, respectively, according to some embodiments of the present disclosure. The example user interface700illustrated inFIG.7Ais a device inventory for a full network environment that, by employing the techniques described herein, visually distinguishes between different devices even though they have a same IP address. For one or more devices in communication with a network environment (e.g., that includes one or more sub-networks), the example user interface700includes an index number708, a device MAC address712, an IP address716, and a sub-network identifier720. The index number708simply identifies a list position in the user interface700. The MAC address712is a unique device identifier described above. As shown in this example, the IP addresses716of the devices listed in the example user interface700are all the same (192.168.107.230). However, other features of the example user interface700, namely the MAC address712and the sub-network identifier720, enable these different devices to be distinguished from one another despite sharing an IP address. As described above, the example user interface700thus enables a network administrator to effectively execute various management functions, such as inventory and auditing functions, that would otherwise be frustrated by the presence of duplicative IP addresses. The example user interface704illustrated inFIG.7Benables a display of devices in communication with a single, particular sub-network. In some examples, the example user interface704may be selected via the example user interface700. The example user interface800illustrated inFIG.8displays attributes and/or identifying information associated with a specific device, in accordance with some embodiments of the present disclosure. 
The example user interface800includes a device identifier804, a MAC address808, a device description812, a device manufacturer816, and a sub-network identifier tag (“label”) selector820. Using the example user interface800, an administrator may manually apply a tag to a particular device that indicates a sub-network that the particular device is in communication with. For example, one or more of the device identifier804, MAC address808, device description812, and/or device manufacturer816may be used to uniquely identify a device. Other attributes and/or behavioral patterns may also be presented in the example user interface800so as to identify a device. Using any of the preceding techniques, user interface displays, and/or understanding of a network topology, an administrator may manually use the sub-network identifier tag selector820to associate a particular device (e.g., the device with MAC address808) with a particular constituent sub-network of a network with multiple sub-networks. The example user interface800may be accessed via one or both of the user interfaces700and/or704. The example user interface900illustrated inFIG.9is a device inventory interface that displays various characteristics of a particular network and/or sub-network within the particular network, in accordance with some embodiments. A network/sub-network selector904may be used to identify a network and/or sub-network to display in the remaining elements of the user interface900. The example user interface900may include various elements, such as a device count element908, a device status912, a device type inventory916, and network device attribute summaries920. In some cases, the various attributes, counts, and statuses illustrated in the example user interface900may be incomplete or otherwise inaccurate because devices from different sub-networks within a network environment may have duplicative IP addresses. 
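By way of a non-limiting illustration, the kind of per-sub-network aggregation behind a device type inventory916 may be sketched as follows. The inventory row shape (a pair of sub-network tag and device type, one row per distinct device) and the function name are illustrative assumptions.

```python
from collections import Counter

def device_type_counts(inventory):
    """inventory: iterable of (subnet_tag, device_type) pairs, one entry per
    distinct device. Returns a per-sub-network Counter of device types."""
    counts = {}
    for subnet, dtype in inventory:
        counts.setdefault(subnet, Counter())[dtype] += 1
    return counts
```

Because each row carries the sub-network tag, the counts stay accurate even when devices in different sub-networks share an IP address.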
However, in this case, using the tags described above, the example user interface900may accurately depict connected devices and summarize the collections of their various attributes. In fact, the systems and techniques described above may even analyze the tags applied to packets so that the number of each of various device types associated with each sub-network may be enumerated in a device type sub-network count918of the device type inventory916. 5. Computer Networks and Cloud Networks In one or more embodiments, a computer network provides connectivity among a set of nodes. The nodes may be local to and/or remote from each other. The nodes are connected by a set of links. Examples of links include a coaxial cable, an unshielded twisted cable, a copper cable, an optical fiber, and a virtual link. A subset of nodes implements the computer network. Examples of such nodes include a switch, a router, a firewall, and a network address translator (NAT). Another subset of nodes uses the computer network. Such nodes (also referred to as “hosts”) may execute a client process and/or a server process. A client process makes a request for a computing service (such as execution of a particular application and/or storage of a particular amount of data). A server process responds by executing the requested service and/or returning corresponding data. A computer network may be a physical network, including physical nodes connected by physical links. A physical node is any digital device. A physical node may be a function-specific hardware device, such as a hardware switch, a hardware router, a hardware firewall, and a hardware NAT. Additionally or alternatively, a physical node may be a generic machine that is configured to execute various virtual machines and/or applications performing respective functions. A physical link is a physical medium connecting two or more physical nodes. 
Examples of links include a coaxial cable, an unshielded twisted cable, a copper cable, and an optical fiber. A computer network may be an overlay network. An overlay network is a logical network implemented on top of another network (such as a physical network). Each node in an overlay network corresponds to a respective node in the underlying network. Hence, each node in an overlay network is associated with both an overlay address (to address the overlay node) and an underlay address (to address the underlay node that implements the overlay node). An overlay node may be a digital device and/or a software process (such as a virtual machine, an application instance, or a thread). A link that connects overlay nodes is implemented as a tunnel through the underlying network. The overlay nodes at either end of the tunnel treat the underlying multi-hop path between them as a single logical link. Tunneling is performed through encapsulation and decapsulation. In an embodiment, a client may be local to and/or remote from a computer network. The client may access the computer network over other computer networks, such as a private network or the Internet. The client may communicate requests to the computer network using a communications protocol, such as Hypertext Transfer Protocol (HTTP). The requests are communicated through an interface, such as a client interface (such as a web browser), a program interface, or an application programming interface (API). In an embodiment, a computer network provides connectivity between clients and network resources. Network resources include hardware and/or software configured to execute server processes. Examples of network resources include a processor, a data storage, a virtual machine, a container, and/or a software application. Network resources are shared amongst multiple clients. Clients request computing services from a computer network independently of each other. 
Network resources are dynamically assigned to the requests and/or clients on an on-demand basis. Network resources assigned to each request and/or client may be scaled up or down based on, for example, (a) the computing services requested by a particular client, (b) the aggregated computing services requested by a particular tenant, and/or (c) the aggregated computing services requested of the computer network. Such a computer network may be referred to as a “cloud network.” In an embodiment, a service provider provides a cloud network to one or more end users. Various service models may be implemented by the cloud network, including but not limited to Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS), and Infrastructure-as-a-Service (IaaS). In SaaS, a service provider provides end users the capability to use the service provider's applications, which are executing on the network resources. In PaaS, the service provider provides end users the capability to deploy custom applications onto the network resources. The custom applications may be created using programming languages, libraries, services, and tools supported by the service provider. In IaaS, the service provider provides end users the capability to provision processing, storage, networks, and other fundamental computing resources provided by the network resources. Any arbitrary applications, including an operating system, may be deployed on the network resources. In an embodiment, various deployment models may be implemented by a computer network, including but not limited to a private cloud, a public cloud, and a hybrid cloud. In a private cloud, network resources are provisioned for exclusive use by a particular group of one or more entities (the term “entity” as used herein refers to a corporation, organization, person, or other entity). The network resources may be local to and/or remote from the premises of the particular group of entities. 
In a public cloud, cloud resources are provisioned for multiple entities that are independent from each other (also referred to as “tenants” or “customers”). The computer network and the network resources thereof are accessed by clients corresponding to different tenants. Such a computer network may be referred to as a “multi-tenant computer network.” Several tenants may use a same particular network resource at different times and/or at the same time. The network resources may be local to and/or remote from the premises of the tenants. In a hybrid cloud, a computer network comprises a private cloud and a public cloud. An interface between the private cloud and the public cloud allows for data and application portability. Data stored at the private cloud and data stored at the public cloud may be exchanged through the interface. Applications implemented at the private cloud and applications implemented at the public cloud may have dependencies on each other. A call from an application at the private cloud to an application at the public cloud (and vice versa) may be executed through the interface. In an embodiment, tenants of a multi-tenant computer network are independent of each other. For example, a business or operation of one tenant may be separate from a business or operation of another tenant. Different tenants may demand different network requirements for the computer network. Examples of network requirements include processing speed, amount of data storage, security requirements, performance requirements, throughput requirements, latency requirements, resiliency requirements, Quality of Service (QoS) requirements, tenant isolation, and/or consistency. The same computer network may need to implement different network requirements demanded by different tenants. In one or more embodiments, in a multi-tenant computer network, tenant isolation is implemented to ensure that the applications and/or data of different tenants are not shared with each other. 
Various tenant isolation approaches may be used. In an embodiment, each tenant is associated with a tenant ID. Each network resource of the multi-tenant computer network is tagged with a tenant ID. A tenant is permitted access to a particular network resource only if the tenant and the particular network resource are associated with a same tenant ID. In an embodiment, each tenant is associated with a tenant ID. Each application, implemented by the computer network, is tagged with a tenant ID. Additionally or alternatively, each data structure and/or dataset, stored by the computer network, is tagged with a tenant ID. A tenant is permitted access to a particular application, data structure, and/or dataset only if the tenant and the particular application, data structure, and/or dataset are associated with a same tenant ID. As an example, each database implemented by a multi-tenant computer network may be tagged with a tenant ID. Only a tenant associated with the corresponding tenant ID may access data of a particular database. As another example, each entry in a database implemented by a multi-tenant computer network may be tagged with a tenant ID. Only a tenant associated with the corresponding tenant ID may access data of a particular entry. However, the database may be shared by multiple tenants. In an embodiment, a subscription list indicates which tenants have authorization to access which applications. For each application, a list of tenant IDs of tenants authorized to access the application is stored. A tenant is permitted access to a particular application only if the tenant ID of the tenant is included in the subscription list corresponding to the particular application. In an embodiment, network resources (such as digital devices, virtual machines, application instances, and threads) corresponding to different tenants are isolated to tenant-specific overlay networks maintained by the multi-tenant computer network. 
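By way of a non-limiting illustration, the tenant-ID and subscription-list checks described above may be sketched as a single access predicate. The function name and argument shapes are illustrative assumptions.

```python
def may_access(tenant_id, resource_tenant_id, subscriptions=None, application=None):
    """Permit access only on a tenant-ID match; when a subscription list is
    supplied, the tenant must also be subscribed to the named application."""
    if tenant_id != resource_tenant_id:
        return False
    if subscriptions is not None and application is not None:
        return tenant_id in subscriptions.get(application, [])
    return True
```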
As an example, packets from any source device in a tenant overlay network may only be transmitted to other devices within the same tenant overlay network. Encapsulation tunnels are used to prohibit any transmissions from a source device on a tenant overlay network to devices in other tenant overlay networks. Specifically, the packets, received from the source device, are encapsulated within an outer packet. The outer packet is transmitted from a first encapsulation tunnel endpoint (in communication with the source device in the tenant overlay network) to a second encapsulation tunnel endpoint (in communication with the destination device in the tenant overlay network). The second encapsulation tunnel endpoint decapsulates the outer packet to obtain the original packet transmitted by the source device. The original packet is transmitted from the second encapsulation tunnel endpoint to the destination device in the same particular overlay network.

6. Miscellaneous; Extensions

Embodiments are directed to a system with one or more devices that include a hardware processor and that are configured to perform any of the operations described herein and/or recited in any of the claims below. In an embodiment, a non-transitory computer readable storage medium comprises instructions which, when executed by one or more hardware processors, cause performance of any of the operations described herein and/or recited in any of the claims. Any combination of the features and functionalities described herein may be used in accordance with one or more embodiments. In the foregoing specification, embodiments have been described with reference to numerous specific details that may vary from implementation to implementation. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.
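The encapsulation-tunnel flow described earlier in this section (inner packet wrapped in an outer packet, carried between two tunnel endpoints, and decapsulated within the same tenant overlay network) can be sketched as below. All names and the overlay-ID check are hypothetical simplifications, not a specific tunneling protocol.

```python
# Sketch of overlay-network encapsulation: an inner packet is wrapped,
# carried endpoint-to-endpoint, and unwrapped only within the same
# tenant overlay network. Names are illustrative.

from dataclasses import dataclass

@dataclass
class Packet:
    src: str
    dst: str
    payload: bytes

@dataclass
class OuterPacket:
    src_endpoint: str       # first encapsulation tunnel endpoint
    dst_endpoint: str       # second encapsulation tunnel endpoint
    tenant_overlay_id: str  # identifies the tenant overlay network
    inner: Packet

def encapsulate(pkt, src_ep, dst_ep, overlay_id):
    return OuterPacket(src_ep, dst_ep, overlay_id, pkt)

def decapsulate(outer, local_overlay_id):
    # transmissions to a different tenant overlay network are prohibited
    if outer.tenant_overlay_id != local_overlay_id:
        raise PermissionError("cross-overlay transmission prohibited")
    return outer.inner

original = Packet(src="vm-1", dst="vm-2", payload=b"hello")
outer = encapsulate(original, "tep-1", "tep-2", overlay_id="tenant-a")
delivered = decapsulate(outer, local_overlay_id="tenant-a")
assert delivered == original
```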
The sole and exclusive indicator of the scope of the invention, and what is intended by the applicants to be the scope of the invention, is the literal and equivalent scope of the set of claims that issue from this application, in the specific form in which such claims issue, including any subsequent correction.

7. Hardware Overview

According to one embodiment, the techniques described herein are implemented by one or more special-purpose computing devices. The special-purpose computing devices may be hard-wired to perform the techniques, or may include digital electronic devices such as one or more application-specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), or network processing units (NPUs) that are persistently programmed to perform the techniques, or may include one or more general purpose hardware processors programmed to perform the techniques pursuant to program instructions in firmware, memory, other storage, or a combination. Such special-purpose computing devices may also combine custom hard-wired logic, ASICs, FPGAs, or NPUs with custom programming to accomplish the techniques. The special-purpose computing devices may be desktop computer systems, portable computer systems, handheld devices, networking devices or any other device that incorporates hard-wired and/or program logic to implement the techniques. For example, FIG. 10 is a block diagram that illustrates a computer system 1000 upon which an embodiment of the invention may be implemented. Computer system 1000 includes a bus 1002 or other communication mechanism for communicating information, and a hardware processor 1004 coupled with bus 1002 for processing information. Hardware processor 1004 may be, for example, a general purpose microprocessor. Computer system 1000 also includes a main memory 1006, such as a random access memory (RAM) or other dynamic storage device, coupled to bus 1002 for storing information and instructions to be executed by processor 1004.
Main memory 1006 also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor 1004. Such instructions, when stored in non-transitory storage media accessible to processor 1004, render computer system 1000 into a special-purpose machine that is customized to perform the operations specified in the instructions. Computer system 1000 further includes a read only memory (ROM) 1008 or other static storage device coupled to bus 1002 for storing static information and instructions for processor 1004. A storage device 1010, such as a magnetic disk or optical disk, is provided and coupled to bus 1002 for storing information and instructions. Computer system 1000 may be coupled via bus 1002 to a display 1012, such as a cathode ray tube (CRT), for displaying information to a computer user. An input device 1014, including alphanumeric and other keys, is coupled to bus 1002 for communicating information and command selections to processor 1004. Another type of user input device is cursor control 1016, such as a mouse, a trackball, or cursor direction keys for communicating direction information and command selections to processor 1004 and for controlling cursor movement on display 1012. This input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allows the device to specify positions in a plane. Computer system 1000 may implement the techniques described herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware and/or program logic which in combination with the computer system causes or programs computer system 1000 to be a special-purpose machine. According to one embodiment, the techniques herein are performed by computer system 1000 in response to processor 1004 executing one or more sequences of one or more instructions contained in main memory 1006.
Such instructions may be read into main memory 1006 from another storage medium, such as storage device 1010. Execution of the sequences of instructions contained in main memory 1006 causes processor 1004 to perform the process steps described herein. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions. The term “storage media” as used herein refers to any non-transitory media that store data and/or instructions that cause a machine to operate in a specific fashion. Such storage media may comprise non-volatile media and/or volatile media. Non-volatile media includes, for example, optical or magnetic disks, such as storage device 1010. Volatile media includes dynamic memory, such as main memory 1006. Common forms of storage media include, for example, a floppy disk, a flexible disk, hard disk, solid state drive, magnetic tape, or any other magnetic data storage medium, a CD-ROM, any other optical data storage medium, any physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, NVRAM, any other memory chip or cartridge, content-addressable memory (CAM), and ternary content-addressable memory (TCAM). Storage media is distinct from but may be used in conjunction with transmission media. Transmission media participates in transferring information between storage media. For example, transmission media includes coaxial cables, copper wire and fiber optics, including the wires that comprise bus 1002. Transmission media can also take the form of acoustic or light waves, such as those generated during radio-wave and infra-red data communications. Various forms of media may be involved in carrying one or more sequences of one or more instructions to processor 1004 for execution. For example, the instructions may initially be carried on a magnetic disk or solid state drive of a remote computer.
The remote computer can load the instructions into its dynamic memory and send the instructions over a telephone line using a modem. A modem local to computer system 1000 can receive the data on the telephone line and use an infra-red transmitter to convert the data to an infra-red signal. An infra-red detector can receive the data carried in the infra-red signal and appropriate circuitry can place the data on bus 1002. Bus 1002 carries the data to main memory 1006, from which processor 1004 retrieves and executes the instructions. The instructions received by main memory 1006 may optionally be stored on storage device 1010 either before or after execution by processor 1004. Computer system 1000 also includes a communication interface 1018 coupled to bus 1002. Communication interface 1018 provides a two-way data communication coupling to a network link 1020 that is connected to a local network 1022. For example, communication interface 1018 may be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, communication interface 1018 may be a local area network (LAN) card to provide a data communication connection to a compatible LAN. Wireless links may also be implemented. In any such implementation, communication interface 1018 sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information. Network link 1020 typically provides data communication through one or more networks to other data devices. For example, network link 1020 may provide a connection through local network 1022 to a host computer 1024 or to data equipment operated by an Internet Service Provider (ISP) 1026. ISP 1026 in turn provides data communication services through the world wide packet data communication network now commonly referred to as the “Internet” 1028.
Local network 1022 and Internet 1028 both use electrical, electromagnetic or optical signals that carry digital data streams. The signals through the various networks and the signals on network link 1020 and through communication interface 1018, which carry the digital data to and from computer system 1000, are example forms of transmission media. Computer system 1000 can send messages and receive data, including program code, through the network(s), network link 1020 and communication interface 1018. In the Internet example, a server 1030 might transmit a requested code for an application program through Internet 1028, ISP 1026, local network 1022 and communication interface 1018. The received code may be executed by processor 1004 as it is received, and/or stored in storage device 1010, or other non-volatile storage for later execution. In the foregoing specification, embodiments of the invention have been described with reference to numerous specific details that may vary from implementation to implementation. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. The sole and exclusive indicator of the scope of the invention, and what is intended by the applicants to be the scope of the invention, is the literal and equivalent scope of the set of claims that issue from this application, in the specific form in which such claims issue, including any subsequent correction.
Patent 11863402

Unless otherwise indicated, the drawings provided herein are meant to illustrate features of embodiments of this disclosure. These features are believed to be applicable in a wide variety of systems including one or more embodiments of this disclosure. As such, the drawings are not meant to include all conventional features known by those of ordinary skill in the art to be required for the practice of the embodiments disclosed herein.

DETAILED DESCRIPTION

In the following specification and claims, reference will be made to a number of terms, which shall be defined to have the following meanings. The singular forms “a,” “an,” and “the” include plural references unless the context clearly dictates otherwise. “Optional” or “optionally” means that the subsequently described event or circumstance may or may not occur, and that the description includes instances where the event occurs and instances where it does not. Approximating language, as used herein throughout the specification and claims, may be applied to modify any quantitative representation that could permissibly vary without resulting in a change in the basic function to which it is related. Accordingly, a value modified by a term or terms, such as “about,” “approximately,” and “substantially,” is not to be limited to the precise value specified. In at least some instances, the approximating language may correspond to the precision of an instrument for measuring the value. Here and throughout the specification and claims, range limitations may be combined and/or interchanged; such ranges are identified and include all the sub-ranges contained therein unless context or language indicates otherwise.
As used herein, the terms “processor” and “computer” and related terms, e.g., “processing device”, “computing device”, and “controller” are not limited to just those integrated circuits referred to in the art as a computer, but broadly refer to a microcontroller, a microcomputer, a programmable logic controller (PLC), an application specific integrated circuit (ASIC), and other programmable circuits, and these terms are used interchangeably herein. In the embodiments described herein, memory may include, but is not limited to, a computer-readable medium, such as a random access memory (RAM), and a computer-readable non-volatile medium, such as flash memory. Alternatively, a floppy disk, a compact disc read-only memory (CD-ROM), a magneto-optical disk (MOD), and/or a digital versatile disc (DVD) may also be used. Also, in the embodiments described herein, additional input channels may be, but are not limited to, computer peripherals associated with an operator interface such as a mouse and a keyboard. Alternatively, other computer peripherals may also be used that may include, for example, but not be limited to, a scanner. Furthermore, in the exemplary embodiment, additional output channels may include, but not be limited to, an operator interface monitor. Further, as used herein, the terms “software” and “firmware” are interchangeable, and include any computer program stored in memory for execution by personal computers, workstations, clients, and servers. As used herein, the term “non-transitory computer-readable media” is intended to be representative of any tangible computer-based device implemented in any method or technology for short-term and long-term storage of information, such as, computer-readable instructions, data structures, program modules and sub-modules, or other data in any device.
Therefore, the methods described herein may be encoded as executable instructions embodied in a tangible, non-transitory, computer readable medium, including, without limitation, a storage device and a memory device. Such instructions, when executed by a processor, cause the processor to perform at least a portion of the methods described herein. Moreover, as used herein, the term “non-transitory computer-readable media” includes all tangible, computer-readable media, including, without limitation, non-transitory computer storage devices, including, without limitation, volatile and nonvolatile media, and removable and non-removable media such as firmware, physical and virtual storage, CD-ROMs, DVDs, and any other digital source such as a network or the Internet, as well as yet to be developed digital means, with the sole exception being a transitory, propagating signal. The embodiments described herein provide innovative systems and methods for computer networks within NFV environments. The present embodiments introduce, among other solutions, techniques for communication between virtual network functions (VNF), virtual network function managers (VNFM), operations support systems/business support systems (OSS/BSS), and VNF vendors that sell and/or license access to one or more VNFs to ensure secure and efficient communication. The present embodiments are advantageously applicable in the ETSI NFV Management and Orchestration (MANO) environment and architecture. Network Functions Virtualization (NFV) adds new capabilities to communications networks and requires a new set of management and orchestration functions to be added to the current model of operations, administration, maintenance and provisioning. In legacy networks, Network Function (NF) implementations are often tightly coupled with the infrastructure they run on. NFV decouples software implementations of Network Functions from the computation, storage, and networking resources they use.
The virtualization insulates the Network Functions from those resources through a virtualization layer. The decoupling exposes a new set of entities, the Virtualized Network Functions (VNFs), and a new set of relationships between them and the NFV Infrastructure (NFVI). VNFs can be chained with other VNFs and/or Physical Network Functions (PNFs) to realize a Network Service (NS). The management and orchestration of virtualized resources are leveraged for providing VNFs with the resources they need. Resource allocation in the NFVI is a potentially complex task as many requirements and constraints may need to be met simultaneously. In particular, requirements for network allocation add complexity compared to known resource allocation strategies for computing resources in virtualized environments. For example, some VNFs require low latency or high bandwidth links to other communication endpoints. The systems and methods disclosed herein describe specific connections between the VNFs and the virtual network function manager (VNFM) to (i) improve security, (ii) increase efficiency, and (iii) ensure proper distribution. More specifically, the systems and methods describe a VNF license manager for controlling access to licensed VNFs. The VNF license manager supports standardized Application Programming Interface (API) transactions for dynamic license management. The VNF license manager moves distribution of VNFs and enforcement of VNF contracts to a centralized location. This allows the individual multiple system operators (MSO) and system operators to use standardized calls to access VNFs. This also allows for a standardized interface to handle different types of VNFs. In this manner, the VNF license manager interfaces with each of the individual VNF vendors and provides a standardized interface to the MSOs and system operators.
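The chaining noted above, in which VNFs are composed to realize a Network Service, can be sketched as a pipeline of functions applied in order to traffic. The function names (`firewall`, `nat`) and packet representation are purely illustrative assumptions, not part of the disclosed system.

```python
# Sketch of VNF chaining: a Network Service is realized by passing
# traffic through an ordered chain of network functions. A VNF may
# transform a packet or drop it (returning None).

def firewall(packet):
    if packet.get("port") == 23:   # e.g., drop telnet traffic
        return None
    return packet

def nat(packet):
    packet["src"] = "203.0.113.1"  # rewrite the source address
    return packet

def make_network_service(*vnf_chain):
    def service(packet):
        for vnf in vnf_chain:
            if packet is None:     # dropped earlier in the chain
                return None
            packet = vnf(packet)
        return packet
    return service

ns = make_network_service(firewall, nat)
print(ns({"src": "10.0.0.5", "port": 80}))  # forwarded, source rewritten
print(ns({"src": "10.0.0.5", "port": 23}))  # None (dropped)
```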
Therefore, the VNF license manager may act more like an app store for VNFs and reduce the number of actions required on the part of the VNF consumer. As the supply chain for VNFs becomes more complex, different VNF providers and users will attempt to implement different commercial licensing schemes. The systems described herein provide new network operations functionality to provide dynamic license management. This allows the NFV Infrastructure to support standardized API-based transactions for dynamic license management. FIG. 1 is a schematic illustration of an exemplary computer network 100 for an NFV architecture 102. NFV architecture 102 represents, for example, a system according to the ETSI NFV Management and Orchestration (MANO) specification, and includes an NFV orchestrator (NFVO) 104, an NS catalog 106, a virtual network functions (VNF) catalog 108, NFV instances 110, NFVI resources 112, a VNF manager (VNFM) 114, and a virtualized infrastructure manager (VIM) 116. In an exemplary embodiment, network 100 includes an operations support systems/business support systems (OSS/BSS) functional block 120 for and in communication with the NFV architecture 102. Network 100 also includes element managers (EM) 124, virtual network functions 126, and network functions virtualization infrastructure (NFVI) 128. NFV orchestrator 104 orchestrates the NFVI resources across multiple VIMs 116 and manages the lifecycle of network services. NS Catalogue 106 represents the repository of all on-boarded Network Services and supports the creation and management of the network services deployable templates. VNF Catalogue 108 represents the repository of all of the on-boarded VNF packages and supports the creation and management of the VNF packages. NFV Instances 110 repository holds information of all VNF instances 126 and network service instances. Each VNF instance 126 is represented by a VNF record and each NS instance is represented by an NS record.
These records are updated during the lifecycle of respective instances. NFVI Resources 112 repository holds information about available, reserved, and allocated NFVI resources as abstracted by VIM 116 across the operator's Infrastructure Domains. VNFM 114 is responsible for the lifecycle management of VNF 126 instances. In some embodiments, VNFM 114 is assigned the management of a single VNF 126 instance. In the exemplary embodiment, VNFM 114 is assigned the management of a plurality of VNF 126 instances, of the same type or of different types. In some embodiments, VNFM 114 functions may be generic common functions applicable to any type of VNF 126. In other embodiments, some VNF 126 instances require specific functionality associated with their individual lifecycle. This functionality may be specified in the individual VNF's package. In the exemplary embodiment, VNFM 114 performs multiple functions for each VNF 126 associated with it. These functions may include, but are not limited to, VNF instantiation (including VNF configuration), VNF instantiation feasibility checking, VNF instance software updates and upgrades, VNF instance modification, VNF instance scaling, VNF instance-related collection of NFVI performance measurement results, VNF instance healing, VNF instance termination, VNF lifecycle management change notification, management of the integrity of the VNF instance throughout its lifecycle, and the overall coordination and adaptation role for configuration and event reporting between VIM 116 and EM 124. VIM 116 is responsible for controlling NFVI 128 resources. OSS/BSS 120 are a combination of the operator's other operations and business support functions that are not explicitly captured by NFV architecture 102. EM 124 is responsible for the FCAPS management functionality of a VNF 126.
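A few of the VNFM lifecycle functions listed above (instantiation, scaling, healing, termination) can be sketched as methods on a manager that tracks multiple VNF instances. This is a hedged, illustrative sketch in Python; the class and state names are assumptions, not the MANO reference interfaces.

```python
# Sketch of a VNFM managing the lifecycle of several VNF instances of
# the same or different types, per the functions listed above.

class VNFInstance:
    def __init__(self, vnf_type):
        self.vnf_type = vnf_type
        self.state = "INSTANTIATED"
        self.scale_level = 1        # current scaling level (assumed field)

class VNFM:
    def __init__(self):
        self.instances = {}         # instance_id -> VNFInstance

    def instantiate(self, instance_id, vnf_type):
        # a real VNFM would run feasibility checking and configuration first
        self.instances[instance_id] = VNFInstance(vnf_type)

    def scale(self, instance_id, delta):
        self.instances[instance_id].scale_level += delta

    def heal(self, instance_id):
        self.instances[instance_id].state = "INSTANTIATED"

    def terminate(self, instance_id):
        self.instances.pop(instance_id).state = "TERMINATED"

vnfm = VNFM()
vnfm.instantiate("vnf-1", "firewall")
vnfm.scale("vnf-1", +2)
print(vnfm.instances["vnf-1"].scale_level)  # 3
```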
FCAPS stands for “Fault Management, Configuration Management, Accounting Management, Performance Management, and Security Management.” EM 124 performs functions such as, but not limited to, configuration for the network functions provided by VNF 126, fault management for the network functions provided by VNF 126, accounting for the usage of the VNF functions, collecting performance measurement results for the functions provided by VNF 126, and security management for the VNF functions. In some embodiments, EM 124 collaborates with VNFM 114 to perform those functions that require exchanges of information regarding the NFVI resources associated with VNF 126. NFVI 128 encompasses the hardware and software components that provide the infrastructure resources where VNFs 126 are deployed. FIG. 2 is a schematic illustration of an NFVI 200. In an exemplary embodiment, NFVI 200 is similar to NFVI 128 (shown in FIG. 1). In the exemplary embodiment, NFVI 200 describes the hardware and software components on which the virtual networks are built. NFVI 200 virtualizes network services rather than operating them on proprietary dedicated hardware. NFVI 200 treats hardware resources 202 as commodity hardware that runs software to accomplish functions such as routing and firewalls. NFVI 200 executes a virtualization layer 204 where computing hardware 206, storage hardware 208, and network hardware 210 are used to perform virtual computing 212, virtual storage 214, and virtual networks 216. NFVI 200 acts as an interface between hardware resources 202 and VNFs 126 (shown in FIG. 1) that are desired to be executed. FIG. 3 is a schematic illustration of a VNF Licensing Architecture 300, in accordance with an embodiment of the disclosure. In the exemplary embodiment, VNF Licensing Architecture 300 acts as an interface between VNF vendors and NFV architecture 102 (shown in FIG. 1) for a multiple system operator (MSO), such as a cable operator.
In the exemplary embodiment, VNF License Manager 302 is in communication with an MSO Operations and Business support systems 304, and with a plurality of vendors 306, such as Vendors A, B, and C. Each vendor 306 provides one or more VNFs 126. VNF License Manager 302 interacts with individual vendors 306 to provide access to their VNFs 126 to NFV architecture 102 associated with the individual MSOs. VNF License Manager 302 stores images of individual VNFs 126 in a VNF repository 308, and stores the policies associated with each VNF 126 and corresponding vendor 306 in a VNF license policies database 310. VNF repository 308 acts as an app store for individual VNFs 126, where an MSO or NFV architecture 102 may request a copy of a particular VNF 126 stored in VNF repository 308. In the exemplary embodiment, VNF License Manager 302 receives updates to VNFs 126 and stores those updated VNFs 126 in VNF repository 308. Furthermore, VNF License Manager 302 may provide the updated VNFs 126 to those NFV architectures 102 that are currently using or licensed to use the updated VNF 126. In the exemplary embodiment, VNF License Manager 302 interfaces with the MSO or NFV architecture 102 through MSO Operations and Business support systems 304, which may be similar to OSS/BSS functional block 120 (shown in FIG. 1). In the exemplary embodiment, MSO Operations and Business support systems 304 includes a VNF licensing agent 314, which acts as an interface between VNF License Manager 302 and NFV architectures 102. In the exemplary embodiment, VNF licensing agent 314 requests access to a particular VNF 126 from VNF License Manager 302. VNF License Manager 302 provides VNF licensing agent 314 with access to a copy of VNF 126 from VNF repository 308. VNF licensing agent 314 provides VNF License Manager 302 with payment for access to VNF 126. VNF License Manager 302 routes the payment to a particular vendor 306 associated with VNF 126. In some embodiments, VNF License Manager 302 keeps a portion of the payment as a management fee.
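The request/payment flow just described (agent requests a VNF, the manager checks the vendor's policy, returns a copy from the repository, and routes payment to the vendor, optionally keeping a management fee) can be sketched as follows. All data, prices, and names here are hypothetical placeholders.

```python
# Sketch of the license-manager flow: policy check, repository lookup,
# payment routing with an assumed management fee.

VNF_REPOSITORY = {"vnf-firewall": b"<vnf image bytes>"}
VNF_POLICIES = {
    "vnf-firewall": {"vendor": "vendor-a", "price": 100,
                     "licensed_agents": {"mso-1"}},
}
MANAGEMENT_FEE = 0.10  # fraction retained by the license manager (assumed)

def route_payment(vendor, amount):
    # stand-in for the payment-routing step
    print(f"routed {amount:.2f} to {vendor}")

def request_vnf(agent_id, vnf_name, payment):
    policy = VNF_POLICIES[vnf_name]
    if agent_id not in policy["licensed_agents"]:
        raise PermissionError("agent not licensed for this VNF")
    if payment < policy["price"]:
        raise ValueError("insufficient payment")
    route_payment(policy["vendor"], payment * (1 - MANAGEMENT_FEE))
    return VNF_REPOSITORY[vnf_name]   # copy of the stored VNF image

image = request_vnf("mso-1", "vnf-firewall", payment=100)
```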
In the exemplary embodiment, VNF licensing agent 314 also provides VNF License Manager 302 with information about the usage of VNF 126 by its NFV architectures 102. VNF License Manager 302 compares the usage information with the policy for VNF 126 from VNF license policies database 310. VNF License Manager 302 determines if there is a violation of the policies and then determines if VNF licensing agent 314 may continue to access VNF 126. In some embodiments, the communications between VNF License Manager 302 and VNF licensing agent 314 may be viewed and stored by a blockchain ledger 318. For example, the blockchain ledger 318 may keep track of the payments, usage information, and other provided information, which allows for an immutable set of records for VNF 126. In some embodiments, blockchain ledger 318 may be accessible by one or more of VNF License Manager 302, VNF licensing agent 314, and a particular vendor 306 associated with that VNF 126. In some embodiments, there is a single ledger for each vendor 306. In other embodiments, there is a ledger for each VNF 126. In still other embodiments, multiple VNFs 126 from multiple vendors 306 may be monitored in a single ledger 318, and each individual vendor 306 may only have access to those records associated with their own VNFs 126. In the exemplary embodiments, VNF license manager 302, and communications with VNF licensing agent 314, are protected using a public key infrastructure (PKI) 312. In some further embodiments, VNF repository 308 and VNF license policy database 310 are also protected by PKI 312. The above system allows for the separation of the software cycle of functionality from the hardware cycle. In the exemplary embodiment, racks of servers may be running thousands of VNFs. According to the advantageous embodiments described herein, the software lifecycle may be completely decoupled from the hardware lifecycle as NFVI 128 provides the interface between hardware resources 202 and VNFs 126.
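The ledger's immutability property described above rests on hash chaining: each record carries the hash of its predecessor, so altering any earlier record breaks verification. The sketch below illustrates only that chaining idea with a local list; a production system would use an actual distributed ledger, and the record fields are hypothetical.

```python
# Minimal hash-chained ledger sketch for the payment and usage records
# described above. Tampering with any stored record invalidates the chain.

import hashlib, json

ledger = []

def append_record(record: dict):
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    body = {"record": record, "prev_hash": prev_hash}
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    ledger.append({**body, "hash": digest})

def verify_ledger() -> bool:
    prev = "0" * 64
    for entry in ledger:
        body = {"record": entry["record"], "prev_hash": entry["prev_hash"]}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev_hash"] != prev or entry["hash"] != recomputed:
            return False
        prev = entry["hash"]
    return True

append_record({"type": "payment", "vnf": "vnf-firewall", "amount": 100})
append_record({"type": "usage", "vnf": "vnf-firewall", "hours": 12})
print(verify_ledger())              # True
ledger[0]["record"]["amount"] = 1   # tampering breaks the chain
print(verify_ledger())              # False
```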
By these innovative techniques, many different vendors 306 will potentially be enabled to provide VNFs 126, which is significantly advantageous for smaller vendors. This advantageous separation of software cycles from hardware cycles further enables users to dynamically change software to meet new service needs without having to upgrade the associated hardware. The systems described herein allow for management of the diversity of licensing management mechanisms that exist across multiple VNF vendors 306. VNF License Manager 302 may provide the interface to multiple VNF vendors 306 and handle the variety of different interfaces through a single manager, thus removing that requirement from NFV architecture 102. Furthermore, VNF License Manager 302 may handle the updates to VNFs 126, which allows VNF License Manager 302 to provide the latest version of VNF 126 to the interested NFV architecture 102. More complication and diversity among vendors 306 makes provisioning and license renewal a more complex, error-prone, and time-consuming process. The complication and diversity also inhibits automation and may potentially lead to service outages. Thus, the goal of this system is to provide interoperability for automated license management transactions between service providers and vendors. The systems described herein allow all VNFs 126 to use the same licensing methods, mechanisms, and protocols by communicating with a single point, VNF License Manager 302. VNF License Manager 302 provides a consistent interface for VNF vendors 306. The systems described herein, through VNF License Manager 302, provide a fully automated license management process requiring no manual intervention. VNF License Manager 302 is also scalable to handle a large number of VNF instances. In some embodiments, VNF License Manager 302 prevents service outages due to administrative errors by defaulting VNFs to running and being active.
VNF License Manager 302 may also support multiple different VNF licensing models, such as, but not limited to, periodic billing, usage billing, and one-time payment. VNF License Manager 302 may also keep the accounting of the usage of the VNF separate from the billing. Furthermore, the usage data could be authenticated and auditable, such as through the use of one or more blockchain ledgers 318. In the exemplary embodiment, system 300 is implemented for secure management of licensing and distributing virtual network functions (VNF). System 300 includes VNF license manager 302, VNF repository 308 for storing a plurality of VNFs including a first VNF 126, and VNF license database 310 for storing a plurality of policies associated with the plurality of VNFs. VNF license manager 302 is in communication with VNF repository 308 and VNF license database 310. In the exemplary embodiment, VNF license manager 302 is programmed to receive a request for access to the first VNF 126 from a virtual network, such as NFV architecture 102. The virtual network is configured to execute the first VNF 126. In the exemplary embodiment, the request may be received from VNF licensing agent 314. VNF license manager 302 determines if the virtual network may access the first VNF 126 based on one or more policies of the plurality of policies associated with the first VNF 126. If the virtual network may access the first VNF 126, VNF license manager 302 retrieves the first VNF 126 from VNF repository 308 and transmits the first VNF 126 to the virtual network. In some embodiments, VNF license manager 302 receives usage information about the virtual network and the first VNF 126. VNF license manager 302 analyzes the usage information in view of the one or more policies associated with the first VNF 126. VNF license manager 302 transmits a message indicating that the first VNF 126 is no longer usable by the virtual network based on the usage information.
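The usage analysis just described (compare reported usage against the VNF's policy and, on violation, signal that the VNF is no longer usable) can be sketched as a simple policy check. The policy fields (`max_instances`, `max_hours`) are illustrative assumptions about what such a policy might contain.

```python
# Sketch of the usage-versus-policy check: returns whether access may
# continue and a message suitable for transmitting back to the agent.

def check_usage(usage, policy):
    """Return (allowed, message) after comparing usage to policy."""
    if usage["instances"] > policy.get("max_instances", float("inf")):
        return False, "VNF no longer usable: instance limit exceeded"
    if usage["hours"] > policy.get("max_hours", float("inf")):
        return False, "VNF no longer usable: usage hours exceeded"
    return True, "usage within policy"

policy = {"max_instances": 10, "max_hours": 1000}
print(check_usage({"instances": 4, "hours": 200}, policy))
print(check_usage({"instances": 12, "hours": 200}, policy))
```

Keeping the check as a pure function over (usage, policy) also keeps accounting separate from billing, as the surrounding text suggests: billing can be computed from the same usage data in a separate step.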
In some embodiments, VNF license manager 302 calculates billing information based on the analysis of the usage information. In some further embodiments, VNF license manager 302 receives an updated version of the first VNF 126 from a computer device of vendor 306. VNF license manager 302 stores the updated version of the first VNF 126 in VNF repository 308. VNF license manager 302 transmits the updated version of the first VNF 126 to the virtual network. In some other embodiments, in the case where system 300 also includes blockchain ledger 318, VNF license manager 302 may be configured to store a plurality of communications with the virtual network in blockchain ledger 318. In some embodiments, VNF license manager 302 transmits a copy of the one or more policies associated with the first VNF 126 to the virtual network. VNF license manager 302 then receives acknowledgement of receipt of the one or more policies by the virtual network. VNF license manager 302 transmits the first VNF 126 to the virtual network upon receipt of the acknowledgement. In the exemplary embodiment, in the case where system 300 also includes VNF licensing agent 314, VNF licensing agent 314 may be in communication with at least one NFV architecture 102. VNF licensing agent 314 receives from NFV architecture 102 a request for a first VNF 126. VNF licensing agent 314 transmits the request for the first VNF 126 to VNF license manager 302. In some embodiments, the request includes payment information. VNF licensing agent 314 receives a copy of the first VNF 126 from VNF license manager 302. VNF licensing agent 314 transmits the copy of the first VNF 126 to NFV architecture 102. NFV architecture 102 is configured to execute one or more instantiations of the first VNF 126. In some embodiments, VNF licensing agent 314 receives usage information associated with the first VNF 126 from NFV architecture 102. VNF licensing agent 314 transmits the usage information to VNF license manager 302.
In some embodiments, VNF licensing agent 314 receives a request to access the first VNF 126 from a second NFV architecture 102. VNF licensing agent 314 transmits the copy of the first VNF 126 to the second NFV architecture 102. The second NFV architecture 102 is configured to execute one or more instantiations of the first VNF 126. In some further embodiments, the VNF licensing agent 314 receives an updated copy of the first VNF 126 from VNF license manager 302. VNF licensing agent 314 transmits the updated copy of the first VNF 126 to the first NFV architecture 102 and the second NFV architecture 102. The first NFV architecture 102 and the second NFV architecture 102 are configured to halt execution of the instantiations of the first VNF 126 and execute instantiations of the updated VNF 126.

In still further embodiments, VNF licensing agent 314 receives policy information associated with the first VNF 126 from VNF license manager 302. VNF licensing agent 314 stores the policy information associated with the first VNF 126. VNF licensing agent 314 transmits an acknowledgement of the policy information to VNF license manager 302. In some embodiments, VNF licensing agent 314 receives a message indicating that the first VNF 126 is no longer usable. VNF licensing agent 314 transmits the message indicating that the first VNF 126 is no longer usable to NFV architecture 102. In this case, NFV architecture 102 may be configured to halt execution of instantiations of the first VNF 126.

As the supply chain for VNFs becomes more complex, different VNF providers and users will attempt to implement different commercial licensing schemes. The systems described herein provide new network operations functionality to provide dynamic license management, which advantageously allows the NFVI to support standardized API-based transactions for dynamic license management.
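The licensing-agent behavior described above (distributing the VNF copy to attached NFV architectures, pushing an updated copy, and halting instantiations on a "no longer usable" message) can be sketched as follows. The class and method names are illustrative assumptions, not a standardized agent API.

```python
class NFVArchitecture:
    """Stand-in for an NFV architecture that runs one VNF at a time."""

    def __init__(self, name: str):
        self.name = name
        self.running = None

    def execute(self, vnf):
        self.running = vnf

    def halt(self):
        self.running = None


class VNFLicensingAgent:
    def __init__(self):
        self.architectures = []
        self.vnf_copy = None

    def attach(self, arch: NFVArchitecture):
        # A newly attached architecture receives the current VNF copy, if any.
        self.architectures.append(arch)
        if self.vnf_copy is not None:
            arch.execute(self.vnf_copy)

    def receive_vnf(self, vnf):
        # Initial or updated copy from the license manager: halt old
        # instantiations and execute the (updated) VNF everywhere.
        self.vnf_copy = vnf
        for arch in self.architectures:
            arch.halt()
            arch.execute(vnf)

    def revoke(self):
        # "No longer usable" message: halt all instantiations.
        self.vnf_copy = None
        for arch in self.architectures:
            arch.halt()
```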
According to the several embodiments described herein, separate management infrastructure and hosted infrastructure in an NFV environment may be centrally implemented in a variety of different technological environments, and without requiring any structural (i.e., hardware) changes to the computer networks of such technological environments. The present embodiments therefore provide significant advantages over computer network environments with combined or intertwined infrastructures.

Exemplary embodiments of systems and methods for separate management infrastructure and hosted infrastructure in an NFV environment are described above in detail. The systems and methods of this disclosure, though, are not limited to only the specific embodiments described herein; rather, the components and/or steps of their implementation may be utilized independently and separately from other components and/or steps described herein. Although specific features of various embodiments of the disclosure may be shown in some drawings and not in others, this convention is for convenience purposes and ease of description only. In accordance with the principles of the disclosure, a particular feature shown in a drawing may be referenced and/or claimed in combination with features of the other drawings. For example, the following list of example claims represents only some of the potential combinations of elements possible from the systems and methods described herein.

The computer-implemented methods discussed herein may include additional, fewer, or alternate actions, including those discussed elsewhere herein. The methods may be implemented via one or more local or remote processors, transceivers, and/or sensors (such as processors, transceivers, and/or sensors mounted on vehicles or mobile devices, or associated with smart infrastructure or remote servers), and/or via computer-executable instructions stored on non-transitory computer-readable media or medium.
Additionally, the computer systems discussed herein may include additional, fewer, or alternate functionality, including that discussed elsewhere herein. The computer systems discussed herein may include or be implemented via computer-executable instructions stored on non-transitory computer-readable media or medium.

The improvements described herein may be achieved by performing one or more of the following steps: (a) receive a request for access to the first VNF from a virtual network, wherein the virtual network is configured to execute the first VNF; (b) determine if the virtual network may access the first VNF based on one or more policies of the plurality of policies associated with the first VNF; (c) if the virtual network may access the first VNF, retrieve the first VNF from the VNF repository and transmit the first VNF to the virtual network; (d) receive usage information about the virtual network and the first VNF; (e) analyze the usage information in view of the one or more policies associated with the first VNF; (f) transmit a message indicating that the first VNF is no longer usable by the virtual network based on the usage information; (g) calculate billing information based on the analysis of the usage information; (h) receive an updated version of the first VNF from a vendor computer device; (i) store the updated version of the first VNF in the VNF repository; (j) transmit the updated version of the first VNF to the virtual network; (k) store a plurality of communications with the virtual network in a blockchain ledger; (l) transmit a copy of the one or more policies associated with the first VNF to the virtual network; (m) receive acknowledgement of receipt of the one or more policies by the virtual network; and (n) transmit the first VNF to the virtual network upon receipt of the acknowledgement.
The improvement may also be achieved by performing one or more of the following steps: (a) receive from the NFV architecture a request for a first VNF; (b) transmit the request for the first VNF to a VNF license manager, wherein the request includes payment information; (c) receive a copy of the first VNF from the VNF license manager; (d) transmit the copy of the first VNF to the NFV architecture, wherein the NFV architecture is configured to execute one or more instantiations of the first VNF; (e) receive usage information associated with the first VNF from the NFV architecture; (f) transmit the usage information to the VNF license manager; (g) receive a request to access the first VNF from a second NFV architecture; (h) transmit the copy of the first VNF to the second NFV architecture, wherein the second NFV architecture is configured to execute one or more instantiations of the first VNF; (i) receive an updated copy of the first VNF from the VNF license manager; (j) transmit the updated copy of the first VNF to the NFV architecture and the second NFV architecture, wherein the NFV architecture and the second NFV architecture are configured to halt execution of the instantiations of the first VNF and execute instantiations of the updated VNF; (k) receive policy information associated with the first VNF from the VNF license manager; (l) store the policy information associated with the first VNF; (m) transmit an acknowledgement of the policy information to the VNF license manager; (n) receive a message indicating that the first VNF is no longer usable; and (o) transmit the message indicating that the first VNF is no longer usable to the NFV architecture, wherein the NFV architecture is configured to halt execution of instantiations of the first VNF. The aspects described herein may be implemented as part of one or more computer components such as a client device and/or one or more back-end components, such as a host device, for example. 
Furthermore, the aspects described herein may be implemented as part of a computer network architecture and/or a cognitive computing architecture that facilitates communications between various other devices and/or components. Thus, the aspects described herein address and solve issues of a technical nature that are necessarily rooted in computer technology. For instance, aspects include routing communications between separate networks to ensure security, distribution, and management of VNFs that may be provided by third-party vendors. In doing so, the aspects overcome issues associated with requiring individual virtual networks to deal with a plurality of different interfaces for a plurality of different vendors of VNFs. Furthermore, these aspects reduce the chance of data compromise and allow for proper access to the VNFs in accordance with their policies. Without the improvements suggested herein, additional processing and memory usage, or even direct human intervention, would be required to perform such activities.

Additional technical advantages include, but are not limited to: i) improved speed and responsiveness in providing and updating VNFs; ii) improved monitoring for compliance with policies; iii) allowing the virtual network function infrastructure to interface with new VNF vendors without requiring specialized interfaces; iv) reducing the chance of malicious communications and VNFs; v) allowing for protected two-way communication between the vendor and the user; and vi) preventing the VNFIs from having direct access to the vendors. Additional technical advantages are described in other sections of the specification. Furthermore, the embodiments described herein improve upon existing technologies, and improve the functionality of computers, by more accurately predicting or identifying the current security status of any connected device.
The present embodiments improve the speed, efficiency, and accuracy with which such calculations and processor analysis may be performed. Due to these improvements, the aspects address computer-related issues regarding efficiency over conventional techniques. Thus, the aspects also address computer-related issues that are related to computer security, for example. Accordingly, the innovative systems and methods described herein are of particular value within the realm of virtual network functions, which have historically been associated with a poor record of securing communications and data. The present embodiments enable more reliable updating and control of such functions, but without compromising data and communications. Furthermore, according to the disclosed techniques, the monitoring and updating of virtual network functions is greatly improved, enhancing the security, distribution, and support of these functions, the associated computer devices, and the associated computer networks.

Exemplary embodiments of systems and methods for separating license management entities from the hosted infrastructure are described above in detail. The systems and methods of this disclosure, though, are not limited to only the specific embodiments described herein; rather, the components and/or steps of their implementation may be utilized independently and separately from other components and/or steps described herein. Although specific features of various embodiments may be shown in some drawings and not in others, this is for convenience only. In accordance with the principles of the systems and methods described herein, any feature of a drawing may be referenced or claimed in combination with any feature of any other drawing. Some embodiments involve the use of one or more electronic or computing devices.
Such devices typically include a processor, processing device, or controller, such as a general purpose central processing unit (CPU), a graphics processing unit (GPU), a microcontroller, a reduced instruction set computer (RISC) processor, an application specific integrated circuit (ASIC), a programmable logic circuit (PLC), a programmable logic unit (PLU), a field programmable gate array (FPGA), a digital signal processing (DSP) device, and/or any other circuit or processing device capable of executing the functions described herein. The methods described herein may be encoded as executable instructions embodied in a computer readable medium, including, without limitation, a storage device and/or a memory device. Such instructions, when executed by a processing device, cause the processing device to perform at least a portion of the methods described herein. The above examples are exemplary only, and thus are not intended to limit in any way the definition and/or meaning of the terms processor and processing device.

This written description uses examples to disclose the embodiments, including the best mode, and also to enable any person skilled in the art to practice the embodiments, including making and using any devices or systems and performing any incorporated methods. The patentable scope of the disclosure is defined by the claims, and may include other examples that occur to those skilled in the art. Such other examples are intended to be within the scope of the claims if they have structural elements that do not differ from the literal language of the claims, or if they include equivalent structural elements with insubstantial differences from the literal language of the claims.
11863403: DETAILED DESCRIPTION OF EMBODIMENTS

In the following description, for purposes of explanation, specific details are set forth to understand the disclosure. It will be apparent, however, to one skilled in the art that the disclosure can be practiced without these details. Furthermore, one skilled in the art will recognize that embodiments of the present disclosure, described below, may be implemented in a variety of ways, such as a process, an apparatus, a system/device, or a method on a tangible computer-readable medium. Components, or modules, shown in diagrams are illustrative of exemplary embodiments of the disclosure and are meant to avoid obscuring the disclosure. It shall also be understood throughout this discussion that components may be described as separate functional units, which may comprise sub-units, but those skilled in the art will recognize that various components, or portions thereof, may be divided into separate components or may be integrated together, including integrated within a single system or component. It should be noted that functions or operations discussed herein may be implemented as components. Components may be implemented in software, hardware, or a combination thereof.

Furthermore, connections between components or systems within the figures are not intended to be limited to direct connections. Rather, data between these components may be modified, re-formatted, or otherwise changed by intermediary components. Also, additional or fewer connections may be used. It shall also be noted that the terms "coupled," "connected," or "communicatively coupled" shall be understood to include direct connections, indirect connections through one or more intermediary devices, logical connections, and wireless connections.
Reference in the specification to "one embodiment," "preferred embodiment," "an embodiment," or "embodiments" means that a particular feature, structure, characteristic, or function described in connection with the embodiment is included in at least one embodiment of the disclosure and may be in more than one embodiment. Also, the appearances of the above-noted phrases in various places in the specification are not necessarily all referring to the same embodiment or embodiments. The use of certain terms in various places in the specification is for illustration and should not be construed as limiting. The terms "include," "including," "comprise," and "comprising" shall be understood to be open terms, and any lists that follow are examples and are not meant to be limited to the listed items. A service, function, or resource is not limited to a single service, function, or resource; usage of these terms may refer to a grouping of related services, functions, or resources, which may be distributed or aggregated. The terms memory, database, information base, data store, tables, hardware, and the like may be used herein to refer to a system component or components into which information may be entered or otherwise recorded. The terms "data" and "information," along with similar terms, may be replaced by other terminologies referring to a group of bits, and they may be used interchangeably.

It shall be noted that: (1) certain steps may optionally be performed; (2) steps may not be limited to the specific order set forth herein; (3) certain steps may be performed in different orders; and (4) certain steps may be done concurrently. It shall also be noted that although certain embodiments described herein may be within the context of wireless communication networks, wireline communication networks, and combinations thereof, aspects of the present disclosure are not so limited.
Accordingly, the aspects of the present disclosure may be applied or adapted for use in wireless communication networks and other contexts. In this document, "MIMO" refers to Multiple Input Multiple Output systems, which utilize several antennas per user. Orthogonal Frequency Division Multiplexing (OFDM) refers to a system that uses equal energy on all of a set of adjacent frequency dimensions and that often appears in wireless communication standards like Wi-Fi and LTE. A group of channels may be referred to as a "band" and may be labeled with the same or similar indices. The following terms may be used within the specification and are defined as follows:

WFH: WFH (Work-From-Home) occurs when one or more users remotely perform work-related tasks using a network connection, where "remotely" means that the one or more users may be located within a residence or yard, library, coffee shop, hotel, park, or any other location outside of a physical building where the work-related tasks would traditionally take place (such as an office, school, hospital, medical office, etc.). Examples of WFH situations or activities include video-conferencing, document management, audio conference calls, interactive document or project sharing among collaborators, or other instances where a user is engaged in work-related activity using one or more network connections from the remote location.

Application Specific Connectivity (ASC) and Application Specific Connectivity Metric (ASC Metric): Application specific connectivity (ASC) is a defined metric that measures a connection's specific influence on the productivity level of a specific application, e.g., WFH.
In general, the ASC metric is an aggregated, overall metric derived from a family or class of metrics related to a specific application's productivity influence and defined as functions of various parameters that may include link or transmission channel operational/performance data (X), user feedback data (F), user preference data (P), time-stamping information/data (T), or other parameters relevant to a connection's specific influence on productivity. Specific applications include, but are not limited to, WFH, telemedicine, remote learning and instruction, remote (including multi-player) gaming, delivery of entertainment, and meetings of religious institutions, social clubs, or family gatherings.

Productivity and Productivity Metric: Productivity measures the value of one or more activities performed by the user(s) of a specific application (e.g., WFH), considering the nature and value of those activities. Productivity may vary across different users based on, for instance, the type of activities, context of the performed activities, quality and volume of the activities, financial return of the activities, etc. In certain situations, the stakeholder may determine the appropriate Productivity metric.

Stakeholder: The stakeholder is a business, educational institution, service provider, parent, employee manager, or other entity/individual who may oversee/monitor the application-specific (e.g., WFH) activities. In certain instances, a networked device's user within the application-specific activities may also be a stakeholder, while in other instances the stakeholder may not be directly engaged within the application-specific activities.

User Feedback: User feedback comprises information about the performance and/or usability of users' activities (e.g., video-conferencing). User feedback may be directly provided by the user or may be indirectly determined based on user action or other information related to the application-specific activities.
Examples of indirect user feedback for WFH video-conferencing activities include measures of churn (e.g., either switching the user's ISP or switching the video-conferencing software provider), refusals to use a particular video-conferencing software, counts and characterization of the nature of calls and/or emails to help desks, excessive repeats of collaborative sessions, etc. In certain instances, user feedback may be a component in determining Quality of Experience (QoE).

Quality of Experience (QoE): QoE measures user satisfaction during an application-specific activity. This measurement may indicate the usability or performance of specific software tool(s) and/or applications, user-perceived network performance, the performance of other participants' software or network tools and/or applications, or other metrics related to user satisfaction during an application-specific activity.

Quality of Service (QoS): QoS is data that is monitored, measured, retrieved, or otherwise obtained that quantifies the performance of a network or software tool during application-specific activities. The network may comprise multiple connections that are controlled by different third parties or by a single entity. QoS data may include packet loss count, signal levels, noise levels, outages, margin levels, data rates, throughputs, latency (delay), and all other forms of both current and historical operational and performance data. QoS performance data relate to the performance of a communications link (e.g., throughput, jitter, packet loss, etc.), while QoS operational data relate to the operation of the communications link (e.g., queue length, target data rate, port usage, etc.). Both operational data and performance data can affect QoS.

Estimation: Estimation determines a function of input data that produces outputs based on those inputs.
The estimating function or estimator may be parametrized, and estimation often includes learning or otherwise computing and/or inferring the parameter values from QoE data or from other knowledge about the function's desired behavior. For example, an estimator may learn from a set of QoS data, plus user feedback information and labelled QoE from earlier or training uses, and generate a function that will predict the current QoE measure. These estimated QoE parameters are often learned by correlating a user's QoE training data, or other data indicative of QoE within application-specific activities, to the network performance measured by QoS data. This estimated data may be used to train a second machine-learning model that attempts to optimize or improve the values of the parameters, and their consequent function output's predicted QoE, by adapting various user profile parameters.

Prediction: Prediction is the application of a trained or otherwise derived model to input data to generate outputs based on the inputs. A trained model can predict QoE, ASC, or other productivity-related metrics for specific applications based on a variety of different inputs. For example, a machine-learning model may be trained using correlated QoE, productivity metrics such as diminished work throughput, or other information related to the desired output of the model.

Because WFH provides the most likely commercial application of the inventions described herein, many embodiments are described with reference to a WFH application; however, this should not be read to suggest that the sole application for the present inventions is WFH. Indeed, many applications of the present inventions may not involve employment or work; for instance, other, non-work applications include tele-medicine, remote learning/teaching, interactive gaming, virtual gatherings of families, religious institutions, or social clubs, remote security applications, distribution of streaming entertainment media, etc.
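As a toy illustration of the Estimation and Prediction steps defined above, the sketch below fits a least-squares line mapping a single QoS feature (latency) to labelled QoE scores from training uses, and returns a predictor for new samples. A real estimator would use richer QoS features, user feedback, and a more capable model; everything here, including the single-feature choice, is an illustrative assumption.

```python
def fit_qoe_estimator(latencies, qoe_labels):
    """Fit qoe ≈ a * latency + b by ordinary least squares and
    return a callable predictor (the 'trained model')."""
    n = len(latencies)
    mean_x = sum(latencies) / n
    mean_y = sum(qoe_labels) / n
    # Slope from covariance / variance; intercept through the means.
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(latencies, qoe_labels))
    var = sum((x - mean_x) ** 2 for x in latencies)
    a = cov / var
    b = mean_y - a * mean_x
    return lambda latency: a * latency + b
```

The returned function plays the role of the trained model in the Prediction definition: it maps a new QoS observation to a predicted QoE score without further training.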
Moreover, the present invention provides a mechanism for various applications (WFH and remote learning, for instance) to be improved when running simultaneously on the same remote network. The disclosed architecture may provide, for instance, a business the ability to track its employees' productivity, improve employees' home network connectivity, and collect and manage business-related information across a diverse and large WFH network comprising a diverse set of network connections provided by a variety of ISPs. It could, alternatively, be used to provide a university the ability to track its students' productivity, improve students' home network connectivity, and collect and manage learning-related information across a diverse and large remote learning network comprising a diverse set of network connections provided by a variety of ISPs. Other applications of the disclosed architecture will be readily understood by those of skill in the art, notwithstanding the description herein as being related to, for instance, a WFH application.

As is understood, households with multiple household members can present a particular challenge to working from home. For example, a home worker may find the quality of a video conferencing call is impacted by another household member playing a video game in another room. In such circumstances, it may be desirable to prioritize the connection of the home worker. The solutions as herein described can also address this problem. In addition, with more companies' employees working from home, much of the usual network traffic has moved from closed company networks, which in-house company IT Departments can easily monitor and control, to traffic across multiple public and private networks, which are traditionally not monitored or controlled in the same way by those IT Departments.
For example, the intra-company network traffic of a given company will be dispersed across multiple ISPs when the workforce is WFH, and the company IT Department does not generally have access to the relevant ISP's data for analysis. Thus, it is desirable to be able to provide the same monitoring and control to companies in these circumstances. The solutions described herein also aim to provide virtual company networks which can fulfill this role.

Previously, many home internet connections and home networks were used primarily for entertainment (e.g., streaming audio and video, social media, video gaming, etc.). However, with many more people now working from home, network usage has changed considerably. For example, entertainment-based network use is generally asymmetric, with a higher amount of download data than upload data. In contrast, working from home tends to be more symmetric in terms of uploaded and downloaded data (e.g., video-conferencing is fairly symmetric). Similarly, the daily cycle of connectivity from home has changed. Previously, many home networks were used most heavily in the evenings and on weekends. In contrast, working from home means that bandwidth requirements during weekday daytime have increased significantly. The solutions described herein therefore aim to monitor and improve network usage in view of these changes.

Diagnostics provided by the disclosed architecture will accurately find a problem location and severity and will determine the problem's likelihood to recur at certain times and/or under certain operating conditions. Data from ISPs can improve remote collaboration solutions with information regarding overall access network performance (e.g., copper, fiber, wireless, etc.). This overall network performance data will assist remote collaboration solutions to consider location, severity, time of occurrence, and other parameters.
This combined data's net effect is better end-user experience and satisfaction; and in the case of working from home, will translate directly to improved remote worker productivity and efficiency, which can be accurately and concisely captured by a variety of performance and productivity metrics.

FIG. 2A shows an example overall framework 200 for WFH solutions according to various embodiments of the invention. These solutions accurately diagnose connectivity issue locations and severity from any, some, or all points in this framework 200. The solutions may use artificial intelligence and machine learning to process customer quality-of-experience (QoE) feedback and other relevant indicators. As shown in this framework 200, a WFH user may connect to a network access point 205 using a variety of different types of devices 210. These devices 210 may interface with the network access point 205 using a wireless connection or a wireline connection, each having potentially different connectivity issues. The connectivity between devices 210 and access point 205 may be a point-to-point network, a mesh network, star topologies, or other network architectures known to those of skill in the art.

The network access point 205 may connect to a backhaul using a wireless connection 215 and/or wireline connection 220 depending on the particular access point. In one instance, a first ISP 240 may provide network connectivity for some of the various devices via the wireline access network 220. In other instances, a second ISP 230 may provide different network connectivity for some of the various devices via a wireless access network 215. The diversity of network connections and potential connectivity issues is clear to one of skill in the art and results in a significant increase in network management complexity relative to private networks controlled by a single business or education entity.
As different households across multiple cities participate in a company's WFH model and interact with each other across these diverse networks, the complexity in managing and monitoring the network performance further increases. Embodiments of the invention provide a system-level, cloud-based management system that interacts with various connections within this diverse network. The management system may comprise a server 270. The server 270 is defined as one or more servers or computing devices coupled to one or more interfaces within the remote collaboration architecture. The server 270 may be coupled within a cloud, a private or public network, or directly to a device within the remote collaboration architecture. This server 270 is able to measure network metrics, analyze, and improve connectivity through this system. Furthermore, the server 270 can monitor remote worker productivity and business-related activities to better understand how employees are operating within the WFH environment. In so doing, the server 270 is able to take multiple network measurements, improve performance by adjusting parameters, interact with one or more software agents located on devices within the WFH architecture, and monitor network traffic across the diverse set of WFH users and connections that enable work collaboration. Aspects of this server 270 are described below and allow a company, education institution, or other collaboration sponsor visibility into the WFH network, provide the ability to improve network performance and productivity for employees, students, and other collaborators, and manage business throughput and educational achievement accordingly.

Dynamic Analytics, WFH Diagnosis, and the Application Specific Connectivity (ASC) Metric

Internet connection variability occurs in many ways. For example, wireless connections can experience variability through communicating devices' movements, as well as movements of nearby non-communicating people and objects.
Such movements cause changes in the physical radio path characteristics that affect transmission quality. Wireless and wireline connections can also experience variability through electromagnetic interference generally, and particularly from other nearby communicating systems that share the same spatial, time, and/or spectral domains. Variability is also evident indirectly through the transmitted data types. For instance, high quality video signals require the transmission to have better performance than a short email download. Different users may also perceive the same connection quality differently based on their own perspectives, and this induces a form of variability as well. Various embodiments of the invention learn and/or estimate the different variabilities through collected connection quality and/or operational data.

Connection quality, throughput, or stability is a function of many or all of the parameters specific to a connection, from which an application specific connectivity (ASC) metric can be derived. Application specific connectivity (ASC) is a defined metric that measures a connection's specific influence on a user application or class of applications, and in the case of WFH, the application specific connectivity metric measures a connection's influence on the WFH productivity level, particularly in, but not limited to, the context of online collaboration with co-workers, customers, and/or partners. In general, the ASC metric is an aggregated, overall metric derived from a family or class of WFH metrics defined as functions of various parameters that may include link operational/performance data (X), user feedback data (F), user preference data (P), time-stamping information/data (T), etc.
This definition of the ASC metric for WFH is intentionally broad so that it captures a wide array of metrics relating to the variety of combined network connectivity measurements that affect a remote worker's ability to operate within the WFH environment. Those of skill in the art will realize that the ASC metric is an appropriate measure of usefulness, that is, productivity, for an application supporting a variety of on-line collaborative endeavors, regardless of whether it specifically supports employment and work or is for other purposes. Similarly, those of skill in the art will realize that techniques for improvements in WFH performance or productivity would also apply to many other collaborative activities, not all of which involve employment or work. Examples of such collaborations include, but are not limited to, tele-medicine, remote learning/teaching, interactive (multi-player) gaming, virtual gatherings of families, religious institutions, or social clubs, remote security applications, distribution of streaming entertainment media, etc. In certain embodiments, the ASC metric measures and determines an application-specific economic value to any, or all, of internet service providers, application providers, and most importantly, employers and their employees (or any other stakeholders). A user's quality-of-experience (QoE) and ASC metric can also depend on all the above-mentioned variables and variabilities in different ways at different times and with different devices. The sections below describe these variabilities and how dynamic, analytic WFH solutions address them. Various embodiments of the invention expand to consider WFH improvement solutions that exploit the learned analytics for automatic, proactive, and reactive connection repair and improvement, as well as instruction-based manual repair when necessary, in the context of the ASC metric and the remote worker productivity associated therewith.
System Overview and Description
FIG. 2B shows a WFH system architecture according to various embodiments of the invention. In this example, the WFH system consists of three component networks: the in-home network 280 (e.g., local area networks, or LANs, and/or wireless local area networks, or WLANs), the ISPs' access networks 281 (e.g., wide area networks or WANs), and the Internet 282, to which application servers connect through peering points. The ISP thus connects the LAN to the application servers through one or more WAN backhauls. The server 270 collects quality of service (QoS) operational/performance data, user QoE feedback, and user preference data at any or all of the three networks. The server 270 determines a preferred policy (sometimes also referred to as a profile) and/or policies to provide to at least one (or more) network components and/or devices. This preferred policy and its associated improvement will impact the currently active WFH service application's QoE or ASC metric. The in-home network's gateway 283 connects LAN devices 284. These LAN devices 284 thereby connect through the Internet 282 to application servers via the ISPs' WAN backhaul(s) and the core ISP network. A gateway 283 can prioritize applications/devices through priority queues that allow fail-over service by switching to an alternative ISP (e.g., switching from wireline to cellular) when the primary ISP's connection has an insufficient positive ASC metric effect. The server 270 may interface to a gateway-located agent, described in detail below, to collect QoS operational/performance data and correspondingly to re-profile (i.e., improve) based on the WFH policy. The agent may collect data from all in-home devices, but it can also collect data from WLAN interfaces at routing points, from Ethernet routers/switches, from the WAN, and from other supportive network points. The server 270 may also use an agent with a user interface (UI) to collect the user's preferred service category and/or to collect user QoE feedback.
The server 270 may collect device information directly over an Internet connection to devices, or indirectly gather such information from the gateway agent, or possibly also from the application server(s). Each LAN device 284 may be categorized according to its preferred service category (or categories), particularly including WFH devices 284. The server 270 may detect and analyze application use based on the collected QoS operational/performance data if receiving direct user QoE feedback is difficult or infrequent. Other servers and computing devices may also collect QoS and/or QoE feedback within the architecture, including, but not limited to, application servers running the application software 285, ISP equipment 281, and the devices themselves, including software within the applications operating on the device or discrete performance software. The collaboration architecture may improve (through policy or direct re-profiling) more than one access connection. These may be from the same or different ISPs. Each ISP's core network connects the access network to the Internet. It is possible to bond these access connections in the core network (at a common server point) so that they appear as one connection to the home, or to the user, using multipath Internet protocols such as MP-TCP (Multipath Transmission Control Protocol) and other technologies known in the art, which also allow fail-over reliability improvement. The server 270 may require a gateway agent and/or network edge device API to collect data and subsequently to improve the access network and/or the core network profiles (or policies implemented through the agents). Agents may also be located in the access network, the core network, or the OSS/BSS (i.e., operations support system and business support system). The server 270 can collect data from the access network and then control access network operation using an auto-configuration server, such as those compliant with TR-069 or TR-369 discussed below.
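The fail-over behavior described above, where service switches to an alternative ISP (e.g., from wireline to cellular) when the primary connection's ASC metric effect is insufficient, can be sketched as a simple selection policy. The threshold value, connection names, and dictionary shape below are illustrative assumptions, not details from the specification.

```python
def select_connection(connections, threshold=0.5):
    """Illustrative fail-over policy: keep the primary link while its
    ASC score stays at or above a threshold; otherwise switch to the
    best-scoring alternative (e.g. a cellular backup).

    `connections` is a list of dicts with hypothetical keys "name" and
    "asc"; the first entry is treated as the primary ISP link."""
    primary = connections[0]
    if primary["asc"] >= threshold:
        return primary["name"]
    backup = max(connections[1:], key=lambda c: c["asc"])
    return backup["name"]
```

A gateway's priority-queue logic could evaluate such a policy periodically, re-routing flows (or triggering MP-TCP path preferences) whenever the selected connection changes.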
ISP networks may provide network slices with different service level agreements (SLAs), which can prioritize different applications or devices as described in detail below. The application server connects to devices through the ISP's network (via the Internet), unless the ISP also provides the corresponding applications (in which case the server is likely in the ISP's core network). In general, application servers 285 may support WFH video conferencing, remote learning, entertainment, etc. through user applications on the in-home devices. Application servers may collect QoS data using various techniques such as the Real-Time Control Protocol (RTCP) or others known to one of skill in the art. The application servers may also provide application profiles that help adapt functions such as video/audio encoder data rates and resolution. In certain embodiments, an application may have direct user feedback, such as thumbs-up and thumbs-down buttons that indicate user satisfaction or dissatisfaction, respectively, or other direct QoE indications. The application server may also collect or derive indirect user QoE feedback such as decreased use, churn (the rate at which customers stop doing business with an entity), complaint emails/calls, user activity (e.g., keystroke counters, audio activity, video activity/expression), etc. WFH application servers may deliver analytic results directly to the application's stakeholders, which may be different from the application's user (for instance, their employer). Furthermore, devices 284 may monitor user activity to measure the frequency and manner in which a user is interacting with the device and/or application, such as keystroke counters, voice monitoring, etc.
Server
FIG. 2C illustrates functionality which can be part of a server 270 in accordance with various embodiments of the invention.
The server 270 measures network connectivity, calculates network metrics, manages and improves network performance, and communicates with devices within the architecture. The server 270 comprises a measurement apparatus 287 that takes a variety of different measurements across the framework. These measurements may include QoS performance data, QoS operational data, application information, device information, direct user feedback, etc. Each of these types of measurements is described in detail below in accordance with various embodiments of the invention. The server 270 also comprises a metric generation apparatus 288 that generates a single aggregated metric and/or a plurality of different metrics applicable to the framework. These metrics include QoE metrics, including ASC metric(s), all or some of which may be associated with a label or service category identifier. Each of these types of metrics, labels, and identifiers is described in detail below in accordance with various embodiments of the invention. The server 270 also comprises a network manager 289 that uses the network measurements and metrics to improve performance of the framework. Network performance is improved using at least one of prioritization structures and functionality, policy and profile managers, network-level managers, vector turboing, and meta improvement training. Each of these network improvement components is described in detail below in accordance with various embodiments of the invention. The server 270 further comprises a user interface 286 that supports bi-directional communication between the server 270 and the user/employee working from home, the employer's IT support group, the ISP(s), and/or the application provider(s) in certain embodiments of the invention.
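The four server components just described (measurement apparatus 287, metric generation apparatus 288, network manager 289, and user interface 286) can be arranged as a minimal class skeleton. The method bodies and the threshold logic are placeholder assumptions for illustration only; they are not the actual implementation.

```python
class WFHServer:
    """Illustrative skeleton of the server's four components; bodies are
    hypothetical stand-ins, not the invention's implementation."""

    def __init__(self):
        self.measurements = []   # measurement apparatus (287)
        self.metrics = {}        # metric generation apparatus (288)
        self.policies = {}       # network manager (289)

    def measure(self, sample):
        """Measurement apparatus: collect one QoS/QoE sample (a score here)."""
        self.measurements.append(sample)

    def generate_metrics(self):
        """Metric generation apparatus: aggregate samples into an ASC value."""
        if self.measurements:
            self.metrics["asc"] = sum(self.measurements) / len(self.measurements)
        return self.metrics

    def manage(self, threshold=0.5):
        """Network manager: derive a policy (here, a priority boost flag)."""
        asc = self.metrics.get("asc", 0.0)
        self.policies["priority_boost"] = asc < threshold
        return self.policies

    def report(self):
        """User interface: deliver an analytic result as a readable string."""
        return f"ASC={self.metrics.get('asc', 0.0):.2f}"
```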
This user-interface infrastructure may include any analytics result delivery apparatus that communicates information and analytic results to a user, and it supports a user feedback window that allows a user to communicate feedback and information to the server 270. Each of these user-interface components is described in detail below in accordance with various embodiments of the invention.
System QoE Dynamics
QoE measures an application user's (either a human's or a machine's) perceived contentment or satisfaction. QoE can be, and is often related to, but is not necessarily equal to (in fact it is rarely equivalent to), Quality-of-Service (QoS). QoS metrics are usually strict, specific, electronic-signal-related measurements largely of interest to engineers and/or designers. Comparatively, QoE measures consumer (or user) reaction to the performance of one or more applications operating on a networked device. In the WFH context, employers value QoE metrics when those metrics measure their employees' productivity level. For example, user feedback such as a “thumbs-down” QoE metric suggests a user's general dissatisfaction, which likely incurs some level of current or future cost to the WFH application provider and to the Internet service provider, along with an immediate loss of user/employee or employer productivity. Conversely, a “thumbs-up” QoE metric suggests overall user satisfaction and likely indicates a more productive employee, who is able to complete his/her assigned work in a timely manner and with satisfactory quality through an efficient work-from-home environment. QoE may be based on user feedback, when available. WFH analytics may alternatively or additionally estimate QoE from QoS via correlation or relationships learned through artificial intelligence, machine learning, and/or rule-based designer ingenuity/experience.
Such learning often involves training that uses actual user QoE reactions (or data), sometimes known in adaptive learning as labels, which help create models. Those models then apply to estimate these QoE reactions for future users when these labels are not present. FIG. 3A illustrates aspects of an exemplary QoE learning method that may be implemented within a diagnostics engine 310 and applied to WFH, such as one applied to the ASC metric. In various embodiments, the diagnostic engine 310 may be implemented within the WFH metric apparatus 288 of the server 270. The diagnostic engine 310 receives a plurality of OSI (Open Systems Interconnection) Layer 1 QoS parameters 320 that will change as network demand, capacity, bandwidth, data rate, application, etc. vary over time. In addition, OSI Layer 2 and above QoS parameters are also inputted into the diagnostics engine 310. Based on machine learning methods and processes discussed later, labels are generated that are associated with direct or indirect user reactions and the state of the network being used. The labelled user reactions may be instantaneous or time delayed, and they may include feedback such as thumbs-down buttons, help calls, help chat-box attempts, support escalations like technician dispatch to a complaining customer, etc. In fact, just about anything that measures possible user dissatisfaction, discontent, or difficulty can be a source of labeling; some additional examples of user reactions include, but are not limited to, a loss of service (temporary drop of attention or permanent disconnection of service), mean-opinion scores, exit or other survey scores, etc. In the WFH context, this also expands to employee-productivity indications that correlate with connection issues. The ASC metric is particularly valuable when it helps to improve the productivity component that derives from the worker's home internet connection in terms of the worker's effective collaboration with co-workers, customers, and others.
ASC analysis from QoE estimates provides a utility or cost measure of the WFH improvement's effectiveness. Remote workers may experience frustrating teleconference moments, where poor uplink audio quality often causes an entire group to spend unnecessary time repeating themselves. Disconnection and re-starting of a call also reduce productivity, and thus change the ASC metric. Loss of group time in videoconferences multiplies by the number of participants, and consequently causes a reduction of the entire group's enterprise-employee value, or equivalently, their productivity associated with the ASC metric. In response to a learned ASC analysis, ASC improvement solutions address possible corrective actions, pro-active or re-active, that may dynamically tune or improve the internet connection's tunable parameters. These actions will lead to the connection's “current state” or “profile,” which FIG. 3A also shows is an input to the learning process, being improved such that the QoE of users on the network improves and the associated productivity increases. Additionally, QoS data associated with one user may impact other users within the WFH activity. The potential relationship between the QoE of users within the WFH activity is apparent and may be included in the analysis of QoS, QoE, and ASC to improve network performance. FIG. 3B illustrates a generalized QoE estimation that trains on normalized and aggregated user QoE feedback data according to various embodiments of the invention. The QoE estimator 350 results provide indications, or sometimes just a single number, that can be used to gauge network performance, employee-user performance, and WFH success. The QoE estimator 350 may be located within the WFH metric apparatus 288 of the server 270. In various embodiments, the QoE estimator 350 receives a set of user preference inputs 370, operational data 375, performance data (QoS) 380, and direct user feedback 360.
The operational data 375 may include informational data such as an application type, a device type, etc., transaction data, and other data such as port usage, queue length, etc. Performance data 380 may include QoS data from various OSI layers. User feedback data 360 may, but is not required to, include both real-time and delayed user feedback. Direct user QoE feedback 360 is often rare. However, when present, this direct QoE feedback 360 can help machine learning methods learn how QoE may be estimated from continuously available QoS data like packet losses, signal levels, noise levels, outages, margin levels, data rates, throughputs, latency (delay), and all other forms of both current and historical operational/performance data. The estimated QoE from the QoE estimator 350 replaces the QoE data whenever direct QoE feedback data are not available. An example of such QoE-from-QoS learning can use, for instance, logistic regression, where a server 270 constructs a logarithmic WFH QoE estimate from a learned linear combination of QoS variables (or functions of those QoS variables). The training illustrated in FIG. 3B occurs when actual user QoE feedback 360 (such as “labelled data”) is present. The learned functional relationships are then available for subsequent QoE estimation use when that user QoE feedback is no longer present. In certain examples, these QoE estimates then depend on the QoS operational/performance data and possibly any user-preference data 370. Training may update each time additional direct user QoE feedback data 360 is present; the consequent updated QoE-from-QoS functional-estimate relations then continue again when the direct user QoE feedback data ceases to be available. The QoS and QoE data inputs to learned combinations may come from any, combinations of, or all of the sources identified within FIG. 3B.
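The logistic-regression style of QoE-from-QoS learning mentioned above can be sketched in a few lines: a logistic function of a learned linear combination of QoS variables, updated whenever a direct QoE label (e.g., a thumbs-up/thumbs-down) is present. The feature names, weights, and learning rate below are illustrative assumptions.

```python
import math

def qoe_estimate(qos_features, weights, bias=0.0):
    """Illustrative QoE-from-QoS estimator: a logistic function of a
    linear combination of QoS variables. Feature names (e.g. "snr_db",
    "packet_loss") are hypothetical; returns an estimated probability of
    a satisfied ("thumbs-up") user."""
    z = bias + sum(w * qos_features[name] for name, w in weights.items())
    return 1.0 / (1.0 + math.exp(-z))

def train_step(qos_features, label, weights, bias, lr=0.1):
    """One stochastic-gradient update against a direct QoE label
    (1 = thumbs-up, 0 = thumbs-down), applied only when labelled
    feedback is available."""
    pred = qoe_estimate(qos_features, weights, bias)
    err = pred - label
    for name in weights:
        weights[name] -= lr * err * qos_features[name]
    return weights, bias - lr * err
```

Between labelled-feedback events, `qoe_estimate` alone would stand in for the missing direct QoE data, matching the scheme described in the paragraph above.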
The user preference data 370 can specify, for example, that the QoE estimator prioritize only WFH applications/devices by setting the preferred service category as solely WFH. The estimated QoE data may itself also become training data for a QoE improvement discussed later. The QoE diagnostic engine 310 and QoE estimator 350 may be tuned by employers to their corporate employee-productivity metrics, including the ASC productivity metric. The consequent system can identify an employee whose metric(s) has dropped because of connectivity issues, as well as identify situations where connectivity may be incorrectly posed as a productivity-loss cause. Experiments with higher quality video, audio, or productivity applications and tools may then be more accurately assessed for their productivity effect. An employee's ASC metric drop, when caused by connectivity issues, can be further diagnosed for the best corrective action. The employee's WFH device may also be targeted during working hours for higher-priority flow, relative to other devices not using the QoE-cognizant WFH applications. In addition, redundancy and alternative connectivity may be applied to further improve performance. For example, interconnectivity between WANs may be provided in case the performance of one WAN drops or fails. The OSI stack and its seven layers, mentioned in FIG. 3A, will be referred to explicitly, or implicitly, throughout this document. An OSI-stack summary appears below.
The Open Systems Interconnect model, or OSI model, in ITU standard X.200 specifies 7 layers (or levels) for data communication:
(1) Physical (signals, symbols, codewords)
(2) Data Link (data framing above the physical layer)
(3) Network (packets of data)
(4) Segmentation (multiplexing, acknowledgement)
(5) Session (creates and later removes a session during which groups of packets are exchanged)
(6) Presentation (translation from application to a service that uses sessions)
(7) Application (uses an application programmer interface, or API, to translate the application's data)
This document refers to the 7 layers or levels to indicate at which level the QoS or QoE data is collected and to which layer the improved parameter(s) are tuned. A profile may contain values for several layers.
QoS Collection—Communication System Performance Data
The server 270 collects QoS performance data 380 from devices and equipment 205, 210, etc., such as the home gateways, the Wi-Fi access points, and the various home devices including smart phones, as well as DSLAMs, OLTs, cable hubs, routers, peering points, and application servers located beyond the home-based devices. The term “QoS data” is defined to include operational data 375 and/or performance data 380. One QoS data type collected is performance data, and QoS performance data report on the communication system's function. The non-exhaustive list below provides some examples of commonly collected QoS data. QoS data may be collected at regular intervals; however, the server 270 need not necessarily collect them at regular intervals. Different types of QoS data collections may have different time spans and intervals between those spans. When necessary, intense collection is possible (and may be desirable) for select equipment or devices at shorter intervals. Such rapid, intense collection may be event-driven in response to either equipment alarm reporting or server requests for said more intense data collection.
QoS data may be available from any, some, or all of the network connections and connectivity points along the end-to-end transmission paths illustrated in FIG. 2A. Some QoS data collections conform to well-established standards, others to developing standards/specifications, and yet others to equipment vendors' proprietary formats. To further illustrate implementation details about the QoS data that are collected across these diverse network connections and subsequently analyzed, a mathematical description is provided below. The quantity Xi,j,k(t) indicates QoS data, with subscripts defined as:
i denotes the QoS data's operational data type, which includes items from a set that may include, for example, {downlink/uplink data rates and/or throughput, downlink/uplink packet loss, retrain counts, signal-to-noise ratios, signal levels, interference levels, etc.}, and which can be current and/or historical. So, for instance, XSNR,j,k(t) would specify the signal-to-noise ratio associated with indices j, k at time t.
j denotes the device/equipment name type, which includes items from a set that may include, for example, {ALL, smartphone name/brand/type, TV name/brand/type, laptop/desktop name/brand/type, IoT device information and identifier, model numbers, associated internet protocol (IP) addresses or MAC (medium access control) addresses, OLT/DSLAM/CMTS name/brand/type, Wi-Fi Access Point name/brand/type, cell base station name/brand/operator/type, network router name/brand/type, etc.}. Here “ALL” indicates that the data Xi,ALL,k(t) is common to all (and therefore essentially independent of) device types.
This index may specifically include a link identifier, or at least associate with the equipment pair at the link's endpoints, so Xi,LINK,k(t) for the operational data type i and application k at time t.
k denotes the application data type, which includes items from a set that may include, for example, {ALL, WFH application brand/version, voice-over-IP (VoIP) service/application/brand/version, and similarly video-entertainment, gaming, and other in-home applications' identifying data, etc.}, where ALL indicates that Xi,j,ALL(t) applies to all applications.
t denotes the QoS data argument, which is the time when the performance data were collected. Time resolution can be different for different data types, and t can then be viewed as a timestamp for the other subscript-indexed data in the quantity Xi,j,k(t). Thus, XSNR,LINK,WFH-BRAND X(2022/05/11 21:00:00) specifies the SNR on a specific LINK when BRAND-X WFH is in use, collected on May 11, 2022 from 9:00 PM to 9:15 PM. For example, some data types are collected every 15 minutes (as in this example case, for illustration), some data types are collected daily, and yet other metrics are collected as event driven (i.e., only when a certain event, like a reboot, happens).
Ti,j,k(t) associates with the quantity Xi,j,k(t) as the time span that Xi,j,k(t) covers when all 3 indices and the timestamp are the same. For the above example, TSNR,LINK,WFH-BRAND X(2022/05/11 21:00:00)=15 minutes, 0 seconds means that the interval over which the SNRs were collected (there may be 1 or several if Xi,j,k(t) is a vector) corresponds to a time interval or span of 15 minutes. As such, a vector of 15 values would be 15 SNRs for LINK and application WFH-BRAND X at 1-minute intervals, taken each minute starting at 9:00 PM.
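The indexed quantity Xi,j,k(t), together with its time span Ti,j,k(t), can be modeled as a small keyed store: samples keyed by (data type i, device/link j, application k, timestamp t), each carrying a value vector X and a covered span T. The class and field names are illustrative assumptions.

```python
class QoSStore:
    """Illustrative store for X_{i,j,k}(t): QoS samples keyed by
    (data type i, device/link j, application k, timestamp t), each with
    a value vector "X" and its covered time span "T" in seconds."""

    def __init__(self):
        self._samples = {}

    def record(self, i, j, k, t, values, span_seconds):
        """Record one sample, e.g. 15 per-minute SNRs spanning 900 s."""
        self._samples[(i, j, k, t)] = {"X": values, "T": span_seconds}

    def lookup(self, i, j, k, t):
        """Return the sample recorded under the given indices and timestamp."""
        return self._samples[(i, j, k, t)]
```

The SNR example above would then be one entry: 15 values at 1-minute intervals under the key ("SNR", "LINK", "WFH-BRAND X", timestamp), with T = 15 minutes.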
Specific QoS data types, corresponding to the index i, that may be reported through Xi,j,k(t) include, but are not limited to, the following:
Passive data measurements may occur without alteration of the corresponding WFH Internet link's user data. Thus, passive data help determine performance, but are not a function of any injected testing patterns nor of the specific user data itself. Typically, the equipment itself generates or computes passive data measurements. Many QoS data are passive. The SNR in the examples above is an example of passive data. Other examples include:
Data rate and coding parameters (including Modulation and Coding Scheme, or MCS). Data rate is the number of bits passed per time unit over a link, without regard to whether the bits are user data or other forms of overhead information that may be used for redundancy with the coding scheme employed, for protocol headers, for IP address routing, for application-data segmentation, for flow-priority indication, or for other overhead purposes of the transmission system.
Throughput is the actual user data passed successfully per unit time. In most examples, this is less than the data rate and a more realistic estimate of actual use by applications. Throughput may be more accurately measured actively (see below). Throughput may also be viewed as a generic example of the ASC metric, where the application might be interpreted as “ALL” in the case of throughput, or more generally applicable to any application.
Packet-loss rate measures the number of data packets transmitted in error that cannot be recovered at a particular layer over a specified period of time. Typically, packet-loss rate refers to IP packets, but it can also refer to other segmentations associated with error detection and coding, sometimes called “code violations,” for example. These coding redundancies (or inserted check sequences) are part of the interoperable, standardized transmission formats and are not injected for testing purposes.
As such, these measurements are typically considered passive.
Errored Seconds measures the number of seconds in which at least one packet was lost over a longer time interval.
Signal strength, such as RSSI (Received Signal Strength Indication), measures the transmitted signal's power at the receiver, and it can often provide an electrical-equivalent estimate of the length (or distance) of the transmission channel.
Transmit power and/or power-spectral density (PSD) levels at the time when other measurements are taken; this measurement can be important in resolving interference issues.
Interference strength is often measured when no main signal is present, but it can also be reported indirectly so that the power level of interfering signals can be derived.
Signal quality, such as SNR (Signal-to-Noise Ratio) and SINR (Signal-to-Interference-plus-Noise Ratio), measures the power of the desired signal relative to all other interferences and noises, which can be crucial in determining reliably achievable data rates and throughputs, subject to an acceptable threshold error rate.
Scheduling delay and jitter measures are usually derived and reported by equipment from the average transmit-buffer-memory depth relative to data rate, as well as the variation (jitter) in such depth.
ASC productivity metric is an aggregated, employer-related measure of an employee's value contribution based on their productivity as related to their WFH connectivity.
Active data measurements include QoS data that are the result of measurement data injected into an internet link. Again, the following list is exemplary and non-limiting:
Speed-test QoS data measure transmission speed, data rate, or throughput between two selected link points with known inserted patterns. These can include:
end-to-end or segmented measurement (e.g., Wi-Fi only, access-link only, edge-to-application server, etc.)
of raw speed or throughput (for an index j that identifies which link).
invasive speed tests that flood the connection with data, causing other services to be disrupted, and which also often report an exaggerated indication of what a user would see with other users sharing the link.
non-invasive throughput measurement (a test of available throughput while other existing applications remain at their nominal priority).
Connectivity QoS indications of whether a device can connect to another point, particularly a server. Connectivity can be determined, for example, by checking the TCP SYN-ACK exchange between the device and target server(s) using standard TCP ports such as 80/443. In addition, connectivity can also be determined by checking DNS resolution between the device and the server. This DNS resolution information is important for later debugging. If a number of important applications are known, the connectivity (both TCP SYN-ACK and DNS resolution) between an agent in the gateway and the application servers can be periodically checked. For instance, XCONNECT?,PON6-to-DEVICE8PORT80/443,WFH(test time).
Transmission delay QoS data measures (one-half of) the round-trip time for a known packet to be sent and returned to its origin, while transmission jitter measures the variance of such transmission delay within the corresponding link's receive buffer for such round-trip measurements (whereas scheduling jitter is only on the transmit buffer associated with a device/equipment, so index j in both cases, but scheduling jitter is for launch into the link and transmission jitter is for round-trip traversal of the link):
End-to-end, or link-segment specific, so, for example, XJITTER-S,WiFi3toLAPTOP2,WFH(t).
Transmission delay can be measured against different measurement servers. For a known application-server list, the delay between the device/agent and the application servers (through index j), as well as the DNS resolution delay to the application server, can be periodically measured.
The application server may then have an explicit fixed index pairing of j with k for transmission-delay and jitter data reporting, to associate this jitter with an application.
Application-specific QoS data may often be available from applications' diagnostic registers and have index k. For example, Layer 4 RTP (Real-Time Protocol) and the associated Real-Time Control Protocol (RTCP) and Layer 6/7 WebRTC are protocols/languages used by applications to derive such application-specific QoS data. Applications may collect QoE data, as well as QoS data, for example using the RTCP protocol. The application-specific QoS data is active, whether spoofed or normal, and associates with an application. It applies to an end-to-end connection as XVoice-Packet-errors,Server-to-LAPTOP2,WFH(conference call 3). Many WFH applications collect end-to-end (Layer 7) video conference call specific QoS data for each participant, including the following:
Audio/video packet loss (uplink and downlink)
Audio/video codec data rate (uplink and downlink)
Some applications may combine these QoS data into application stability measurements and then report them upon query at the application's diagnostic page or information console. In-home Wi-Fi issues are a major source of poor connection quality for WFH applications, and Wi-Fi systems may report various QoS data via standardized interfaces, which can provide data about the nature of such degradations. These data can include, but are not limited to, measurements of interference and data rates achieved (minus coding overhead, but not necessarily minus other protocol overheads), possibly as a function of the different ports/paths and devices used, as may be indicated by a DSCP (Differentiated Services Code Point), which is here considered as tied to index k but would likely have the ALL category for indices i and j.
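The connectivity checks described earlier (a TCP SYN-ACK exchange on standard ports such as 80/443, plus DNS resolution between the device and the server) can be sketched with the standard socket library. The function names and default values are illustrative assumptions, and a gateway agent would run such probes periodically against a known application-server list.

```python
import socket

def check_tcp(host, port=443, timeout=2.0):
    """Illustrative connectivity probe: attempt a TCP handshake
    (SYN / SYN-ACK) to the target host on a standard port; returns
    True on success, False on refusal or timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def resolve_dns(name):
    """Illustrative DNS-resolution check; returns the resolved IPv4
    address, or None when resolution fails (useful for later debugging,
    as noted above)."""
    try:
        return socket.gethostbyname(name)
    except OSError:
        return None
```

The pair of results (handshake outcome, resolved address) would populate a connectivity entry such as the XCONNECT quantity given in the list above.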
Newer multiple-antenna Wi-Fi systems can also provide spatial-path indices/information within the same spectrum that are largely distinct, although this information is often internal to the Wi-Fi AP box or chipset today. The path index would correspond to subscript j, while the information corresponds to subscript k. Prioritization, discussed in detail below, may assign the spatial path to a specific application. Significant future standards efforts may help encourage box/chip manufacturers to report these types of useful information under i, including these spatial indices/information, known as the “array coefficients,” because such reported data could be crucial to improved WFH performance, including an improved QoE and ASC metric, that is not possible via the chip or the Wi-Fi AP box itself directly. DSL systems provide information and data similar to Wi-Fi (e.g., multi-user, vectored DSL systems already report the array coefficients that are the equivalent of advanced Wi-Fi's MIMO). In fact, DSL systems in the field today go further to indicate a full frequency-by-frequency analysis of the entire channel (or all sub-carriers or bins used), often known as the corresponding bit distribution and channel response (which have the names Hlog and Xlog and/or Hlin and Xlin in DSL standards). Noise power spectral density may be reported, or it may easily be derived from the other reported information. Cable modems have increasingly followed DSL systems in modulation type and reporting capabilities and thus have also advanced to include similar quantities. Cellular 3GPP systems have a rich set of information within the devices that is currently not well or universally reported externally to exterior, remote, or cloud-based management systems.
Such reporting would indeed be very useful in WFH situations, where the WFH interference environment is a more random aggregation of signals from several different ISPs and interference sources, as opposed to a more controlled office or campus environment where workers are at their desks or within the campus. This would be particularly useful in situations with 4G and/or 5G systems using unlicensed bands that also support Wi-Fi in the home. As shown above, the server 270 may rely on standardization-based measurements, measurements that a third-party ISP provides, measurements derived from other network metrics, and other ways known to one of skill in the art in which the server 270 may obtain information about the diverse set of connections within the WFH architecture.

Profiles and Configuration Data

Embodiments of the invention make use of a profile to set and report the system configuration. Management entities often determine the optimal, a near-optimal, or a more preferred profile for systems under their management. The server 270 is an example of such a management entity for WFH activities that may provide guidance or policy on profile setting, or that actually sets the profile directly, sometimes dynamically based on current and historical performance data 380. The term service profile refers to a subset of the full profile settings that comprises certain service-related parameters such as data rates and delay. User preference data can be a specific service profile or a set of acceptable service profiles. The term equipment profile means a set of possible tunable parameters. Profile here is used in the broadest sense, and it should be properly interpreted within any specific use case's context. In more restricted situations, the list of possible profiles from which selection can occur comes from a limited set. 
The quantity Si,j,k(t) denotes a profile, where the subscripts i,j,k have similar interpretations as for Xi,j,k(t) defined above, but more specifically:

i denotes a transmission-format profile data type that is not directly equipment nor application dependent, with examples including Modulation and Coding Scheme (MCS) recommendations or policies (i.e., data rates attempted, associated codes, physical-layer modulation constellation size, etc.), carrier-frequency band, specific channel within a frequency band, aggregation of bands, power levels, number of antennas when appropriate, DSCP data types, etc.

j, again, denotes the device type, model, brand, etc. associated with the profile, or can be a link indication.

k, again, denotes the application type associated with the profile.

Data-Collection and/or Profile Configuration Standards

QoS data collection's specific implementations vary widely today in industry. Some WFH, and more broadly connection-management, implementations may favor certain equipment, while others attempt to be open to all. In many cases, those supposedly open may in fact disguise preferences. A variety of standards organizations also attempt to provide widely available, industrially accepted norms for allowable profiles, with varying success. Fixed-line data collection standards for DSL and GPON have largely converged into a few methods, many (though not all) of which are specified by the Broadband Forum (BBF) through support from international equipment interoperability standards organizations. Similarly, cable management interfaces have increasingly converged to those supported by interoperability standards such as DOCSIS 3.1. Wi-Fi data collection methods, on the other hand, are less mature, including many proprietary definitions and methods. As a result, Wi-Fi data collection today to a server 270 may require a compatible agent-based API in the Wi-Fi access point (AP) 205 to accommodate equipment and chipset differences. 
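The (i, j, k) indexing above can be illustrated with a small sketch. This is a hypothetical illustration, not part of the specification: profile entries Si,j,k are stored keyed by index tuples, with "ALL" serving as a wildcard category for an index (as with the DSCP example tied to k but reported under ALL for i and j). All names and values here are assumptions for illustration.

```python
# Hypothetical sketch of (i, j, k)-indexed profile storage, where i is the
# transmission-format data type, j the device identifier, and k the
# application type. Names and values are illustrative assumptions.

profiles = {}  # S_{i,j,k}(t): keyed by (i, j, k) tuples

def set_profile(i, j, k, value):
    profiles[(i, j, k)] = value

def get_profile(i, j, k, default=None):
    # "ALL" acts as a wildcard category for an index, as with DSCP data
    # tied to index k but carrying the ALL category for indices i and j.
    for key in ((i, j, k), ("ALL", j, k), ("ALL", "ALL", k)):
        if key in profiles:
            return profiles[key]
    return default

set_profile("MCS", "laptop2", "conference", {"rate_mbps": 54, "band": "5GHz"})
set_profile("ALL", "ALL", "conference", {"dscp": 46})
```

A lookup first tries the fully specified tuple and then falls back to the wildcard entries, so an application-wide DSCP setting applies to any device when no more specific profile exists.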
Perhaps the best-known Wi-Fi data collection specification is Broadband Forum's TR-069, which many ISP-supplied access points use widely, but it suffers from known issues with very slow and limited data collection that largely prevent reasonable assistance to Wi-Fi connections. Furthermore, TR-069 provides no visibility into the Wi-Fi AP's subtended client devices. BBF's recently released TR-369 improves the client-device visibility, but it still has speed limitations as well as data collection limitations. The main limitation that remains in TR-369 is that it employs an approach to collection that does not collect most of the items necessary for effective Wi-Fi management. There exists a companion BBF standard, TR-181, which lists the various data elements (DEs) to be collected in XML (Extensible Markup Language); however, the list in TR-181 is also incomplete on needed items for effective management. Lastly, TR-369 is not yet widely deployed, as it is still in early stages of interoperability testing. The current Wi-Fi Alliance (WFA) EasyMesh work effectively extends the in-home portion of the BBF TR-369 capabilities. However, the two efforts are not viewed as mutually dependent, because EasyMesh operates only between access points and mesh points (MPs) or repeater/extender APs; as such, it does not extend to servers outside the home equipment. Other inside-AP-agent efforts have emerged through open-source forums such as the Rapid Deployment Kit (RDK) and PrplMesh efforts, and they now appear to be merging through the assistance of the BBF. These attempts to provide an open-source agent API for remote management will likely help the industry. 
However, there remains a real need for well-understood data-element definitions appropriate to improved future Wi-Fi management, and one such set of definitions has been proposed in the WFA, called “CMDi” (Cloud Management and Diagnostics interface), which would expand the data elements monitored and collected to be much more useful and helpful to cloud management of Wi-Fi networks. Servers that try to assist application servers and ISPs with WFH QoE-based methods would benefit from a standardized CMDi definition. These data elements might well be added then to TR-181 (thus potentially included in future versions of TR-369 and EasyMesh, as well as supported by future RDK/Prpl documents, drafts, and proposals). For applications, the previously mentioned IETF standards allow RTCP diagnostics on RTP traffic that flows on top of multimedia streaming with network-layer (3) UDP (User Datagram Protocol) to test or report on actual streaming data. This in turn allows data collection related to video-streaming quality to be among the collected data, here called QoS performance data. This is typically enabled through application-level WebRTC initiation of streaming. The application's QoS performance data then constitutes the feedback information from the application-service delivery to the application server, and presumably then also to the server 270. The server 270 may collect data across the diverse set of network connections using one or more of the technologies identified above that provide visibility into a corresponding network connection. One skilled in the art will recognize that other standards and technologies will likely arise in the future which will allow further visibility into network points for the server 270. The server 270 evaluates QoS data types, along with all QoE data described throughout the application, when they are available in any form. 
This QoE data may be exchanged with the server 270 using a variety of technologies including, but not limited to, REST (Representational State Transfer) or legacy SOAP (Simple Object Access Protocol). REST is an architecture that allows computer-to-computer interfaces, so that a server 270 could communicate, for instance, with an application server. Interfaces supporting REST are called RESTful. RESTful interfaces allow faster and more efficient transfer of information, usually in HTTP formats. SOAP is an older predecessor to REST and manages more device elements than pure data.

QoE Data Collection—User QoE Feedback and User Preference

User QoE feedback data are important to both service providers and application providers, and they are also important to the server 270's cloud-based remote management system. As previously mentioned, QoE user feedback data may be real time or delayed, and they may be direct or indirect. Dissatisfied customers result in increased costs to ISPs, such as support calls, technician dispatches, equipment replacements, and ultimately churn. Application providers, particularly WFH application providers, may find dissatisfied customers turning to other competing application providers who are able to deliver a better QoE. From the perspective of an average customer, poor QoE is typically interpreted as the application simply not working well as designed; the user does not necessarily recognize the underlying network connectivity problems. User QoE feedback data help to train (i.e., learn by an ML- or AI-based method) the proper relationships between the QoS data and the QoE data, so that QoS data can be used in the future to estimate QoE data accurately when such user QoE feedback data are not available. Actual user QoE feedback data or QoE estimates from QoS data may assist dynamic improvement methods in supervised or reinforced modes. 
An example of a dynamic configuration to set the next profile as a learned function of the QoS data is detailed below. One skilled in the art will recognize that other indices may be used within the function or mathematical variants thereof. Fi,j,k(t) denotes the user QoE feedback data 360, where the indices are similarly defined as above:

i is the type of user-specific feedback, for example, elements from the set {thumbs-down, thumbs-up, customer complaint and type, net-promoter score, chat complaint, corrective action previously taken, etc.}.

j is the device type, model, etc., as before.

k is again the application type identifier.

Real-time user QoE feedback data 390 may be instantaneous or substantially instantaneous, where delays are introduced by transmitting the data from a user interface to the server 270. Types of real-time user QoE feedback may include (1) input from within a collaboration platform application (such as the previously mentioned input of thumbs up/down, star rating, smiley face, etc.); (2) input from a separate application used to provide feedback on collaboration (such as a smart phone application or web-based application used during a collaboration session); (3) comments from chat/messaging streams running within a collaboration platform (for instance, streams may be monitored for comments on connection quality, or users may make comments with specific keywords); and (4) a user's direct comments on their own connection or on a specific person. This user QoE feedback data Fi,j,k(t) allows the QoS's and QoE's associated time span Ti,j,k(t) to be shorter in determining QoS correlation to QoE. A shorter time span means that any consequent corrective actions, as well as any reporting of issues, have the possibility of proactive resolution before a WFH customer becomes dissatisfied with the application and/or the ISP's service. 
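As a hedged illustration of how feedback Fi,j,k(t) and QoS data Xi,j,k(t) can be associated over their time spans Ti,j,k(t), the sketch below pairs each feedback event with the QoS samples for the same device j and application k whose measurement intervals overlap the feedback's interval. The record fields and values are assumptions for illustration, not the specification's data model.

```python
# Illustrative sketch: pair user QoE feedback events F_{i,j,k}(t) with QoS
# samples X_{i,j,k}(t) whose measurement spans overlap the feedback time.
# All field names and sample values are fabricated for illustration.

def spans_overlap(a, b):
    """True if intervals a=(a0,a1) and b=(b0,b1) intersect."""
    return a[0] < b[1] and b[0] < a[1]

def pair_feedback_with_qos(feedback_events, qos_samples):
    """Return (feedback, [qos...]) pairs matched on (j, k) and time overlap."""
    pairs = []
    for f in feedback_events:
        matched = [x for x in qos_samples
                   if (x["j"], x["k"]) == (f["j"], f["k"])
                   and spans_overlap(x["span"], f["span"])]
        pairs.append((f, matched))
    return pairs

fb = [{"i": "thumbs-down", "j": "laptop2", "k": "conference", "span": (100, 101)}]
qos = [{"i": "jitter_ms", "j": "laptop2", "k": "conference", "span": (95, 105), "value": 40},
       {"i": "jitter_ms", "j": "phone1", "k": "conference", "span": (95, 105), "value": 5}]
```

Pairing feedback with simultaneously measured QoS in this way is what enables a shorter span Ti,j,k(t) to be used when correlating QoS to QoE.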
Real-time user QoE feedback data 390 examples may include a “thumbs-down” indication and/or a “thumbs-up” indication, as well as other user ratings such as “1 to 5 stars” at the completion of a session. Such feedback data are examples of direct user feedback data. A “thumbs-down” indication is immediate feedback from a frustrated customer-user who made an effort to provide that indication. Rapid resolution can thus bring financial value to the ISP and also to the application provider. This may be particularly true if the feedback leads the employer paying for the existing WFH service to move to an alternative, improved, and more stable WFH service, or if the ISP's offered fail-over option to ensure a stable connection costs significantly more than the default connection. Real-time user QoE feedback data 390 can also arise from the immediate use of a chatbot or virtual assistant (e.g., Siri or Alexa, but possibly specific to WFH) to request help or to complain. Such feedback data are examples of indirect user feedback data. More sophisticated, evolving systems may even read the user's facial expression (e.g., in a video conference) or voice intonation to flag a real-time application or service concern. Certain video-conferencing applications (and other collaboration platforms) have the ability to add third-party extensions or bots within their framework. In certain embodiments, these mechanisms may be used to collect data on QoE as it pertains to connectivity and the collaboration platform. Also, certain collaboration applications (e.g., video-conferencing applications) report QoS metrics, and embodiments of the invention may use these reported metrics. WFH productivity is particularly sensitive to any issues with teleconferencing, videoconferencing, and other live group interactions. Different group members may press “thumbs-down” on another participant's communication because they cannot hear or see that participant, which the sender may not realize. 
In such situations, other group members can indicate a problem with the thumbs-down indication, and the WFH application can try to correct this through a profile change for the affected group member. The server 270, for example, may be able to repair the issue by improving the audio quality (e.g., through a profile change). A similar example can be a new profile's provision Si,j,k(next) for video and/or screen displays that are functioning poorly because of poor connectivity. This could be particularly problematic for a lecturer/presenter, or a teacher in distance-learning situations, whose valued inputs are lost or impaired for a large group of audience members. All group members' productivity or ASC productivity measure decreases because of repetition, restarts, delay of calls, etc. This is a particularly important form of real-time feedback, and a poor-performance icon/indicator can be clicked on the speaker's image or the speaker's name in the video-conferencing application. Substantially instantaneous QoE feedback data include reactions to comparison of a user's speed with respect to other system users, whether in the same conferencing session or not. Users seeing inconsistent availability of speed equivalent to their peers' are also at high risk of changing ISP, application provider, or both. Current information relative to their neighbors, co-workers, and friends is all helpful. Even more valuable would be similar comparisons of ASC productivity measures. Another example of user QoE feedback occurs with streaming (live) video failure and/or excessive delay, perhaps in a webinar for work, a speaking event, or a class lecture from a teacher. Yet another example is the video “LOADING . . . ” message that continues to appear and disrupts the flow of the viewing experience. Video streaming interruptions cause longer viewing time and annoy viewers. 
If one group of users finishes 15 minutes faster than another group viewing the same broadcast, and both groups may rejoin for subsequent discussion or events, this is a loss in the ASC productivity metric that scales across the group. Even if a stored streaming video is eventually viewed but with multiple interruptions, the viewer loses productivity and can also become frustrated. A user experiencing these delays loses valuable time, and thus has a lowered ASC productivity measure. Certain real-time user QoE feedback data may be estimated using a model previously learned, or perhaps one standardized for QoE estimation. Standardized examples include the ITU-T G.107 E-model that predicts whether a certain connection is sufficient to sustain good-QoE voice traffic. In effect, an advance warning, based on very recent history, suggests whether the path can or cannot sustain acceptable voice quality. A similar video-streaming prediction appears in ITU-T standard P.1201.2. Some Mean Opinion Scores (MOS) can also be instantaneously computed and used as estimates of, or proxies for, QoE. One skilled in the art will recognize that other standards and approaches will likely be developed to predict network connectivity, all of which may be implemented by the server 270 in managing the WFH network architecture. Delayed user QoE feedback data 385 expands the associated time span Ti,j,k(t) and may be valuable within the user feedback data 360. QoE indications 385 that are even days or weeks old, if they can be highly correlated with QoS data 380 in the same time span, may provide valuable learned models for estimating real-time QoE by the QoE estimator 350 when no actual real-time QoE is available. A history of complaint calls or texts may reveal many past outages or frustrations that the user experienced, and they may be correlated with QoS data 380 taken at those same times. 
Comparisons of these actual, but delayed, QoE data 385 with estimated QoE at those same times can further improve the estimation models. Additional examples of delayed user QoE feedback may be generated by (1) users being prompted at the end of a collaboration session for feedback; (2) users being surveyed occasionally on their connectivity; and (3) users commenting on their own connection or their experience with others. User QoE feedback data 360 may be used to label, and consequently help to learn (train), a functional mapping from QoS data to an index-specific reward estimate or Quality Function Q̂i,j,k(t), so QoS Xi,j,k(t) → Q̂i,j,k(t), of which the ASC productivity metric can be an example for overall WFH productivity. Qi,j,k(t) is described in much more detail below, but this learned functional mapping can occur, and generally does occur, through machine learning/artificial intelligence. In certain embodiments, the quality function Q̂i,j,k(t) need not be exactly an estimate of the user QoE feedback data Fi,j,k(t), the latter of which may be simply represented as 0/1 for good/bad, a net promoter score (say, 0 to 10), a mean opinion score (say, 0 to 5 stars), etc. The quality function measures QoE on a related but possibly different scale, for instance an employer's customized ASC productivity metric. More QoE feedback data taken under the same conditions help derive a more accurate functional description/mapping. This description may be later used for the estimate Q̂i,j,k(t+T) based on simultaneous QoS data Xi,j,k(t+T). The functional relationship can depend also on the similarly learned, appropriate profile setting Si,j,k(t+T). Artificial intelligence methods perform better with more input data and more accurately labelled data, for training (supervision) and adaptive tuning (learning). 
Various embodiments of the invention attempt to improve estimation of functional models and their predicted outputs, as well as improve profile settings dynamically. User preference data 370 is a set of user inputs that indicates the user's preferences on network operation. This user preference data, identified as Pi,j,k(t), sets prioritization rules and/or policies on a connection's devices, users, and/or applications, where the subscript indices again have similar meanings to those described before for X, F, and S, discussed above. This user preference data 370 is available to the server 270, and user preferences may be set by any, some, or all of the interested parties; i.e., the ISP, the application provider, a regulatory group, and/or the user/employer/customer themselves. The user preference itself may also be learned through actions taken by a user (like hitting the thumbs-down when not satisfied, and correlating that against settings). In effect, Pi,j,k(t) helps specify a preferred future profile Si,j,k(t) if that profile is permissible. The profile Si,j,k(t) may be directly specified in part (speeds, delays) or in full, the latter of which is unlikely to be directly specified by the user, given that the ISP and/or the application provider would likely wish to maintain a certain degree of control. The user preference Pi,j,k(t) thus combines with QoS data Xi,j,k(t), the existing profile Si,j,k(t), and user QoE feedback data Fi,j,k(t) to estimate a next (more optimal/improved) profile Si,j,k(t+T).

Learning Methods to Estimate QoE from QoS

The QoE estimator 350 implements learning methods to estimate QoE from QoS data 375 (operational) and/or 380 (performance). Other types of data may also be employed by the QoE estimator during this estimation process. 
The index-free QoE metric is Q(T) for time span T=(t0, t1), where t0<t1, and it is independent of the time span Ti,j,k(t); it is possibly a function of all the QoS operational/performance data Xi,j,k(t), the user QoE feedback data Fi,j,k(t), and the user-preference data Pi,j,k(t). T may span several measurement periods of Ti,j,k(t). The QoE metric Q(T) is a single value that represents the overall QoE of all households', or more generally collaborators' households', WFH activity, and thus may encompass many values for the indices i,j,k. Also, Q(T) is normalized (such that data is reorganized in storage to improve utilization in subsequent queries and analysis) in such a way that users can properly utilize that database for further queries and analysis. One example method of normalization balances or weighs new inputs and updates to the process against older versions of Q(T) previously estimated, which also affect the new values. Q(T) is therefore an understood number that can compare one household's QoE to another. So the earlier subscripted Qi,j,k(t) was operational, device, and application specific, and without these subscript indices it is an overall function. As such, the concept of the ASC metric becomes an aggregate over all the relevant indices and specifics of interest. Thus, Q(T) is:

Q(T) = ƒ( FiF,j,k(τ) ∀ iF,j,k with TiF,j,k(τ) ∩ T ≠ ∅,
          PiP,j,k(τ) ∀ iP,j,k with TiP,j,k(τ) ∩ T ≠ ∅,
          XiX,j,k(τ) ∀ iX,j,k with TiX,j,k(τ) ∩ T ≠ ∅ ).

The subscripts on the operational indices i denote that the operational data type may be different for the different user feedback F, user preference P, and operational/performance data X inputs to Q(T), and the boldface indicates there may be several QoS data types all included in a vector of QoS data that is input to the function ƒ(⋅). The time variable for the data here is τ to avoid confusion with the time span's endpoints T=(t0, t1). 
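One way to realize the normalization described above, balancing new inputs against older versions of Q(T), is an exponentially weighted update. This is a hedged sketch only; the weight alpha and the update form are assumptions, not prescribed by the text.

```python
# Hedged sketch of one normalization approach mentioned above: blend a newly
# computed Q(T) with the previously stored estimate so households remain
# comparable over time. The weight alpha is an assumed tuning parameter.

def update_q(prev_q, new_q, alpha=0.25):
    """Exponentially weighted update: newer spans move the estimate gradually."""
    if prev_q is None:
        return new_q
    return (1 - alpha) * prev_q + alpha * new_q

q = None
for observed in [1.0, 1.0, 0.0, 1.0]:  # per-span Q(T) values for one household
    q = update_q(q, observed)
```

The resulting running value remains on the same scale as the per-span Q(T), so one household's number can still be compared against another's.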
The function ƒ(⋅) may be a linear combination for each indexed data point in its arguments, and it may attempt to approximate a symmetrically positive (good) and negative (bad) quality index that is often viewed as a log-likelihood ratio. In this case, the Quality estimator 350 for a QoE metric uses linear regression. Linear regression is a machine learning method that adaptively determines this linear model by estimating the log-likelihood probability based on direct-feedback QoE training data. For example, to associate user QoE feedback, as indicated as good or bad in the value of FiF,j,k(t), with certain operational/performance data XiX,j,k(t) and user preference PiP,j,k(t), it is helpful to know which device j and application k were active at the time of user QoE feedback. Other functional choices and metric scales are possible and may correspond to a variety of learning methods, including, but not limited to, reinforcement methods that project potential further reward for an estimated QoE when actual QoE data is not available, all of which may be implemented within the Quality estimator 350. Examples of methods employed in the Quality estimator 350 are described below. In a first example, user preference data Pi,j,k(t) may indicate that the preferred service category is WFH services. A policy associated with this preferred service category would contain a list of device-and-application pairs for flow prioritization when WFH service is active, indicated by a boldface P(T)={(j1,k1), . . . , (jM,kM)}, which is this prioritized set of M ordered pairs of device and application indices for the time interval T. QoS data may also have only been collected for a list of N device-and-application pairs active during time interval T, X(T)={(j1,k1), . . . , (jN,kN)}. Correspondingly, user QoE feedback during time span T is a list of “thumbs-up” (no problem indicated) responses from different device and application pairs, F(T)={FTU,j1,k1, . . . 
, FTU,jK,kK}, where each F is a unary data point. Then, the QoE metric during the time span T simplifies to:

Q(T) = ƒ( FTU,j,k for (j,k) ∈ P(T) ∩ X(T) ),
where ƒ(⋅) = 1 if the set { FTU,j,k | (j,k) ∈ P(T) ∩ X(T) } ≠ ∅, and 0 otherwise,

and Q(T)=1 means good quality and Q(T)=0 means quality is poor or unknown. This equation bases QoE only on the user QoE feedback that corresponds to the preferred services/devices that were active during the time span T. In this example, user preference and operational/performance data thereby ‘filter’ the user QoE feedback data. In general, user preference and operational/performance data can qualify user QoE feedback such that the corresponding QoE metric Q(T) represents that of greatest user concern. In another example, there are two different user QoE feedback data types:

Thumbs-up FTU,j,k: again, a unary data point that equals 1 if the performance is good, and

Thumbs-up-or-down FTUD,j,k: a binary data indication that is 1 if the experience is good and −1 if the experience is bad.

Then, the estimated QoE during the time span T aggregates two different scores, which can be denoted as follows:

Q(T) = g( ƒ1( FTU,j,k for (j,k) ∈ P(T) ∩ X(T) ),
          ƒ2( FTUD,j,k for (j,k) ∈ P(T) ∩ X(T) ) ),

where ƒ1(⋅) = 1 if the set { FTU,j,k | (j,k) ∈ P(T) ∩ X(T) } ≠ ∅, and 0 otherwise, and

ƒ2(x1, . . . , xK) = 2⌊0.5 + (Σi=1K xi)/(2K)⌋ − 1,

which is the majority function with values 1/−1 if there are more/fewer 1's than −1's among the K inputs. The overall dual-input aggregation function is

g(x,y) = 1 if x=1 and y=1; −1 if x=0 and y=−1; and 0 otherwise.

These functions ƒ1, ƒ2, and g combine different feedback types to create a consistent QoE estimate Q̂(T) given any specific input data sets. 
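The aggregation just described can be sketched directly in code. Note one simplification that is an assumption of this sketch, not of the text: the majority function ƒ2 is implemented as the sign of the vote sum, with ties breaking to −1.

```python
# Sketch of the aggregation functions f1, f2, and g described above,
# combining unary thumbs-up and binary thumbs-up-or-down feedback for the
# preferred active (j,k) pairs into a single QoE estimate for time span T.

def f1(thumbs_up):
    """1 if any thumbs-up feedback exists for the preferred active pairs, else 0."""
    return 1 if thumbs_up else 0

def f2(votes):
    """Majority of +1/-1 votes; ties break to -1 (an assumption of this sketch)."""
    return 1 if sum(votes) > 0 else -1

def g(x, y):
    """Dual-input aggregation: 1 if (1,1), -1 if (0,-1), 0 otherwise."""
    if x == 1 and y == 1:
        return 1
    if x == 0 and y == -1:
        return -1
    return 0
```

For example, a span with at least one thumbs-up and a majority of positive thumbs-up-or-down votes yields Q̂(T)=1, while a span with no thumbs-up and a negative majority yields −1; mixed signals yield 0.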
QoE predictions may also derive from QoS inputs in the absence of user QoE feedback by solving the following functional-optimization problem (which can still be considered a type of supervised-learning method) to learn a functional estimator ƒ(⋅) for use when Q(T) cannot be directly computed because of the lack of user QoE feedback:

min over ƒ of g( d(Q(T1), Q̂(T1)), . . . , d(Q(TN), Q̂(TN)) )
s.t. Q̂(Tn) = ƒ( PiP,j,k(τ) for ∀ iP,j,k with TiP,j,k(τ) ∩ Tn ≠ ∅,
               XiX,j,k(τ) for ∀ iX,j,k with TiX,j,k(τ) ∩ Tn ≠ ∅ ),

where Xi,j,k(t) is QoS data, d(x,y) is the distance between x and y that represents the prediction error, and g(d1, . . . , dN) aggregates such prediction errors. Examples of such aggregation functions include counts of the number of errors larger than a certain threshold, a mean-square function, a sum of such mean squared errors, and so on. A simple example of predicted/estimated QoE uses the same assumptions on user preference and on QoS data as before, namely the pairs of ordered-pair index sets {P(Tn)={(j1,k1), . . . , (jM,kM)}, X(Tn)={(j1,k1), . . . , (jN,kN)}} that represent the user preference and operational/performance data for time interval Tn. The QoS data then might include:

X1,j,k(Tn), the average packet-error rate of device j for service k during time interval Tn, and

X2,j,k(Tn), the average delay jitter of device j for service k during time interval Tn,

with d(x,y) = x − y and g(d1, . . . , dK) = Σ di². The optimization that finds the best QoE predictor ƒ is

min over ƒ of Σn=1N ( Q(Tn) − Q̂(Tn) )²

such that Q̂(Tn) = ƒ( X1,j,k(Tn) for (j,k) ∈ P(T) ∩ X(T), X2,j,k(Tn) for (j,k) ∈ P(T) ∩ X(T) ). When the above optimization problem finds the best ƒ among all linear functions, this implements linear regression. When

d(x, x̂) = log( (px/(1−px)) ⋅ ((1−px̂)/px̂) ),

the best ƒ among all linear functions implements logistic regression, where px is the probability that the QoE training data is good. In general, the learning may use different computer methods to find the best QoE predictor based on QoS data, given user preference. 
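A minimal sketch of this supervised formulation, under the simplifying assumption of a single QoS feature (average packet-error rate) and labeled Q(Tn) values: closed-form least squares fits a linear predictor Q̂ = a·x + b that minimizes the squared-error sum. The toy data are fabricated for illustration only.

```python
# Hedged sketch of the least-squares optimization above with one QoS feature.
# fit_linear minimizes sum_n (Q(T_n) - Q_hat(T_n))^2 over linear predictors.

def fit_linear(xs, qs):
    """Closed-form simple linear regression: returns slope a and intercept b."""
    n = len(xs)
    mx = sum(xs) / n
    mq = sum(qs) / n
    var = sum((x - mx) ** 2 for x in xs)
    cov = sum((x - mx) * (q - mq) for x, q in zip(xs, qs))
    a = cov / var
    b = mq - a * mx
    return a, b

# Toy labels: higher packet-error rate corresponds to lower labeled QoE.
xs = [0.00, 0.01, 0.02, 0.04]   # average packet-error rate per span T_n
qs = [1.0, 1.0, 0.0, 0.0]       # labeled Q(T_n) values

a, b = fit_linear(xs, qs)
predict = lambda x: a * x + b   # Q_hat(T) from QoS alone
```

With two features (packet-error rate and jitter, as in the text), the same formulation becomes a multivariate least-squares problem; the one-feature closed form is used here only to keep the sketch short.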
Learning may also base the predictor on unsupervised learning methods in the case of rare (i.e., infrequent) availability of user feedback data. One example uses a multi-step approach with the same assumptions and notation as above:

Step 1 designates the QoS data vector as Xn,j,k = ( X1,j,k(Tn), X2,j,k(Tn) ), where X1,j,k(Tn) ∀ (j,k) ∈ P(T) ∩ X(T) and X2,j,k(Tn) ∀ (j,k) ∈ P(T) ∩ X(T). In certain embodiments, the vector Xn,j,k need not correspond to “label” or training-data QoE Q(Tn) from user QoE feedback, because that data is not available. Instead, the reinforcement learning training method uses unsupervised learning (e.g., training data is an estimate of accumulated expected future QoE) such as clustering. The learning finds clusters of good Xn,j,k using computer methods such as k-means. Designate ƒ(Xn,j,k) as the clustering function that outputs the cluster index given any vector Xn,j,k.

Step 2 finds the cluster(s) with the highest proportion of feature vectors associated with bad QoE. Here, Xn,j,k associates with bad QoE if there exists Q(Tn)=0 for Xn,j,k. The set of such cluster indices is CQ=1 = {c1, . . . , cq}.

Step 3 predicts the QoE for any feature vector Xn,j,k as follows: Q̂(Xn,j,k) = 1 if ƒ(Xn,j,k) ∈ CQ=1, and 0 otherwise.

The method continues to find clusters belonging to good QoE and modifies the prediction rule to include good QoE. This is a simplified example. Many other clustering methods or unsupervised learning methods are also possible and may be used within the QoE estimator 350. These include, but are not limited to, Q-learning that uses a hidden Markov model (or state machine) to update the clustering performance in a recurrent manner, where continuous user QoE estimates project into some infinite future time period over the possible future states and their presumed-stationary Markov probability-transition distribution. The probability distribution may also update in time at each step, while each step presumes that the updated distribution is stationary. 
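The clustering steps above can be sketched with a tiny k-means. Everything in this sketch is a fabricated assumption (feature vectors, cluster count, and which spans carry bad labels); the intent is only to show clustering QoS feature vectors and then predicting QoE by cluster membership.

```python
# Simplified sketch of the multi-step unsupervised approach: cluster QoS
# feature vectors with a tiny k-means, note which cluster is dominated by
# bad-QoE spans, and predict QoE for new vectors by cluster membership.
import math

def kmeans(points, k=2, iters=20):
    """Plain Lloyd's algorithm; initial centers are the first k points."""
    centers = points[:k]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            ci = min(range(k), key=lambda c: math.dist(p, centers[c]))
            groups[ci].append(p)
        centers = [tuple(sum(v) / len(g) for v in zip(*g)) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers

def assign(p, centers):
    return min(range(len(centers)), key=lambda c: math.dist(p, centers[c]))

# Fabricated (packet-error rate, jitter ms) vectors; bad-QoE spans sit high.
good = [(0.001, 2.0), (0.002, 3.0), (0.001, 2.5)]
bad = [(0.050, 40.0), (0.060, 35.0), (0.055, 45.0)]
centers = kmeans(good + bad, k=2)
bad_cluster = assign(bad[0], centers)

def predict_qoe(p):
    """0 (bad) if the vector falls in the bad-dominated cluster, else 1 (good)."""
    return 0 if assign(p, centers) == bad_cluster else 1
```

New feature vectors are then scored purely by which cluster they fall into, mirroring Step 3's membership-based prediction rule.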
Learned Reaction Data

Many WFH applications provide device/application-use indications and some Layer-7-based packet-loss data; however, these WFH applications are independent of the access connection. WFH solutions view these as active QoS data, and these applications typically do not account for the connection's effect upon the application user's QoE or ASC productivity metric in the WFH case. ISPs generally also fail to provide metrics that relate to worker-collaboration measures and application-specific connectivity through the network. Embodiments of the invention provide a QoE score (such as an ASC productivity score) that gives a company visibility into WFH employees/customers associated with specific ISPs. An example of such a metric is a WFH ISP-rank index or a QoE/ASC score in which data specific to an ISP is provided across a WFH network in which multiple ISPs are operating. One example of such a metric comprises the following steps:

Step 1: Collection of an ISP's subscribers' WFH QoE or ASC scores from a plurality of their subscribers. (For example, WFH QoE or ASC scores can be categorical data with values from {good (=1), bad (=−1), unknown (=0)}.)

Step 2: Aggregation of each subscriber's WFH QoE or ASC score over the specific time period of score measurement. For example, the aggregated metric might be the total time of poor WFH service QoE or ASC score (i.e., =−1) during a predefined period.

Step 3: Computation of a network-wide aggregated WFH QoE/ASC score over a plurality of subscribers. An example is the percentage of subscribers with poor QoE (=−1) for longer than some specific time during the measurement period (say, 1 day or 1 work week).

Step 4: Ranking of the ISPs according to their aggregated network-wide WFH QoE/ASC scores. In certain embodiments, this ranking may be geographically indexed for different service products (e.g., 100 Mbps downlink, 1 Gbps downlink, etc.). 
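The four steps above can be sketched end-to-end. The observation tuples, the bad-hours threshold, and the ranking order (lowest poor-QoE percentage first) are illustrative assumptions, not values from the specification.

```python
# Sketch of the four-step ISP-ranking metric: per-subscriber scores in
# {good=1, bad=-1, unknown=0} are aggregated into total bad time, rolled up
# network-wide, and the ISPs ranked. Data shapes and threshold are assumed.
from collections import defaultdict

# (isp, subscriber, hours_at_score, score) observations over one work week
observations = [
    ("ISP-A", "s1", 2.0, -1), ("ISP-A", "s1", 38.0, 1),
    ("ISP-A", "s2", 0.5, -1), ("ISP-A", "s2", 39.5, 1),
    ("ISP-B", "s3", 10.0, -1), ("ISP-B", "s3", 30.0, 1),
    ("ISP-B", "s4", 1.0, 0), ("ISP-B", "s4", 39.0, 1),
]

BAD_HOURS_THRESHOLD = 4  # a subscriber counts as "poor QoE" past this

def rank_isps(obs):
    # Steps 1-2: total time at bad score (-1) per subscriber
    bad_time = defaultdict(float)
    subs = defaultdict(set)
    for isp, sub, hours, score in obs:
        subs[isp].add(sub)
        if score == -1:
            bad_time[(isp, sub)] += hours
    # Step 3: percentage of subscribers exceeding the threshold, per ISP
    pct_poor = {
        isp: 100 * sum(bad_time[(isp, s)] > BAD_HOURS_THRESHOLD for s in ss) / len(ss)
        for isp, ss in subs.items()
    }
    # Step 4: rank best (lowest poor-QoE percentage) first
    return sorted(pct_poor, key=pct_poor.get)

ranking = rank_isps(observations)
```

Geographic or service-product indexing, as in Step 4, would simply add those fields to the observation tuples and group on them before ranking.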
Ranking can therefore be filtered by geographical area, service product, and so on, prior to presentation for a more meaningful comparison. One skilled in the art will recognize that the filtering, categorization, specific metrics, and time may be adjusted in accordance with various embodiments of the invention. WFH activities may also be associated with specific performance metrics, for instance, WFH QoE/ASC score versus usage intensity on a per-subscriber basis, possibly also indexed by time and geography. Some examples of such WFH performance metrics include (but are not limited to):

WFH-intensity metric: the WFH-intensity metric can be defined as WFH app use (e.g., WFH app use time/all apps' combined use time) during work hours. Application usage can be estimated based on network activity. Certain applications use the Internet when idle, while others do not. Each application's activity thus can correspond to use time above a certain (possibly learned) threshold. An application may be active not only at the time of network use, but also for the time interval between two consecutive network usages. This metric may be user sub-indexed.

WFH-distraction metric: the WFH-distraction metric can be defined as non-WFH app use (e.g., video, game, or social media app use time/all apps' combined use time) during work hours for a given user on a given device or set of devices. If the per-user metric is misleading or not possible, the cause may be another household member's video/game-application use. Therefore, the household size may be estimated based on the number of devices, and the device/application usage levels estimated through simultaneous-use review to distinguish the worker from other household members. In some cases, the worker mostly uses a company-issued device, which may allow easier determination of this metric. 
This metric allows for remote monitoring of employee time use.
WFH-collaboration metric: The WFH-collaboration metric can be defined as WFH collaboration app use (e.g., WFH communication application-use time/total work hours). The communication application-use time can include video/audio conferencing, email, and VoIP from the employer's assigned telephone number. This measures whether the employee is more or less collaborative than other employees, which might be further related to employee function and position.
WFH-user impact metric: The WFH-user impact metric identifies users most impacting the QoE of others and suggests where to apply connectivity resources to best improve overall organization effectiveness. This metric allows the WFH system to prioritize participants within a WFH activity and, based on this prioritization, assign resources such as redundancy to participants to improve overall performance of the WFH collaboration.
Overall WFH metric (i.e., ASC productivity metric): This is an aggregate metric (possibly learned) of the other WFH metrics, possibly filtered by time of day and other factors/parameters.
WFH productivity correlation metric (ASC productivity correlation metric): This metric could map the WFH score(s) to any of the below metrics:
Number of calls processed by a call-center technician
Number of deals closed (or total sales volume) by a salesperson
Number of lines of software code created by a programmer
Annual performance-review and/or pay-raise percentage
Bonus earned
Etc.
Labor-law compliance metric: Here the device/application use time is measured, reviewed, and compared to allowed time, with reporting to human resources or another appropriate authority. Risk of potential noncompliance may possibly be predicted as well.
WFH-network metric: This could be a Wi-Fi quality metric (e.g., poor-Wi-Fi working hours/total work hours). This metric may be filtered by the Wi-Fi problem source (e.g., coverage, interference, etc.)
and possibly separated into uplink and/or downlink congestion and related to unstable fixed-line connectivity. The Wi-Fi problem's correlation with other WFH metrics may also help in estimating productivity loss associated with poor Wi-Fi performance. All the indices can be tabulated versus time to indicate use schedules. Consistent problematic (or excellent production) patterns that repeat, sometimes called ergodicities, can be learned and exploited to improve future productivity.
Analytic Result Delivery
FIG. 4 is a method for the analytics' presentation or delivery according to various embodiments of the invention. In certain embodiments, the method presents results of WFH data analytics to any, or all, of the users (consumers/workers), ISP, or application provider. Data collection 410 accepts and formats user feedback and preference 415 together with user feedback, performance, and operational data 420 from any, or all, of the sources of data described above. In certain embodiments, this data may be filtered and reduced prior to storage and presentation. Note that one skilled in the art will recognize that the application provider and the company that uses the application may be different or the same. Data analysis 430 receives the collected data and performs one or more of the data analysis processes described within this application. This analysis may generate one or more of the metrics (including the ASC productivity metric) identified and discussed within this application. These generated metric(s) are then formatted in accordance with one or more of the preferred formats, as discussed within this application, and presented to a user 450, an ISP 455, an application provider 465, and/or an application customer 460. The manner in which these results are provided may vary such that each entity may receive the results directly or indirectly (as shown in FIG. 4, where the application customer receives results from the application provider).
Feedback 415, 420 may then be updated, for example, with a new user preference or new operational/performance data over time, and the new feedback will then be collected by the data collection 410 block. The application provider may adjust its audio and video compression schemes to match the user's available throughput. Certain implementations allow such adjustments and may in fact attempt them in real time, while others may allow historical performance to be appropriately added to the decision process on such dynamic matching of the compression scheme to the expected throughput. In particular, applications will not have access to connection performance when they are not active, but the server 270 would have such data and may extract its consistencies and use them. This allows a consequently more consistent QoE to be administered over sessions and during any individual session, resulting in a higher ASC productivity score. The application may also elect to suggest/disable video if all available bandwidth is needed for good audio QoE alone. Situations where quality is so poor that other WFH indices and ASC scores would be unacceptably low can be flagged, and a recommendation for rescheduling of the call may be provided. A distribution channel for the performance improvements may be the application provider, but by sharing joint analytics with both the application provider and the ISP, the improvements may also benefit the ISP. This improvement may alleviate the ISP's trepidation about shared network-performance reporting and mapping. Alternatively, the ISP may be the application provider itself in certain embodiments.
Dynamic Network Improvement—Self and/or Proactive Help
Dynamic improvement adapts tunable control parameters to improve QoE or ASC metrics according to various embodiments of the invention. These parameters can be, for example, Si,j,k(t).
A policy specialized for a preferred service category may dynamically improve the profile as a function of the internet-connection-supported applications'/devices' QoE; for example, improvement of the actual QoE data Q(T) or the estimated QoE data {circumflex over (Q)}(T). This improvement, emphasized in certain embodiments with particular focus on the WFH application and the ASC metric as the utility measure, is described in detail below. The improvement may occur at many layers of the OSI stack. The ISP usually provides its services over layers 1 to 4, while WFH application providers' services usually use layers 4 to 7. The demarcation between ISP and application provider is not always clean or definitive, and either or both may attempt to provide all 7 layers. Improved profiles may specify control parameters at any, a combination of, or all layers. QoE is usually measured at higher layers while most QoS parameters characterize lower layers, with certain exceptions recognized by one of skill in the art. In one example, this relationship may be exploited by connecting the layer 7 QoE feedback data Fi,j,k(t), or its estimates {circumflex over (F)}i,j,k(t), and/or the estimated QoE data {circumflex over (Q)}(T) to the improvement of lower layers' profiles Si,j,k(t). Resulting improvement methods may use this relationship to improve the WFH application's QoEi or ASC metric, examples of which are provided below. FIG. 5 generically depicts a home gateway 283 that physically sits at the entry or demarcation point between the ISP's to-the-home service and the in-home network in accordance with various embodiments of the invention. The ISP's service often includes supply of the gateway, and this gateway may have local-area-network (LAN) capabilities for Wi-Fi and/or Ethernet connections to the home's application devices. In this ISP-supplied-gateway example, the ISP provides the service to the device(s).
In other examples, certain homeowners purchase retail gateways, and then another ISP may provide the in-home service. Such customers often connect their separate retail box to the ISP-provided gateway, but the retail box then provides the in-home device connectivity, usually via a Wi-Fi AP from the retail box. Referring toFIG.5, first WAN Tx510and Rx511interfaces with a first wide area network with a first WAN buffer (Queues)512stores data prior to being transmitted on the first wide area network. A second WAN Tx515and Rx516interfaces with a second wide area network with a second WAN buffer (Queues)517stores data prior to being transmitted on the second wide area network. Network management across the first and second WANs is provided by at least one ISP. A bus520interfaces the WAN interfaces with the LAN and Ethernet interfaces. As shown, first LAN Tx530and Rx531interfaces with a first local area network with a first LAN buffer (Queues)532stores data prior to being transmitted on the first local area network. A second LAN Tx535and Rx536interfaces with a second local area network with a second LAN buffer (Queues)537stores data prior to being transmitted on the second local area network. An Ethernet Tx540and Rx541interfaces with an Ethernet network with an Ethernet buffer (Queues)542stores data prior to being transmitted on the Ethernet network. A WFH software agent550allows the WFH architecture to monitor network connectivity issues and metrics as well as employee activity on the network. In certain embodiments, profile prioritization555stores, analyzes and improves traffic using profile information, a policy module560stores and analyzes policy information which, in certain embodiments receives performance570and preferred service category information575. 
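A minimal object model of the FIG. 5 elements just enumerated may help fix the structure; the class and method names are illustrative assumptions, and the comments map them to the figure's reference numerals.

```python
from collections import deque

class Interface:
    """A Tx/Rx pair with its buffer (queue), as in FIG. 5 (sketch)."""
    def __init__(self, name):
        self.name = name
        self.queue = deque()             # stores data prior to transmission

    def enqueue(self, packet):
        self.queue.append(packet)

    def transmit(self):
        return self.queue.popleft() if self.queue else None

class HomeGateway:
    """Sketch of the FIG. 5 gateway: two WAN interfaces, two LAN interfaces,
    one Ethernet interface, plus hooks the WFH software agent 550 could use
    to observe activity (hypothetical API)."""
    def __init__(self):
        self.wan = [Interface("WAN1"), Interface("WAN2")]   # 510-512, 515-517
        self.lan = [Interface("LAN1"), Interface("LAN2")]   # 530-532, 535-537
        self.eth = Interface("ETH")                          # 540-542
        self.metrics = {}                                    # agent observations

    def bus_forward(self, dst, packet):
        """Bus 520: move a packet onto an interface, recording activity."""
        self.metrics[dst.name] = self.metrics.get(dst.name, 0) + 1
        dst.enqueue(packet)
```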
FIG. 5's two WANs may correspond to the same ISP who provides, for instance, both a fixed-line connection and a wireless cellular (or fixed-wireless access) connection to the home, but different ISPs are also possible. In FIG. 5, where two different ISPs provide the WAN backhauls to the home gateway, there can be many different combinations, including but not limited to two separate DSL connections, one DSL connection and one cable-modem connection, one PON connection and one cable-modem connection, one DSL connection and one wireless cellular connection, etc. Multiple-WAN-interfacing functionality need not be fully present physically nor virtually in a single box (or software-defined equivalent thereof). The ISP, for example through the assistance of a server 270, may inform the gateway of the relevant in-home and to-the-home profile settings to use. The gateway may request alternative profiles or reject profiles that it cannot (or does not desire to) implement. The server 270 directly improves the profiles or sometimes supplies improvement policy to a supportive gateway. The gateway may share WAN connectivity between multiple networks (such as two different residential structures, condominiums, and/or apartments) to allow WFH users to have the benefit of redundant WAN backhaul connectivity. Bus 520 connects the various queues' WAN packets to/from the local area networks (LANs), which may be wired or wireless. In certain examples, the gateway's bus 520 may be an Ethernet switch that aggregates/de-aggregates such traffic. The Ethernet layer 2 packets will contain the (at least) 6-byte source and destination medium access control (MAC) addresses that are unique to each connected device. In certain optional embodiments under IEEE standard 802.1Q, four additional VLAN (Virtual LAN) bytes after the source/destination MAC fields contain an identifier for a subnet.
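The layer-2 fields just described (6-byte destination/source MACs, followed by the optional four 802.1Q VLAN bytes) can be extracted from a raw Ethernet frame roughly as follows. This is a simplified sketch of standard 802.1Q framing, not the patent's implementation.

```python
import struct

def parse_ethernet(frame: bytes):
    """Return (dst_mac, src_mac, vlan_id, priority, ethertype) for a frame,
    handling an optional IEEE 802.1Q tag. vlan_id/priority are None if untagged."""
    dst = frame[0:6].hex(":")
    src = frame[6:12].hex(":")
    (tpid,) = struct.unpack("!H", frame[12:14])
    if tpid == 0x8100:                       # 802.1Q tag present
        (tci,) = struct.unpack("!H", frame[14:16])
        priority = tci >> 13                 # 3-bit PCP (802.1p class of service)
        vlan_id = tci & 0x0FFF               # 12-bit VLAN identifier (the subnet tag)
        (ethertype,) = struct.unpack("!H", frame[16:18])
        return dst, src, vlan_id, priority, ethertype
    return dst, src, None, None, tpid        # untagged: field 12-13 is the EtherType
```

In combination, as the text notes, these fields identify source, destination, and type of flow.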
Applications such as IPTV and VoIP may use a dedicated subnet; thus, the VLAN field is often unique to the application carried on the LAN. This identifier is known as a VLAN tag. In combination, this allows unique identification of source, destination, and type of flow. One skilled in the art will recognize there are other methods by which devices within a LAN or Ethernet network may be identified using embedded tags within a data stream. In certain embodiments, the wireless LAN (or WLAN) is Wi-Fi in either or both of the 2.4 GHz and the 5 GHz bands, or in the future the newly arising Wi-Fi in 6 GHz. In other embodiments, the wireless LAN may be one of the Bluetooth or Zigbee wireless alternatives as well. In yet other instances, it can be a small-cell (e.g., femto, pico, and 5G) network supporting low-power cellular in-home connectivity. As previously shown, there are also queues to/from the LANs that prioritize certain applications' data to/from the in-home devices, again potentially useful for WFH application prioritization and thereby able to improve the ASC productivity metrics. Both WAN and LAN queues can be physically timed and stored in memory, and they also can correspond to the use of different frequency bands, different channels within those frequency bands, different spatial streams simultaneously from multiple memories or paths, etc. These queues introduce connection delay or latency. Some applications tolerate delay better than others, which can also be important in network improvement or optimization. Furthermore, delay tolerance can also be a function of the queue's size or depth (overflowing a queue leads to data loss). Network improvement may dynamically change the profile consisting of various network component configurations. Again, the profile Si,j,k(t) may have many possible configurations for each of many users (human and/or machine), devices, and applications that share the WAN and LAN links.
For example, a Wi-Fi channel may be ideal for a user in the living room, but this same Wi-Fi channel's use may interfere with signals to another Wi-Fi user in the kitchen. Therefore, the best configuration often prioritizes the most mission-critical application/device while maintaining reasonable communication quality with the rest. In certain embodiments, profile prioritization 555 considers at least one or more of the following: (1) the best prioritization policy; (2) devices, users, and/or applications that may need prioritization, including to improve ASC productivity metrics; and (3) communication subsystem configuration in prioritization. Prioritization may occur in many OSI layers across many different network sub-systems. For example, an ISP may prioritize profile parameters in layers 1 to 4 in its equipment configuration (sometimes called provisioning). To improve the in-home network further, the software agent 550 inside a home gateway 283 collects data and manages configurations according to the preferred service category 575 and the policy 560. The agent 550 may also permit direct WFH imposition of a profile 555. The agent 550 can improve network conditions through the following steps in an example embodiment:
The first agent workflow step collects QoS performance and operational data Xi,j,k(t). This data collection may be periodic or event-driven. Data collection event triggers include a data use change, a service profile change, users' QoE feedback, etc. The agent 550 may use different collection intervals or event triggers for different data types. If the agent 550 connects directly to a UI, the agent may collect user preference Pi,j,k(t) or user QoE feedback Fi,j,k(t). The server's policy guides this data collection step in certain embodiments.
The second agent workflow step collects the permitted service profile set Si,j,k(t).
The server 270 may recommend the preferred service category after either receiving the user's direct preference through the UI or automatically through service category prediction. When the policy permits the agent to predict the preferred service category based on the performance and operational data, the agent 550 may determine the preferred service category under the server's ASC policy. Alternatively, if the agent 550 connects to the UI, the agent may procure a user-preferred profile directly.
The third agent workflow step applies the policy 560 associated with the preferred service category to improve the profile 555. This policy 560 may include various trigger conditions for different profiles' applications.
The fourth agent workflow step monitors system performance, and the agent 550 may use the system performance to trigger re-profiling. Embodiments can estimate the preferred service category's ASC metric periodically to monitor the prioritized applications'/devices' performance. Embodiments may also estimate other service categories' QoE periodically and monitor the de-prioritization's performance degradation for possible UI display. The server 270 may be responsible for estimating the ASC metric based on historical data and then recommending profiles or policies to the agent for implementation.
Finally, the agent 550 updates the preferred service category if it receives a new preferred service category 575 or if the current preferred service category 575 is no longer desirable. For example, a user can select another service category or disable the current preferred service category using the UI, or the agent can switch the service category 575 to another one if the current service category has expired. For the latter, the next service profile 575 may have been specified by the user when the current service profile started or by computer-implemented methods described herein.
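The agent workflow steps above can be sketched as one pure-functional pass. The interfaces (`collect_data`, `estimate_asc`) stand in for whatever data-collection and server-supplied ASC-estimation APIs a given deployment provides; all names and the threshold are assumptions.

```python
def select_profile(permitted_profiles, score):
    """Step 3 (illustrative policy): choose the permitted profile with the
    best estimated ASC metric for the preferred service category."""
    return max(permitted_profiles, key=score)

def agent_cycle(collect_data, permitted_profiles, estimate_asc,
                reprofile_threshold=0.5):
    """One pass of the agent's workflow (sketch, hypothetical interfaces).

    Step 1: collect performance/operational data X_ijk(t).
    Step 2: take the permitted service profile set S_ijk(t).
    Step 3: apply the policy to pick a profile.
    Step 4: a poor estimated ASC metric flags the need to re-profile.
    Returns (chosen_profile, needs_reprofile).
    """
    data = collect_data()                                               # Step 1
    profile = select_profile(permitted_profiles,                        # Steps 2-3
                             lambda p: estimate_asc(p, data))
    needs_reprofile = estimate_asc(profile, data) < reprofile_threshold # Step 4
    return profile, needs_reprofile
```

The final update step (switching the preferred service category on expiry or user request) would wrap this cycle in a loop that swaps `permitted_profiles` and the estimator when the category changes.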
To differentiate the devices' and/or applications' QoE, the transmit queues or "buffers" may use the priority queues illustrated in FIG. 6 in an example embodiment. Priority queuing provides each device within a home network at least one application queue 610-613. Priority queueing routes packets from different applications and/or devices to different queues and then outputs these according to the priority. Each queue may additionally include a traffic shaper 615 that may limit that queue's input and output data rates. As mentioned above, some priority queues 610, 612 may themselves partition into several queues with different priorities, and the packets from a high-priority device or application pass more quickly through the high-priority queue. In certain embodiments, the queue may implement active queue management (AQM), which preemptively drops packets to prevent congestion or excessive buffering within corresponding buffers 620. Certain gateways may support AQM methods such as Random Early Detection/Discarding (RED), which drops packets selectively before the queue is full, and Controlled Delay (CoDel), which limits the delay that prioritized packets may remain in queue. In certain embodiments, home gateways may prioritize certain applications or devices based on a WFH situation. For example, a WFH user might mandate that a WFH tool or application in the associated preferred service category be prioritized. The server's policy and profile improvement then prioritize profiles for the corresponding WFH application and device. The policy also may specify how to detect and associate the WFH device and application with the preferred service category. The profile further allows prioritization to be implemented within different home network subsystems.
For example:
WAN profile can improve WFH applications' fail-over, applying it only to the associated WFH applications and devices.
LAN profile can assign WFH applications and devices to the best QoS paths/links; this may include improving Wi-Fi spatial streams and associated beamforming to better serve the WFH device.
WFH applications' data may be input to higher-priority queues.
After prioritization, the WFH system monitors the WFH QoE data (or ASC metric) directly or with the estimated QoE. Learning methods then improve prioritization for the best QoE or ASC score. This prioritization may also depend on user preference data or other data described herein that may be used to improve one or more network connections or metrics. QoE monitoring of lower-priority applications and devices also maintains the improvement's impact, such that these lower-priority applications and devices experience minimal performance loss while the system retains an acceptable or best ASC score.
Application and Device Prioritization
In certain embodiments, application and device prioritizations typically exist even above Layer 7 and can assist ASC productivity optimization or improvement, or more generally, QoE optimization or improvement. Different applications may have different network usage behavior. Certain multimedia applications may use non-standard TCP/UDP ports instead of standard ports such as the TCP:443 port for HTTPS. In accordance with certain embodiments, particular equipment can detect these applications' message flows based on the communication port use and may thus use WFH-provided policy to prioritize some of these flows. Other applications may use standard ports, but they may create different use patterns that are detectable. For example, non-real-time video streaming applications use more downlink than uplink bandwidth, whereas video conferencing applications often exhibit similar bandwidth usage for uplink and downlink. Additionally, some applications may be identified by their IP-destination address.
Server 270 policies then may direct equipment to prioritize based on port number, usage patterns, or IP addresses.
Layers 2-4: Flow Prioritization
The flow prioritization described in this section may be used instead of or in addition to the application and device prioritization described in the previous section. OSI layers 2 to 4 packet header information allows traffic flow identification and can be used to optimize or improve prioritization on different communications links. Profiles can dynamically specify various layer 2 to 4 parameters to prioritize, optimize, or improve the ASC metric. FIG. 7 illustrates flow identification possibilities within a home gateway. In this figure, device types 710, media ports 720, and DSCP/WMMs 730 are configured to support specific flows across a variety of WFH applications 740. One skilled in the art will recognize that there are numerous combinations that can be implemented to support different flows and network performance and associated metrics. Examples of consideration points in determining flow prioritization are given below:
Layer 2 traffic flow type identification may use the VLAN tag because applications such as VoIP or IPTV often use a dedicated VLAN. In addition, certain Ethernet flows allow in-packet header specification of one of 8 priority levels, using IEEE Standard 802.1p CoS (Class of Service), which can identify traffic type, as well as possibly permit WFH packet flows' prioritization (specified or changed en route, for instance, at the gateway).
Layer 3 IP packet headers may contain a Differentiated Services Code Point (DSCP) marking that depends on the packet types. This marking can flag applications, such as VoIP or streaming video, if detected in any link to the end device. A gateway WAN or LAN queue can then prioritize such WFH packets accordingly.
Layer 4 processing may use certain special TCP/UDP ports for media (voice/video) delivery, allowing application identification by port number.
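The layer 2-4 identifiers above (VLAN tag, DSCP marking, TCP/UDP port) can feed a priority queue as in the following sketch. The rule values are examples only: the Zoom UDP port range 8801-8810 is taken from the text below, while the VLAN ID and the DSCP values EF (46) and AF41 (34) are illustrative assumptions.

```python
import heapq

# Example rules only: map flow identifiers to a priority (0 = highest).
VLAN_PRIORITY = {100: 0}                            # e.g., a VoIP-dedicated VLAN
DSCP_PRIORITY = {46: 0, 34: 1}                      # EF (voice), AF41 (video)
PORT_PRIORITY = {p: 0 for p in range(8801, 8811)}   # Zoom media UDP ports
DEFAULT_PRIORITY = 3

def classify(vlan_id=None, dscp=None, dst_port=None):
    """Layers 2-4: derive a queue priority from whichever identifiers are present."""
    candidates = [DEFAULT_PRIORITY]
    if vlan_id in VLAN_PRIORITY:
        candidates.append(VLAN_PRIORITY[vlan_id])
    if dscp in DSCP_PRIORITY:
        candidates.append(DSCP_PRIORITY[dscp])
    if dst_port in PORT_PRIORITY:
        candidates.append(PORT_PRIORITY[dst_port])
    return min(candidates)

class PriorityQueue:
    """Output queue that releases lower-numbered (higher-priority) packets first."""
    def __init__(self):
        self._heap, self._seq = [], 0

    def push(self, packet, **flow_ids):
        heapq.heappush(self._heap, (classify(**flow_ids), self._seq, packet))
        self._seq += 1      # sequence number keeps FIFO order within one class

    def pop(self):
        return heapq.heappop(self._heap)[2]
```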
Consequent to the identification, a traffic flow's prioritization can thus occur in Layers 2 to 4 in certain embodiments. Home gateways provide priority queues where the TCP/UDP port number or the DSCP marking permits priority determination. ASC optimization/improvement policy can direct this priority's re-marking. For example, Zoom voice packets' prioritization adds UDP ports 8801-8810 to the supporting gateway's (or, more generally in the network, router's) high-priority list. DSCP re-marking can find use in the priority queue and also in other communication sub-systems. For example, routers with Differentiated Services capability prioritize packets with high-priority DSCP markings, and supportive Wi-Fi gateways can send these packets first in the home, before lower-priority traffic, when Wi-Fi Multimedia (WMM) is enabled in the Wi-Fi gateway. The IP header's DSCP marking can determine this prioritization, which the application commonly sets and the recommended profile incorporates, if the application supports the WFH management's reprofiling policy and uses such DSCP marking. However, the gateway can also re-mark the DSCP priority even without the application's support. For example, a gateway may re-mark DSCP 54 for certain non-audio traffic for time-sensitive, remote-control applications to minimize the control delay. Additionally, to support faster DNS look-up for prioritized applications, the gateway's local host file can add prioritized servers' IP addresses. WAN interface optimization or improvement can also prioritize a data flow to optimize or improve network performance and associated metrics. Referring to the example in FIG. 5, the home gateway may connect to two WANs, such as DSL as the main broadband service and a metered LTE as a back-up. When the main broadband service's performance degrades, fail-over can redirect high-priority devices'/applications' packets through the back-up LTE link.
These packets can also pass through both WANs 510, 515 to increase the prioritized application's overall data rate. The consequent multi-path transmission may dynamically use one or more estimated ASC metrics through the server's direct reprofiling, or through policy specification that enables immediate profile change upon the WAN router's contention sensing. This multi-path control prioritizes the routing within a 2-WAN mesh network. If the WAN additionally supports multiple network slices and the poor QoE or ASC metric results from the current network-slice choice, then the high-priority devices'/applications' packets may switch to another higher-priority slice as a function of the measured WFH QoE or ASC metric. For example, if the WFH QoE or ASC metric consequent to a slice's use (e.g., Enhanced Mobile Broadband) is lower than a threshold, the WFH packets may dynamically switch to a different slice such as an Ultra-Reliable Low Latency Communications slice. The WFH QoE or ASC metric improvement from WFH devices' and applications' prioritization may also arise from less important applications'/devices' de-prioritization. Some of a WFH user's other home applications/devices need no priority to maintain that user's acceptable WFH QoE or ASC metric. Typically, highly interactive applications need priority, and non-interactive applications may safely reduce priority to improve the interactive applications' performance. For example, cloud-storage applications' simultaneous de-prioritization during a video-conferencing session can improve an ASC metric or other related network connectivity metrics, even if both cloud-storage and video-conferencing applications nominally have high priority. In addition, de-prioritization of some non-WFH applications/devices, such as gaming and game consoles, can improve ASC metrics and other related network connectivity metrics.
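The threshold-based fail-over and slice-switching decisions described above can be sketched as two small rules. The threshold value, interface names, and slice names used as defaults here are illustrative assumptions.

```python
def route_packet(priority, asc_metric, asc_threshold=0.6,
                 primary="dsl", backup="lte"):
    """Redirect high-priority WFH traffic to the back-up WAN when the
    measured WFH QoE/ASC metric on the primary degrades (sketch).
    priority 0 = highest; lower-priority flows stay on the primary WAN."""
    if priority == 0 and asc_metric < asc_threshold:
        return backup        # fail-over applies to prioritized flows only
    return primary

def choose_slice(asc_metric, asc_threshold=0.6,
                 current="eMBB", fallback="URLLC"):
    """Switch WFH packets to a higher-priority network slice when the
    ASC metric on the current slice falls below the threshold (sketch)."""
    return fallback if asc_metric < asc_threshold else current
```

Sending prioritized packets over both WANs simultaneously, as the text also allows, would replace `route_packet`'s single return value with a list of interfaces.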
In certain embodiments, de-prioritization and prioritization can use the same reprofiling, except with packets' reassignment to the lower-priority queue or the lower-priority WAN/slice. In addition, traffic shapers 615 can apply a strict throughput limit on deprioritized devices.
Layer 1-2: Signal Level Improvement
The signal level improvement described in this section may be used instead of or in addition to the application and device prioritization and/or the flow prioritization described in the previous sections. Profile optimization or improvement can also prioritize applications and devices at OSI Layers 1 and 2 to improve ASC metrics. In certain embodiments, Layer 1 and 2 profile optimization or improvement provides opportunities to prioritize different applications and devices because there is typically not a universal communication signal that can best serve all applications. As such, layer 1/2 profile choices intrinsically consider the specifics of the link, application, and device. For example, a long coding delay may stabilize a DSL link at the cost of increased latency. Such delay may have little consequence for one-way streaming video, but it may degrade interactive video conferencing. Wi-Fi interference may differ within the home, so selecting a channel with less interference for one device may increase the interference for another device. A particular device's or application's QoE may be highly sensitive to Layer 1 and 2 profile settings. WFH application providers may consider such optimization or improvement as a way to improve their products' user QoE or ASC metric, even if the issue is in the network, because a competitive application provider who does address this Layer 1/2 QoE or ASC metric effect will consequently have a better product take rate as well as a better customer retention rate. In other instances, two relatively highly prioritized applications may be operating concurrently across different users within the same household or network.
For example, a household may have a first user engaged in WFH activity, such as a videoconference, and a second user engaged in online education. Embodiments of the invention may learn to improve prioritization across these two applications based on different, yet competing, ASC metrics such that the overall performance of the shared network is improved. The following examples suggest possible prioritizations of Wi-Fi's physical or PHY layer (layer 1) and the MAC layer (layer 2) for different applications/devices:
Channel Selection: Wi-Fi can operate in 11 different channels in the 2.4 GHz band that the IEEE 802.11b/g/n/ax standards specify. These different channels can have different propagation, and particularly interference, characteristics. The interference characteristics depend on location and proximity to other Wi-Fi systems (e.g., a neighbor's Wi-Fi). A Wi-Fi AP can measure (through IEEE standard 802.11k specified measurements) connectivity and related information to all devices, and this becomes part of the QoS performance data. This data's careful evaluation then permits channel selection through profiles that best support most (though possibly not all) devices with reasonable performance. Wi-Fi device applications' prioritization can affect this profile selection as well. For example, if there is no clear best channel, the Wi-Fi spectrum-optimization or improvement reprofiling can instead select the channel with the least interference for the profile's prioritized WFH device. The optimized/improved profile policy then reserves this selected channel by creating more traffic in this channel so that a neighbor's Wi-Fi management Dynamic Frequency Selection (used to avoid existing licensed spectra, like satellite signal bands, that may vary with user jurisdiction; the neighbor's Wi-Fi system will avoid a heavily used channel because DFS presumes that channel is protected) will then avoid this channel.
Similar advanced Wi-Fi systems, such as Wi-Fi 6, also have "coloring" schemes that allow otherwise independent APs to agree to use their own "colors" (i.e., frequency channels). Profile settings also can manage these colors.
Band Steering: Some WLANs can use any, some, or all of the 900 MHz, 2.4 GHz, 5 GHz, and 6 GHz bands, with conventional Wi-Fi today largely in the 2.4 GHz and 5 GHz bands. Furthermore, some support multiple bands, and then the server 270 can steer devices to use certain bands through re-profiling. Typically, the band-steering profiles are such that each device uses a different band. Instead of prioritizing a certain device, QoE profile optimization or improvement can also move the remaining devices to share a different band so that the prioritized device(s) no longer compete for their assigned band's usage. A new profile could prioritize a device that only uses 2.4 GHz because this band performs better at that device's longer distance from the AP than would the higher-frequency 5 GHz band. The other devices'/links' optimized/improved profiles might then all specify the 5 GHz band. Band steering can be generalized within a band to channel steering, particularly within the 5 GHz band that has many channels, or within the new 6 GHz band that has even more channels. As a result, one radio can be dedicated to the prioritized device(s) through the profiles selected.
Bandwidth: Many Wi-Fi APs today can support up to a 160 MHz-wide individual-channel bandwidth. Narrower choices of 20 MHz, 40 MHz, and 80 MHz are also possible. A wider bandwidth, however, spreads the available power over a wider frequency spectrum. Using wide bandwidth can improve the data rate, but it can also become more vulnerable to interference and coverage issues through the power spreading, as well as a larger probability of overlap with other uncoordinated Wi-Fi systems. This option trades off between rate and reliability.
This trade-off can be used to prioritize applications/devices through the optimized/improved profiles' specification of channel bandwidth. For example, an optimized/improved profile could manage an AP to reduce its bandwidth from 160 MHz to the best 20 MHz channel if the prioritized device requires stable high reliability. Any single 20 MHz Wi-Fi channel without interference is more than sufficient to support high-quality voice and video for a WFH videoconference.

MIMO: Wi-Fi systems can support different MIMO (Multiple Input Multiple Output) configurations, such as spatial multiplexing, space-time coding, transmit beamforming, etc. To prioritize an application that requires high reliability, Wi-Fi profile optimization or improvement can select MCS space-time profile parameters to improve reliability at a reduced transmission rate, essentially through redundancy spread across the multiple transmission links virtually created by MIMO signal processing, which are often called spatial streams. Wi-Fi profile optimization or improvement thus can specify multi-user MIMO (MU-MIMO) profiles that assure best performance for the prioritized device. Multi-user MIMO can direct spatial streams independently in the same frequency band to devices in different locations. These different device locations can receive different power in their spatial beams (as the sum of all transmit beams' power is the only constraint). A prioritized WFH application's device can have a profile that allocates more power to it than to other lower-priority devices.

Mesh Routing: A Wi-Fi mesh comprises one or more mesh nodes (or mesh points, MPs) that relay packets between the AP and the devices. Mesh points are similar to “Multiple Access Points” (MAPs), where the MAP may assume the responsibility of being a master access point. The MPs or MAPs may use overlapping channels and thus cause interference between them.
When a prioritized (WFH) device connects to an MP, the other MPs' profiles may reduce their transmit power or may specify a different channel's use to prioritize the AP-to/from-WFH-device link. If the prioritized device requires very low latency, the optimized or improved profile may cause this device to connect to the AP with its maximum power, bypassing the rest of the mesh.

Airtime Fairness: Airtime fairness is a feature in WLAN APs that assigns different resource use to devices or SSIDs, which may depend on the location and need of the device. In certain instances, the fairness setting may be dynamically set. This can occur in Wi-Fi MAC management or in cellular systems' schedulers. In general, airtime fairness penalizes a device that is far from the AP or that has an older version of Wi-Fi chipset that cannot support very high-speed transmissions, because the transmission speed is low for such devices. If the device that needs to be prioritized is at such low-speed locations, airtime fairness can provide profiles that allocate more time to prioritized devices. Alternatively, the profiles may disable other devices to prevent them from further reducing the transmission speed and degrading the prioritized devices' QoE or ASC metric.

Many wireline communications offer similar PHY/MAC profile configuration that can prioritize different devices and applications. For example, DSL systems allow a profile to specify the interleaver depth and FEC coding rate. Additionally, DSL profiles can specify margin and power back-off parameters to enable the trade-off between data rate and reliability, as well as reduce power from crosstalk-generating connections. If the prioritized applications require low latency, these codes' profile configurations can be set to reduce the latency. In cable modems, an uplink scheduler can specify a service profile that prioritizes certain applications through an associated queue filter, which is a set of rules that classifies the packets.
The filter may use source/destination IP address or port number and may be part of a cable modem configuration.

Layer 4-7: Application Improvement

The application improvement described in this section may be used instead of or in addition to the application and device prioritization and/or the flow prioritization and/or the signal level improvement described in the previous sections. Optimization or improvement may also adapt an application profile's parameters, such as video frame/bit rate and/or audio bit rate, to improve WFH applications' QoE or ASC metrics. There are a variety of ways to adapt the application profile using performance/operational data, two of which are provided below.

In a first example, WFH/ASC optimization or improvement may indirectly exploit the application's configuration adaptation rules through profile settings. WFH application prioritization results from non-WFH applications' de-prioritization. Many non-WFH applications use application layer (layer 7) adaptation rules that react to lower-layer network performance degradation, such as excessive delay or packet losses. When WFH QoE or ASC metrics start to degrade, these non-WFH applications' rules can inject delay into, or delete data from, their application server queues. Inducement of this delay injection may occur through reprofiled layer 4 traffic shapers that lower the non-WFH input queues' data rates. This lowering occurs through the server's re-profile instruction to a cooperating AQM to drop some non-WFH packets. As a result, the non-WFH application's layer 7 response lowers its frame rate and/or bitrate; thus this induced non-WFH action frees useful bandwidth for the WFH application. Similarly, when the server 270 anticipates degraded communication performance for a scheduled upcoming video conferencing session, the WFH video conferencing application's new profile can then implement this non-WFH spoofing to provide consistent quality for the upcoming WFH video conferencing session.
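The first example's indirect adaptation might be sketched as a re-profiling rule on the layer 4 shapers: when WFH QoE degrades, cut the non-WFH shaper caps and rely on each non-WFH application's own layer 7 adaptation to lower its rate. The flow schema, threshold, and backoff factor are illustrative assumptions.

```python
# Hedged sketch of the indirect adaptation described above: lowering the
# non-WFH shaper caps induces those applications' own layer 7 rules to
# reduce frame/bit rate, freeing bandwidth for the WFH application.
# Field names, threshold, and backoff factor are assumptions.

def reprofile_shapers(flows, wfh_qoe, qoe_threshold=0.7, backoff=0.5):
    """flows: list of dicts with 'name', 'is_wfh', 'rate_mbps' (shaper cap)."""
    if wfh_qoe >= qoe_threshold:
        return flows  # no action while WFH QoE remains acceptable
    reprofiled = []
    for f in flows:
        f = dict(f)
        if not f["is_wfh"]:
            f["rate_mbps"] *= backoff  # induces the app's own rate adaptation
        reprofiled.append(f)
    return reprofiled

flows = [{"name": "videoconference", "is_wfh": True,  "rate_mbps": 5.0},
         {"name": "streaming",       "is_wfh": False, "rate_mbps": 20.0}]
print(reprofile_shapers(flows, wfh_qoe=0.4))  # non-WFH cap halved, WFH cap untouched
```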
In a second example, an application profile method directly adapts the profile for the WFH application. Through its historical QoS application performance data collection, the server 270 anticipates certain recurrent network environments and thus proactively supplies a correspondingly optimized/improved application profile. Other QoS performance data (typically at lower layers) can augment the historical upper-layer data so that the resultant WFH profiles or policies improve the application server's video and/or audio encoders or alter/enlarge the set of available profiles to the server 270 for consideration and use. The server's computer methods can predict a home network's expected throughput/delay based on previous WFH sessions' observed QoE and/or QoS consistencies (i.e., QoS may also be predicted based on past observed QoE and/or QoS in some embodiments). The server's profiles and policies can correspondingly set the frame rate and bit rate to levels that reduce future WFH application-use degradation. If the server 270 predicts an upcoming scheduled session's home network quality degradation, the application profile could specify SD video instead of HD video. This SD-for-HD profile substitution may occur because the WFH system learned previously that the WFH user is less likely to be unproductive with SD video than with disrupted HD video. Another WFH re-profiling opportunity occurs when the home network's QoS performance data include a WFH-prioritized device list. In this use case, the server's profiles can update the application's configuration to use the prioritized ports in the layer 4 RTC headers or layer 7+ webRTC commands. Similarly, the WFH optimization/improvement system can provide the port list. The home network QoS performance data can also include problem alerts that lead to WFH profiles imposing solutions, such as connecting audio via phone service instead of Internet voice.
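The SD-for-HD substitution in the second example might be sketched as a profile lookup driven by predicted throughput. The profile names, bitrate requirements, and safety margin are illustrative assumptions, not values from the disclosure.

```python
# Hedged sketch of direct WFH application re-profiling: pick a video
# profile from the predicted home-network throughput, preferring stable
# SD over disrupted HD as described above. All numbers are assumptions.

VIDEO_PROFILES = [  # (name, required downlink Mbps), highest quality first
    ("HD", 4.0),
    ("SD", 1.0),
]

def select_video_profile(predicted_mbps: float, safety_margin: float = 1.25) -> str:
    for name, required in VIDEO_PROFILES:
        if predicted_mbps >= required * safety_margin:
            return name
    return "audio-only"  # fall back to phone/voice-style audio

print(select_video_profile(6.0))  # -> HD
print(select_video_profile(2.0))  # -> SD (degradation predicted for the session)
```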
Neighborhood-Level Network Improvement

The neighborhood-level network improvement described in this section may be used instead of or in addition to the application and device prioritization and/or the flow prioritization and/or the signal level improvement and/or the application improvement described in the previous sections.

FIG. 8 illustrates a neighborhood uplink network access architecture according to various embodiments. This figure illustrates that many households may share a neighborhood's physical access medium. Examples include cable modems, wireless backhaul systems, or PONs. A plurality of access lines 810 are provided, each having a corresponding traffic shaper 815 and buffer 820 that interface with the access network 850. The network access may couple with the Internet/public network via an aggregation unit 860 such as a DSLAM or CMTS. The different households' traffic may be scheduled. As illustrated in FIG. 8, the aggregation unit 860 (e.g., cable headend or PON OLT) collects and schedules uplink communication demand from network termination points that connect the customer's home network to an ISP's line, which may result in scheduling delay. Consequently, the aggregate uplink channel 810 may become congested, which reduces individual lines' uplink capacity. Moreover, many access networks offer a lower uplink rate than downlink rate, which causes an uplink bufferbloat problem more frequently under WFH's more symmetric bandwidth application use. These uplink problems traditionally have been addressed by two methods: scheduling or AQM. Uplink schedulers monitor the queue length and/or flow priority and accordingly assign more uplink bandwidth to service flows with longer queues and/or higher queue-exit priority. In the architecture shown in FIG. 8, this assignment occurs through feedback to the aggregation unit 860. In another method, AQM can drop packets when it expects excessive queueing delay for certain service flows.
The queuing delay is QoS-performance data, but it may not be accurate because of the burstiness and time-varying nature of other households' uplink traffic. Inaccuracy increases when the uplink congestion occurs in the shared link. Thus, the use of AQM may penalize lower-priority neighborhood lines/service flows so that the prioritized lines/service flows will occupy the vacated bandwidth. To prioritize certain WFH (or other mission-critical) applications or lines, the server 270 may re-profile the shared lines' uplink queues dynamically. For example, optimized or improved WFH profiles can reconfigure DOCSIS 3.1 cable modems on different links to prioritize different services/lines through DOCSIS-3.1-supported AQM, which provides at least the following tunable parameters: enable/disable AQM per service flow, per-flow latency targets, and per-flow buffer sizes. When a WFH line/service flow's QoE is low, AQM can then shorten a non-WFH service flow's target latency to initiate more aggressive non-WFH packet dropping. Simultaneously, AQM actions can initiate TCP flow control to reduce the TCP flow rate for the affected flow. Alternatively, if similar prioritization occurs instead in the uplink scheduler, TCP flow control will not be triggered until a buffer overflows, so the uplink latency for the de-prioritized user will continue to increase. Therefore, the WFH AQM optimization/improvement likely maintains better QoE for all connections, even those deprioritized. In general, uplink prioritization can be better managed by jointly optimizing/improving all links. In certain embodiments, a WFH location may have multiple uplink queues that can support network uplink connectivity. For example, a dual uplink queue may have a WFH queue and a best-effort queue. At a neighborhood level, prioritization between queues may be supported such that WFH queues are serviced first and best-effort queues follow.
This queue prioritization may be under control of the service provider and allows the service provider to sell a differentiated service to consumers.

Preferred Service Category Identification

WFH service QoE or ASC network profile optimization or improvement associates the preferred service category with the applications and devices. Many households may use the same WFH application, so the WFH-application QoS data may consequently have shared values. However, each household's WFH devices may need identification because they may also be used for other applications. Collaborative household users may provide the WFH device list as user preference data Pi,j,k(t), where the boldface is used because there is a neighborhood of such data (a data vector instead of a data point). Similarly, QoS data Xi,j,k(t), profiles Si,j,k(t), and user QoE feedback data Fi,j,k(t) also become vectors. The server 270 then becomes vectored for the WFH users' neighborhood. The quality function may still be a scalar for the neighborhood, but there can also be individual quality functions applied. The vectored WFH system then applies vector profiles Si,j,k(t) that best optimize or improve the overall WFH situations. This can lead to better QoE for all the users than if each individual user performs independent, individual optimization/improvement. However, such vectored user preference data Pi,j,k(t) may not initially be available, or user-provided lists may become obsolete. Instead, the vectored server 270 can estimate device type probabilities based on other network WFH devices' statistics among similar situations and/or within the vectored neighborhood. For example, laptops are commonly used for WFH; however, a household user may use the same or different laptops for remote learning and/or other applications. Therefore, the WFH server 270 can run a computer process on other, more easily collected QoS performance data to identify devices that are used for WFH and those that are not.
FIG. 9 shows a simple example computer process that identifies the WFH devices based on the following user preference and QoS operational data. In this example, a plurality of devices 910 are analyzed with respect to (1) whether the WFH device type is confirmed 920, (2) the probability the device is used for WFH 930, (3) the probability an application is being used 940, (4) a mathematical operation (e.g., multiply) combining the two probabilities into a single value 950, and (5) an analytical conclusion whether the device is currently a WFH device 960. One skilled in the art will recognize the values illustrated in FIG. 9 are for illustration purposes only, and a variety of different values and variables may be employed:

User's device type input within user preference 920.
Network/region-wide WFH device statistics, such as a laptop's probability of being a WFH device.
Application usage per device.

The following explains the mathematical process illustrated in FIG. 9:

PA is the probability that a household device uses WFH applications 930. The server 270 estimates PA from network-wide WFH-device QoS data statistics, per device type or individual device model. The probability can be computed as PA = N1/(N1 + N2), where:
N1 is the device model's (or type's) number of WFH appearances in the user-preference data.
N2 is the device model's (or type's) number of non-WFH appearances in the user-preference data.

PB is the probability that a household's specific WFH-supporting device is active for the WFH purpose 940, which can be computed as PB = T1/T2, where:
T1 is the time duration when the device has at least one WFH application active.
T2 is the time duration when the device was active or connected to the network. If a WFH application cannot be detected, T1 can be estimated by subtracting from T2 the non-WFH use time when the device was used for known non-WFH applications, such as playing a game, streaming a video, downloading files using BitTorrent, etc.

PA·PB is the probability that the household has an active WFH device 950.
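A minimal sketch of the FIG. 9 probabilities follows: PA from network-wide user-preference statistics, PB from the device's own usage times, and their product compared against a threshold PT. The variable names follow the text; the sample counts, hours, and threshold value are illustrative.

```python
# Hedged sketch of the FIG. 9 computation. Sample values are illustrative.

def p_a(n1: int, n2: int) -> float:
    """PA = N1 / (N1 + N2): WFH vs. non-WFH appearances of this device model."""
    return n1 / (n1 + n2)

def p_b(t1_hours: float, t2_hours: float) -> float:
    """PB = T1 / T2: time with a WFH app active over total connected time."""
    return t1_hours / t2_hours

def is_active_wfh_device(n1, n2, t1, t2, p_t=0.25, user_labeled_wfh=False):
    # A user-supplied WFH label overrides the probabilistic estimate.
    return user_labeled_wfh or (p_a(n1, n2) * p_b(t1, t2) > p_t)

# E.g., 80 WFH vs. 20 non-WFH appearances; 6 of 8 connected hours with a
# WFH app active: PA*PB = 0.8 * 0.75 = 0.6 > 0.25, so declared WFH-active.
print(is_active_wfh_device(n1=80, n2=20, t1=6, t2=8))  # -> True
```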
Then, a simple method declares there is an active WFH device if PA·PB > PT. The threshold PT may be learned from network-wide statistics such as the device's average WFH-application use time. This learned probability uses the user preferences specified for the WFH device. If the user sets a device as the WFH category, its label denotes a WFH device even if PA·PB < PT. Learning methods can improve the preferred service profile's identification. For example, a supervised learning process could use the following features:

Device type: Device types include laptop, desktop, smartphone, and so on. This feature is useful because certain device types are more likely to have dedicated uses, e.g., an Xbox is mostly for gaming, and laptops are more likely for WFH than are TV consoles.

Device manufacturer: The device manufacturer may be derived from the MAC address. This feature is useful because businesses often use laptops from certain well-known brands, and companies may issue laptops from a single brand (or a limited number of brands) to all their employees based on negotiated discounts from the brand(s).

Network device name: The network device name can also be obtained from a variety of sources and methods, including an ARP name (i.e., associating IP addresses with devices through layer 5 session activity), a NetBIOS name (i.e., allowing entities on a LAN to communicate directly, and thus possibly associating the devices' IP addresses with MAC addresses), and other methods known to those skilled in the art. Sometimes, the device name indicates whether it is work-related equipment or not.

Type of applications used by the device: This assumes that the server 270 knows or learns the used application set, and presumably knows the WFH applications already. Example application types include WFH, remote learning, entertainment, IoT, and so on. Based on the known mapping, the application type can be transformed to a Boolean variable that indicates whether at least one WFH application was used or not.
This feature expands to a set of numbers that indicate the time duration of each application type's use by the device.

Device usage/connection at different times and days: This feature shows whether the device use correlates to working hours.

Device location: Device location information includes the devices' GPS coordinates or other AP-distance indications. If location data is not available, this feature may be derived from operational data indicating the location. For example, RSSI, channel estimates, antenna array gain, and other radio channel related parameters indicate the device's location relative to the AP.

To train a supervised learning model, the server 270 labels features using user preference data, especially the primary device use. Learning methods may have intermediate derived/calculated entities called “features” that are intermediate to the QoE quality data like the ASC metric and may appear at stages within the learning system, for instance at stages within a deep neural network. For example, the weighted addition of 2 or 3 QoS data elements prior to thresholding might create some internal partial indication of the ASC metric as a feature. The features may not yet include other QoS data's influence, which instead occurs in later learning stages that aggregate features before the final stage provides the QoE estimate. Standard supervised-learning-based classifiers can be used to derive the supervised-learning model and its internal features' stages. The following models are useful to obtain good classification results: Generalized Linear Models (GLM) including logistic regression, boosting methods such as gradient boosting, deep-learning processes such as long short-term memory (LSTM), and so on. Alternatively, unsupervised (reinforcement) learning methods can use the same feature set to identify a device's preferred service category when that user-preference data is unavailable.
For example, clustering processes such as k-means can estimate a natural device cluster and then classify the preferred service category as WFH if this cluster's most frequent known preferred service category is WFH. This user preference data estimate internally may act as a feature when combined with other QoS performance data to estimate QoE or ASC metrics.

WFH Traffic Separation Workflow

The preceding preferred service category identification explained service- and/or device-based WFH-traffic identification according to various embodiments of the invention. WFH traffic's identification, along with its associated performance measurement, allows measurement of various important work-from-home statistics and/or WFH traffic prioritization. In an embodiment, WFH traffic identification can separate identified WFH traffic from other traffic for prioritization and privacy protection, particularly when WFH users do not wish to share network use information with WFH service providers or with their employer. This separation permits the WFH traffic and the other traffic to be treated differently. WFH traffic's separation may follow the following example workflow:

First, the software agent 550 identifies the service and/or device type.

Second, the software agent 550 may route the two traffic types differently. In one embodiment, the WFH traffic and the other traffic may use different Wi-Fi SSIDs and/or different VLANs, which allows WFH traffic's prioritization. In another embodiment where multiple WANs connect to the home network, the WFH traffic and other traffic may connect to different WANs; alternately, only the WFH traffic may fail over to the other traffic's chosen WAN.

Third, the software agent 550 collects and records the different types of, and/or amounts of, WFH traffic data and other non-WFH traffic data. In one embodiment, the agent 550 collects more detailed information, such as application name, usage time, and duration, from WFH traffic.
In another possible embodiment, the agent 550 can send the recorded information on the WFH data to one server and the recorded information on the other, non-WFH data to another server (e.g., a private data server).

Finally, different aggregation levels or methods can be applied to the WFH traffic and to the other data. For example, embodiments can aggregate the collected WFH traffic data per employee and the other data more coarsely; this may include such specifics as different times of day or employee group (like department). Different aggregation levels can thereby protect the employee's privacy.

Traffic separation can also apply to other preferred service categories. In an embodiment, if the preferred service category is remote health (telehealth), the agent can separate the telehealth traffic from other traffic and apply similar separation and data treatment disclosed in this document.

Policy-Based Device & Application Joint Prioritization

A policy is a function that describes changes to a profile Si,j,k(t) based on the server 270's available QoS, current profile, user QoE feedback, and user preference data, which also include the (potentially estimated) QoE quality function Q. The policy would supply all the information needed to implement the prioritization through local profile choice. The server 270 may provide the policy for profile optimization/improvement to a local device like a Wi-Fi AP 205. Policy information includes, but is not limited to, the following:

Data collection method: A policy may specify how to collect QoS data. This may include the QoS data list, the collection period, and the format.
When the server 270 shares some data with a third-party partner like an application provider, the WFH policy will cause the profile to specify the type and period of information sharing between the server 270 and the application server.

Trigger condition: Even when a user provides a certain service category preference, prioritization may not be necessary because the existing QoE performance level for the preferred service category is sufficient. When the network is underused, mission-critical applications like WFH should work well without any prioritization. The re-profiling policy can specify various trigger conditions for prioritization to avoid unnecessary negative impact to non-preferred applications and devices. In one preferred embodiment, the QoE levels for the preferred service category may have several trigger thresholds. For one trigger level, only low-priority devices would be deprioritized when the WFH QoE is less than a corresponding threshold. At a second, more serious trigger threshold, both low- and medium-priority devices would be de-prioritized, and so on.

Prioritization targets: This is the prioritized (ordered) list of applications/devices when the WFH service category is active. Each listed device could have a high/medium/low/lowest priority, which will determine the level of (de-)prioritization at different QoE or ASC levels. For example, referring to FIG. 10, a plurality of devices 1010 are correlated to a plurality of applications 1020 by a priority ranking or value. In one instance, laptop 1 is set to high priority and desktop 1 is set to low priority for the WFH service categories, because desktop 1 is usually for gaming and laptop 1 is for WFH.

Prioritization methods: These are the sub-systems' allowed methods to prioritize certain applications/devices. The re-profiling policy might have the list of TCP/UDP ports that can be prioritized to support certain WFH applications.
This list's source/destination IP address pairs permit packet re-marking to a high-priority DSCP label. Alternatively, the re-profiling policy might limit AQM configuration such that the AQM's detrimental effect can occur only on the low-priority devices. Prioritization method selection depends on the current communication performance. If the network performance slightly degrades for WFH devices/applications, the server's policy might first apply a mild (de-)prioritization, such as an airtime-fairness method. If the network performance degrades further, the server 270's policy might then request more aggressive de-prioritization, such as enforcing a low maximum speed limit on non-WFH devices using a traffic shaper.

End-to-end diagnostics: The policy might request end-to-end communication performance monitoring to assess the policy-imposed optimization's/improvement's effectiveness and to provide evidence to third-party partners.

Prioritization effectiveness: The applied prioritization's effectiveness may need measurement to assess whether it is best/sufficient or whether further optimization/improvement should occur. The ideal effectiveness metric is the user's feedback Fi,j,k(t) on the perceived QoE (or ASC metric) improvement, which may be available in real time or estimated, if not.

Impact to un-prioritized applications/devices: Policy impact on the non-WFH devices/applications assesses any negative impact to those devices/applications. If the streaming video application QoE degrades sufficiently, then the disadvantage of the corresponding poor entertainment application QoE may outweigh the ASC benefit.

Diagnostics information: The re-profiling policy may prescribe how to detect WAN congestion (e.g., insufficient uplink bandwidth for cable/PON) for certain WFH devices, such that extreme prioritization triggers upon WAN congestion's detection. The policy may prescribe a video conferencing application's maximum uplink latency, above which the policy de-prioritizes all non-WFH devices.
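The tiered trigger conditions described above might be sketched as a small policy table: successive QoE thresholds widen the set of priorities that get de-prioritized. The threshold values, tier names, and device schema are illustrative assumptions.

```python
# Hedged sketch of tiered trigger conditions: as WFH QoE crosses
# progressively lower thresholds, progressively more device-priority
# classes are de-prioritized. All values are assumptions.

TRIGGERS = [  # (QoE threshold, priority classes de-prioritized below it)
    (0.8, {"low"}),
    (0.5, {"low", "medium"}),
]

def devices_to_deprioritize(wfh_qoe: float, devices: dict) -> list:
    """devices maps name -> priority ('high'/'medium'/'low')."""
    target = set()
    for threshold, priorities in TRIGGERS:
        if wfh_qoe < threshold:
            target = priorities  # later (more serious) tiers override earlier
    return sorted(d for d, p in devices.items() if p in target)

devs = {"laptop1": "high", "desktop1": "low", "tablet": "medium"}
print(devices_to_deprioritize(0.9, devs))  # [] -- no trigger, no action
print(devices_to_deprioritize(0.6, devs))  # ['desktop1']
print(devices_to_deprioritize(0.3, devs))  # ['desktop1', 'tablet']
```

This matches the text's intent that prioritization only activates when the preferred service category's QoE actually degrades, avoiding unnecessary impact on other devices.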
The policy can also use long-term observation of the network's operational/performance/use data Xi,j,k(t) and user QoE feedback Fi,j,k(t). The server 270 determines prioritization targets, and FIG. 10 illustrates a priority matrix in accordance with various embodiments. Some embodiments prioritize applications through their communication ports or the destination IP addresses for the uplink packets. Other embodiments may instead prioritize devices through uplink packets' source IP/MAC addresses or by using airtime-fairness methods. Pairing these two prioritizations in different ways may improve the WFH QoE or ASC metric. When policy separates application and device priorities, learning processes may offer better ASC metrics. FIG. 10 also illustrates that a medium-priority service (e.g., web) for a non-WFH device is low priority, whereas the same medium-priority service for a WFH device is medium priority. This exemplary priority matrix may allow Layer 1 device priority optimization/improvement to pair with the Layer 4 application prioritization by TCP port, and it illustrates that a high priority may be given only to the application and device pair where both application and device are high priority. The server's policy specifies the priority matrix. In FIG. 10, an example pairing of laptop 1 and Zoom has high priority. An embodiment may prioritize the airtime fairness for laptop 1 while Zoom's communication port (i.e., “UDP 8801-8810”) is set to high priority. Consequently, the server 270's policy specifies automatic setting of the laptop 2 (low-priority device) and Zoom pair to medium priority, despite laptop 2's low priority. This priority elevation is due to Zoom's communication port already being prioritized (for laptop 1 and Zoom). The priority matrix's entries interdepend through these implementation limitations. Before applying the priority matrix, the server 270 may check feasibility.
Feasibility checking may start from the high-priority device/application pair and then continue, for consistency, to the lower-priority device/application pairs. The policy may also depend on the home gateway's available prioritization capabilities. In certain embodiments, the policy optimization/improvement supports the ergodic spectrum management framework as set forth in U.S. patent application Ser. No. 16/804,000, filed on Feb. 27, 2020, entitled “Ergodic Spectrum Management Systems and Methods,” which application is incorporated herein by reference in its entirety. ESM uses the QoE function as one of the constraints during the spectrum optimization/improvement, and the policies depend on the ergodic properties of WFH use and behavior. QoE reward functions thereby become specific to the subset of WFH applications and devices, in particular for the ASC metric. This QoE reward function or ASC metric again may be learned from QoS performance data obtained from devices labeled as WFH devices.

Meta Improvement Training

The QoE and ASC metrics described herein (individually or aggregated) could themselves become inputs to training as a form of metadata according to various embodiments of the invention. The objective would be profile adjustment that learns from them to improve worker productivity, as measured by ASC metrics or other WFH-related metrics. Worker productivity or ASC metrics depend on certain (learned) QoS parametrizations and profiles and thus improve with proper optimization/improvement. Some workers' performance depends more on high-priority connection flows; for instance, employees or consultants that use sophisticated computer-aided design tools that incorporate data from these at-home workers. Dependency is particularly high for work that transfers large files, which could motivate these workers' corresponding applications and devices to have higher relative prioritization.
A worker producing more lines of debugged/qualified software code because of better latency and bandwidth to/from a server would have a higher ASC productivity contribution.

WFH Improvement Based on ASC Workflow

The WFH service ASC metric may relate to various WFH metrics described herein and may be used to improve WFH metrics and/or features. An example of how a network may be optimized or improved for ASC productivity metrics follows.

First, in a data collection workflow step, the server 270 collects data from various sources, which can include the LAN(s), WAN(s), application customers, and application providers. In an embodiment, the software agent 550 may collect QoS data from the LAN and the WAN, the application provider may collect application use data via an API or logs, and the application customer may collect employee information such as employee job function, work schedule, etc.

Second, in a performance evaluation workflow step, various metrics measure the WFH service's performance. Since user feedback is sometimes hard to obtain, machine-learning techniques described herein that predict QoE from QoS data, even without user feedback, may be used. Machine-learning methods such as logistic regression, deep learning, and boosting may implement learning or prediction using available user feedback as labels and collected data as features. Similar machine learning methods can derive other metrics such as the network stability during WFH, work hours lost from poor communication performance, the teleconferencing quality, and so on. In an embodiment, application-use logs and teleconference participants' feedback permit prediction of teleconferencing quality as a function of communication performance. The user feedback may include thumbs up/down (or the absence thereof) as well as an exit score (smiley faces to frown faces) from all or a subset of participants.
The teleconferencing quality may be a vector quantity if it uses data from multiple teleconference participants. An embodiment may aggregate the vector quantity's elements into a single value to denote all (or a subset of) participants' overall teleconferencing quality. An embodiment may graph the remote workers' communication patterns, and then a metric such as PageRank can identify the employee(s) with the largest influence. Computation of participants' influence levels may weight their contributions to the aggregated teleconference quality. A dashboard may display identified communication patterns to give better insight into employees' communication behavior. Third, in an aggregation workflow step, the various performance metrics may be combined to compute an aggregated, overall WFH metric referred to as the ASC metric. This aggregated ASC metric is an overall work-from-home metric that is an aggregation derived from other WFH metrics, such as the WFH intensity metric, the WFH distraction metric, the WFH collaboration metric, etc. Embodiments can derive the ASC metric that measures the productivity gain or loss caused by WFH communication performance. The ASC productivity more generally can be the ratio of an economic output (e.g., revenue, profit, production quantity, etc.) to the inputs (e.g., labor cost, investment, raw material, etc.). The economic inputs and outputs can be measured over variable time intervals. If longer time intervals are employed, economic input and output can be measured over correspondingly longer periods. Performance metrics may need aggregation over the same time interval as the productivity metrics, such that embodiments of the invention may derive these performance metrics' relationships through machine learning and/or heuristic techniques.
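The influence computation mentioned in the performance evaluation step can be sketched with a plain power-iteration PageRank over a hypothetical communication graph (no graph library assumed; the team members and message links are made up for illustration):

```python
def pagerank(graph, damping=0.85, iters=50):
    """Power-iteration PageRank over a directed communication graph,
    where graph[u] lists the colleagues that u sends messages to."""
    nodes = list(graph)
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}
    for _ in range(iters):
        nxt = {u: (1.0 - damping) / n for u in nodes}
        for u in nodes:
            out = graph[u]
            if out:
                share = damping * rank[u] / len(out)
                for v in out:
                    nxt[v] += share
            else:  # dangling node: spread its rank evenly
                for v in nodes:
                    nxt[v] += damping * rank[u] / n
        rank = nxt
    return rank

# Hypothetical remote-team message graph: most messages flow to "lead".
comms = {"ana": ["lead"], "bo": ["lead"], "cy": ["lead", "bo"], "lead": ["ana"]}
scores = pagerank(comms)
print(max(scores, key=scores.get))  # "lead" has the largest influence
```

The resulting scores could then weight each participant's contribution when aggregating the teleconference-quality vector into a single value.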
In one example, an average is used as the aggregation function for the metrics discussed in step 2, and a generalized linear model, such as logistic regression, is then used to relate these metrics to the productivity changes. Fourth, in an optimization/improvement workflow step, an embodiment can optimize or improve the WFH system to maximize the ASC productivity metric.
Vector Turboing
For neighborhoods with multiple WFH workers, the previous vectored neighborhood methods can also share multiple WAN systems to improve (accelerate the speed of) one or a few worker(s) temporarily based on need. This acceleration amplifies the previous fail-over or bonded solution through multiple connections' use, and these multiple connections may include multiple devices and/or access points bonding multi-Wi-Fi radio links and accessing multiple parallel WANs to increase total available bandwidth substantially. This intelligent bandwidth sharing can be accentuated in situations with distributed antenna system (DAS) behavior, where different access points direct several spatial beams at a single multi-radio device (like a work-from-home station/computer) to accelerate its bandwidth through these streams' multipath-TCP aggregation at layers 2 to 4.
Pictorial Description
FIG. 11 pictorially illustrates ASC metric optimization/improvement through machine-learning methods according to various embodiments of the invention previously described in more general terms. As shown, productivity is analyzed as a ratio of an economic output (such as revenue) to an input (such as labor hours). In certain embodiments, productivity measurement may have low time resolution (such as monthly or quarterly) and/or high time resolution (such as hourly or daily). If low time resolution is used, the need to correlate short events (such as bad conference calls, etc.) is reduced. If Z 1150 is defined in this embodiment as hours lost/total hours for all employees due to connectivity issues, then the ASC metric may be defined as 1-Z 1155.
In this embodiment, variables (Xnm) 1110 may be used as inputs for the method. These variables may be observable measurements such as performance (e.g., packet loss, up/down rate, latency, etc.), usage (e.g., up/down usage, application software type, etc.), and contextual information (e.g., day of the week, time of the day, # of attendees on a call, etc.). An intermediate (short-term) variable (Yn) 1120 may be used as a label with fine resolution, such as labeled survey information or special measurements. Examples of information that may be labeled include lost work hours, employee satisfaction, QoE, etc. The method identifies g( ) 1115 and coefficients Knm such that Yn=g(Kn1Xn1+ . . . +KnMXnM), which is used to train the coefficients. In certain embodiments, g( ) 1115 determines the type of predictor, such as a GLM or Generalized Linear Model (e.g., exponential if using Poisson regression), used within the learning method. Aggregation 1130 is applied to the prediction output, and a regressor 1140 is applied to generate Z 1150, from which an ASC metric 1155 is derived. As will be understood, a regressor enables prediction of a continuous outcome variable (here Z) based on the value of one or multiple predictor variables (here ai). Briefly, the goal of a regression model is to build a mathematical equation that defines the outcome (Z) as a function of the predictor variables (ai). The regressor may be a linear regression model or a logistic regression model, as previously discussed. Other regressors are also possible. In this example, the ASC metric 1155 is a normalized non-negative measure that represents an employee's, or a group of collaborating employees', productivity. If Z 1150 is the (normalized to maximum) hours lost for all employees due to connectivity issues (for instance, on conference calls), then 1-Z 1155 is proportional to the productivity, which has a certain revenue value for each employee group.
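A toy version of the FIG. 11 pipeline, with assumed coefficients K, a logistic g( ), a simple average for aggregation, and a placeholder regressor, might look like the following; the feature units, coefficient values, and the regressor are all illustrative stand-ins for quantities that would be learned from data:

```python
import math

def g(z):
    """Logistic inverse link for a GLM-style predictor: maps a linear score to [0, 1]."""
    return 1.0 / (1.0 + math.exp(-z))

def predict_intermediate(samples, K):
    """Yn = g(Kn1*Xn1 + ... + KnM*XnM) for each observation vector X."""
    return [g(sum(k * x for k, x in zip(K, X))) for X in samples]

def asc_metric(samples, K, regress):
    """Aggregate the intermediate predictions, regress to Z, and return ASC = 1 - Z."""
    Y = predict_intermediate(samples, K)   # short-term labels, e.g., interval impaired?
    a = sum(Y) / len(Y)                    # aggregation step (simple average)
    Z = regress(a)                         # normalized fraction of hours lost
    return 1.0 - Z

# Illustrative numbers only: X = (packet-loss %, latency in 100 ms units);
# coefficients K and the regressor would be trained, not hand-set.
K = [0.8, 0.5]
observations = [[0.2, 0.3], [4.0, 2.5]]    # one good interval, one bad interval
asc = asc_metric(observations, K, regress=lambda a: min(1.0, max(0.0, a - 0.4)))
```

Bad intervals push the aggregated prediction, and hence Z, upward, which lowers the resulting ASC metric exactly as the 1-Z definition above implies.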
User Interface Workflow
A user interface (UI) collects user preference data Pi,j,k(t) and user QoE feedback Fi,j,k(t). The UI may alert the user to improvement needs and/or prompt the user to start improvement processes. FIG. 12 illustrates a simple UI workflow according to various embodiments for WFH. This workflow has five steps, and each step may occupy a separate page or may share workflow components, such as sliders, frames, columns/rows, cards, and similar components, on a common smartphone app or web page. Referring to FIG. 12, a first UI workflow step (Screen 1) starts network improvement 1210. If the server 270 supports only one preferred service category, the user may start optimization/improvement by pressing an application software's menu-bar button. The server 270 entity can also prompt a user to start network improvement through an email, app, text, or other alert forms. The device's use pattern then indicates the prioritization need. Use patterns may include the application type, network data use indications (e.g., the ratio of uplink/downlink bandwidth consumption), the used network port, and so on. The following example use patterns may trigger such an alert:
Application/Device use: Certain application software's/hardware device's use may indicate user preference and suggest its consequent prioritization. For example, the server 270 can associate applications and devices using association methods described herein. Upon such a preferred application/device pair's detected activity, a user alert may recommend an estimated, better preferred service category. Embodiments can compare subsequent and historical application/device use, under the applicable or estimated preferred service category, for consistency.
For example, if the user played the Xbox only when the preferred service category was entertainment before, but then begins such use under the WFH service category, it may indicate that the service category preferred by the user is not WFH.
Data-use change: A data use pattern change may indicate a user's need for a different network profile. For example, the ratio of uplink to downlink data consumption may indicate the application type; therefore, a change in this ratio may indicate a different application's use and a consequent re-profiling need. To detect an application change, the server 270 can compare the predicted and the actual ratio of uplink to downlink data consumption (i.e., a predicted QoS data point and an actual QoS data point). The server can also predict this ratio from historical observations through time series analysis, such as linear prediction, exponential smoothing, ARMA (Auto-Regressive Moving Average) filtering, etc. In the case of ARMA models, the future values are determined as a linear combination of both past observations and past predicted values, which could be past predicted QoE and/or QoS data. A simple example is an application change at roughly 5 PM each workday, an often-encountered work stop time.
WFH schedule: The server 270 can learn the WFH user's schedule from previous service category selections and/or from historical application/device use. When the WFH starts according to the learned schedule, the server 270 may send an alert to the employer.
When actual user QoE feedback data are available, a supervised learning method can predict the best service category for profile (or policy) distribution to the appropriate implementation points in the device, gateway, router, etc. The following features can assist prediction:
Device use: These data indicate whether a certain device was active simultaneously with the corresponding collected data.
The device use list maps to an integer vector, where each element indicates the data use level. For example, 0 means the device is not connected, 1 means the device is connected but the data usage was less than 1 MB during the sampling interval, 2 means between 1 MB and 2 MB, and so forth. When uplink/downlink uses are separated, this feature produces two corresponding integer vectors.
Application use: These data indicate whether a certain application software is active simultaneously with the corresponding collected data. The application use list maps to an integer vector where each element indicates the device's application use level. Jointly sampled device and application use allows formation of a matrix where the rows correspond to the devices and the columns correspond to the applications.
Time information: These timestamp data may include the time of the day, the day of the week, and so on. These data map to a Boolean variable that indicates whether the timestamp is in working hours or not, based on a known work schedule (or pattern). Operational use data permit learning of the work schedule, or the schedule can be set by the user. When the connection supports multiple service categories, different schedules correspond to the different service categories.
Network-wide statistics: Network-wide statistics derive from network-wide QoS operational data. These statistics can include the probability that a certain service category is set or active on other lines in the same network or on neighboring lines.
To improve the relevance of network-wide statistics to the individual line/connection, the server 270 can use statistics that derive from similar lines/connections in the same geographical location (e.g., based on IP address), the same home network type (e.g., # of devices), the same broadband network type (e.g., service product), the same household type, and/or other clusters of meaningful common characteristics, and so on.
User-defined scheduling: WFH schedules may be based on direct feedback from the user, where the user sets the schedule directly. In this example, WFH scheduling may be directly defined by one or more users and/or stakeholders.
The server 270 may periodically collect the above data. These data permit predictor training by using the preferred service category selection history as a label. This learning may use methods such as (generalized) linear regression, boosting methods, tree-based methods such as random forest, etc. The preferred service category predictor can run periodically or may be triggered upon a certain condition. For example, simpler methods can transition to more sophisticated methods upon availability of a certain quantity and quality of data. In addition, unsupervised learning methods, such as hidden Markov models (HMMs), may instead predict the preferred service category when training data are not available. Finally, the service category prediction can track working hours, which can provide insightful information on WFH behavior. Based on the use pattern analysis, the first workflow step provides analytics. An alert may show the proposed service category and the analytics that support its choice. For example, embodiments may use an alert to compare a new use pattern to an old use pattern to justify optimization/improvement using a new service category. In addition to user initiation, optimization/improvement can start automatically if certain conditions are met.
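A very simple supervised baseline for the preferred-service-category predictor, using only the hour of day as a feature and past selections as labels, could be sketched as follows; the selection history and category names are hypothetical, and a production predictor would add the device-use, application-use, and network-wide features listed above:

```python
from collections import Counter, defaultdict

def train_category_predictor(history):
    """Learn the most frequent preferred service category per hour of day
    from the user's past selections (selection history as the label)."""
    by_hour = defaultdict(Counter)
    for hour, category in history:
        by_hour[hour][category] += 1
    return {h: c.most_common(1)[0][0] for h, c in by_hour.items()}

def predict_category(model, hour, default="entertainment"):
    """Predict the category for an hour; fall back to a default when unseen."""
    return model.get(hour, default)

# Hypothetical selection history: (hour of day, chosen service category).
history = [(9, "WFH"), (10, "WFH"), (9, "WFH"), (20, "entertainment"), (21, "gaming")]
model = train_category_predictor(history)
print(predict_category(model, 9))  # "WFH" during learned working hours
```

As the text notes, such a simple method could later hand off to more sophisticated learners (boosting, random forests, or HMMs when labels are missing) once enough data accumulate.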
Embodiments may not rely entirely on the automatic optimization/improvement start because users' manual triggers could also be a good QoE label for prediction method training. When the optimization/improvement starts automatically, the server 270 may send an alert to the user as a reminder and may ask for the user's opinion on whether the automatic start is desirable (before proceeding). This feedback is also important to train the prediction method(s). Finally, the alert may provide some diagnostic information. A second UI workflow step 1215 (Screen 2) requests the user to select a preferred service category manually from a supported list. The supported list should be sorted according to service category popularity or likelihood of user preference. The popularity depends on how often the preferred service category was used in the home network or by the user. Embodiments may compute likelihood through the association of currently active application software/devices with the preferred service category, for example using methods described in the preferred service category discussion. The user may also press a "Use default" button to skip this second step. Similarly, embodiments may select a preferred service category from the list based on predictions of its likelihood, in which case the user provides no input directly. The third UI workflow step 1220 (Screen 3) in this embodiment prioritizes devices. The user may manually select a device, such as an initial selection. The initial manual selection is a valuable label for computer training methods. In Screen 3 1220, the user may select the prioritization target device from a sorted list based on the likelihood of association with the preferred service category. Embodiments may also display the priority device list separately, so that the user can select devices only from the devices with unknown user preference.
The workflow may skip this third step if all devices' preferred service categories become known or learned and automated, for example using training methods described in the preferred service category discussion. The fourth UI workflow step 1225 (Screen 4) selects the time when the selected service category expires. This step prevents any unnecessary negative impact on deprioritized application software/devices after the prioritization becomes unnecessary. For example, most users do not work an entire day and may forget to exit WFH prioritization. To facilitate this exit, the UI can display a default expiration time. The server 270 may estimate a default WFH expiration time based on the WFH schedule and historical data. In addition, the user may select the service category that will be applied when the selected preferred service category expires. In any case, the user will be alerted when a preferred service category expires, and the next preferred service category will then become active if set. The UI sends all user preference data collected in workflow steps 1 to 4 to the server 270. Then, the server 270 will confirm that a new preferred service category is active. Finally, the UI displays the confirmation in the final workflow step 1230 (Screen 5). The UI can display the service category and QoE diagnostics, such as the optimization's/improvement's performance gain, any network problems, the prioritized application software and devices (and perhaps also those de-prioritized), etc. The UI may be accessible outside the home, which gives more flexibility for managing the home network even when the user is not home. Aspects of the present invention may be encoded upon one or more non-transitory computer-readable media with instructions for one or more processors or processing units to cause steps to be performed. It shall be noted that the one or more non-transitory computer-readable media shall include volatile and non-volatile memory.
It shall be noted that alternative implementations are possible, including a hardware implementation or a software/hardware implementation. Hardware-implemented functions may be realized using application specific integrated circuits (ASICs), programmable arrays, digital signal processing circuitry, or the like. Accordingly, the terms in any claims are intended to cover both software and hardware implementations. Similarly, the term “computer-readable medium or media” as used herein includes software and/or hardware having a program of instructions embodied thereon, or a combination thereof. With these implementation alternatives in mind, it is to be understood that the figures and accompanying description provide the functional information one skilled in the art would require to write program code (i.e., software) and/or to fabricate circuits (i.e., hardware) to perform the processing required. It shall be noted that embodiments of the present invention may further relate to computer products with a non-transitory, tangible computer-readable medium that have computer code thereon for performing various computer-implemented operations. The media and computer code may be those specially designed and constructed for the purposes of the present invention, or they may be of the kind known or available to those having skill in the relevant arts. Examples of tangible computer-readable media include, but are not limited to: magnetic media such as hard disks; optical media such as CD-ROMs and holographic devices; magneto-optical media; and hardware devices that are specially configured to store or to store and execute program code, such as ASICs, programmable logic devices (PLDs), flash memory devices, and ROM and RAM devices. Examples of computer code include machine code, such as produced by a compiler, and files containing higher level code that are executed by a computer using an interpreter. 
Embodiments of the present invention may be implemented in whole or in part as machine-executable instructions that may be in program modules that are executed by a processing device. Examples of program modules include libraries, programs, routines, objects, components, and data structures. In distributed computing environments, program modules may be physically located in settings that are local, remote, or both. One skilled in the art will recognize that no computing system or programming language is critical to the practice of the present invention. One skilled in the art will also recognize that a number of the elements described above may be physically and/or functionally separated into sub-modules or combined together. It will be appreciated by those skilled in the art that the preceding examples and embodiments are exemplary and not limiting to the scope of the present disclosure. It is intended that all permutations, enhancements, equivalents, combinations, and improvements thereto that are apparent to those skilled in the art upon a reading of the specification and a study of the drawings are included within the true spirit and scope of the present disclosure. It shall also be noted that elements of any claims may be arranged differently including having multiple dependencies, configurations, and combinations.
Clauses Describing Exemplary Embodiments
Clause A1.
A method comprising: (a) accessing data relating to one or more network devices and/or connections, the data comprising at least quality of service (QoS) data and quality of experience (QoE) data, the QoS data providing an objective measure of connection quality for an associated network connection, and the QoE data providing a measure of user experience for an associated network connection or application software; and (b) training a QoE estimator model for generating estimated QoE data from QoS data, wherein the QoE estimator model uses a machine learning method in which the QoS data are used as input data and the QoE data are used to define one or more labels such that the QoE estimator model determines a respective estimator for each of the one or more labels based on the input data. Note that “accessing” may comprise receiving (passively) or collecting (actively) or simply retrieving from memory. Note also that the QoE data may provide an indirect measure of connection quality for the associated network connection. For example, “thumbs down” user feedback during a user video conferencing session may be indicative that the associated connection is poor, thereby leading to a poor user experience. Thus, given sufficient data, a correlation would be expected between the QoE data and the QoS data. Clause A2. The method of clause A1 wherein the accessed data comprise current and/or historical data relating to the one or more network devices and/or connections. Note that “current data” are the most recent data available for a particular data type over a recent time period. The most recent time period may represent more than one measurement cycle of that data type, and may vary between data types. For example, for one data type, all data from the past day may be considered to be “current”; for other data types, shorter or longer time periods (e.g. 
15 minutes, 30 minutes, 1 hour, 2 hours, 3 hours, 6 hours, 12 hours, multiple days, 1 week, multiple weeks, 1 month, multiple months, or events occurring since the last complete measurement ended, etc.) may be more appropriate to define what is "current." In contrast, "historical data" are any data that are no longer considered to be "current." Clause A3. The method of clause A1 or A2 wherein the input data further comprise timing data which provide time spans and/or time stamps associated with the QoS data and the QoE data. Clause A4. The method of any one of clauses A1-A3 wherein the QoS data comprise data from one or more OSI layers. Clause A5. The method of any one of clauses A1-A4 wherein each label is based on one or more types of QoE data. Clause A6. The method of any one of clauses A1-A5 wherein the QoE data comprise real time user feedback data and/or delayed user feedback data. Clause A7. The method of any one of clauses A1-A6 wherein the QoE data comprise direct user feedback data and/or indirect user feedback data. Clause A8. The method of clause A7 when dependent on clause A6 wherein real time direct user feedback data comprise one or more of "thumbs-up" user feedback, "thumbs-down" user feedback, "like" user feedback, "star rating" from the user, "smiley face" user feedback, other opinion scores and comments from chat/messaging streams. Note that real time direct user feedback includes input from within the collaboration platform application software and/or input from a separate application software used to provide feedback on collaboration (e.g., a smartphone app or web-based app) that is used during a collaboration session. Furthermore, a user can provide feedback on their own experience and/or on a specific person the user is collaborating with at the time of the feedback. Clause A9. The method of clause A7 when dependent on clause A6 wherein delayed direct user feedback data comprise one or more of mean opinion scores, and exit/other survey scores.
Note that delayed direct user feedback can be from users commenting on their own experience, as well as their experience with others. Clause A10. The method of clause A7 when dependent on clause A6 wherein real time indirect user feedback data comprise user activity data from a user activity monitor such as a keystroke counter, an audio activity monitor, a video activity monitor, a facial expression monitor, etc. Clause A11. The method of clause A7 when dependent on clause A6 wherein delayed indirect user feedback data comprise one or more of information regarding help calls, information regarding help chat-box attempts, information regarding technician dispatches, information regarding complaints, information regarding equipment replacement, information regarding permanent disconnection of service by a user, information regarding refusals to use a particular application software, and information regarding excessive repeats of collaborative sessions. Clause A12. The method of any one of clauses A1-A11 wherein the QoE data comprise estimated QoE data previously generated by the QoE estimator model. Clause A13. The method of any one of clauses A1-A12 wherein the QoS data and/or the QoE data are normalised for use in the QoE estimator model. Clause A14. The method of any one of clauses A1-A13 wherein the QoS data and/or the QoE data are aggregated for use in the QoE estimator model. Clause A15. The method of clause A14 wherein the aggregation is over one or more time periods. Clause A16. The method of clause A14 or A15 wherein the aggregation is over one or more home networks associated with at least one of the one or more network devices and/or connections. Clause A17. The method of any one of clauses A1-A16 wherein the QoE estimator model is able to be updated based on additional data relating to the one or more network devices and/or connections. Clause A18. The method of clause A17 wherein the additional data are additional QoE data. Clause A19. 
The method of any one of clauses A1-A18 wherein the QoE estimator model is optimized/derived over all linear functions of the input data, possibly under certain constraints. Clause A20. The method of clause A19 wherein the QoE estimator model is optimized/derived using linear regression or logistic regression, possibly under certain constraints. Clause A21. The method of any one of clauses A1-A20 wherein the QoE estimator model is further based on user preference data which provide user preferences associated with a particular network device or connection of the one or more network devices and/or connections. Clause A22. The method of clause A21 wherein the accessed data are filtered based on the user preference data such that the QoE estimator model may be associated with particular user preference data. Clause A23. The method of any one of clauses A1-A22 wherein the QoE estimator model is further based on application software data which provide data regarding one or more application software associated with a particular network device or connection of the one or more network devices and/or connections. Clause A24. The method of clause A23 wherein the accessed data are filtered based on the application software data such that the QoE estimator model may be associated with one or more selected application software. Clause A25. The method of any one of clauses A1-A24 wherein the accessed data are filtered based on one or more home networks associated with the one or more network devices and/or connections such that the QoE estimator model may be associated with one or more selected home networks. Clause A26. The method of any one of clauses A1-A18 further comprising using the QoE estimator model to subsequently generate estimated QoE data from new QoS data. Clause A27. The method of clause A26 wherein the one or more labels may further be defined based on the estimated QoE data. Clause A28. 
The method of any one of clauses A1-A27 further comprising using the QoE estimator model to subsequently generate predicted QoE data from predicted QoS data. Thus, the QoE estimator model may additionally be used to make predictions of future QoE if predicted future QoS data are used as inputs. Also, it will be appreciated that estimated and/or predicted QoE data may be generated based on a reduced set of QoS data. Not all of the QoS data types need be present as inputs for each prediction/estimation. Nonetheless, a larger number of inputs will generally lead to a better or more accurate prediction/estimation. Clause A29. The method of clause A28 further comprising generating the predicted QoS data from current and/or historical QoS data. Thus, if for example a user has a high, uplink user packet transfer rate every afternoon at a particular time due to a regular video-conference call, this can be predicted. Clause B1. A system comprising one or more processors configured to carry out the method of any one of clauses A1-A29. Note that the system is most likely to be located at the previously described Application Specific Connectivity server (e.g., a server 270). However, at least part of the system could be located in a home network software agent in a local embodiment, as well as possibly at the ISP or the application software server in different embodiments. For example, the QoE estimator model could be trained at the Application Specific Connectivity server, but the steps of generating estimated or predicted QoE values (as per clauses A26 and A28) could potentially be performed at the home network software agent 283, at the application server running application software 285, or at the ISP 281. Clause C1. A non-transient computer readable medium storing one or more instructions which, when executed by one or more processors, cause the one or more processors to carry out the method of any one of clauses A1-A29. Clause D1.
A method comprising: (a) accessing data relating to one or more network devices and/or connections, the data comprising quality of service (QoS) data and further comprising quality of experience (QoE) data and/or productivity data, the QoS data providing an objective measure of connection quality for an associated network connection, the QoE data providing a measure of user experience for an associated network connection or application software, and the productivity data providing a measure of user productivity for an associated network connection; and (b) training a productivity estimator model for generating estimated productivity data from QoS data, wherein the productivity estimator model uses a machine learning method in which the QoS data are used as input data and the QoE data and/or productivity data are used to define one or more labels such that the productivity estimator model determines a respective estimator for each of the one or more labels based on the input data. Note that “accessing” may comprise receiving (passively) or collecting (actively) or simply retrieving from memory. Note that the QoE data may provide an indirect measure of connection quality for the associated network connection. For example, “thumbs down” user feedback during a user video conferencing session may be indicative that the associated connection is poor, thereby leading to a poor user experience. Thus, given sufficient data, a correlation would be expected between the QoE data and the QoS data. Clause D2. The method of clause D1 wherein the accessed data comprise current and/or historical data relating to the one or more network devices and/or connections. Note that “current data” are the most recent data available for a particular data type over a recent time period. The most recent time period may represent more than one measurement cycle of that data type, and may vary between data types. 
For example, for one data type, all data from the past day may be considered to be "current"; for other data types, shorter or longer time periods (e.g. 15 minutes, 30 minutes, 1 hour, 2 hours, 3 hours, 6 hours, 12 hours, multiple days, 1 week, multiple weeks, 1 month, multiple months, or events occurring since the last complete measurement ended, etc.) may be more appropriate to define what is "current". In contrast, "historical data" are any data that are no longer considered to be "current". Clause D3. The method of clause D1 or D2 wherein the input data further comprise timing data which provide time spans and/or time stamps associated with the QoS data, the QoE data and the productivity data. Clause D4. The method of any one of clauses D1 to D3 wherein the QoS data comprise data from one or more OSI layers. Clause D5. The method of any one of clauses D1 to D4 wherein each label is based on one or more types of QoE data and/or productivity data. Clause D6. The method of any one of clauses D1 to D5 wherein the QoE data comprise real time user feedback data and/or delayed user feedback data. Clause D7. The method of any one of clauses D1 to D6 wherein the QoE data comprise direct user feedback data and/or indirect user feedback data. Clause D8. The method of clause D7 when dependent on clause D6 wherein real time direct user feedback data comprise one or more of "thumbs-up" user feedback, "thumbs-down" user feedback, "like" user feedback, "star rating" from the user, "smiley face" user feedback, other opinion scores, and comments from chat/messaging streams. Note that real time direct user feedback includes input from within the collaboration platform application software and/or input from a separate application software used to provide feedback on collaboration (e.g., a smartphone app or web-based app) that is used during a collaboration session.
Furthermore, a user can provide feedback on their own experience and/or on a specific person the user is collaborating with at the time of the feedback. Clause D9. The method of clause D7 when dependent on clause D6 wherein delayed direct user feedback data comprise one or more of mean opinion scores, and exit/other survey scores. Note that delayed direct user feedback can be from users commenting on their own experience, as well as their experience with others. Clause D10. The method of clause D7 when dependent on clause D6 wherein real time indirect user feedback data comprise user activity data from a user activity monitor such as a keystroke counter, an audio activity monitor, a video activity monitor, a facial expression monitor, etc. Clause D11. The method of clause D7 when dependent on clause D6 wherein delayed indirect user feedback data comprise one or more of information regarding help calls, information regarding help chat-box attempts, information regarding technician dispatches, information regarding complaints, information regarding equipment replacement, information regarding permanent disconnection of service by a user, information regarding refusals to use a particular application software, and information regarding excessive repeats of collaborative sessions. Clause D12. The method of any one of clauses D1 to D11 wherein the QoS data and/or the QoE data and/or the productivity data are normalised for use in the productivity estimator model. Clause D13. The method of any one of clauses D1 to D12 wherein the QoS data and/or the QoE data and/or the productivity data are aggregated for use in the productivity estimator model. Clause D14. The method of clause D13 wherein the aggregation is over one or more time periods. Clause D15. The method of clause D13 or D14 wherein the aggregation is over one or more home networks associated with at least one of the one or more network devices and/or connections. Clause D16.
The method of any one of clauses D1 to D15 wherein the productivity estimator model is able to be updated based on additional data relating to the one or more network devices and/or connections. Clause D17. The method of any one of clauses D1 to D16 wherein the productivity estimator model is optimized/derived over all linear functions of the input data, possibly under certain constraints. Clause D18. The method of clause D17 wherein the productivity estimator model is optimized/derived using linear regression or logistic regression, possibly under certain constraints. Clause D19. The method of any one of clauses D1 to D18 wherein the productivity estimator model is further based on user preference data which provide user preferences associated with a particular network device or connection of the one or more network devices and/or connections. Clause D20. The method of clause D19 wherein the accessed data are filtered based on the user preference data such that the productivity estimator model may be associated with particular user preference data. Clause D21. The method of any one of clauses D1 to D20 wherein the productivity estimator model is further based on application software data which provide data regarding one or more application software associated with a particular network device or connection of the one or more network devices and/or connections. Clause D22. The method of clause D21 wherein the accessed data are filtered based on the application software data such that the productivity estimator model may be associated with one or more selected application software. Clause D23. The method of any one of clauses D1 to D22 wherein the accessed data are filtered based on one or more home networks associated with the one or more network devices and/or connections such that the productivity estimator model may be associated with one or more selected home networks. Clause D24.
The method of any one of clauses D1 to D23 further comprising using the productivity estimator model to subsequently generate estimated productivity data from new QoS data and/or performance data. Clause D25. The method of clause D24 wherein the one or more labels may further be defined based on the estimated productivity data. Clause D26. The method of any one of clauses D1 to D25 further comprising using the productivity estimator model to subsequently generate predicted productivity data from predicted QoS data. Thus, the productivity estimator model may additionally be used to make predictions of future productivity if predicted future QoS data are used as inputs. Also, it will be appreciated that estimated and/or predicted productivity data may be generated based on a reduced set of QoS data. Not all of the QoS data types need be present as inputs. Clause D27. The method of any one of clauses D1 to D26 wherein the productivity data comprise data such as hours lost by a user due to a bad connection, or number of calls made by a sales/marketing user, or number of hours worked by a user, etc. Clause D28. The method of any one of clauses D1 to D7 wherein the QoE data comprise estimated QoE data generated according to the method of clause A26. Clause E1. A system comprising one or more processors configured to carry out the method of any one of clauses D1 to D28. Note that the system is most likely located at the Applications Specific server (e.g., server 270). However, at least part of the system could be located in a home network software agent in a local embodiment, as well as possibly at the ISP or the application software server in different embodiments.
For example, the productivity estimator model could be trained at the Applications Specific server (e.g., server 270), but the steps of generating estimated or predicted productivity values (as per clauses D24 and D26) could potentially be performed at the home network software agent, in the gateway 283, at the application server 285, or at the ISP 281. Clause F1. A non-transient computer readable medium storing one or more instructions which, when executed by one or more processors, cause the one or more processors to carry out the method of any of clauses D1 to D28. Clause G1. A method comprising: (a) accessing data relating to one or more network devices and/or connections, the data comprising at least quality of service (QoS) data, the QoS data providing an objective measure of connection quality for an associated network connection; (b) using a productivity estimator model trained according to the method of any one of clauses D1-D28 to generate estimated productivity data from the QoS data; (c) aggregating the estimated productivity data over a time period; and (d) based on the aggregated estimated productivity data, generating an application specific connectivity (ASC) metric for the time period. In some embodiments, the ASC metric may be considered to be "application specific" in the sense that it relates to one or more specific application software (i.e. a subset of all application software). For example, an ASC metric relevant to WFH may relate to all application software associated with WFH (e.g. video conferencing applications, audio conferencing applications, web browsers, etc.). Clause G2. The method of clause G1 further comprising training the productivity estimator model according to the method of any one of clauses D1 to D28. Clause G3. The method of clause G1 or G2 wherein the ASC metric is generated using a regressor. Clause H1. A system comprising one or more processors configured to carry out the method of any one of clauses G1-G3.
Note that the system is most likely located at the server 270. However, at least part of the system could be located in a home network software agent in the gateway 283, in a local embodiment, and at least part of the system could also be located at the application server running application software 285 or at the ISP 281. Clause I1. A non-transient computer readable medium storing one or more instructions which, when executed by one or more processors, cause the one or more processors to carry out the method of any of clauses G1-G3. Clause J1. A method comprising: (a) accessing data relating to one or more network devices and/or connections, wherein each datum is associated with one or more applications from a plurality of applications; (b) selecting at least one application of interest from the plurality of applications; and (c) analysing the data to determine an application-specific connectivity (ASC) metric that is indicative of the effect of connection quality on user productivity for the at least one application of interest; wherein the ASC metric is determined using an ASC model trained on ASC training data, the ASC training data comprising current and/or historical data relating to the one or more network devices and/or connections, the ASC training data comprising at least quality of service (QoS) data and associated quality of experience (QoE) data, the QoS data providing an objective measure of connection quality for an associated network connection, and the QoE data providing a measure of user experience for an associated network connection or application software. Note that "accessing" may comprise receiving (passively) or collecting (actively) or simply retrieving from memory. Note that the QoE data may provide an indirect measure of connection quality for the associated network connection.
For example, "thumbs down" user feedback during a user video conferencing session may be indicative that the associated connection is poor, thereby leading to a poor user experience. Thus, given sufficient data, a correlation would be expected between the QoE data and the QoS data. Clause J2. The method of clause J1 wherein the at least one application of interest comprises one or more application software associated with WFH activities. Clause J3. The method of clause J1 wherein the at least one application of interest comprises one or more application software associated with remote learning/teaching and/or telemedicine and/or other virtual gatherings and/or distribution of streaming entertainment media and/or security-camera systems and/or sensors and/or smart-home appliances. Clause J4. The method of any one of clauses J1-J3 wherein the at least one application of interest comprises one or more video conferencing application software (e.g. Zoom, WebEx, GoToMeeting, Skype for Business, JoinMe, Slack, Teams, Google Meet, Google Hangouts, etc.). Clause J5. The method of any one of clauses J1-J3 wherein the at least one application of interest comprises one or more video-gaming applications. Clause J6. The method of any one of clauses J1-J5 wherein the ASC metric for the at least one application of interest may further be determined based on an ASC metric for at least one other application of interest. Clause J7. The method of any one of clauses J1-J6 wherein the ASC metric is individually determined for each network connection associated with the at least one application of interest. Clause J8. The method of any one of clauses J1-J6 wherein the ASC metric is aggregated over multiple network connections associated with the at least one application of interest. Clause J9. The method of clause J8 wherein the multiple network connections over which the ASC metric is aggregated comprise those network connections associated with a single home network. Clause J10.
The method of clause J8 wherein the multiple network connections over which the ASC metric is aggregated comprise those network connections associated with home networks of a group of users associated with a particular stakeholder or business entity. Clause J11. The method of clause J10 wherein the ASC metric is further determined based on HR-related/employer productivity data for the particular stakeholder or business entity. Clause J12. The method of clause J8 wherein the multiple network connections over which the ASC metric is aggregated comprise those network connections associated with home networks located in a particular geographical area. Clause J13. The method of any one of clauses J1-J12 wherein the accessed data comprise one or more of: data from one or more internet service providers (ISPs); data from one or more home networks; data from one or more application servers; data from one or more user devices; and data from one or more stakeholders (e.g. employers). Clause J14. The method of any one of clauses J1-J13 wherein the accessed data comprise QoS data from one or more OSI layers. Clause J15. The method of any one of clauses J1-J14 wherein the accessed data comprise one or more of: QoS data; QoE data; productivity data which provide a measure of user productivity for an associated network connection; user preference data which provide user preferences associated with a particular network device or connection; configuration data which provide configuration settings associated with a particular network device or connection; and timing data which provide time spans and/or time stamps associated with particular types of data. Clause J16. The method of any one of clauses J1-J15 wherein the accessed data relate to a plurality of network devices and/or connections. Clause J17.
The method of any one of clauses J1-J16 wherein the ASC training data further comprise one or more of: user preference data which provide user preferences associated with a particular network device or connection; configuration data which provide settings associated with a particular network device or connection; timing data which provide time spans and/or time stamps associated with particular types of data; productivity data which provide a measure of user productivity for an associated network connection; and contextual data. Clause J18. The method of clause J17 wherein the productivity data comprise data such as hours lost by a user due to a bad connection, or number of calls made by a sales/marketing user, or number of hours worked by a user, etc. Clause J19. The method of any one of clauses J1-J18 wherein the ASC model uses machine learning and/or artificial intelligence. Clause J20. The method of clause J19 wherein the ASC model uses a machine learning method in which the QoS data of the ASC training data are used as input data to the ASC model and the user feedback data of the ASC training data are used to define a label. Note that other types of data (e.g. performance data and/or contextual data) may also be used to define the label. Clause J21. The method of clause J20 wherein the ASC model determines an optimal/preferred estimator for the label based on the input data, uses the optimal/preferred estimator for the label to output an estimation per home network, aggregates the estimations of each label over a time period, and uses the aggregated estimations in a functional estimator to generate a value from which the ASC metric is derived. Clause J22. The method of clause J21 wherein the functional estimator is optimized/improved over all linear functions. Clause J23. The method of clause J22 wherein the functional estimator is optimized/improved using linear regression or logistic regression. Clause J24. 
The method of any one of clauses J1-J23 wherein the ASC training data further comprise a previously determined ASC metric. Clause J25. The method of any one of clauses J1-J24 wherein the input data further comprise timing data which provide time spans and/or time stamps associated with particular types of data. Clause J26. The method of any one of clauses J1-J25 wherein the ASC model uses a supervised learning method. Clause J27. The method of any one of clauses J1-J25 wherein the ASC model uses an unsupervised learning method. Clause J28. The method of clause J27 wherein the unsupervised learning method uses clustering. Clause J29. The method of any one of clauses J1-J28 wherein the ASC model uses one or more of: boosting methods such as gradient boosting; deep-learning processes such as long short-term memory (LSTM); and neural networks. Clause J30. The method of any one of clauses J1-J12 wherein each network device and/or connection has associated configuration settings, and the method further comprises determining updated configuration settings for at least one of the one or more network devices and/or connections, wherein the determining comprises improving the ASC metric aggregated over the data associated with the at least one application of interest. Clause J31. The method of clause J30 further comprising communicating at least some of the updated configuration settings to a software agent located on a network device of the plurality of network devices or at the gateway. Clause J32. The method of clause J31 further comprising, by the software agent, implementing one or more of the updated configuration settings associated with the respective network device. Clause K1. A system comprising one or more processors configured to carry out the method of any one of clauses J1-J32. Note that the system is most likely located at the server.
However, at least part of the system could be located in a home network software agent, at the ISP, or at the application server. Clause L1. A non-transient computer readable medium storing one or more instructions which, when executed by one or more processors, cause the one or more processors to carry out the method of any of clauses J1-J32. Clause M1. A method at a software agent located on a home gateway network device, the home gateway network device associated with a home network, the home network comprising one or more associated network devices, the method comprising: (a) accessing updated configuration settings for the network device; and (b) implementing the updated configuration settings for the network device. Note that the selected application software to be prioritized may include at least one of the applications of interest referred to in clauses J1-J32. Clause M2. The method of clause M1 further comprising receiving at least some of the updated configuration settings from a device outside the home network. Clause M3. The method of clause M1 or clause M2 further comprising determining at least some of the updated configuration settings at the home gateway network device in accordance with the method of clause J30. Clause M4. The method of any one of clauses M1-M3 wherein the implementing comprises: based on the updated configuration settings, prioritizing selected application software and/or network devices as compared to other application software and/or network devices by at least one of: (i) identifying the selected application software and/or network devices and subsequently prioritizing associated message flows, and (ii) identifying the other application software and/or network devices and subsequently deprioritizing associated message flows. Clause M5. The method of clause M4 wherein the selected and/or other application software and/or devices are identified based on communication port usage. Clause M6. 
The method of clause M4 or M5 wherein the selected and/or other application software and/or devices are identified based on usage patterns. Clause M7. The method of any one of clauses M4-M6 wherein the selected and/or other application software and/or devices include video-conferencing applications which are identified based on similar bandwidth usage for uplink and downlink. Clause M8. The method of any one of clauses M4-M7 wherein the selected and/or other application software and/or devices are identified based on destination IP addresses. Clause M9. The method of any one of clauses M4-M8 wherein the selected and/or other application software and/or devices are identified based on OSI layers 2-4 header information of the associated message flows. Clause M10. The method of clause M9 wherein the OSI layers 2-4 header information comprises a layer 2 VLAN tag. Clause M11. The method of clause M9 or M10 wherein the OSI layers 2-4 header information comprises a layer 2 priority level. Clause M12. The method of any one of clauses M9-M11 wherein the OSI layers 2-4 header information comprises a layer 3 DSCP marking associated with VoIP or streaming video. Clause M13. The method of any one of clauses M9-M12 wherein the OSI layers 2-4 header information comprises a layer 4 port number. Clause M14. The method of any one of clauses M4-M13 wherein the prioritizing comprises changing the queue priority of the associated message flows. Clause M15. The method of clause M14 wherein the queue priority is changed based on TCP/UDP port number or DSCP marking. Clause M16. The method of any one of clauses M4-M15 wherein the prioritizing comprises the home gateway network device remarking a DSCP priority of the associated message flows. Clause M17. 
The method of any one of clauses M4-M16 wherein the prioritizing comprises the home gateway network device adding an IP address of an application server associated with at least one of the selected application software to a host file of the home gateway network device to support faster DNS look-up. Clause M18. The method of any one of clauses M4-M17 wherein the prioritizing comprises failover and/or load balancing of the message flows associated with the selected application software and/or devices from a first WAN associated with the home gateway network device to a second WAN associated with the home gateway network device. Clause M19. The method of any one of clauses M4-M18 wherein a WAN associated with the home gateway network device supports multiple network slices, and wherein the prioritizing comprises switching the associated message flows to a network slice having a different priority level. Clause M20. The method of any one of clauses M4-M19 wherein the prioritizing comprises prioritizing layer 1 or 2 of the Wi-Fi link for the selected and/or other application software and/or devices. Clause M21. The method of clause M20 wherein the prioritizing comprises one or more of Wi-Fi channel selection, Wi-Fi band steering, changes in Wi-Fi bandwidth, changes in Wi-Fi MIMO configurations, changes in Wi-Fi mesh routing, and changes in Wi-Fi airtime fairness profiles. Clause M22. The method of any one of clauses M1-M21 wherein the implementing comprises, based on the updated configuration settings, adapting profile parameters of one or more selected application software and/or communication link. Clause M23. The method of clause M22 wherein the profile parameters include video frame rate, video bit rate, and/or audio bit rate. Clause M24.
The method of clause M23 wherein the one or more selected application software are lower priority application software, and wherein the adapting comprises adapting OSI layer 4 traffic shapers by dropping some packets such that an OSI layer 7 data rate of the one or more selected application software is reduced. Clause N1. A home gateway network device comprising a software agent configured to carry out the method of any one of clauses M1-M24. Clause O1. A non-transient computer readable medium storing one or more instructions which, when executed by one or more processors, cause the one or more processors to carry out the method of any of clauses M1-M24. Clause P1. A method at an ISP server, the ISP server managing one or more network devices and wireline and/or wireless connections, the method comprising: (a) accessing updated configuration settings for the network devices and wireline and/or wireless connections; and (b) implementing the updated configuration settings for the network devices and wireline and/or wireless connections. Clause P2. The method of clause P1 further comprising receiving at least some of the updated configuration settings from a device not managed by the ISP. Clause P3. The method of clause P1 or clause P2 further comprising determining at least some of the updated configuration settings at the ISP device in accordance with the method of clause J30. Clause P4. The method of any one of clauses P1-P3 wherein the implementing comprises specifying an interleaver depth and/or FEC coding rate in a DSL system associated with the ISP. Clause P5. The method of any one of clauses P1-P3 wherein the implementing comprises specifying margin and/or power back-off parameters in a DSL system associated with the ISP. Clause P6.
The method of any one of clauses P1-P3 wherein the implementing comprises using an uplink scheduler of a modem associated with the ISP to specify a service profile that prioritizes particular applications through an associated queue filter. Clause P7. The method of clause P6 wherein the filter uses source/destination IP address or port number to prioritize the particular applications. Clause Q1. An ISP server comprising a software agent configured to carry out the method of any one of clauses P1-P7. Clause R1. A non-transient computer readable medium storing one or more instructions which, when executed by one or more processors, cause the one or more processors to carry out the method of any of clauses P1-P7.
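To make the training method of clause D1 more concrete, the following sketch fits one estimator per label over linear functions of the QoS input, as in clauses D17 and D18. It is an illustrative sketch only, not an implementation of the claims: the single QoS feature, the closed-form least-squares fit, and all function and field names are assumptions invented for illustration.

```python
# Hypothetical sketch of clause D1: QoS measurements are the inputs,
# QoE/productivity values define the labels, and each label gets its own
# estimator optimized over linear functions (clauses D17-D18), here a
# closed-form univariate least-squares fit. All names are illustrative.

def fit_linear(xs, ys):
    """Least-squares fit of y ~ a*x + b (a simple case of clause D18)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    var = sum((x - mx) ** 2 for x in xs)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = cov / var if var else 0.0
    return a, my - a * mx

def train_productivity_estimator(qos, labels):
    """One estimator per label (clause D1): 'labels' maps a label name to
    the list of QoE/productivity values aligned with the QoS inputs."""
    return {name: fit_linear(qos, ys) for name, ys in labels.items()}

def estimate(model, qos_value):
    """Generate estimated productivity data from new QoS data (clause D24)."""
    return {name: a * qos_value + b for name, (a, b) in model.items()}
```

Per clause D18, logistic regression could replace the least-squares fit where a label is binary (e.g. thumbs-up versus thumbs-down feedback), and per clause D12 the inputs would normally be normalised before fitting.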
11863404 | DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS The following detailed description of example implementations refers to the accompanying drawings. The same reference numbers in different drawings may identify the same or similar elements. Customers deploy applications with components that reside on multiple cloud providers. Therefore, access to such applications by the customers must be optimized. Optimization of customer access to applications provided by multi-cloud providers through private networks requires optimization of paths from customer locations (e.g., user devices) to private network provider devices (e.g., gateways) to cloud providers (e.g., multiple cloud computing environments). Optimization of customer access paths for applications provided by multi-cloud providers through private networks also requires optimization of a path from the private network provider gateways to cloud provider devices (e.g., gateways) and applications (e.g., or application platforms). Such optimizations require consideration of customer service level agreements (SLAs) and utilizations of the private network devices and the cloud provider devices. However, current techniques for providing customer access to applications provided by multi-cloud providers through private networks fail to consider such optimizations and considerations. Thus, current techniques for providing customer path optimization consume computing resources (e.g., processing resources, memory resources, communication resources, and/or the like), networking resources, and/or other resources associated with handling poor customer experiences associated with accessing the applications, inefficiently utilizing network provider devices, inefficiently utilizing cloud provider devices, handling lost traffic associated with accessing the applications, and/or the like. 
Some implementations described herein provide an optimizer system that calculates optimum customer access paths for applications provided by multi-cloud providers through private networks. For example, the optimizer system may receive a request for an application from a user device, and may receive network data for a network provider and a cloud provider associated with the user device and SLA constraints associated with the user device, the network provider, and the cloud provider. The optimizer system may calculate, based on the network data and the SLA constraints, a plurality of cost vectors associated with defining a path for the user device to access the application, and may identify, from a plurality of network provider devices, a network provider device that provides a first least cost path and satisfies a first threshold based on the plurality of cost vectors. The optimizer system may identify, from a plurality of cloud provider devices, a set of cloud provider devices that support the application for the network provider device, and may identify, from the set of cloud provider devices, a cloud provider device that provides a second least cost path and satisfies a second threshold based on the plurality of cost vectors. The optimizer system may cause the application to be provided from the cloud provider device to the user device, via the network provider device. In this way, the optimizer system calculates optimum customer access paths for applications provided by multi-cloud providers through private networks. For example, the optimizer system may identify, for a user device (e.g., a customer device) attempting to access an application, an optimum network provider device (e.g., gateway) that provides a least cost access path (e.g., that provides minimum delay, jitter, loss, and/or the like) and resource (e.g., processor, memory, bandwidth, and/or the like) utilizations below a threshold level. 
The optimizer system may identify cloud provider devices (e.g., gateways) that support the application, and may identify, for the optimum network provider device and from the cloud provider devices that support the application, an optimum cloud provider device that provides a least cost access path (e.g., minimum usage charge, delay, jitter, loss, and/or the like) and resource utilizations below a threshold level. Thus, the optimizer system may conserve computing resources, networking resources, and/or other resources that would have otherwise been consumed by handling poor customer experiences associated with accessing the applications, inefficiently utilizing network provider devices, inefficiently utilizing cloud provider devices, handling lost traffic associated with accessing the applications, and/or the like. FIGS. 1A-1I are diagrams of an example 100 associated with calculating optimum customer access paths for applications provided by multi-cloud providers through private networks. As shown in FIGS. 1A-1I, example 100 includes an optimizer system 105 associated with one or more user devices (UDs) of customer device clusters (e.g., customer device cluster 1 through customer device cluster N), one or more network devices of a network provider (NP) network that includes network provider gateway clusters (e.g., network provider gateway cluster 1 through network provider gateway cluster M), and one or more network devices of cloud provider (CP) clusters (e.g., cloud provider 1 cluster 1 through cloud provider S cluster 1). Further details of the optimizer system 105, the user devices, the network devices, the network provider network, and the clusters are provided elsewhere herein. As shown in FIG. 1A, a first customer device cluster (e.g., customer device cluster 1) may include multiple user devices (e.g., D11 through D1n), an Nth customer device cluster (e.g., customer device cluster N) may include multiple user devices (e.g., DN1 through DNk), and/or the like.
A first network provider gateway cluster (e.g., network provider gateway cluster 1) may include multiple network devices (e.g., NP-GW11 through NP-GW1x), an Mth network provider gateway cluster (e.g., network provider gateway cluster M) may include multiple network devices (e.g., NP-GWM1 through NP-GWMy), and/or the like. A first cloud provider cluster (e.g., cloud provider 1 cluster 1) may include multiple network devices (e.g., CP1-GW1 through CP1-GWu), another cloud provider cluster (e.g., cloud provider S cluster 1) may include multiple network devices (e.g., CPs-GW1 through CPs-GWt), and/or the like. The optimizer system 105 may calculate optimum paths for the user devices to access applications provided by the cloud provider clusters, via the network provider gateway clusters. In some implementations, when calculating the optimum paths for the user devices to access the applications, the optimizer system 105 may assume that certain user devices (e.g., a customer region or a market region) access pre-identified primary and secondary network provider devices or a pre-identified network provider gateway cluster, and may assume that certain network provider gateway clusters access pre-identified primary and secondary cloud provider devices of each cloud provider cluster or a pre-identified cloud provider cluster of each cloud provider (i.e., a cloud provider cluster may include a set of cloud provider devices in multiple regions of a cloud provider). Optimum access to the cloud providers may depend on a least cost (e.g., a least usage-based charge, a least delay, a least loss, a least jitter, and/or the like) path from a network provider device to a cloud provider device, an availability of the application at a least cost cloud provider device, resource availability (e.g., processor, memory, port bandwidth, and/or the like) at the least cost cloud provider device, and/or the like.
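The dependence of optimum access on a least cost path, application availability, and device resource availability can be sketched as a two-stage selection: first a network provider gateway, then a cloud provider gateway among those supporting the requested application. The record fields, cost weights, and the 0.8 utilization threshold below are illustrative assumptions, not values from the patent.

```python
# Hypothetical two-stage device selection. Device record layout, cost
# weights, and the utilization threshold are assumptions for illustration.

def least_cost_device(devices, max_util=0.8, weights=(1.0, 1.0, 1.0)):
    """Lowest weighted delay/jitter/loss cost among devices whose
    processor, memory, and bandwidth utilizations are below the threshold."""
    wd, wj, wl = weights
    eligible = [d for d in devices
                if max(d["cpu"], d["mem"], d["bw"]) < max_util]
    if not eligible:
        return None
    return min(eligible,
               key=lambda d: wd * d["delay"] + wj * d["jitter"] + wl * d["loss"])

def pick_access_path(np_devices, cp_devices, app):
    """Best network provider gateway first, then the best cloud provider
    gateway among those that support the requested application."""
    np_dev = least_cost_device(np_devices)
    supporting = [d for d in cp_devices if app in d["apps"]]
    cp_dev = least_cost_device(supporting)
    return np_dev, cp_dev
```

A gateway that offers the lowest path cost but exceeds the utilization threshold is skipped, mirroring the requirement that the chosen device provide both a least cost path and resource utilizations below a threshold level.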
In some implementations, when calculating an optimum path for a user device to access an application, the optimizer system105may identify a best network provider device for the user device based on a network provider least cost access path (e.g., minimum delay, jitter, loss, and/or the like) and network provider resource (e.g., processor, memory, port bandwidth, and/or the like) utilizations below a threshold level, and may identify cloud provider clusters (e.g., locations) supporting the requested application. The optimizer system105may identify a cloud provider device for the best network provider device based on a cloud provider least cost access path (e.g., minimum delay, jitter, loss, and/or the like) and cloud provider resource (e.g., processor, memory, port bandwidth, and/or the like) utilizations below a threshold level. As shown inFIG.1B, and by reference number110, the optimizer system105may receive a request for an application from a user device, and network data for a network provider and cloud providers associated with the user device. For example, a user of the user device (e.g., D12) may wish to access the application from one of the cloud provider clusters, and may cause the user device to generate the request for the application. The user may cause the user device to provide the request for the application to the optimizer system105, and the optimizer system105may receive the request for the application from the user device. The optimizer system105may continuously receive the network data from the network provider gateway clusters of the network provider network and/or from the cloud provider clusters, may periodically receive the network data from the network provider gateway clusters of the network provider network and/or from the cloud provider clusters, may receive the network data based on a request provided to the network provider gateway clusters of the network provider network and/or from the cloud provider clusters, and/or the like. 
In some implementations, the network data may include data identifying delays associated with access of the user device to a plurality of network provider devices (e.g., the network devices) of the network provider network, losses associated with access of the user device to the plurality of network provider devices, jitter associated with access of the user device to the plurality of network provider devices, memory utilizations associated with the plurality of network provider devices and a plurality of cloud provider devices (e.g., the network devices) of the cloud provider clusters, processor utilizations associated with the plurality of network provider devices and the plurality of cloud provider devices, bandwidth utilizations associated with the plurality of network provider devices and the plurality of cloud provider devices, delays associated with access of the plurality of network provider devices to the plurality of cloud provider devices, losses associated with access of the plurality of network provider devices to the plurality of cloud provider devices, jitter associated with access of the plurality of network provider devices to the plurality of cloud provider devices, usage charges associated with access of the plurality of network provider devices to the plurality of cloud provider devices, and/or the like. As further shown inFIG.1B, and by reference number115, the optimizer system105may receive SLA constraints associated with the user device, the network provider, and the cloud providers. For example, the optimizer system105may receive the SLA constraints from the user device, the network provider network, the cloud provider clusters, and/or the like. In some implementations, the optimizer system105may receive the SLA constraints associated with the user device from the request for the application received from the user device. 
In some implementations, the optimizer system105may receive the SLA constraints associated with the network provider based on requesting the SLA constraints from the network provider network. In some implementations, the optimizer system105may receive the SLA constraints associated with the cloud providers based on requesting the SLA constraints from the cloud provider clusters. The SLA constraints may include constraints associated with usage charges, round trip delays (or one-way delays) for access of the user device to the cloud provider devices (e.g., the network devices) of the cloud provider clusters, one-way losses for access of the user device to the cloud provider devices, one-way jitter for access of the user device to the cloud provider devices, memory utilizations for the network provider devices (e.g., the network devices) of the network provider network and the cloud provider devices, processor utilizations for the network provider devices and the cloud provider devices, bandwidth utilizations for the network provider devices and the cloud provider devices, and/or the like. 
In some implementations, the usage charges (UCNPGW-CPGW) may be less than a threshold usage charge (UCD-SLA); the round trip delays (RTDD-CPGW) may be less than a threshold round trip delay (RTDD-SLA), where the round trip delay is for access of the user device to a remote cloud provider device; the one-way losses (LD-CPGW) for access of the user device to the cloud provider devices may be less than a threshold one-way loss (LD-SLA), where the one-way losses are maximum losses of both directions for access of the user device to a remote cloud provider device; the one-way jitter (JD-CPGW) for access of the user device to the cloud provider devices may be less than a threshold jitter (JD-SLA), where the one-way jitter is the maximum jitter of both directions for access of the user device to a remote cloud provider device; the memory utilizations for the network provider devices (NPGWmem-util) may be less than a threshold memory utilization (α); the processor utilizations for the network provider devices (NPGWcpu-util) may be less than a threshold processor utilization (β); the bandwidth utilizations for the network provider devices (NPGWport-util) may be less than a threshold port bandwidth utilization (γ); the memory utilizations for the cloud provider devices (CPGWmem-util) may be less than a threshold memory utilization (α); the processor utilizations for the cloud provider devices (CPGWcpu-util) may be less than a threshold processor utilization (β); and the bandwidth utilizations for the cloud provider devices (CPGWport-util) may be less than a threshold port bandwidth utilization (γ). In some implementations, the thresholds for the memory utilizations of the network provider devices (NPGWmem-util) and the memory utilizations of the cloud provider devices (CPGWmem-util) may be α1and α2, respectively, but the same threshold (α) may be utilized for both memory utilizations. 
In some implementations, the thresholds for the processor utilizations of the network provider devices (NPGWcpu-util) and the processor utilizations of the cloud provider devices (CPGWcpu-util) may be β1and β2, respectively, but the same threshold (β) may be utilized for both processor utilizations. In some implementations, the thresholds for the bandwidth utilizations of the network provider devices (NPGWport-util) and the bandwidth utilizations of the cloud provider devices (CPGWport-util) may be γ1and γ2, respectively, but the same threshold (γ) may be utilized for both bandwidth utilizations. As further shown inFIG.1B, and by reference number120, the optimizer system105may calculate, based on the network data and the SLA constraints, first cost vectors associated with access of the user device to network provider devices. For example, when calculating the first cost vectors associated with access of the user device to the network provider devices, the optimizer system105may utilize the network data and the SLA constraints to calculate delays, losses, and jitter associated with access of the user device to the network provider devices, and to generate the first cost vectors based on the delays, the losses, and the jitter. In some implementations, costs associated with the user device access to network provider devices may be provided by CD-NPGW, costs for delays associated with access of the user device to the network provider devices may be provided by NPGWD, costs for losses associated with access of the user device to the network provider devices may be provided by NPGWL, and costs for jitter associated with access of the user device to the network provider devices may be provided by NPGWJ. In such implementations, the first cost vectors may be provided by the following: CD-NPGW=[NPGWD, NPGWL, NPGWJ]. 
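The cost vectors described above can be illustrated with a short sketch. The list representation, the weighted-sum comparison, and all function names are assumptions for illustration only; the description does not prescribe how the vectors are encoded or compared.

```python
# Illustrative construction of the per-path cost vectors; field order mirrors
# the notation above (D = delay, L = loss, J = jitter, UC = usage charge).
def first_cost_vector(delay_ms: float, loss_pct: float, jitter_ms: float) -> list:
    """C_D-NPGW = [NPGW_D, NPGW_L, NPGW_J] for one user-device-to-NPGW path."""
    return [delay_ms, loss_pct, jitter_ms]

def second_cost_vector(delay_ms: float, loss_pct: float, jitter_ms: float,
                       usage_charge: float) -> list:
    """C_NPGW-CPGW = [CPGW_D, CPGW_L, CPGW_J, CPGW_UC] for one NPGW-to-CPGW path."""
    return [delay_ms, loss_pct, jitter_ms, usage_charge]

def scalar_cost(vector, weights=None):
    """Collapse a cost vector to one number (here, a weighted sum) so that
    candidate paths can be ranked when searching for a least cost path.
    The weights are placeholders, not values from the described system."""
    weights = weights or [1.0] * len(vector)
    return sum(v * w for v, w in zip(vector, weights))
```

A weighted sum is only one way to make vectors comparable; the system could equally rank lexicographically (delay first, then loss, then jitter) without changing the surrounding selection logic.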
As shown inFIG.1C, and by reference number125, the optimizer system105may calculate, based on the network data and the SLA constraints, second cost vectors associated with access of the network provider devices to cloud provider devices. For example, when calculating the second cost vectors associated with access of the network provider devices to the cloud provider devices, the optimizer system105may utilize the network data and the SLA constraints to calculate delays, losses, jitter, and usage charges associated with access of the network provider devices to the cloud provider devices, and to generate the second cost vectors based on the delays, the losses, the jitter, and the usage charges. In some implementations, costs associated with access of the network provider devices to the cloud provider devices may be provided by CNPGW-CPGW, costs for delays associated with access of the network provider devices to the cloud provider devices may be provided by CPGWD, costs for losses associated with access of the network provider devices to the cloud provider devices may be provided by CPGWL, costs for jitter associated with access of the network provider devices to the cloud provider devices may be provided by CPGWJ, and costs for usage charges associated with access of the network provider devices to the cloud provider devices may be provided by CPGWUC. In such implementations, the second cost vectors may be provided by the following: CNPGW-CPGW=[CPGWD, CPGWL, CPGWJ, CPGWUC]. As shown inFIG.1D, and by reference number130, the optimizer system105may calculate, based on the network data and the SLA constraints, third cost vectors associated with SLA constraints of the user device. 
For example, when calculating the third cost vectors associated with the SLA constraints of the user device, the optimizer system105may utilize the network data and the SLA constraints to calculate delays, losses, jitter, and usage charges associated with the SLA constraints of the user device, and to generate the third cost vectors based on the delays, the losses, the jitter, and the usage charges. In some implementations, costs associated with the SLA constraints of the user device may be provided by CD-SLA, costs for delays (e.g., round trip delays (RTDs)) associated with the SLA constraints of the user device may be provided by RTDD-SLA, costs for losses associated with the SLA constraints of the user device may be provided by LD-SLA, costs for jitter associated with the SLA constraints of the user device may be provided by JD-SLA, and costs for usage charges associated with the SLA constraints of the user device may be provided by UCD-SLA. In such implementations, the third cost vectors may be provided by the following: CD-SLA=[RTDD-SLA, LD-SLA, JD-SLA, UCD-SLA]. As shown inFIG.1E, and by reference number135, the optimizer system105may calculate, based on the network data and the SLA constraints, fourth cost vectors associated with the network provider devices and the cloud provider devices. For example, when calculating the fourth cost vectors associated with the network provider devices and the cloud provider devices, the optimizer system105may utilize the network data and the SLA constraints to calculate memory utilizations, processor utilizations, and bandwidth utilizations associated with the network provider devices and the cloud provider devices, and to generate the fourth cost vectors based on the memory utilizations, the processor utilizations, and the bandwidth utilizations. 
In some implementations, costs associated with the network provider devices and the cloud provider devices may be provided by CD-UTIL, costs for memory utilizations associated with the network provider devices may be provided by NPGWmem-util, costs for processor utilizations associated with the network provider devices may be provided by NPGWcpu-util, costs for bandwidth utilizations associated with the network provider devices may be provided by NPGWport-util, costs for memory utilizations associated with the cloud provider devices may be provided by CPGWmem-util, costs for processor utilizations associated with the cloud provider devices may be provided by CPGWcpu-util, and costs for bandwidth utilizations associated with the cloud provider devices may be provided by CPGWport-util. In such implementations, the fourth cost vectors may be provided by the following: CD-UTIL=[NPGWmem-util, NPGWcpu-util, NPGWport-util, CPGWmem-util, CPGWcpu-util, CPGWport-util]. As shown inFIG.1F, and by reference number140, the optimizer system105may identify, from the network provider devices, a network provider device that provides a first least cost path and satisfies a first threshold based on the first cost vectors and the third cost vectors. For example, a cost of a link between two neighboring network provider devices may be provided as CNPGW-NPGWand a cost of a link between two neighboring cloud provider devices may be provided as CCPGW-CPGW. The optimizer system105may calculate a cost associated with access of the user device to a network provider device according to Equations 1 and 2: CD-REMOTE-NPGW<CD-NPGW+(i−1)*CNPGW-NPGW (1), and CNPGW-REMOTE-CPGW<CNPGW-CPGW+(j−1)*CCPGW-CPGW (2). 
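The fourth cost vectors gate device selection on the resource utilization thresholds (α, β, γ) from the SLA constraints. A minimal sketch of that check, assuming a simple per-gateway record of utilizations (the names and representation are illustrative, not part of the described system):

```python
from dataclasses import dataclass

@dataclass
class GatewayUtilization:
    mem_util: float   # fraction of memory in use, 0.0-1.0
    cpu_util: float   # fraction of processor capacity in use, 0.0-1.0
    port_util: float  # fraction of port bandwidth in use, 0.0-1.0

def satisfies_utilization_sla(gw: GatewayUtilization,
                              alpha: float, beta: float, gamma: float) -> bool:
    """Return True if the gateway's memory, processor, and port-bandwidth
    utilizations are all below their SLA thresholds (alpha, beta, gamma)."""
    return gw.mem_util < alpha and gw.cpu_util < beta and gw.port_util < gamma
```

The same check applies to both network provider gateways (NP-GW) and cloud provider gateways (CP-GW); as the description notes, separate thresholds (α1/α2, β1/β2, γ1/γ2) could be passed per device type, or a single shared threshold used for both.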
The optimizer system105may ensure that the calculated cost satisfies Equations 3 and 4: CD-CPGW<CD-NPGW+CNPGW-CPGW<CD-SLA (3), and CD-REMOTE-CPGW<CD-NPGW+(i−1)*CNPGW-NPGW+CNPGW-CPGW+(j−1)*CCPGW-CPGW<CD-SLA (4), where CD-SLAmay be based on a usage-based charge, a delay, a jitter, and/or a packet loss agreement with a customer (e.g., the user of the user device). Equation 3 provides a cost associated with a device accessing a closest NPGW (e.g., an optimally-located NPGW for the device) in a given NPGW cluster. The optimal NPGW accesses a closest CPGW (e.g., an optimally-located CPGW for the optimal NPGW) in a given CPGW cluster. However, Equation 4 provides a cost associated with the device accessing another NPGW, other than the optimally-located NPGW, within the NPGW cluster. The other NPGW accesses another CPGW, other than the optimally-located CPGW, within that CPGW cluster. The optimizer system105may utilize Equation 1, the first cost vectors, and the third cost vectors to identify, from the network provider devices, the network provider device that provides the first least cost path and satisfies the first threshold. If none of the network provider devices satisfy Equation 1 for the first cost vectors and the third cost vectors, the optimizer system105may select a network provider device from the network provider devices (e.g., that provides a least cost path but fails to satisfy the first threshold). If multiple network provider devices satisfy Equation 1 for the first cost vectors and the third cost vectors, the optimizer system105may utilize the multiple network provider devices for the determinations described below in connection withFIG.1G. As shown inFIG.1G, and by reference number145, the optimizer system105may identify, from the cloud provider devices, a set of cloud provider devices that support the application for the network provider device. 
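The network provider device selection described in connection withFIG.1F, including the fallback when no device satisfies the threshold, might be sketched as follows. The scalar costs, identifiers, and function shape are assumptions; only the selection logic follows the description.

```python
def select_npgw(access_costs: dict, sla_bound: float) -> tuple:
    """Pick a network provider gateway for one user device.

    access_costs maps NPGW id -> scalar access cost (standing in for the
    first cost vectors); sla_bound stands in for the device SLA constraint.
    Returns (chosen gateway id, True if the SLA bound was satisfied).
    """
    # Least cost device overall, used as the fallback when nothing qualifies.
    best_overall = min(access_costs, key=access_costs.get)
    # Devices whose access cost stays below the SLA bound (cf. Equation 1).
    feasible = {gw: c for gw, c in access_costs.items() if c < sla_bound}
    if feasible:
        return min(feasible, key=feasible.get), True
    # No device satisfies the threshold: still return a least cost path.
    return best_overall, False
```

When several gateways qualify, the description carries all of them forward to the cloud-provider-side determinations; this sketch returns only the cheapest one for brevity.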
For example, the optimizer system105may identify, from the cloud provider devices, a set of cloud provider devices that support the application for the network provider device (or multiple network provider devices) determined above in connection withFIG.1F. The optimizer system105may repeat the determinations described above in connection withFIG.1Funtil the optimizer system105identifies a network provider device that supports access to a cloud provider device that has access to the application. As shown inFIG.1H, and by reference number150, the optimizer system105may identify, from the set of cloud provider devices, a cloud provider device that provides a second least cost path and satisfies a second threshold based on the second cost vectors and the fourth cost vectors. For example, the optimizer system105may identify, from the set of cloud provider devices, the cloud provider device that provides the second least cost path, satisfies the second threshold, and satisfies Equation 2 for the second cost vectors, the fourth cost vectors, and the network provider device (or multiple network provider devices) determined above in connection withFIG.1F. The optimizer system105may repeat the determinations described above in connection withFIG.1Guntil the optimizer system105identifies a network provider device that supports access to the cloud provider device that has access to the application. If none of the set of cloud provider devices satisfy Equation 2 for the second cost vectors and the fourth cost vectors, the optimizer system105may select a cloud provider device from the set of cloud provider devices (e.g., that provides a least cost path but fails to satisfy the second threshold). If multiple cloud provider devices satisfy Equation 2 for the second cost vectors and the fourth cost vectors, the optimizer system105may randomly select one of the multiple cloud provider devices. 
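The cloud provider device selection described in connection withFIGS.1G and1H, with the random tie-break among equally good devices and the fallback when the threshold cannot be met, might look like this sketch. The identifiers and costs are made up; only the selection logic follows the description.

```python
import random

def select_cpgw(costs, hosts_app, sla_bound, rng=None):
    """Pick a cloud provider gateway for the chosen NPGW.

    costs maps CPGW id -> scalar access cost (standing in for the second
    cost vectors); hosts_app is the set of CPGW ids that support the
    requested application; sla_bound stands in for the SLA constraint.
    """
    rng = rng or random.Random()
    # Restrict to devices that actually host the application (FIG. 1G).
    candidates = {gw: c for gw, c in costs.items() if gw in hosts_app}
    if not candidates:
        return None  # no CPGW supports the application for this NPGW
    feasible = [gw for gw in candidates if candidates[gw] < sla_bound]
    if not feasible:
        # Threshold unmet: still return a least cost device (FIG. 1H).
        return min(candidates, key=candidates.get)
    best_cost = min(candidates[gw] for gw in feasible)
    ties = [gw for gw in feasible if candidates[gw] == best_cost]
    return rng.choice(ties)  # random choice among equally good devices
```

Passing a seeded `random.Random` makes the tie-break reproducible in tests while preserving the randomized behavior in production.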
As shown inFIG.1I, and by reference number155, the optimizer system105may cause the application to be provided from the cloud provider device to the user device, via the network provider device. For example, when causing the application to be provided from the cloud provider device to the user device, via the network provider device, the optimizer system105may cause the network provider device and the cloud provider device to define a path (e.g., that includes the first least cost path and the second least cost path) for the user device to access the application from the cloud provider device. The optimizer system105may then cause the application to be provided from the cloud provider device to the user device, via the path. In some implementations, the path for the user device to access the application from the cloud provider device may satisfy the SLA constraints. For example, as further shown inFIG.1I, the optimizer system105may identify a particular cloud provider device (e.g., network device CPs-GWt) and a particular network provider device (e.g., network device NP-GW1x), and may cause the application to be provided from the particular cloud provider device to the user device (e.g., D12), via the particular network provider device. In this way, the optimizer system105calculates optimum customer access paths for applications provided by multi-cloud providers through private networks. For example, the optimizer system105may identify, for a user device (e.g., a customer device) attempting to access an application, an optimum network provider device (e.g., gateway) that provides a least cost access path (e.g., that provides minimum delay, jitter, loss, and/or the like) and resource (e.g., processor, memory, bandwidth, and/or the like) utilizations below a threshold level. 
The optimizer system105may identify cloud provider devices (e.g., gateways) that support the application, and may identify, for the optimum network provider device and from the cloud provider devices that support the application, an optimum cloud provider device that provides a least cost access path (e.g., minimum usage charge, delay, jitter, loss, and/or the like) and resource utilizations below a threshold level. Thus, the optimizer system105may conserve computing resources, networking resources, and/or other resources that would have otherwise been consumed by handling poor customer experiences associated with accessing the applications, inefficiently utilizing network provider devices, inefficiently utilizing cloud provider devices, handling lost traffic associated with accessing the applications, and/or the like. As indicated above,FIGS.1A-1Iare provided as an example. Other examples may differ from what is described with regard toFIGS.1A-1I. The number and arrangement of devices shown inFIGS.1A-1Iare provided as an example. In practice, there may be additional devices, fewer devices, different devices, or differently arranged devices than those shown inFIGS.1A-1I. Furthermore, two or more devices shown inFIGS.1A-1Imay be implemented within a single device, or a single device shown inFIGS.1A-1Imay be implemented as multiple, distributed devices. Additionally, or alternatively, a set of devices (e.g., one or more devices) shown inFIGS.1A-1Imay perform one or more functions described as being performed by another set of devices shown inFIGS.1A-1I. FIG.2is a diagram of an example environment200in which systems and/or methods described herein may be implemented. As shown inFIG.2, the environment200may include the optimizer system105, which may include one or more elements of and/or may execute within a cloud computing system202. The cloud computing system202may include one or more elements203-213, as described in more detail below. 
As further shown inFIG.2, the environment200may include a network220, a user device230, and/or a network device240. Devices and/or elements of the environment200may interconnect via wired connections and/or wireless connections. The cloud computing system202includes computing hardware203, a resource management component204, a host operating system (OS)205, and/or one or more virtual computing systems206. The cloud computing system202may execute on, for example, an Amazon Web Services platform, a Microsoft Azure platform, or a Snowflake platform. The resource management component204may perform virtualization (e.g., abstraction) of the computing hardware203to create the one or more virtual computing systems206. Using virtualization, the resource management component204enables a single computing device (e.g., a computer or a server) to operate like multiple computing devices, such as by creating multiple isolated virtual computing systems206from the computing hardware203of the single computing device. In this way, the computing hardware203can operate more efficiently, with lower power consumption, higher reliability, higher availability, higher utilization, greater flexibility, and lower cost than using separate computing devices. The computing hardware203includes hardware and corresponding resources from one or more computing devices. For example, the computing hardware203may include hardware from a single computing device (e.g., a single server) or from multiple computing devices (e.g., multiple servers), such as multiple computing devices in one or more data centers. As shown, the computing hardware203may include one or more processors207, one or more memories208, one or more storage components209, and/or one or more networking components210. Examples of a processor, a memory, a storage component, and a networking component (e.g., a communication component) are described elsewhere herein. 
The resource management component204includes a virtualization application (e.g., executing on hardware, such as the computing hardware203) capable of virtualizing computing hardware203to start, stop, and/or manage one or more virtual computing systems206. For example, the resource management component204may include a hypervisor (e.g., a bare-metal or Type1hypervisor, a hosted or Type2hypervisor, or another type of hypervisor) or a virtual machine monitor, such as when the virtual computing systems206are virtual machines211. Additionally, or alternatively, the resource management component204may include a container manager, such as when the virtual computing systems206are containers212. In some implementations, the resource management component204executes within and/or in coordination with a host operating system205. A virtual computing system206includes a virtual environment that enables cloud-based execution of operations and/or processes described herein using the computing hardware203. As shown, the virtual computing system206may include a virtual machine211, a container212, or a hybrid environment213that includes a virtual machine and a container, among other examples. The virtual computing system206may execute one or more applications using a file system that includes binary files, software libraries, and/or other resources required to execute applications on a guest operating system (e.g., within the virtual computing system206) or the host operating system205. Although the optimizer system105may include one or more elements203-213of the cloud computing system202, may execute within the cloud computing system202, and/or may be hosted within the cloud computing system202, in some implementations, the optimizer system105may not be cloud-based (e.g., may be implemented outside of a cloud computing system) or may be partially cloud-based. 
For example, the optimizer system105may include one or more devices that are not part of the cloud computing system202, such as a device300ofFIG.3, which may include a standalone server or another type of computing device. The optimizer system105may perform one or more operations and/or processes described in more detail elsewhere herein. The network220includes one or more wired and/or wireless networks and/or satellite networks. For example, the network220may include a cellular network, a public land mobile network (PLMN), a local area network (LAN), a wide area network (WAN), a private network, the Internet, and/or a combination of these or other types of networks. The network220enables communication among the devices of the environment200. The user device230includes one or more devices capable of receiving, generating, storing, processing, and/or providing information, as described elsewhere herein. The user device230may include a communication device and/or a computing device. For example, the user device230may include a wireless communication device, a mobile phone, a user equipment, a laptop computer, a tablet computer, a desktop computer, a gaming console, a set-top box, a wearable communication device (e.g., a smart wristwatch, a pair of smart eyeglasses, a head mounted display, or a virtual reality headset), or a similar type of device. The network device240includes one or more devices capable of receiving, processing, storing, routing, and/or providing traffic (e.g., a packet and/or other information or metadata) in a manner described herein. For example, the network device240may include a router, such as a label switching router (LSR), a label edge router (LER), an ingress router, an egress router, a provider router (e.g., a provider edge router or a provider core router), a virtual router, or another type of router. 
Additionally, or alternatively, the network device240may include a gateway, a switch, a firewall, a hub, a bridge, a reverse proxy, a server (e.g., a proxy server, a cloud server, or a data center server), a load balancer, and/or a similar device. In some implementations, the network device240may be a physical device implemented within a housing, such as a chassis. In some implementations, the network device240may be a virtual device implemented by one or more computing devices of a cloud computing environment or a data center. In some implementations, a group of network devices240may be a group of data center nodes that are used to route traffic flow through a network. In some implementations, the network device240may include a base station, such as an aggregated base station, a disaggregated base station, an integrated access and backhaul (IAB) node, a relay node, and/or one or more components thereof. The base station may refer to a central unit (CU), a distributed unit (DU), a radio unit (RU), a Near-Real Time (Near-RT) RAN Intelligent Controller (RIC), or a Non-Real Time (Non-RT) RIC, or a combination thereof. The number and arrangement of devices and networks shown inFIG.2are provided as an example. In practice, there may be additional devices and/or networks, fewer devices and/or networks, different devices and/or networks, or differently arranged devices and/or networks than those shown inFIG.2. Furthermore, two or more devices shown inFIG.2may be implemented within a single device, or a single device shown inFIG.2may be implemented as multiple, distributed devices. Additionally, or alternatively, a set of devices (e.g., one or more devices) of the environment200may perform one or more functions described as being performed by another set of devices of the environment200. FIG.3is a diagram of example components of a device300, which may correspond to the optimizer system105, the user device230, and/or the network device240. 
In some implementations, the optimizer system105, the user device230, and/or the network device240may include one or more devices300and/or one or more components of the device300. As shown inFIG.3, the device300may include a bus310, a processor320, a memory330, an input component340, an output component350, and a communication component360. The bus310includes one or more components that enable wired and/or wireless communication among the components of the device300. The bus310may couple together two or more components ofFIG.3, such as via operative coupling, communicative coupling, electronic coupling, and/or electric coupling. The processor320includes a central processing unit, a graphics processing unit, a microprocessor, a controller, a microcontroller, a digital signal processor, a field-programmable gate array, an application-specific integrated circuit, and/or another type of processing component. The processor320is implemented in hardware, firmware, or a combination of hardware and software. In some implementations, the processor320includes one or more processors capable of being programmed to perform one or more operations or processes described elsewhere herein. The memory330includes volatile and/or nonvolatile memory. For example, the memory330may include random access memory (RAM), read only memory (ROM), a hard disk drive, and/or another type of memory (e.g., a flash memory, a magnetic memory, and/or an optical memory). The memory330may include internal memory (e.g., RAM, ROM, or a hard disk drive) and/or removable memory (e.g., removable via a universal serial bus connection). The memory330may be a non-transitory computer-readable medium. The memory330stores information, instructions, and/or software (e.g., one or more software applications) related to the operation of the device300. In some implementations, the memory330includes one or more memories that are coupled to one or more processors (e.g., the processor320), such as via the bus310. 
The input component340enables the device300to receive input, such as user input and/or sensed input. For example, the input component340may include a touch screen, a keyboard, a keypad, a mouse, a button, a microphone, a switch, a sensor, a global positioning system sensor, an accelerometer, a gyroscope, and/or an actuator. The output component350enables the device300to provide output, such as via a display, a speaker, and/or a light-emitting diode. The communication component360enables the device300to communicate with other devices via a wired connection and/or a wireless connection. For example, the communication component360may include a receiver, a transmitter, a transceiver, a modem, a network interface card, and/or an antenna. The device300may perform one or more operations or processes described herein. For example, a non-transitory computer-readable medium (e.g., the memory330) may store a set of instructions (e.g., one or more instructions or code) for execution by the processor320. The processor320may execute the set of instructions to perform one or more operations or processes described herein. In some implementations, execution of the set of instructions, by one or more processors320, causes the one or more processors320and/or the device300to perform one or more operations or processes described herein. In some implementations, hardwired circuitry may be used instead of or in combination with the instructions to perform one or more operations or processes described herein. Additionally, or alternatively, the processor320may be configured to perform one or more operations or processes described herein. Thus, implementations described herein are not limited to any specific combination of hardware circuitry and software. The number and arrangement of components shown inFIG.3are provided as an example. The device300may include additional components, fewer components, different components, or differently arranged components than those shown inFIG.3. 
Additionally, or alternatively, a set of components (e.g., one or more components) of the device 300 may perform one or more functions described as being performed by another set of components of the device 300. FIG. 4 depicts a flowchart of an example process 400 for calculating optimum customer access paths for applications provided by multi-cloud providers through private networks. In some implementations, one or more process blocks of FIG. 4 may be performed by a device (e.g., the optimizer system 105). In some implementations, one or more process blocks of FIG. 4 may be performed by another device or a group of devices separate from or including the device. Additionally, or alternatively, one or more process blocks of FIG. 4 may be performed by one or more components of the device 300, such as the processor 320, the memory 330, the input component 340, the output component 350, and/or the communication component 360. As shown in FIG. 4, process 400 may include receiving a request for an application from a user device (block 410). For example, the device may receive a request for an application from a user device, as described above. As further shown in FIG. 4, process 400 may include receiving network data for a network provider and a cloud provider and SLA constraints (block 420). For example, the device may receive network data for a network provider and a cloud provider associated with the user device, and service level agreement (SLA) constraints associated with the user device, the network provider, and the cloud provider, as described above.
In some implementations, the network data includes data identifying one or more of delays associated with access of the user device to the plurality of network provider devices, losses associated with access of the user device to the plurality of network provider devices, jitter associated with access of the user device to the plurality of network provider devices, memory utilizations associated with the plurality of network provider devices and the plurality of cloud provider devices, processor utilizations associated with the plurality of network provider devices and the plurality of cloud provider devices, bandwidth utilizations associated with the plurality of network provider devices and the plurality of cloud provider devices, delays associated with access of the plurality of network provider devices to the plurality of cloud provider devices, losses associated with access of the plurality of network provider devices to the plurality of cloud provider devices, jitter associated with access of the plurality of network provider devices to the plurality of cloud provider devices, and usage charges associated with access of the plurality of network provider devices to the plurality of cloud provider devices. In some implementations, the SLA constraints include constraints associated with one or more of usage charges, round trip/one-way delays for access of the user device to the plurality of cloud provider devices, one-way losses for access of the user device to the plurality of cloud provider devices, one-way jitter for access of the user device to the plurality of cloud provider devices, memory utilizations for the plurality of network provider devices and the plurality of cloud provider devices, processor utilizations for the plurality of network provider devices and the plurality of cloud provider devices, or bandwidth utilizations for the plurality of network provider devices and the plurality of cloud provider devices.
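The network data and SLA constraints described above can be pictured as simple structured inputs. The following is an illustrative sketch only; the dictionary shapes, key names, and numeric values are assumptions introduced here, not part of the patent.

```python
# Hypothetical shapes for the inputs of block 420.
network_data = {
    # Per network provider device: user-device access metrics.
    "user_to_network": {
        "np_gw_1": {"delay_ms": 8.0, "loss_pct": 0.1, "jitter_ms": 1.5},
    },
    # Per (network provider device, cloud provider device) pair: transit metrics
    # plus usage charges for access to the cloud provider device.
    "network_to_cloud": {
        ("np_gw_1", "cp_gw_a"): {
            "delay_ms": 20.0, "loss_pct": 0.2, "jitter_ms": 2.0, "usage_charge": 0.05,
        },
    },
    # Per device: memory, processor, and bandwidth utilizations.
    "utilization": {
        "np_gw_1": {"memory": 0.4, "processor": 0.6, "bandwidth": 0.3},
    },
}

# SLA constraints bound the end-to-end path characteristics.
sla_constraints = {
    "max_one_way_delay_ms": 50.0,
    "max_one_way_loss_pct": 0.5,
    "max_one_way_jitter_ms": 5.0,
    "max_usage_charge": 0.10,
    "max_utilization": 0.8,
}
```

A real implementation would populate these structures from measurement and billing systems; the sketch only fixes a vocabulary for the cost-vector examples that follow the process description.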
In some implementations, the network provider provides a private network that includes the plurality of network provider devices. As further shown in FIG. 4, process 400 may include calculating a plurality of cost vectors associated with defining a path for the user device to access the application (block 430). For example, the device may calculate, based on the network data and the SLA constraints, a plurality of cost vectors associated with defining a path for the user device to access the application, as described above. In some implementations, calculating, based on the network data and the SLA constraints, the plurality of cost vectors associated with defining the path for the user device to access the application includes calculating first cost vectors associated with access of the user device to the plurality of network provider devices, calculating second cost vectors associated with access of the plurality of network provider devices to the plurality of cloud provider devices, calculating third cost vectors associated with SLA constraints of the user device, and calculating fourth cost vectors associated with the plurality of network provider devices and the plurality of cloud provider devices. In some implementations, calculating the first cost vectors associated with access of the user device to the plurality of network provider devices includes calculating delays, losses, and jitter associated with access of the user device to the plurality of network provider devices, and generating the first cost vectors based on the delays, the losses, and the jitter.
In some implementations, calculating the second cost vectors associated with access of the plurality of network provider devices to the plurality of cloud provider devices includes calculating delays, losses, jitter, and usage charges associated with access of the plurality of network provider devices to the plurality of cloud provider devices, and generating the second cost vectors based on the delays, the losses, the jitter, and the usage charges. In some implementations, calculating the third cost vectors associated with the SLA constraints of the user device includes calculating delays, losses, jitter, and usage charges associated with the SLA constraints of the user device, and generating the third cost vectors based on the delays, the losses, the jitter, and the usage charges. In some implementations, calculating the fourth cost vectors associated with the plurality of network provider devices and the plurality of cloud provider devices includes calculating memory utilizations, processor utilizations, and bandwidth utilizations associated with the plurality of network provider devices and the plurality of cloud provider devices, and generating the fourth cost vectors based on the memory utilizations, the processor utilizations, and the bandwidth utilizations. As further shown in FIG. 4, process 400 may include identifying a network provider device that provides a first least cost path and satisfies a first threshold (block 440). For example, the device may identify, from a plurality of network provider devices, a network provider device that provides a first least cost path and satisfies a first threshold based on the plurality of cost vectors, as described above. As further shown in FIG. 4, process 400 may include identifying a set of cloud provider devices that support the application for the network provider device (block 450).
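The cost-vector calculations of block 430 can be sketched as follows. This is a minimal illustration under assumptions: the `CostVector` fields, the helper names, and the input dictionary shapes are hypothetical, not the patent's implementation.

```python
from dataclasses import dataclass

@dataclass
class CostVector:
    # Hypothetical per-path cost vector; field names are illustrative.
    delay_ms: float = 0.0
    loss_pct: float = 0.0
    jitter_ms: float = 0.0
    usage_charge: float = 0.0

def first_cost_vectors(access_metrics):
    # First cost vectors: user-device access to each network provider device,
    # generated from delays, losses, and jitter.
    return {dev: CostVector(m["delay_ms"], m["loss_pct"], m["jitter_ms"])
            for dev, m in access_metrics.items()}

def second_cost_vectors(transit_metrics):
    # Second cost vectors: network-provider access to each cloud provider device,
    # which additionally carry usage charges.
    return {pair: CostVector(m["delay_ms"], m["loss_pct"],
                             m["jitter_ms"], m["usage_charge"])
            for pair, m in transit_metrics.items()}

def fourth_cost_vectors(utilizations):
    # Fourth cost vectors: memory, processor, and bandwidth utilizations
    # of the network provider and cloud provider devices.
    return {dev: (u["memory"], u["processor"], u["bandwidth"])
            for dev, u in utilizations.items()}
```

The third cost vectors (the SLA constraints themselves, expressed in the same delay/loss/jitter/charge terms) would be built analogously from the subscriber's service level agreement.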
For example, the device may identify, from a plurality of cloud provider devices, a set of cloud provider devices that support the application for the network provider device, as described above. In some implementations, the plurality of network provider devices are gateways associated with the network provider, and the plurality of cloud provider devices are gateways associated with the cloud provider. As further shown in FIG. 4, process 400 may include identifying a cloud provider device that provides a second least cost path and satisfies a second threshold (block 460). For example, the device may identify, from the set of cloud provider devices, a cloud provider device that provides a second least cost path and satisfies a second threshold based on the plurality of cost vectors, as described above. In some implementations, identifying the cloud provider device that provides the second least cost path and satisfies the second threshold includes randomly selecting the cloud provider device from the set of cloud provider devices. As further shown in FIG. 4, process 400 may include causing the application to be provided from the cloud provider device to the user device, via the network provider device (block 470). For example, the device may cause the application to be provided from the cloud provider device to the user device, via the network provider device, as described above. In some implementations, causing the application to be provided from the cloud provider device to the user device, via the network provider device includes causing the network provider device and the cloud provider device to define the path for the user device to access the application from the cloud provider device, and causing the application to be provided from the cloud provider device to the user device, via the path. In some implementations, the path for the user device to access the application from the cloud provider device satisfies the SLA constraints.
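Blocks 440 through 460 amount to two threshold-filtered least-cost selections: first over network provider gateways, then over the cloud provider gateways that support the application. A sketch, under the assumption that each candidate's cost vector has already been reduced to a single scalar and that "satisfies a threshold" means the cost does not exceed it (all names and numbers below are hypothetical):

```python
def select_least_cost(candidates, costs, threshold):
    """Pick the candidate with the lowest scalar cost that also satisfies the
    threshold; return None if no candidate qualifies."""
    eligible = [c for c in candidates if costs[c] <= threshold]
    return min(eligible, key=costs.get) if eligible else None

# Block 440: choose a network provider device (gateway) under the first threshold.
np_costs = {"np_gw_1": 12.0, "np_gw_2": 7.5, "np_gw_3": 9.1}
best_np = select_least_cost(np_costs, np_costs, threshold=10.0)  # -> "np_gw_2"

# Blocks 450-460: restrict to cloud gateways that support the application,
# then repeat the selection under the second threshold.
supports_app = {"cp_gw_a": True, "cp_gw_b": False, "cp_gw_c": True}
cp_costs = {"cp_gw_a": 4.2, "cp_gw_b": 1.0, "cp_gw_c": 3.8}
cloud_set = [c for c, ok in supports_app.items() if ok]
best_cp = select_least_cost(cloud_set, cp_costs, threshold=5.0)  # -> "cp_gw_c"
```

Note that the selected pair (`best_np`, `best_cp`) then defines the path of block 470; the variant mentioned above, randomly selecting among qualifying cloud provider devices, would replace the `min` with a random choice over `eligible`.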
Although FIG. 4 shows example blocks of process 400, in some implementations, process 400 may include additional blocks, fewer blocks, different blocks, or differently arranged blocks than those depicted in FIG. 4. Additionally, or alternatively, two or more of the blocks of process 400 may be performed in parallel. As used herein, the term “component” is intended to be broadly construed as hardware, firmware, or a combination of hardware and software. It will be apparent that systems and/or methods described herein may be implemented in different forms of hardware, firmware, and/or a combination of hardware and software. The actual specialized control hardware or software code used to implement these systems and/or methods is not limiting of the implementations. Thus, the operation and behavior of the systems and/or methods are described herein without reference to specific software code—it being understood that software and hardware can be used to implement the systems and/or methods based on the description herein. As used herein, satisfying a threshold may, depending on the context, refer to a value being greater than the threshold, greater than or equal to the threshold, less than the threshold, less than or equal to the threshold, equal to the threshold, not equal to the threshold, or the like. To the extent the aforementioned implementations collect, store, or employ personal information of individuals, it should be understood that such information shall be used in accordance with all applicable laws concerning protection of personal information. Additionally, the collection, storage, and use of such information can be subject to consent of the individual to such activity, for example, through well known “opt-in” or “opt-out” processes as can be appropriate for the situation and type of information.
Storage and use of personal information can be in an appropriately secure manner reflective of the type of information, for example, through various encryption and anonymization techniques for particularly sensitive information. Even though particular combinations of features are recited in the claims and/or disclosed in the specification, these combinations are not intended to limit the disclosure of various implementations. In fact, many of these features may be combined in ways not specifically recited in the claims and/or disclosed in the specification. Although each dependent claim listed below may directly depend on only one claim, the disclosure of various implementations includes each dependent claim in combination with every other claim in the claim set. As used herein, a phrase referring to “at least one of” a list of items refers to any combination of those items, including single members. As an example, “at least one of: a, b, or c” is intended to cover a, b, c, a-b, a-c, b-c, and a-b-c, as well as any combination with multiple of the same item. No element, act, or instruction used herein should be construed as critical or essential unless explicitly described as such. Also, as used herein, the articles “a” and “an” are intended to include one or more items and may be used interchangeably with “one or more.” Further, as used herein, the article “the” is intended to include one or more items referenced in connection with the article “the” and may be used interchangeably with “the one or more.” Furthermore, as used herein, the term “set” is intended to include one or more items (e.g., related items, unrelated items, or a combination of related and unrelated items), and may be used interchangeably with “one or more.” Where only one item is intended, the phrase “only one” or similar language is used. Also, as used herein, the terms “has,” “have,” “having,” or the like are intended to be open-ended terms. 
Further, the phrase “based on” is intended to mean “based, at least in part, on” unless explicitly stated otherwise. Also, as used herein, the term “or” is intended to be inclusive when used in a series and may be used interchangeably with “and/or,” unless explicitly stated otherwise (e.g., if used in combination with “either” or “only one of”). In the preceding specification, various example embodiments have been described with reference to the accompanying drawings. It will, however, be evident that various modifications and changes may be made thereto, and additional embodiments may be implemented, without departing from the broader scope of the invention as set forth in the claims that follow. The specification and drawings are accordingly to be regarded in an illustrative rather than restrictive sense.
DETAILED DESCRIPTION OF THE EMBODIMENTS

Broadband service operators conventionally provide broadband communication service on a location basis, i.e. they provide service to a particular location, such as to a residence or business. Multiple clients at the location may share the broadband communication service. For example, an operator may provide broadband communication service to a residence, and the service may be shared among multiple people at the residence. Each person, in turn, may have two or more clients, such as mobile phones, computers, entertainment devices, medical devices, security devices, etc., resulting in multiple clients sharing the broadband communication service at the residence. As another example, an operator may provide broadband communication service to a business, and multiple clients at the business, such as computers, voice over internet protocol (VoIP) telephones, conferencing applications, etc., may share the broadband communication service at the business. Providing broadband communication service on a location basis may result in suboptimal performance and/or suboptimal resource allocation. For example, some clients at a location may receive insufficient broadband communication service, e.g. broadband communication service having insufficient bandwidth and/or unacceptable latency, while other less-demanding clients at the location may receive a higher-level of broadband communication service than needed. Additionally, one client's use of shared broadband communication service may interfere with another client's use of the shared broadband communication service. For example, one client may use a large amount of communication bandwidth, leaving insufficient communication bandwidth for other clients. Furthermore, one or more parties sharing broadband communication service at a location may have to compromise on what tier of broadband communication service to subscribe to.
For example, one party may desire a high-performance broadband communication service tier, while another party may desire a low-cost broadband communication service tier. Moreover, providing broadband communication service on a location basis does not enable a party to receive a consistent broadband service level as the party roams among locations. For example, a person subscribing to 1 Gb/s broadband communication service at their residence will not receive such communication bandwidth when using a communication network at a friend's residence, if the friend subscribes to 10 Mb/s broadband communication service. Disclosed herein are systems and methods for providing individualized communication service, which may at least partially overcome one or more of the above-discussed problems. The new systems and methods can provide communication service on a client basis, i.e. to a particular client, instead of, or in addition to, on a location basis. Therefore, certain embodiments of the new systems and methods can provide individualized communication service for two or more clients at a given location, potentially enabling communication service to be optimized for each client. In particular embodiments, each client is assigned a service profile specifying one or more attributes of the client's communication service, and data associated with the client is transported on both a local communication network and an operator communication network in accordance with the service profile. For example, a client subscribing to high bandwidth service may be assigned a service profile specifying a high-bandwidth tier, and data associated with the client may be transported by a local communication network and an operator communication network in accordance with the high-bandwidth tier, i.e. with one or more attributes specified by the high-bandwidth tier.
As another example, a client subscribing to a low latency service may be assigned a service profile specifying a low-latency tier, and data associated with the client may be transported by a local communication network and an operator communication network in accordance with the low-latency tier, i.e. with one or more attributes specified by the low-latency tier. Additionally, some embodiments enable a client to roam among different communication networks while receiving a consistent or analogous communication service level. FIG. 1 is a block diagram of a communication system 100, which is one embodiment of the new communication systems for providing individualized communication service. Communication system 100 includes N local communication networks 102 communicatively coupled to an operator communication network 104, where N is an integer greater than one. In some alternate embodiments, however, communication system 100 only includes a single local communication network 102. In this document, specific instances of an item may be referred to by use of a numeral in parentheses (e.g., local communication network 102(1)) while numerals without parentheses refer to any such item (e.g., local communication networks 102). Details of local communication networks 102(2)-102(N) are not shown in FIG. 1. In some embodiments, each local communication network 102 is implemented at a single respective location, such as at a single building or a single outdoor site. However, in some other embodiments, at least one local communication network 102 spans multiple buildings and/or multiple outdoor sites, such as a plurality of buildings on a campus. Each local communication network 102 of system 100 need not have the same configuration. Local communication network 102(1) includes shared network communication equipment 106 and one or more clients 108. Clients 108 may be tangible or intangible.
For example, one client 108 may be a tangible information technology device, and another client 108 may be an intangible application running on an information technology device. Examples of clients 108 include, but are not limited to, a mobile telephone, a computer, a set-top device, a data storage device, an Internet of Things (IoT) device, an entertainment device, a computer networking device, a smartwatch, a wearable device with wireless capability, a medical device, a wireless access device (including, for example, an evolved NodeB (eNB), a next generation NodeB (gNB), an Institute of Electrical and Electronics Engineers (IEEE) 802.11-based wireless access point, an Integrated Access and Backhaul (IAB) access point, a microcell, a picocell, a femtocell, a macrocell, and an IEEE 802.11-based application), an application with communication capability, a software or firmware element with communication capability, etc. FIG. 1 depicts local communication network 102 including five clients 108(1)-108(5), where client 108(1) is a medical device, client 108(2) is a streaming content application running on a tablet computer 110, client 108(3) is an IoT device in the form of a smart lightbulb, client 108(4) is a mobile telephone, and client 108(5) is another mobile telephone. The number of clients 108 and/or the types of clients 108 in local communication network 102(1) may vary without departing from the scope hereof. Additionally, the number and/or type of clients 108 may vary among local communication network 102 instances. Shared network equipment 106 is shared by all clients 108 of local communication network 102(1), and shared network equipment 106 communicatively couples clients 108 to operator communication network 104. Additionally, in certain embodiments, shared network equipment 106 is capable of transferring data between two or more clients 108 of local communication network 102 without assistance of operator communication network 104.
In some embodiments, shared network equipment 106 includes one or more of a switch, a wireless access point, a repeater, a range extender, a hub, a router, electrical cable, and optical cable. For example, in some embodiments, shared network equipment includes a switch (not shown) and one or more Ethernet electrical or optical cables (not shown) communicatively coupling one or more clients 108 to the switch. As another example, FIG. 2 is a block diagram of a wireless communication system 200, which is an embodiment of communication system 100 where shared network equipment 106 includes a wireless access point 206. In some embodiments, wireless access point 206 includes one or more of an IEEE 802.11-based wireless access point, a fourth-generation (4G) wireless access point, a fifth-generation (5G) new radio (NR) wireless access point, and a sixth-generation (6G) wireless access point. Shared network equipment 106 may vary among local communication network 102 instances. Referring again to FIG. 1, in certain embodiments, shared network equipment 106 is configured to establish at least two subnetworks 112 for transferring data between respective client devices 108 and operator communication network 104. For example, FIG. 1 depicts local communication network 102(1) establishing four subnetworks 112, where client 108(1) is a member of subnetwork 112(1), client 108(2) is a member of subnetwork 112(2), client 108(3) is a member of subnetwork 112(3), and clients 108(4) and 108(5) are members of subnetwork 112(4). In certain embodiments, each subnetwork 112 is logically separate from each other subnetwork 112, and in some embodiments, shared network equipment 106 is capable of configuring two or more subnetworks 112 to have different capabilities and/or attributes.
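The subnetwork arrangement of FIG. 1 can be pictured as a small membership table maintained by the shared network equipment. This sketch is purely illustrative; the table layout and lookup helper are assumptions, and the per-subnetwork attributes shown are examples of the capabilities discussed in the surrounding text.

```python
# Hypothetical subnetwork table mirroring FIG. 1: each subnetwork is logically
# separate and may carry different attributes (bandwidth, latency, QoS, ...).
subnetworks = {
    "112(1)": {"members": ["108(1)"], "attributes": {"low_latency": True}},
    "112(2)": {"members": ["108(2)"], "attributes": {"bandwidth": "high"}},
    "112(3)": {"members": ["108(3)"], "attributes": {"bandwidth": "low"}},
    "112(4)": {"members": ["108(4)", "108(5)"], "attributes": {}},
}

def subnetwork_of(client_id):
    # Return the subnetwork a client belongs to; each client joins exactly
    # one subnetwork in this example.
    for net_id, net in subnetworks.items():
        if client_id in net["members"]:
            return net_id
    return None
```

Data to or from a given client would then be transported with the attributes of that client's subnetwork, which is what allows clients 108(4) and 108(5) to share one set of capabilities while client 108(1) receives another.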
For example, in some embodiments, two or more subnetworks 112 have different communication bandwidth, communication latency, communication quality of service (QoS), communication volume, security service, data origination address controls, data destination address controls, parental control service, time of day restrictions, and/or number of connected client 108 restrictions. In particular embodiments, subnetworks 112 are implemented at least partially using one or more techniques disclosed in United States Patent Application Publication Number 2019/0036909 to Cable Television Laboratories, Inc., which is incorporated herein by reference. Operator communication network 104 is configured to transport data between local communication network 102 and one or more nodes, such as an origination node, a destination node, and/or an intermediate node. An origination node is a node which provides data to a client 108, and a destination node is a node which receives data from a client 108. An intermediate node is a node between an origination node and a destination node, e.g. a node at a peering location. A given node may be both an origination node and a destination node. Additionally, a given node may be an intermediate node as well as an origination node and/or a destination node. Origination, destination, and intermediate nodes may be located, for example, in network resources 114, in operator communication network 104, and/or in a local communication network 102 instance. In some embodiments, network resources 114 include, but are not limited to, the public Internet, voice communication applications, conferencing applications, and/or content delivery applications. Although network resources 114 are illustrated as being separate from operator communication network 104, in certain embodiments, one or more elements of network resources 114 are part of operator communication network 104. Network resources 114 need not be part of system 100.
Operator communication network 104 includes (1) a respective access device 116 for each local communication network 102, (2) a network hub 118, and (3) a router 120. Access device 116(1) is communicatively coupled with shared network equipment 106, and access device 116(1) interfaces local communication network 102(1) with operator communication network 104. In some embodiments, at least one access device 116 is a modem, such as a cable modem (CM) or a DSL modem, and in certain embodiments, at least one access device 116 is an optical network terminal (ONT) or an optical network unit (ONU). Access devices 116 need not all have the same configuration. For example, in some embodiments, access device 116(1) is a modem, and access device 116(2) is an ONT, or vice versa. In some embodiments, operator communication network 104 is configured to transport data at least partially in accordance with one or more of a DOCSIS communication protocol, a DSL communication protocol, an optical communication protocol, and a wireless communication protocol. Examples of possible optical communication protocols include, but are not limited to, an ethernet passive optical network (EPON) communication protocol, a radio frequency over glass (RFOG) communication protocol, and a gigabit passive optical network (GPON) communication protocol. Examples of possible wireless communication protocols include, but are not limited to, an IEEE 802.11-based wireless communication protocol, a 4G wireless communication protocol, a 5G NR wireless communication protocol, and a 6G wireless access communication protocol. Although access devices 116 are depicted as being separate from local communication networks 102, in some embodiments, at least one access device 116 shares one or more elements with a respective local communication network 102. Additionally, in certain embodiments, an access device 116 is co-packaged with shared network equipment 106 of a respective local communication network 102.
For example, in particular embodiments, access device 116(1) and shared network equipment 106 are co-packaged as a premises gateway device. Each access device 116 is communicatively coupled to network hub 118 via a communication link 122. Communication links 122 include, for example, coaxial electrical cable, twisted pair electrical cable, optical cable, or a combination of two or more of the aforementioned cables. For example, in particular embodiments, at least one communication link 122 is a hybrid fiber and coaxial cable (HFC) communication link, including optical cable connected between network hub 118 and a fiber node (not shown), and coaxial electrical cable connected between the fiber node and an access device 116 instance. One or more communication links 122 may include a wireless communication link in place of, or in addition to, an electrical or optical cable. Two or more access devices 116 may share a common communication link 122. In some embodiments, network hub 118 includes a wireless or wired relay node, an Ethernet switch, a cable modem termination system (CMTS), an optical line terminal (OLT), a wireless communication termination system (e.g. a packet core or an evolved packet core), a wireless relay system, or a digital subscriber line access multiplexer (DSLAM). Although network hub 118 is depicted as a single element, in some embodiments, network hub 118 includes a plurality of elements, such as a central element and one or more remote elements. Router 120 is configured to route data between network hub 118 and one or more nodes, including but not limited to origination nodes, destination nodes, and intermediate nodes. Such nodes, for example, are part of network resources 114, local communication networks 102, and/or operator communication network 104. Operator communication network 104 may be modified to include additional or alternative elements without departing from the scope hereof. For example, in some alternate embodiments, router 120 is omitted.
As another example, in some embodiments, operator communication network 104 further includes one or more content delivery servers (not shown). Local communication network 102(1) and/or operator communication network 104 are configured to assign each client 108 a service profile specifying one or more attributes of the client's communication service, such as to provide the client individualized communication service or default communication service. Some possible examples of attributes specified by a service profile include, but are not limited to, one or more of communication bandwidth (e.g., maximum communication bandwidth or minimum communication bandwidth), communication latency (e.g., maximum communication latency), communication quality of service (QoS), communication volume (e.g., maximum amount of data that can be transported during a specified time), security service, data origination address controls, data destination address controls, parental control service, and/or time of day restrictions associated with the client. QoS prioritizes transportation of data packets that are high-priority, e.g. time sensitive data packets, over data packets that are not high priority. Security service includes, for example, one or more services to protect privacy and/or integrity of data associated with a client 108. Security service may alternately or additionally include one or more services to protect a client 108 from unauthorized access. Examples of possible security services include, but are not limited to, an encryption service for encrypting data associated with a client 108, and a firewall service for helping prevent unauthorized access to a client 108. Data origination address controls regulate what node or nodes can provide data for a client.
For example, data origination address controls may specify what node(s) are permitted to provide data to a client 108, and/or data origination address controls may specify what node(s) are not permitted to provide data to a client 108. Data destination address controls regulate what node or nodes can receive data from a client 108. For example, data destination address controls may specify what node(s) a client 108 is permitted to provide data to, and/or data destination address controls may specify what node(s) a client 108 is not permitted to provide data to. Parental control service enables one party, such as a parent, to restrict another party, such as a child, from using one or more aspects of a client 108. Time of day restrictions restrict service available to a client 108, for example, according to time, date, and/or day of week. FIGS. 3-6 illustrate one set of possible service profiles for clients 108 of local communication network 102(1). Specifically, FIG. 3 is a schematic diagram illustrating an individualized service profile 300, which is one possible service profile for client 108(1), for providing individualized communication service to the client. Client 108(1), which is a medical device, does not require high communication bandwidth, and service profile 300 therefore specifies low communication bandwidth for client 108(1). However, speed and reliability of client 108(1) are important, and service profile 300 accordingly specifies low communication latency and communication QoS for client 108(1), to promote fast and reliable communication. Additionally, data associated with client 108(1) must remain secure, because the data may include personal information. Accordingly, service profile 300 specifies security service for client 108(1). In the example of FIG. 3, client 108(1) is intended to communicate with only one node, e.g.
a node associated with a medical service provider, where the node has an address “Address 1.” Therefore, service profile300species data origination address controls and data destination address controls. Specifically, service profile300species that (a) client108(1) is permitted to receive data from only a node at Address 1, and (2) client108(1) is permitted to provide data to only the node at Address 1. In some embodiments, Address 1 is an Internet Protocol (IP) version 4 address or an IP version 6 address. In view of client108(1) being a medical device, no parental controls are necessary, and service profile108therefore specifies no parental control service for client108(1). FIG.4is a schematic diagram illustrating a service profile400, which is one possible service profile for client108(2), for providing individualized communication service to the client. Client108(2) is a streaming content application which requires high communication bandwidth, and service profile400therefore specifies high communication bandwidth for client108(2). Best effort communication service will suffice for client108(2), and service profile400therefore specifies that low communication latency and communication QoS are not required for the client. In the example ofFIG.4, the party associated with client108(2) is concerned about security and possible inappropriate use of client108(2) by children, and service profile400therefore specifies both security service and parental control service for client108(2). Finally, no restrictions on origination or destination of data are desired, and security profile400therefore does not specify any origination or destination address controls for client108(2). FIG.5is a schematic diagram illustrating a service profile500, which is one possible service profile for client108(3), for providing individualized communication service to the client. 
Client108(3) is a smart light bulb which does not require high-performance communication service, security service, or parental controls. Service profile500therefore specifies low communication bandwidth and that low communication latency, communication QoS, security service, and parental controls are not required for client108(3). Client108(3) is intended to communicate with only one node, e.g. a node associated with a light bulb supplier, where the node has an address “Address 2.” Therefore, service profile500specifies Address 2 for both origination address controls and destination address controls. Consequently, data generated by client108(3) may only be transported to a node at Address 2, and client108(3) may only receive data from the node at Address 2, according to service profile500. FIG.6is a schematic diagram illustrating a service profile600, which is an example of a possible default service profile, i.e. a service profile that is assigned to clients which will not receive individualized communication service. In this example, clients108(4) and108(5) will not receive individualized communication service, and each of clients108(4) and108(5) is therefore assigned default service profile600. Service profile600specifies medium communication bandwidth and no special services for an associated client. It should be appreciated that clients108(1)-108(5) could be assigned service profiles other than those ofFIGS.3-6. Furthermore, a service profile need not specify the same attributes as those ofFIGS.3-6. For example,FIG.7is a schematic diagram of a service profile700, which is another possible service profile for client108(1). Service profile700is like service profile300ofFIG.3, but service profile700does not include fields for origination address controls or destination address controls. As another example,FIG.8is a schematic diagram of a service profile800, which is another possible service profile for client108(1).
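The example profiles of FIGS. 3-6 might be written out as plain dictionaries, as in the sketch below. The names PROFILE_300 through PROFILE_600 simply echo the figure reference numerals, and the field names are assumptions; None encodes "unrestricted" for address controls:

```python
# Hypothetical encodings of the example service profiles of FIGS. 3-6.
PROFILE_300 = {  # medical device client 108(1)
    "bandwidth": "low", "low_latency": True, "qos": True,
    "security": True, "parental_controls": False,
    "origination": ["Address 1"], "destination": ["Address 1"],
}
PROFILE_400 = {  # streaming content application client 108(2)
    "bandwidth": "high", "low_latency": False, "qos": False,
    "security": True, "parental_controls": True,
    "origination": None, "destination": None,  # None = unrestricted
}
PROFILE_500 = {  # smart light bulb client 108(3)
    "bandwidth": "low", "low_latency": False, "qos": False,
    "security": False, "parental_controls": False,
    "origination": ["Address 2"], "destination": ["Address 2"],
}
PROFILE_600 = {  # default profile, assigned to clients 108(4) and 108(5)
    "bandwidth": "medium", "low_latency": False, "qos": False,
    "security": False, "parental_controls": False,
    "origination": None, "destination": None,
}
ASSIGNMENTS = {"108(1)": PROFILE_300, "108(2)": PROFILE_400,
               "108(3)": PROFILE_500, "108(4)": PROFILE_600,
               "108(5)": PROFILE_600}
```

Note that both clients without individualized service share the single default profile object, mirroring the description of FIG. 6.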
Service profile800is like service profile300ofFIG.3, but service profile800includes a surge bandwidth field in place of a parental control service field. The surge bandwidth field specifies whether client108(1) is to be provided surge communication bandwidth, i.e. a higher-than-normal communication bandwidth for a limited amount of time, such as to promote high performance during short-term peak demands. Local communication network102(1) and operator communication network104are each configured to transport data associated with a client108according to a service profile of the client108, i.e. to transport the data in accordance with attributes specified by the service profile, to provide individualized communication service to the client. For example, in one embodiment, local communication network102(1) and operator communication network104are configured to (a) transport data124(1) associated with client108(1) in accordance with service profile300ofFIG.3, (b) transport data124(2) associated with client108(2) in accordance with service profile400ofFIG.4, (c) transport data124(3) associated with client108(3) in accordance with service profile500ofFIG.5, (d) transport data124(4) associated with client108(4) in accordance with service profile600ofFIG.6, and (e) transport data124(5) associated with client108(5) in accordance with service profile600ofFIG.6. More specifically, in the above example, local communication network102(1) and operator communication network104are each configured to transport data124(1) associated with client108(1) with low maximum communication bandwidth, low communication latency, communication QoS, and security service, as specified in individualized service profile300. Additionally, local communication network102(1) and operator communication network104are each configured to limit client108(1) to communicating with a node at Address 1, as further specified in individualized service profile300. 
Furthermore, local communication network102(1) and operator communication network104are each configured to transport data124(2) associated with client108(2) at high maximum bandwidth and with security and parental control services, as specified in individualized service profile400. Moreover, local communication network102(1) and operator communication network104are each configured to transport data124(3) between client108(3) and a node at Address 2 with no special services, as specified in individualized service profile500. Finally, in this example, local communication network102(1) and operator communication network104are each configured to transport data124(4) associated with client108(4), as well as to transport data124(5) associated with client108(5), at medium maximum bandwidth and with no special services, as specified in default service profile600. AlthoughFIG.1illustrates data124(1)-(5) being transported between router120and network resources114, data124(1)-(5) could be transported between router120and one or more different locations, without departing from the scope hereof. FIG.9is a data flow diagram900illustrating one example of transporting data associated with clients108(1) and108(2) in communication system100. It should be noted, though, that operation of communication system100is not limited to theFIG.9example. At time t0, local communication network102(1) transports data124(1) from client108(1) to operator communication network104in accordance with service profile300, and operator communication network104transports data124(1) from local communication network102(1) to node A in accordance with service profile300. Specifically, client108(1) transports data124(1) to shared network equipment106, e.g. via subnetwork112(1), and shared network equipment106transports data124(1) to access device116(1). Access device116(1) transports data124(1) to network hub118, and network hub118transports data124(1) to router120.
Router120transports or routes data124(1) to node A. Node A is, for example, an origination node, a destination node, or an intermediate node. Although node A is depicted as being external to operator communication network104, node A could be within operator communication network104without departing from the scope hereof. Furthermore, data124(1) could traverse additional nodes between router120and node A. At time t1, local communication network102(1) transports data124(2) from client108(2) to operator communication network104in accordance with service profile400, and operator communication network104transports data124(2) from local communication network102(1) to node B in accordance with service profile400. Specifically, client108(2) transports data124(2) to shared network equipment106, e.g. via subnetwork112(2), and shared network equipment106transports data124(2) to access device116(1). Access device116(1) transports data124(2) to network hub118, and network hub118transports data124(2) to router120. Router120transports or routes data124(2) to node B. Node B is, for example, an origination node, a destination node, or an intermediate node. Although node B is depicted as being external to operator communication network104, node B could be within operator communication network104without departing from the scope hereof. Furthermore, data124(2) could traverse additional nodes between router120and node B. At time t2, operator communication network104transports data124(1) from node A to local communication network102(1) in accordance with service profile300, and local communication network102(1) transports data124(1) from operator communication network104to client108(1) in accordance with service profile300. Specifically, router120receives data124(1) from node A, and router120transports data124(1) to network hub118. Network hub118transports data124(1) to access device116(1), and access device116(1) transports data124(1) to shared network equipment106. 
Shared network equipment106transports data124(1) to client108(1), e.g. via subnetwork112(1). At time t3, operator communication network104transports data124(2) from node B to local communication network102(1) in accordance with service profile400, and local communication network102(1) transports data124(2) from operator communication network104to client108(2) in accordance with service profile400. Specifically, router120receives data124(2) from node B, and router120transports data124(2) to network hub118. Network hub118transports data124(2) to access device116(1), and access device116(1) transports data124(2) to shared network equipment106. Shared network equipment106transports data124(2) to client108(2), e.g. via subnetwork112(2). FIG.10is a flow chart illustrating a method1000for providing individualized communication service. Although method1000is discussed in the context of system100, method1000is not limited to use with system100. Additionally, system100is not limited to use with method1000. In a block1002of method1000, a first client being communicatively coupled to a first local communication network is recognized. In one example of block1002, shared network equipment106recognizes medical device client108(1) being communicatively coupled to local communication network102(1). In another example of block1002, shared network equipment106recognizes streaming content application client108(2) being communicatively coupled to local communication network102(1). In another example of block1002, shared network equipment106recognizes mobile telephone client108(4) being communicatively coupled to local communication network102(1). In a block1004of method1000, an identity of the first client is determined. 
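Block 1004 leaves the identity mechanism open; the examples that follow mention security certificates and SIMs. As one hedged sketch of the certificate case, an identity could be looked up by certificate fingerprint. The enrollment table and helper names below are assumptions, not part of the disclosure:

```python
import hashlib

# Hypothetical identity lookup keyed by a SHA-256 certificate fingerprint.
KNOWN_CLIENTS = {}

def enroll(client_id: str, cert_bytes: bytes) -> None:
    """Register a client's certificate fingerprint ahead of time."""
    fingerprint = hashlib.sha256(cert_bytes).hexdigest()
    KNOWN_CLIENTS[fingerprint] = client_id

def identify(cert_bytes: bytes):
    """Return the client identity for a presented certificate, or None."""
    return KNOWN_CLIENTS.get(hashlib.sha256(cert_bytes).hexdigest())
```

A presented certificate whose fingerprint has never been enrolled simply yields no identity, which a network could treat as grounds for assigning the default service profile or refusing service.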
In one example of block1004, local communication network102(1), operator communication network104, and/or another communication network (not shown) determine an identity of medical device client108(1), using, for example, one or more security certificates associated with medical device client108(1) and/or a subscriber identity module (SIM) associated with medical device client108(1). In another example of block1004, local communication network102(1), operator communication network104, and/or another communication network (not shown) determine an identity of streaming content application client108(2), using, for example, one or more security certificates associated with streaming content application client108(2) and/or a SIM associated with streaming content application client108(2). In another example of block1004, local communication network102(1), operator communication network104, and/or another communication network (not shown) determine an identity of mobile telephone client108(4), using, for example, one or more security certificates associated with mobile telephone client108(4) and/or a SIM associated with mobile telephone client108(4). In some embodiments, local communication network102(1), operator communication network104, and/or another communication network (not shown) determine an identity of one or more clients108at least partially using techniques disclosed in United States Patent Application Publication Number 2018/0255050 to Cable Television Laboratories, Inc., which is incorporated herein by reference. In a block1006of method1000, first data is transported between the first client and a first operator communication network, using the first local communication network in accordance with a first service profile associated with the first client. 
In one example of block1006, local communication network102(1) transports data124(1) between medical device client108(1) and operator communication network104in accordance with service profile300,700, or800, e.g. using subnetwork112(1). In another example of block1006, local communication network102(1) transports data124(2) between streaming content application client108(2) and operator communication network104in accordance with service profile400, e.g. using subnetwork112(2). In another example of block1006, local communication network102(1) transports data124(4) between mobile telephone client108(4) and operator communication network104in accordance with default service profile600, e.g. using subnetwork112(4). In a block1008of method1000, the first data is transported using the first operator communication network in accordance with the first service profile. In one example of block1008, data124(1) is transmitted by operator communication network104according to service profile300, e.g. from local communication network102(1) to node A, as illustrated inFIG.9. In another example of block1008, data124(2) is transmitted by operator communication network104according to service profile400, e.g. from local communication network102(1) to node B, as illustrated inFIG.9. In another example of block1008, data124(4) is transmitted by operator communication network104according to default service profile600. Referring again toFIG.1, in some embodiments, local communication networks102and/or operator communication network104are configured to provide one or more aspects of individualized and/or default service to clients108without use of service profiles. For example, in certain embodiments, at least one subnetwork112is configured to limit the number of clients108in the subnetwork, and/or impose time of day restrictions on clients108, without requiring such limitation to be specified in a service profile.
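Putting blocks 1002 through 1008 of method 1000 together, a minimal end-to-end sketch might look like the following. All names and data shapes here are assumptions: profiles are reduced to plain dictionaries, and the transport step is a stub standing in for the local and operator communication networks.

```python
# Minimal sketch of method 1000: recognize/identify a client (blocks 1002/1004),
# then transport its data in accordance with its service profile (blocks 1006/1008).

DEFAULT_PROFILE = {"bandwidth": "medium"}                 # cf. default profile 600
PROFILES = {"108(1)": {"bandwidth": "low", "qos": True}}  # cf. profile 300

def transport(data, profile):
    """Stub for transport by the local and operator communication networks."""
    return {"data": data, "applied_profile": profile}

def handle_client(client_id, data):
    # Blocks 1002/1004: the client has been recognized and identified, so its
    # service profile can be looked up (default profile if none is assigned).
    profile = PROFILES.get(client_id, DEFAULT_PROFILE)
    # Blocks 1006/1008: transport the data in accordance with that profile.
    return transport(data, profile)
```

The same lookup-then-transport shape covers both individualized service (a client with its own profile) and default service (any other client).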
Some embodiments of system100are configured such that a client108receives communication service in accordance with its respective service profile even as the client roams from one local communication network102to another local communication network102. For example,FIG.11is a block diagram of a communication system1100, which is an embodiment of communication system100that is configured to support roaming of clients among local communication networks102. System1100includes an instance of operator communication network104and N local communication networks1102, where local communication networks1102are embodiments of local communication networks102ofFIG.1. Local communication network1102(1) includes shared network equipment1106(1), and at time ta, local communication network1102(1) further includes a streaming content application client1108(1) operating on a tablet computer1110. Local communication network1102(1) may include additional clients1108without departing from the scope hereof. Shared network equipment1106(1) is an embodiment of shared network equipment106ofFIG.1, and shared network equipment1106(1) is communicatively coupled to access device116(1). At time ta, local communication network1102(1) transmits data1124(1) between streaming content application client1108(1) and operator communication network104in accordance with a service profile associated with client1108(1), e.g. service profile400ofFIG.4. In some embodiments, shared network equipment1106(1) additionally recognizes streaming content application client1108(1) being communicatively coupled to local communication network1102(1), and local communication network1102(1), operator communication network104, and/or another network (not shown) determine an identity of client1108(1), such as in a manner similar to that discussed above with respect toFIG.10. Local communication network1102(2) includes shared network equipment1106(2) and smart light bulb client1108(2), at time ta. 
Local communication network1102(2) may include additional clients1108without departing from the scope hereof. Shared network equipment1106(2) is an embodiment of shared network equipment106ofFIG.1, and shared network equipment1106(2) is communicatively coupled to access device116(2). Local communication network1102(2) transports data1124(2) between smart light bulb client1108(2) and operator communication network104in accordance with a service profile associated with client1108(2), e.g. service profile500ofFIG.5. As discussed above, streaming content application client1108(1) is in local communication network1102(1) at time ta. However, streaming content application client1108(1) (and tablet computer1110) roam from local communication network1102(1) to local communication network1102(2) at time tb, as indicated by an arrow1126. Local communication network1102(2) transmits data1124(3) between streaming content application client1108(1) and operator communication network104in accordance with the same service profile associated with client1108(1) in local communication network1102(1), e.g. service profile400ofFIG.4. Consequently, streaming content application client1108(1) receives consistent communication service as it roams from local communication network1102(1) to local communication network1102(2). In some embodiments, shared network equipment1106(2) additionally recognizes streaming content application client1108(1) and smart light bulb client1108(2) being communicatively coupled to local communication network1102(2), and local communication network1102(2), operator communication network104, and/or another network (not shown) determine an identity of clients1108(1) and1108(2), such as in a manner similar to that discussed above with respect toFIG.10.
Although the same service profile is associated with streaming content application client1108(1) in both local communication networks1102(1) and1102(2), the two local communication networks may have different capabilities, such as due to differences in shared network equipment1106, access devices116, and/or communication links122. Consequently, streaming content application client1108(1) may not receive identical communication service in local communication networks1102(1) and1102(2), even though streaming content application client1108(1) has the same service profile in each local communication network1102. For example, local communication network1102(1) may be able to support a downlink communication bandwidth of 1 Gb/s, while local communication network1102(2) may only be able to support a downlink communication bandwidth of 250 Mb/s. In this case, streaming content application client1108(1) will receive different communication service in local communication network1102(2) than in local communication network1102(1), if streaming content application client1108(1)'s service profile specifies a maximum communication bandwidth of greater than 250 Mb/s. Client1108(1) roams among local communication networks served by a common operator communication network in theFIG.11example. Some embodiments of the systems and methods disclosed herein are configured such that a client receives communication service in accordance with its respective service profile as the client roams among local communication networks served by different operator communication networks. For example,FIG.12is a block diagram of a communication system1200, which is configured to support roaming of clients among local communication networks served by different respective operator communication networks. System1200includes an operator communication network1204(1), an operator communication network1204(2), a local communication network1202(1), and a local communication network1202(2). 
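The interaction between a roaming client's service profile and the capabilities of the visited local network, as in the 1 Gb/s versus 250 Mb/s example above, reduces to taking the smaller of the two limits. A one-line illustrative helper (the function name is an assumption):

```python
def effective_bandwidth_mbps(profile_max_mbps: float,
                             network_capability_mbps: float) -> float:
    """A roaming client's realized bandwidth is bounded both by its service
    profile and by what the visited local network can actually support."""
    return min(profile_max_mbps, network_capability_mbps)
```

A client whose profile allows 400 Mb/s would receive 400 Mb/s in a local network supporting 1 Gb/s, but only 250 Mb/s in one limited to 250 Mb/s, even though the same profile applies in both.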
Operator communication networks1204are embodiments of operator communication network104, and local communication networks1202are embodiments of local communication networks102. Operator communication network1204(1) serves local communication network1202(1), and operator communication network1204(2) serves local communication network1202(2). The number of operator communication networks1204in system1200, as well as the number of local communication networks1202supported by each operator communication network1204, may vary without departing from the scope hereof. Details of operator communication networks1204are not shown inFIG.12. Local communication network1202(1) includes shared network equipment1206(1), and at time ta, local communication network1202(1) further includes a streaming content application client1208(1) operating on a tablet computer1210. Local communication network1202(1) may include additional clients1208without departing from the scope hereof. Shared network equipment1206(1) is an embodiment of shared network equipment106ofFIG.1, and shared network equipment1206(1) is communicatively coupled to an access device (not shown) of operator communication network1204(1). At time ta, local communication network1202(1) transmits data1224(1) between streaming content application client1208(1) and operator communication network1204(1) in accordance with a service profile associated with client1208(1), e.g. service profile400ofFIG.4. Additionally, operator communication network1204(1) transmits data1224(1) in accordance with the service profile associated with client1208(1).
In some embodiments, shared network equipment1206(1) additionally recognizes streaming content application client1208(1) being communicatively coupled to local communication network1202(1), and local communication network1202(1), operator communication network1204(1), and/or another network (not shown) determine an identity of client1208(1), such as in a manner similar to that discussed above with respect toFIG.10. Local communication network1202(2) includes shared network equipment1206(2), but local communication network1202(2) does not include any clients at time ta. Shared network equipment1206(2) is an embodiment of shared network equipment106ofFIG.1, and shared network equipment1206(2) is communicatively coupled to an access device (not shown) of operator communication network1204(2). As discussed above, streaming content application client1208(1) is in local communication network1202(1) at time ta. However, streaming content application client1208(1) (and tablet computer1210) roam from local communication network1202(1) to local communication network1202(2) at time tb, as indicated by an arrow1226. Local communication network1202(2) then transmits data1224(2) between streaming content application client1208(1) and operator communication network1204(2) in accordance with the same service profile associated with client1208(1) in local communication network1202(1), e.g. service profile400ofFIG.4. Additionally, operator communication network1204(2) transmits data1224(2) in accordance with the same service profile. Consequently, streaming content application client1208(1) receives consistent communication service as it roams from local communication network1202(1) to local communication network1202(2), even though the two local communication networks are served by different respective operator communication networks.
In some embodiments, shared network equipment1206(2) additionally recognizes streaming content application client1208(1) being communicatively coupled to local communication network1202(2), and local communication network1202(2), operator communication network1204(2), and/or another network (not shown) determine an identity of client1208(1), such as in a manner similar to that discussed above with respect toFIG.10. It may be desirable to track data transportation by operator communication networks1204and/or by local communication networks1202, such as to facilitate business arrangements associated with these communication networks. Accordingly, in some embodiments, a data structure1228is distributed among multiple computing devices, to record transmission of data by at least one of operator communication network1204(1), operator communication network1204(2), local communication network1202(1), and local communication network1202(2). In certain embodiments, data structure1228is configured according to blockchain principles, or other consensus-based principles, to help ensure integrity of information recorded by the data structure. Data structure1228may be at least partially separate from system1200. In some embodiments, data structure1228is replaced by, or supplemented by, one or more different data storage structures, such as a database. Referring again toFIG.1, in some embodiments, a party associated with a given local communication network102, e.g. a party owning or leasing a building where a local communication network102is deployed, may pay for some or all costs associated with operator communication network104providing communication service to the local communication network.
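The blockchain-style data structure1228 described above is not specified in detail. A toy hash-chained record of transport events, which makes after-the-fact tampering detectable, might look like the following; the record format and field names are assumptions:

```python
import hashlib
import json

def append_record(chain, event):
    """Append an event (e.g. a data transport record) to a hash chain.
    Each record's hash covers the event and the previous record's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"event": event, "prev": prev_hash}
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append(body)
    return chain

def verify_chain(chain):
    """Recompute every hash and link; any tampering breaks verification."""
    prev = "0" * 64
    for rec in chain:
        body = {"event": rec["event"], "prev": rec["prev"]}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["prev"] != prev or rec["hash"] != expected:
            return False
        prev = rec["hash"]
    return True
```

A consensus mechanism among the multiple computing devices holding copies, as the disclosure suggests, would then decide which copy of the chain is authoritative; that part is outside this sketch.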
Additionally, in some embodiments, a party associated with a given client108, instead of a party associated with a given local communication network102, may pay for some or all costs associated with operator communication network104and/or a local communication network102providing communication service to the client.

Combination of Features

Features described above may be combined in various ways without departing from the scope hereof. The following examples illustrate some possible combinations: (A1) A method for providing individualized communication service may include (1) recognizing a first client being communicatively coupled to a first local communication network, (2) determining an identity of the first client, (3) transporting first data between the first client and a first operator communication network, using the first local communication network in accordance with a first service profile associated with the first client, and (4) transporting the first data using the first operator communication network in accordance with the first service profile. (A2) The method denoted as (A1) may further include (1) recognizing a second client being communicatively coupled to the first local communication network, (2) determining an identity of the second client, (3) transporting second data between the second client and the first operator communication network, using the first local communication network in accordance with a second service profile associated with the second client, the second service profile being different from the first service profile, and (4) transporting the second data using the first operator communication network in accordance with the second service profile.
(A3) In the method denoted as (A2), transporting the first data between the first client and the first operator communication network may include transporting the first data using a first subnetwork of the first local communication network, and transporting the second data between the second client and the first operator communication network may include transporting the second data using a second subnetwork of the first local communication network. (A4) In the method denoted as (A1), transporting the first data between the first client and the first operator communication network may include transporting the first data using a first subnetwork of the first local communication network. (A5) The method denoted as (A1) may further include (1) recognizing a second client being communicatively coupled to the first local communication network, (2) recognizing a third client being communicatively coupled to the first local communication network, (3) transporting second data between the second client and the first operator communication network, using the first local communication network in accordance with a default service profile, and (4) transporting third data between the third client and the first operator communication network, using the first local communication network in accordance with the default service profile. (A6) Any one of the methods denoted as (A1) through (A5) may further include (1) recognizing the first client being communicatively coupled to a second local communication network, (2) determining the identity of the first client, while the first client is communicatively coupled to the second local communication network, and (3) transporting additional data between the first client and the first operator communication network, using the second local communication system in accordance with the first service profile. 
(A7) Any one of the methods denoted as (A1) through (A5) may further include (1) recognizing the first client being communicatively coupled to a second local communication network, (2) determining the identity of the first client, while the first client is communicatively coupled to the second local communication network, and (3) transporting additional data between the first client and a second operator communication network, using the second local communication network in accordance with the first service profile. (A8) The method denoted as (A7) may further include recording transportation of the additional data by at least one of the second local communication network and the second operator communication network, using a data structure distributed among multiple computing devices. (A9) In any one of the methods denoted as (A1) through (A8), determining the identity of the first client may include determining the identity of the first client using one or more security certificates associated with the first client. (A10) In any one of the methods denoted as (A1) through (A9), the first service profile may specify one or more of communication bandwidth, communication latency, communication quality of service (QoS), communication volume, security service, data origination address controls, data destination address controls, and parental control service, associated with the first client. (A11) In any one of the methods denoted as (A1) through (A10), transporting the first data using the first operator communication network may include transporting the first data in accordance with a Data Over Cable Service Interface Specification (DOCSIS) communication protocol in at least part of the first operator communication network. 
(A12) In any one of the methods denoted as (A1) through (A10), transporting the first data using the first operator communication network may include transporting the first data in accordance with an optical communication protocol in at least part of the first operator communication network. (B1) A method for providing individualized communication service may include (1) obtaining an identity of a first client communicatively coupled to a first local communication network, (2) transporting first data using a first operator communication network in accordance with a first service profile associated with the first client, (3) obtaining an identity of a second client communicatively coupled to the first local communication network, and (4) transporting second data using the first operator communication network in accordance with a second service profile associated with the second client, the second service profile being different from the first service profile. (B2) The method denoted as (B1) may further include (1) obtaining the identity of the first client while the first client is communicatively coupled to a second local communication network that is different from the first local communication network, and (2) transporting additional data associated with the first client using the first operator communication network in accordance with the first service profile. (B3) In any one of the methods denoted as (B1) and (B2), the first service profile may specify one or more of communication bandwidth, communication latency, communication quality of service (QoS), communication volume, security service, data origination address controls, data destination address controls, and parental control service, associated with the first client. 
(B4) In any one of the methods denoted as (B1) through (B3), transporting the first data using the first operator communication network may include transporting the first data in accordance with a Data Over Cable Service Interface Specification (DOCSIS) communication protocol in at least part of the first operator communication network. (B5) In any one of the methods denoted as (B1) through (B3), transporting the first data using the first operator communication network may include transporting the first data in accordance with an optical communication protocol in at least part of the first operator communication network. (C1) A method for providing individualized communication service may include (1) recognizing a first client being communicatively coupled to a first local communication network, (2) obtaining an identity of the first client, (3) transporting first data between the first client and a first operator communication network, using the first local communication network in accordance with a first service profile associated with the first client, (4) recognizing a second client being communicatively coupled to the first local communication network, (5) obtaining an identity of the second client, and (6) transporting second data between the second client and the first operator communication network, using the first local communication network in accordance with a second service profile associated with the second client, the second service profile being different from the first service profile. (C2) In the method denoted as (C1), transporting the first data between the first client and the first operator communication network may include transporting the first data using a first subnetwork of the first local communication network, and transporting the second data between the second client and the first operator communication network may include transporting the second data using a second subnetwork of the first local communication network. 
(C3) In any one of the methods denoted as (C1) and (C2), the first service profile may specify one or more of communication bandwidth, communication latency, communication quality of service (QoS), communication volume, security service, data origination address controls, data destination address controls, and parental control service, associated with the first client. Changes may be made in the above methods, devices, and systems without departing from the scope hereof. It should thus be noted that the matter contained in the above description and shown in the accompanying drawings should be interpreted as illustrative and not in a limiting sense. The following claims are intended to cover generic and specific features described herein, as well as all statements of the scope of the present method and system, which, as a matter of language, might be said to fall therebetween.
11863406 | DETAILED DESCRIPTION Specialized computing resources can be provided within a set of reusable general computing resources by configuring a server computer including a configurable logic platform (such as by providing a server computer with an add-in card including a field-programmable gate array (FPGA)) as a choice among the general computing resources. Configurable logic is hardware that can be programmed or configured to perform a logic function that is specified by configuration data that is applied to the configurable logic. For example, a user of the computing resources can provide a specification (e.g., written in a hardware description language (e.g., Verilog, SystemVerilog, and/or VHDL) or other language (e.g., C, C++, and/or SystemC), in a netlist generated with a schematic capture application, or in a netlist generated by a script) for configuring the configurable logic. The configurable logic can be configured according to the specification, and the configured logic can be used to perform a task for the user. However, allowing a user access to low-level hardware of the computing facility can potentially introduce security and privacy issues within the computing facility. A programmable logic service provider is disclosed that operates a programmable logic service for authorizing and mapping customer requests for virtual machines to compute instances having reconfigurable logic device resources. The programmable logic service provider controls access to configuration data, including configuration data provided by third parties. The programmable logic service can be operated as a web-based service, for example a web-based service hosted in a cloud that maps user requests received via a computer network to compute instances comprising reconfigurable logic resources. 
In some examples, a programmable logic service is implemented on a different physical server than the computing host providing the allocated computing instance with the reconfigurable logic devices. In other examples, the programmable logic service provider is hosted on the same computing host. In some examples, the programmable logic service uses an identifier contained in a request to authenticate the request and produce configuration information from a networked database or networked storage using the identifier. In some examples, the identifier indicates a machine image used by the compute instance. In some examples, the identifier indicates a product code for a machine image in a software application marketplace. In some examples, the identifier identifies a virtual instance of the compute host (e.g., by indicating a virtual CPU ID or MAC address assigned to the virtual instance). In some examples, the identifier identifies a physical instance of the compute host (e.g., by indicating a physical CPU ID or MAC address assigned to the physical instance). In some examples, the provider allocates the computing instance prior to receiving a request to implement application logic with reconfigurable hardware. In some examples, the provider is configured to allocate the computing instance with the application logic prior to initiating execution of the instance. In some examples, the compute instance is launched prior to producing the configuration data. In other examples, configuration data is produced prior to launching, and the launching action includes programming reconfigurable logic resources with the produced configuration data prior to providing the compute instance to the requester, for example, a user. In some examples, the computing instance can further reprogram a portion, but not all, of the reconfigurable logic device coupled to the computing host.
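As an illustration only, the identifier-to-configuration lookup described above might be sketched as follows. The identifier kinds mirror the examples in this paragraph (machine image, marketplace product code, virtual and physical instance identifiers), but every name, key, and registry entry here is an assumption rather than part of the disclosed system.

```python
# Hypothetical sketch: resolve the identifier carried in a request to a
# stored configuration-data reference. The registry contents and all
# identifier values below are invented for illustration.

CONFIG_REGISTRY = {
    ("machine_image", "ami-1234"): "bitstream-a",
    ("product_code", "prod-99"): "bitstream-b",
    ("virtual_instance", "vcpu-07"): "bitstream-c",
    ("physical_instance", "mac-00:1a:2b"): "bitstream-d",
}

def resolve_configuration(identifier_kind: str, identifier: str) -> str:
    """Map a request identifier to a configuration-data reference."""
    try:
        return CONFIG_REGISTRY[(identifier_kind, identifier)]
    except KeyError:
        raise LookupError(f"no configuration for {identifier_kind}={identifier}")
```

In practice the registry would be a networked database or networked storage, as the paragraph notes, rather than an in-memory mapping.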
As described herein, a compute services facility can include a variety of computing resources, where one type of the computing resources can include a server computer (alternatively dubbed a host computer) comprising a configurable logic platform. The configurable logic platform can be programmed or configured by a user of the computer system so that hardware (e.g., the configurable logic) of the computing resource is customized by the user. For example, the user can program the configurable logic so that it functions as a hardware accelerator that is tightly coupled to the server computer. For example, the hardware accelerator can be accessible via a local interconnect, such as a Peripheral Component Interconnect Express (PCI-Express or PCIe) or an IEEE 802.3 (Ethernet) connection, of the server computer. The user can execute an application on the server computer and tasks of the application can be performed by the hardware accelerator using PCIe transactions. By tightly coupling the hardware accelerator to the server computer, the latency between the accelerator and the server computer can be reduced, which can potentially increase the processing speed of the application. A compute services provider can manage the computing resources using software services, such as a programmable logic service provider, to manage the configuration and operation of the configurable hardware. As one example, the compute service provider can execute a logic repository service for ingesting a hardware or logic design of a user, generating validated configuration data for configuring the configurable logic platform based on the logic design of the user, and downloading the validated configuration data in response to a request to configure an instance of the configurable logic platform.
The configuration data can include data for creating debugging resources on the configurable logic platform, allowing for viewing of signal values, triggers that indicate the occurrence of an event, performance counters, and other suitable debugging technology for monitoring reconfigurable logic devices. The download request can be from the user that developed the logic design or from a user that has acquired a license to use the logic design. Thus, logic designs can be created by the programmable logic service provider, a user, or a third party that is separate from the user or the programmable logic service provider. For example, a marketplace of accelerator intellectual property (IP) can be provided to the users of the compute services provider, and the users can potentially increase the speed of their applications by selecting an accelerator from the marketplace. FIG. 1 is a system diagram showing an example of a system 100 including a programmable logic service provider 110 that provides a configuration and management interface for accessing reconfigurable hardware resources 120. For example, the programmable logic service provider 110 can be used for managing access and deployment of configuration data to the configurable compute resources 120 when the resources are deployed. The programmable logic service provider 110 can be a network-accessible service, such as a web service. Web services are commonly used in cloud computing. A web service is a software function provided at a network address over the Internet, cloud, or another network. Clients initiate web service requests to servers and servers process the requests and return appropriate responses. The client web service requests are typically initiated using, for example, an API request. For purposes of simplicity, web service requests will be generally described below as API requests, but it is understood that other web service requests can be made.
An API request is a programmatic interface to a defined request-response message system, typically expressed in JSON or XML, which is exposed via the web—most commonly by means of an HTTP-based web server. Thus, in certain implementations, an API can be accessed via a set of Hypertext Transfer Protocol (HTTP) request messages, along with a definition of the structure of response messages, which can be in an Extensible Markup Language (XML) or JavaScript Object Notation (JSON) format. The API can specify a set of functions or routines that perform an action, which includes accomplishing a specific task or allowing interaction with a software component. When a web service receives the API request from a client device, the web service can generate a response to the request and send the response to the endpoint identified in the request. Additionally or alternatively, the web service can perform actions in response to the API request without generating a response to the endpoint identified in the request. The programmable logic service provider 110 can receive an API request 130 to generate configuration data for a configurable hardware platform, such as configurable hardware 142 of a server computer 140. Typically, the configurable hardware 142 includes reprogrammable logic devices, such as Field Programmable Gate Arrays (FPGAs), complex programmable logic devices (CPLDs), programmable logic devices (PLDs), and programmable memory resources (e.g., electrically erasable programmable read only memory (EEPROM) or flash memory). In some examples, some or all of the configurable hardware is one-time programmable. In some examples, functionality for the programmable logic service provider 110 is implemented in whole or in part using the server computer 140, while in other examples, the functionality is implemented with computer resources separate from the server computer.
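The JSON request-response exchange described above can be sketched minimally as follows. The action name, field names, and dispatch logic are illustrative assumptions, not the actual API of any service.

```python
import json

# Hypothetical sketch of the request/response shape: a JSON-encoded API
# request is parsed, dispatched by action name, and a JSON response is
# returned to the caller. All field names are assumptions.

def handle_api_request(raw_body: str) -> str:
    request = json.loads(raw_body)
    action = request.get("action")
    if action == "GenerateConfigData":
        # Accept the request; a real service would enqueue generation work.
        response = {"status": "accepted", "request_id": request.get("request_id")}
    else:
        response = {"status": "error", "message": f"unknown action {action!r}"}
    return json.dumps(response)
```

A real deployment would sit behind an HTTP server and could also, as the paragraph notes, act on the request without returning a body to the identified endpoint.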
In some examples, one instance of a programmable logic service provider can manage configurable hardware resources on a number of different physical and/or virtual hosts. In some examples, the programmable logic service provider 110 provides domain logic or otherwise applies rules for instantiating, operating, and terminating compute instances. For example, the domain logic may restrict access to all or a portion of a compute instance, including all or a portion of reconfigurable logic resources, until a financial transaction is processed. For example, a developer/partner may be required to purchase or lease a compute instance, or aspects of the compute instance, before or during operation of the compute instance. In some examples, the domain logic may restrict access based on attributes of the requester, such as identity of an associated organization, geographic location, or whether the requester has been sufficiently authenticated and/or authorized. The API request 130 can be originated by a developer or partner user of the programmable logic service provider. The request 130 can include fields for specifying data and/or metadata about the logic design, the configurable hardware platform, user information, access privileges, production status, and various additional fields for describing information about the inputs, outputs, and users of the programmable logic service provider 110. As specific examples, the request can include a description of the design, a production status (such as trial or production), an encrypted status of the input or output of the service, a reference to a location for storing an input file (such as the hardware design source code), a type of the input file, an instance type of the configurable hardware, and a reference to a location for storing an output file or report. In particular, the request can include a reference to a hardware design specifying application logic for implementation on the configurable hardware platform.
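For illustration, a request carrying the fields enumerated above might look like the following. Every field name, value, and storage path here is a made-up placeholder, and the required-field check is an assumption about how such a request could be screened on arrival.

```python
# Illustrative payload carrying the request fields enumerated above
# (description, production status, encryption status, input/output
# references, instance type). All names and values are hypothetical.

example_request = {
    "description": "AES-GCM accelerator",
    "production_status": "trial",                  # e.g., "trial" or "production"
    "encrypted": True,
    "input_location": "storage://bucket/design.sv",   # hypothetical reference
    "input_type": "SystemVerilog",
    "instance_type": "F1.small",
    "output_location": "storage://bucket/report.json",  # hypothetical reference
}

# Assumed minimal set of fields a service might insist on.
REQUIRED_FIELDS = {"description", "input_location", "input_type", "instance_type"}

def missing_fields(request: dict) -> set:
    """Return the required fields absent from the request."""
    return REQUIRED_FIELDS - request.keys()
```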
The hardware design can be specified using source code files (e.g., hardware description language files written in a language such as SystemC, SystemVerilog, or VHDL) and/or references to configuration data including bitstream files used to program reconfigurable logic resources. Host logic, which will be used to control operation of the application logic when programmed into the configurable hardware, is received from, for example, a programmable logic service provider development team. A specification of the application logic and/or of the host logic can be a collection of files, such as source code, a netlist generated by a logic synthesis tool, and/or placed and routed logic gates generated by a place and route tool. The source code can include code written in a hardware description language (HDL), a register transfer logic (RTL) language, or a high-level language such as Open Computing Language (OpenCL) or C. The compute resources 120 can include many different types of hardware and software categorized by instance type. In particular, an instance type specifies at least a portion of the hardware and software of a resource. For example, hardware resources can include servers with central processing units (CPUs) of varying performance levels (e.g., different clock speeds, architectures, cache sizes, and so forth), servers with and without co-processors (such as graphics processing units (GPUs) and configurable logic), servers with varying capacity and performance of memory and/or local storage, and servers with different networking performance levels. Example software resources can include different operating systems, application programs, and drivers. One example instance type can comprise the server computer 140 including a central processing unit (CPU) 144 in communication with the configurable hardware 142.
The configurable hardware 142 can include programmable logic such as an FPGA, a programmable logic array (PLA), a programmable array logic (PAL), a generic array logic (GAL), or a complex programmable logic device (CPLD), for example. The programmable logic service provider 110 can generate configuration data 136 in response to receiving the API request 130. The generated configuration data 136 can be based on the application logic and the host logic. Specifically, the generated configuration data 136 can include information that can be used to program or configure the configurable hardware 142 so that it performs the functions specified by the application logic and the host logic. As one example, the programmable logic service provider can generate the host logic including logic for interfacing between the CPU 144 and the configurable hardware 142. In some examples, the host logic can include logic for masking or shielding the application logic, including any of its included debugging functionality, from communicating directly with the CPU 144 so that all CPU-application logic transactions pass through the host logic. In this manner, the host logic can potentially reduce security and availability risks that could be introduced by the application logic. In other examples, the application logic can communicate directly to the CPU 144 via an interface, such as PCIe, Ethernet, InfiniBand, or other suitable interface. Generating the configuration data 136 can include performing checks and/or tests on the application logic, integrating the application logic into a host logic wrapper, synthesizing the application logic, and/or placing and routing the application logic. Generating the configuration data 136 can include compiling and/or translating source code of the application logic and the host logic into data that can be used to program or configure the configurable hardware 142.
For example, the programmable logic service provider 110 can integrate the application logic into a host logic wrapper. Specifically, the application logic can be instantiated in a system design that includes the application logic and the host logic. The integrated system design can be synthesized, using a logic synthesis program, to create a netlist for the system design. The netlist can be placed and routed, using a place and route program, for the instance type specified for the system design. The placed and routed design can be converted to configuration data 136 which can be used to program the configurable hardware 142. For example, the configuration data 136 can be directly output from the place and route program. As one example, the generated configuration data 136 can include a complete or partial bitstream for configuring all or a portion of the configurable logic of an FPGA. An FPGA can include configurable logic and non-configurable logic. The configurable logic can include programmable logic blocks comprising combinational logic and/or look-up tables (LUTs) and sequential logic elements (such as flip-flops and/or latches), programmable routing and clocking resources, programmable distributed and block random access memories (RAMs), digital signal processing (DSP) bitslices, and programmable input/output pins. The bitstream can be loaded into on-chip memories of the configurable logic using configuration logic (e.g., a configuration access port). The values loaded within the on-chip memories can be used to control the configurable logic so that the configurable logic performs the logic functions that are specified by the bitstream. Additionally, the configurable logic can be divided into different partitions or regions which can be configured independently of one another.
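The integrate → synthesize → place-and-route → bitstream flow described above can be caricatured as a staged pipeline. Each function below is a stand-in for a real EDA tool step, and the string formats are purely illustrative assumptions.

```python
# Hypothetical sketch of the configuration-data generation flow: the
# application logic is wrapped in host logic, synthesized to a netlist,
# placed and routed for an instance type, and converted to a bitstream.
# Each stage models a tool invocation with a simple string transform.

def integrate(application_logic: str, host_logic: str) -> str:
    """Instantiate the application logic inside the host logic wrapper."""
    return f"system({host_logic},{application_logic})"

def synthesize(design: str) -> str:
    """Logic synthesis: design sources to a netlist."""
    return f"netlist[{design}]"

def place_and_route(netlist: str, instance_type: str) -> str:
    """Place and route the netlist for the specified instance type."""
    return f"routed[{netlist}@{instance_type}]"

def to_bitstream(routed: str) -> bytes:
    """Convert the placed-and-routed design to configuration data."""
    return routed.encode("utf-8")

def generate_configuration_data(app: str, host: str, instance_type: str) -> bytes:
    return to_bitstream(place_and_route(synthesize(integrate(app, host)), instance_type))
```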
As one example, a full bitstream can be used to configure the configurable logic across all of the regions and a partial bitstream can be used to configure only a portion of the configurable logic regions. For example, individual partial bitstreams for a host logic portion and for each of a number of user portions (a first application logic portion, a second application logic portion, etc.) can be generated, downloaded to a configurable hardware platform, and used to independently program different portions of a single FPGA. Because the partial bitstreams can be applied independently, detailed knowledge of other portions of the FPGA need not be made available to others, thereby protecting user privacy. In some examples, some or all of the bitstreams can be further protected using encryption. The non-configurable logic can include hard macros that perform a specific function within the FPGA, such as input/output blocks (e.g., serializer and deserializer (SERDES) blocks and gigabit transceivers), analog-to-digital converters, memory control blocks, test access ports, and configuration logic for loading the configuration data onto the configurable logic. The programmable logic service provider 110 can store the generated configuration data 136 in a logic repository database 150 and/or logic configuration storage 155. The logic repository database 150 and the logic configuration storage 155 can include storage implemented with removable or non-removable media, including magnetic disks, direct-attached storage, network-attached storage (NAS), storage area networks (SAN), redundant arrays of independent disks (RAID), magnetic tapes or cassettes, CD-ROMs, DVDs, or any other medium which can be used to store information in a non-transitory way and which can be accessed by the programmable logic service provider 110. In some examples, the configuration data is provided as part of a software application marketplace.
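A minimal sketch of the independent partial reconfiguration just described: each region keeps track of its own loaded bitstream, so programming one region leaves the others untouched, while a full bitstream configures every region. The class and region names are assumptions.

```python
# Hypothetical model of a device with independently configurable regions.
# Partial programming touches one region; full programming touches all.

class ConfigurableDevice:
    def __init__(self, regions):
        # Map each named region to its currently loaded bitstream (or None).
        self.loaded = {region: None for region in regions}

    def program_partial(self, region: str, bitstream: str) -> None:
        """Load a partial bitstream into one region, leaving the rest intact."""
        if region not in self.loaded:
            raise ValueError(f"unknown region {region!r}")
        self.loaded[region] = bitstream

    def program_full(self, bitstream: str) -> None:
        """Load a full bitstream, configuring every region."""
        for region in self.loaded:
            self.loaded[region] = bitstream
```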
Additionally, the programmable logic service provider 110 can provide an interface for generating and storing input files (such as the specifications for the application logic and the host logic) and metadata about the logic designs and/or the users of the programmable logic service provider 110. The generated configuration data 136 can be indexed by one or more properties such as a user identifier, an instance type or types, a marketplace identifier, a machine image identifier, and a configurable hardware identifier, for example. In some examples, the programmable logic service provider 110 is configured to interface with a logic repository service for management of configuration data. The programmable logic service provider 110 can receive an API request 160 to download configuration data. For example, the request 160 can be generated when a user of the compute resources 120 launches or deploys a new instance (e.g., an “F1.small” instance) within the compute resources 120. As another example, the request 160 can be generated in response to a request from an application executing on an operating instance. The request 160 can include a reference to the source and/or destination instance, a reference to the configuration data to download (e.g., an instance type, a marketplace identifier, a machine image identifier, or a configurable hardware identifier), a user identifier, an authorization token, and/or other information for identifying the configuration data to download and/or authorizing access to the configuration data. If the user requesting the configuration data is authorized to access the configuration data, the configuration data can be retrieved from the logic repository database 150, and validated configuration data 162 (e.g., a full or partial bitstream) can be downloaded to the requesting instance (e.g., server computer 140). The validated configuration data 162 can be used to configure the configurable logic of the destination instance.
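The multi-property indexing described above (user identifier, instance type, marketplace identifier, and so on) could be sketched like this; the property names and record identifiers are assumptions.

```python
from collections import defaultdict

# Hypothetical index over generated configuration data: each record is
# registered under every (property, value) pair it carries, so it can be
# found later by any one of those properties.

class LogicRepositoryIndex:
    def __init__(self):
        self._by_key = defaultdict(list)

    def add(self, config_id: str, **properties) -> None:
        """Register a configuration record under each of its properties."""
        for name, value in properties.items():
            self._by_key[(name, value)].append(config_id)

    def find(self, name: str, value) -> list:
        """Look up configuration records by a single property."""
        return self._by_key.get((name, value), [])
```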
The programmable logic service provider 110 can verify that the validated configuration data 162 can be downloaded to the requesting instance. Validation can be performed at multiple different points by the programmable logic service provider 110. For example, validation can include verifying that the application logic is compatible with the host logic. In particular, a regression suite of tests can be executed on a simulator to verify that the host logic performs as expected after the application logic is added to the design. Additionally or alternatively, it can be verified that the application logic is specified to reside only in reconfigurable regions that are separate from reconfigurable regions of the host logic. As another example, validation can include verifying that the validated configuration data 162 is compatible with the instance type to download to. As another example, validation can include verifying that the requestor is authorized to access the validated configuration data 162. If any of the validation checks fail, the programmable logic service provider 110 can deny the request to download the validated configuration data 162. Thus, the programmable logic service provider 110 can potentially safeguard the security and the availability of the computing resources 120 while enabling a user to customize hardware of the computing resources 120. As stated above, in some examples, operations described above for the programmable logic service provider 110 can be performed using the server computer 140, using other resources within the compute resources 120, or using other resources besides the compute resources 120. FIG. 2 is a system diagram showing an example architecture 200 of a logic repository service 205. The logic repository service 205 can be software executing on a server computer managed by a programmable logic service provider. The logic repository service 205 can be accessed through one or more web APIs.
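The all-checks-must-pass validation gate described above might be sketched as follows. The two example checks (instance-type compatibility and requestor authorization) follow the paragraph, but the request and record field names are assumptions.

```python
# Hypothetical validation gate: every named check must pass before the
# configuration data may be downloaded; any single failure denies the
# request and reports which checks failed.

def is_compatible(request) -> bool:
    return request["instance_type"] in request["config"]["compatible_types"]

def is_authorized(request) -> bool:
    return request["user"] in request["config"]["authorized_users"]

CHECKS = [("compatibility", is_compatible), ("authorization", is_authorized)]

def validate_download(request, checks=CHECKS):
    failures = [name for name, check in checks if not check(request)]
    return ("deny", failures) if failures else ("allow", [])
```

Further checks, such as the host/application compatibility regression described above, could be appended to the same list without changing the gate.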
For example, the programmable logic service provider 110 can interact with the logic repository service 205 via an API, including pass-through of certain commands from users to the logic repository service. The logic repository service 205 can include a provider interface 210 for servicing API requests from the programmable logic service provider 110. The provider interface 210 can be used to authenticate that requests are from agents of the compute service provider, such as by authenticating the identity of the requestor using credentials provided in the request. The provider interface 210 can provide host logic ingestion functionality 215. In particular, the provider interface 210 can receive a request to upload a host logic design to the logic repository service 205 and the request can be processed by the host logic ingestion functionality 215. As described previously, the host logic can include logic for sandboxing the application logic to maintain the security and availability of the computing resources. Additionally, the host logic can be further divided into static logic and reconfigurable logic. The static logic can be configured during an initialization sequence (e.g., at boot time), whereas the reconfigurable logic can be configured at different times during the operation of the configurable logic. As one example, a PCI Express interface can specify that a PCI endpoint be booted and enumerated within about one hundred milliseconds after a reset signal is deasserted. The host logic can be divided into static logic that can be loaded within the allotted time window, and reconfigurable logic that can be loaded after the time window has passed. The static logic can be used as an interface between different reconfigurable regions. The host logic design can be specified using HDL source code, written in, for example, SystemVerilog, Verilog, or VHDL. The HDL source code can be encrypted or non-encrypted.
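The split into static logic (loadable inside the boot window, roughly one hundred milliseconds in the PCI Express example above) and reconfigurable logic (loaded afterwards) might be sketched as a simple budgeting pass. The component names, load times, and greedy policy here are invented for illustration.

```python
# Hypothetical budgeting pass: components whose cumulative load time
# fits inside the boot window become static logic; the remainder is
# deferred to reconfigurable logic loaded after the window has passed.

def split_host_logic(components, window_ms=100.0):
    """components: (name, load_ms) pairs, assumed pre-sorted by priority."""
    static, reconfigurable, used = [], [], 0.0
    for name, load_ms in components:
        if used + load_ms <= window_ms:
            static.append(name)
            used += load_ms
        else:
            reconfigurable.append(name)
    return static, reconfigurable
```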
In some examples, netlists describing logic components can be provided in addition to, or instead of, HDL source code. The host logic ingestion module 215 can be used to perform checks on the received host logic design, decrypt the host logic design, and/or provide versioning information for the host logic design. Additionally, the request can include information for associating the host logic design with one or more instance types. For example, some host logic designs may work only with one subset of instance types and other host logic designs may work only with a different subset of instance types. The logic repository service 205 can include a customer-developer interface 220 for servicing API requests from the users of the logic repository service 205. The customer-developer interface 220 can be used to authenticate that requests are from users of the compute service provider, such as by authenticating the identity of the requestor using credentials provided in the request. For example, each of the users can be provided with an account that can be used to identify the user for access management, billing, and usage tracking. The users can be limited to viewing and modifying only the logic designs that they are authorized to access. For example, the users can be prevented from uploading and/or modifying host logic. The customer-developer interface 220 can include application logic ingestion functionality 225 for receiving and/or processing an application logic design. The application logic design can be specified using source code (e.g., HDL language code, expressed in SystemVerilog, Verilog, C, SystemC, or other suitable description language), a netlist including a list of configurable logic blocks and the connections between the configurable logic blocks, and/or configuration data.
For example, the HDL code may describe instantiations of virtual debug units, which will then be stitched into the configuration data by including proprietary netlists not accessible to the engineer developing the source code. As another example, the configuration data can include a full or partial bitstream which has been pre-compiled for at least certain portions before being uploaded to the logic repository service. The application logic will be combined with host logic (such as by a configuration data generation block 230) to create the logic that can be loaded onto a configurable hardware platform. Processing the application logic design can include translating and/or compiling source code to a lower level format (e.g., compiling OpenCL to generate behavioral or structural Verilog), verifying that required logic and/or signals are present (such as interface signals to the host logic), verifying that known restricted circuits are not present (such as ring oscillators), and other various tasks in preparation for generating configuration data. The customer-developer interface 220 can accept various types of requests from a user. As one example, a user can request to create a configurable hardware image (CHI). A CHI can provide information for configuring an instance of configurable hardware within a computing environment. For example, a CHI can include one or more compatible instance types, the configuration data for configuring the configurable hardware, access permissions for controlling access to the CHI, and any other information associated with configuring the configurable hardware. The request to create the CHI can include fields for a design description or title, a production status of the design, whether or not the design is encrypted, a reference to source code for the design, a type of source code indicator, an instance type or types that are compatible with the configuration data, and a reference to a location to store reporting information.
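A CHI record carrying the fields listed above might be modeled as follows; the field and method names are assumptions, as is the simple permission check.

```python
from dataclasses import dataclass, field

# Hypothetical model of a configurable hardware image (CHI): compatible
# instance types, a reference to the design, and access permissions, as
# enumerated in the text. All field names are assumptions.

@dataclass
class ConfigurableHardwareImage:
    title: str
    production_status: str                 # e.g., "trial" or "production"
    encrypted: bool
    source_reference: str
    source_type: str
    compatible_instance_types: list = field(default_factory=list)
    access_permissions: list = field(default_factory=list)

    def usable_on(self, instance_type: str, user: str) -> bool:
        """Assumed check: right instance type and a permitted user."""
        return (instance_type in self.compatible_instance_types
                and user in self.access_permissions)
```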
The configuration data generation block 230 can be used to create configuration data for programming a reconfigurable logic device. For example, the configuration data can be based on an application logic design and a host logic design. As another example, the configuration data can be based on only an application logic design or only a host logic design. In particular, the configuration data generation block 230 can generate static logic based only on the host logic design. Additionally, the configuration data generation block 230 can generate reconfigurable logic for one or more reconfigurable regions of the configurable logic. For example, the configuration data generation block 230 can be used to generate host reconfigurable logic for a region reserved for host functions. As another example, the configuration data generation block 230 can be used to generate application reconfigurable logic for a region reserved primarily for application functions. Inputs to the configuration data generation block 230 can be an application logic design (such as from the application logic ingestion 225), a host logic design (such as from the host logic ingestion 215), and/or constraints describing various implementation details (such as clock frequencies, partitioning information, placement information, a target technology, and so forth). The logic designs can include source code described using an HDL, a netlist, and/or configuration data. The configuration data generation block 230 can combine an application and a host design into one design to create the configuration data. As described in more detail with reference to FIG. 3, the configuration data generation block 230 can include a logic synthesis tool and a place and route tool. Using these tools, the configuration data generation block 230 can create configuration data for loading on a configurable hardware platform. The output from the configuration data generation block 230 can be managed using the logic library management block 240.
For example, the logic library management block240can associate user information with the configuration data and store the information at the logic repository database250. The computing services interface260can be used as an interface between the logic repository service205and computing resources. For example, when an instance is created on the computing resources, an API request can be sent to the computing services interface260and configuration data can be downloaded to the requesting resource. The static logic download component265can be used to download static logic to the configurable hardware platform on the requesting instance. Additionally, a request can be for reconfigurable logic, and the reconfigurable logic download component264can be used to service the request. Specifically, the reconfigurable logic download component264can retrieve the configuration data from the logic repository database250via the logic library management block240. The request can be for reconfigurable host logic or for reconfigurable application logic. FIG.3is a block diagram300further detailing an example of the server computer140, including CPU144and configurable hardware142, as can be used in certain examples of the disclosed technology. As shown, the configurable hardware142includes reconfigurable logic devices that have been programmed to implement host logic310and application logic320. The host logic310can include static logic, which is typically reprogrammed infrequently, and dynamic logic, which is typically reprogrammed more frequently. For example, the dynamic logic may be reconfigured each time the application logic320is reprogrammed or modified. The application logic320can be used to implement function accelerators: reconfigurable hardware configured to accelerate calculation of functions specified to be performed by the application logic320.
The configurable hardware142can include a plurality of application logic portions, for example, that communicate with different users of the system. In some examples, the application logic portions can be reprogrammed independently of the other application logic portions. For example, if two or more application logic portions are included on a single FPGA integrated circuit, the FPGA can be partially reconfigured in order to reprogram only a selected one of the application logic portions, leaving the others undisturbed. In some examples, FPGA portions are selected based in part on the programming granularity and features of the targeted FPGAs. For example, FPGA portions may be created by assigning a range of rows or a range of columns of arrayed logic components in an FPGA to different portions. For the example shown inFIG.3, the host logic310is associated with a supervisor mode process315executing on the CPU144. The supervisor mode process315executes at a higher level of privilege than other processes of the CPU. For example, an administrator of the server computer140may be the only entity with sufficient permissions to use or control the supervisor mode process315. The CPU144can also host an FPGA service (or daemon), dubbed FPGAd316. The FPGAd is a lightweight service that controls operation and maintenance functions for the configurable hardware. The application logic320is associated with a corresponding user mode process325. The user mode processes have a lower permission level than the supervisor mode process315, and thus other users, in addition to an administrator, can control and use the user mode processes. In some examples, the programmable logic service provider110is hosted by the computing host CPU144. In other examples, the programmable logic service provider110is provided by a separate server that accesses the computing host server computer140via a network interface360.
For example, Ethernet, 802.11 wireless protocols, virtual private networks, the Internet, and other suitable computer networks can transmit messages to and from the programmable logic service provider110. The configurable hardware142(e.g., as in an FPGA) can be programmed using a configuration port330, which can be used to program both the host logic310and the application logic. In the example shown, the host logic310has a dedicated input/output (I/O) port335which can send and receive data from the application logic320(as well as data from the host logic itself) to the CPU144via an interface350. In alternative examples, another I/O port336can send data between the application logic320and the CPU144directly, bypassing the host logic310. The interface350can be implemented with any suitable interconnect technology, including, but not limited to: PCIe, Ethernet, and Infiniband. Each of the application logic portions uses a different reserved portion of the interface350in order to communicate with its associated user mode process. For example, each of the user mode processes may be allowed access to a different range of memory addresses, and the host logic310in turn couples each of the individual application logic portions to only the memory address ranges associated with their corresponding process. Similarly, the supervisor mode process315can be coupled to the host logic310via another restricted memory range. In other examples, data from the application logic320is sent to the CPU144via the host logic I/O port335rather than through a separate I/O port. In some examples, each of the processes coupled to the host logic310and/or the application logic320executes in a different virtual machine hosted by the CPU144. In other examples, two or more of the processes can execute within the same virtual machine.
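The memory-range gating performed by the host logic310can be illustrated with a small check: each process is permitted to touch only the address range reserved for it. The process names and address ranges below are hypothetical.

```python
# Hypothetical address-range map: each user mode process is coupled only to
# the range reserved for its application logic portion, and the supervisor
# mode process gets a separate restricted range (values are illustrative).
RANGES = {
    "user_process_0": range(0x0000, 0x1000),
    "user_process_1": range(0x1000, 0x2000),
    "supervisor":     range(0xF000, 0x10000),
}

def access_allowed(process: str, address: int) -> bool:
    """Return True only if the address falls inside the range reserved
    for the named process."""
    allowed = RANGES.get(process)
    return allowed is not None and address in allowed
```

In a real system this gating is enforced by the host logic on the interconnect, not by software; the sketch only shows the decision being made.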
FIG.4illustrates an example flow400of ingesting logic designs and producing configuration data as can be performed by a logic repository service. During ingestion410, descriptions of application logic405and host logic406can be received by a programmable logic service provider. The logic design can be encrypted, such as by using the IEEE 1735-2014 encryption standard. The logic design can be decrypted during ingestion410or during a later step of the flow400. As one example, source code for the application logic405can be received during the ingestion410and the application logic and the debug unit logic can be combined into a design to produce source code for logic synthesis420for programming a first portion of a reconfigurable logic device. Source code for the host logic406can be used to produce source code for logic synthesis420for programming a second portion of the reconfigurable logic device. The logic synthesis420can be used to transform a specification written in behavioral and/or structural RTL into a netlist based on a target technology. For example, the logic synthesis420can target different configurable logic technologies, such as FPGAs having different architectures, manufacturing processes, capacities, and/or manufacturers. The netlist can include a number of configurable logic blocks, non-configurable blocks (e.g., hard or soft macros), and the connections between the different blocks. The netlist can be a logical netlist where blocks of the netlist are enumerated but unplaced within the target technology. The netlist can be used as input to place and route440. The place and route440can take the instances of the configurable blocks from the netlist and the routing information, and map the blocks to a physical, reconfigurable logic device. The place-and-routed design can include a physical mapping for each of the logical components of the netlist. 
Additionally or alternatively, the place and route440can be timing driven so that the netlist is modified based on timing constraints of the design and the physical constraints of the physical device. The output of the place and route440can be configuration data, such as a bitstream image. The configuration data can be partitioned or divided into different components. For example, the configuration data can include data associated with static host logic (e.g., static logic), reconfigurable host logic (e.g., dynamically reconfigurable logic), and/or reconfigurable application logic (e.g., application logic320). The different components can be overlapping or non-overlapping. For example, the static host logic can be routed through regions that are used by the reconfigurable application logic. Thus, a partial bitstream for the reconfigurable application logic can also include portions of the static host logic. As another example, a netlist for the application logic and/or the host logic can be received during the ingestion410. As a specific example, a netlist can be received for the application logic and source code can be received for the host logic. In this case, the host logic can be synthesized with the logic synthesis420to generate a netlist for the host logic, and the netlists for the host and application logic can be combined into a single design to produce a netlist for the place and route440. As another example, configuration data for the application logic and/or the host logic can be received during the ingestion410. For example, a partial bitstream for the application logic design can be received, or a full bitstream for the host and application logic design can be received. As another example, a timing report can provide a static timing analysis showing whether the design meets timing specifications of the configurable hardware. 
The logic synthesis420and the place and route440can involve random, non-deterministic steps that vary with each run of the tools so that each run of the logic synthesis420and the place and route440may provide different results. Thus, if a developer has a design that does not meet timing (as indicated by the timing report), the developer may desire to rerun the logic synthesis420and/or the place and route440. In this manner, the developer can iterate on their design by executing multiple synthesis and routing runs for the same design. The library management and validation450functionality can be used to validate the user designs for the configurable logic at various points during the development and deployment steps. As one example, the validation450can include performing simulations to verify whether the application logic is compatible with the host logic so that the host logic can constrain the functionality of the application logic. The validation450can include examining a netlist of the application logic and confirming that the application logic meets capacity and area constraints of the configurable hardware platform. For example, the application logic can be restricted to use only logic within one or more reconfigurable regions. If the application logic is outside of those regions, then the application logic can be rejected. Additionally, the application logic can be ingested as a bitstream, and the bitstream can be validated by the validation450. The validation of a bitstream can include comparing a portion of the ingested bitstream data corresponding to the host logic to a baseline version of the host logic to confirm that the host logic is not corrupted. The output from the validation450can be validated configuration data. FIG.5shows further detail of an example system500including components of a control plane and a data plane for configuring and interfacing to a configurable hardware platform510.
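The bitstream validation step just described, comparing the host-logic portion of an ingested bitstream against a known-good baseline, can be sketched as a byte comparison over the region assigned to the host logic. The byte-level layout here is invented for illustration; real bitstream formats are device-specific.

```python
def validate_bitstream(bitstream: bytes, host_region: slice,
                       baseline_host: bytes) -> bool:
    """Confirm the host logic embedded in an ingested bitstream is not
    corrupted by comparing it to a baseline version (sketch only)."""
    return bitstream[host_region] == baseline_host
```

A rejected bitstream would simply not become validated configuration data.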
The control plane includes functions for initializing, monitoring, reconfiguring, and tearing down the configurable hardware platform510. The data plane includes functions for communicating between a user's application and the configurable hardware platform510. The control plane can be accessible by users or services having a higher privilege level and the data plane can be accessible by users or services having a lower privilege level. In one example, the configurable hardware platform510is connected to a server computer540using a local interconnect, such as PCIe. In some examples, a different interconnect, such as Ethernet or Infiniband, is used. In an alternative example, the configurable hardware platform510can be integrated within the hardware of the server computer540. As one example, the server computer540can be one of the plurality of server computers1102A-1102C of the compute service provider1100ofFIG.11. The host server computer540has underlying hardware542including one or more CPUs, memory, storage devices, interconnection hardware, etc. Running a layer above the hardware542is a hypervisor or kernel layer544. The hypervisor or kernel layer can be classified as a type 1 or type 2 hypervisor. A type 1 hypervisor runs directly on the host hardware542to control the hardware and to manage the guest operating systems. A type 2 hypervisor runs within a conventional operating system environment. Thus, in a type 2 environment, the hypervisor can be a distinct layer running above the operating system, and the operating system interacts with the system hardware. Different types of hypervisors include Xen-based, Hyper-V, ESXi/ESX, Linux, etc., but other hypervisors can be used. A management partition550(such as Domain 0 of the Xen hypervisor) can be part of the hypervisor or separated therefrom and generally includes device drivers needed for accessing the hardware542.
The management partition550can host supervisor privilege level processes that can access privileged portions of the host logic520, and depending on a particular configuration, may also access one or more portions of the application logic530. Configuration data, such as bitstreams used to configure FPGAs on the configurable hardware platform510can be cached in a bitstream cache546, which may be implemented using, for example, memory or storage devices coupled to the host server computer. After storing a bitstream in the bitstream cache546a first time, the configurable hardware platform can be re-programmed using the cached bitstreams multiple times, thereby avoiding the overhead of transferring configuration data via network storage. User host partitions560are logical units of isolation within the hypervisor. Each user partition560can be allocated its own portion of the hardware layer's memory, CPU allocation, storage, interconnect bandwidth, etc. Additionally, each user partition560can include a virtual machine and its own guest operating system. As such, each user partition560is an abstract portion of capacity designed to support its own virtual machine independent of the other partitions. The user host partitions560execute at a lower level of privilege than the management partition550(such as Domain U of the Xen hypervisor). Each of the user host partitions560can include a user privilege level process that can access an associated portion of the application logic530. The management partition550can be used to perform management services for the user host partitions560and the configurable hardware platform510. The management partition550can communicate with web services (such as a deployment service, a logic repository service, and a health monitoring service) of the compute service provider, the user host partitions560, and the configurable hardware platform510. 
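The bitstream cache546behaves like an ordinary read-through cache: the first request for a bitstream fetches it from network storage, and later reprogramming requests are served from local storage. A minimal sketch, assuming a caller-supplied fetch function:

```python
class BitstreamCache:
    """Sketch of a local bitstream cache: fetch configuration data from
    network storage once, then serve repeat programming requests locally."""

    def __init__(self, fetch):
        self._fetch = fetch   # callable that retrieves from network storage
        self._store = {}
        self.misses = 0       # counts trips to network storage

    def get(self, bitstream_id: str) -> bytes:
        if bitstream_id not in self._store:
            self.misses += 1
            self._store[bitstream_id] = self._fetch(bitstream_id)
        return self._store[bitstream_id]
```

Re-programming the configurable hardware with the same image then costs one network transfer total, which is the overhead saving the text describes.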
The management services can include services for launching and terminating user host partitions560, and configuring, reconfiguring, and tearing down the configurable logic of the configurable hardware platform510. As a specific example, the management partition550can launch a new user partition560in response to a request from a deployment service (such as the deployment component1126ofFIG.11). The request can include a reference to a machine image (MI) and/or a configurable hardware image (CHI). The MI can specify programs and drivers to load on the user partition560and the CHI can specify configuration data to load on the configurable hardware platform510. The management partition550can initialize the user partition560based on the information associated with the MI and can cause the configuration data associated with the CHI to be loaded onto the configurable hardware platform510. The initialization of the user partition560and the configurable hardware platform510can occur concurrently so that the time to make the instance operational can be reduced. The management partition550can be used to manage programming and monitoring of the configurable hardware platform510. The management partition550can also be used to send and receive debug data to and from the configurable hardware platform510. By using the management partition550for these purposes, access to the configuration data and the configuration ports of the configurable hardware platform510can be restricted. Specifically, users with lower privilege levels can be restricted from directly accessing the management partition550. Further, users with lower privilege levels can be restricted from accessing other user host partitions. Thus, the configurable logic cannot be modified without using the infrastructure of the programmable logic service provider, and any third-party IP used to program the configurable logic can be protected from viewing by unauthorized users.
Further, unauthorized users are also prevented from sending debug data to, or receiving any debug data from, unauthorized partitions on the configurable hardware platform510. The management partition550can include a software stack for the control plane to configure and interface to a configurable hardware platform510. The control plane software stack can include a service process551(e.g., a Unix daemon or a Windows service) dubbed “FPGAd.” The FPGAd service process551provides a command interface that can be accessed using simple C language functions and structures, and thus uses minimal message parsing. In other examples, the FPGAd service process can include other, more sophisticated interfaces. The FPGAd service process can forward requests, received from a programmable logic service provider, for operations to be performed with the configurable logic, and return responses generated by performing these operations. For example, the service process can use a privileged domain mailbox request/response communication channel, one for each FPGA integrated circuit, in order to transmit requests and responses. In some examples, the FPGAd service process is stateless with regard to servicing requests and responses. In some examples, the FPGAd service process can supervise the downloading and management of FPGA bitstreams in parallel and provide a secure and isolated environment for multi-tenant environments, where different users share reconfigurable resources on the computing instance. In some examples, the service process uses PCIe memory-mapped I/O to write bitstreams for programming the FPGAs. The FPGAd service process can update any of the configurable logic of a reconfigurable logic device, including static logic, reconfigurable logic, and other logic resources.
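The stateless request/response pattern attributed to the FPGAd service process can be sketched as a dispatcher in which every request carries its full context (operation and FPGA slot) and no state survives between calls. The operation names below are illustrative, not the actual FPGAd command set.

```python
def handle_request(request: dict) -> dict:
    """Stateless dispatcher sketch: each request is self-contained, and the
    response echoes the slot so it can be routed back over the per-FPGA
    mailbox channel (operation names are hypothetical)."""
    ops = {
        "load_bitstream": lambda req: {"status": "ok",
                                       "loaded": req["bitstream_id"]},
        "status":         lambda req: {"status": "ok",
                                       "state": "programmed"},
    }
    handler = ops.get(request.get("op"))
    if handler is None:
        return {"status": "error", "reason": "unknown operation"}
    response = handler(request)
    response["slot"] = request.get("slot")  # one channel per FPGA slot
    return response
```

Because nothing is remembered between calls, any instance of the service can answer any request, which is what statelessness buys here.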
The control plane software stack can also include a configurable logic (CL) application management layer552for communicating with web services (such as the programmable logic service provider110, a logic repository service, or a health monitoring service), the configurable hardware platform510, and the user host partitions560. For example, the FPGAd service process551can issue a request to the programmable logic service provider110to fetch configuration data in response to a user partition560being launched. The FPGAd service process551can communicate with the user partition560using shared memory of the hardware542or by sending and receiving inter-partition messages over the interconnect connecting the server computer540to the configurable hardware platform510. Specifically, the FPGAd service process551can read and write messages to mailbox logic521of the configurable hardware platform510. The messages can include requests by an end-user application561to reconfigure or tear-down the configurable hardware platform510. The FPGAd service process551can issue a request to the programmable logic service provider110to fetch configuration data in response to a request to reconfigure the configurable hardware platform510. The FPGAd service process551can initiate a tear-down sequence in response to a request to tear down the configurable hardware platform510. The FPGAd service process551can perform watchdog related activities to determine whether the communication path to the user partition560is functional. The control plane software stack can include a CL configuration layer554for accessing the configuration port522(e.g., a configuration access port) of the configurable hardware platform510so that configuration data can be loaded onto the configurable hardware platform510. 
For example, the FPGAd service process551can send messages or commands to the CL configuration layer554, which in turn sends a command or commands to the configuration port522to perform a full or partial configuration of the configurable hardware platform510. The CL configuration layer554can send the configuration data (e.g., a bitstream) to the configuration port522so that the configurable logic can be programmed according to the configuration data. The configuration data can specify host logic and/or application logic. The control plane software stack can include a management driver556for communicating over the physical interconnect connecting the server computer540to the configurable hardware platform510. The management driver556can encapsulate commands, requests, responses, messages, and data originating from the management partition550for transmission over the physical interconnect. Additionally, the management driver556can de-encapsulate commands, requests, responses, messages, and data sent to the management partition550over the physical interconnect. Specifically, the management driver556can communicate with the host logic520of the configurable hardware platform510via the host interface514. For example, the management driver556can access a physical or virtual function mapped to an address range during an enumeration of devices connected to the physical interconnect. For example, in PCIe implementations, the management driver556can communicate with the host logic520by addressing transactions to an assigned address range. The control plane software stack can include a CL management and monitoring layer558. The CL management and monitoring layer558can monitor and analyze transactions occurring on the physical interconnect to determine a health of the configurable hardware platform510and/or to determine usage characteristics of the configurable hardware platform510.
For example, the CL management and monitoring layer558can monitor whether configuration data is successfully deployed on the configurable hardware platform510and can cause a report to be transmitted to the logic repository service indicating the status of the deployment. The programmable logic service provider110can be used to send configuration data575to the management partition550. The configuration data575can be validated and then used to program a portion (e.g., one or more configurable logic partitions) of the application logic530. The programmable logic service provider110can also send commands to the management partition to initiate operation of the programmed partitions. The configurable hardware platform510can include non-configurable hard macros and configurable logic. The hard macros can perform specific functions within the configurable hardware platform510, such as input/output blocks (e.g., serializer and deserializer (SERDES) blocks and gigabit transceivers), analog-to-digital converters, memory control blocks, test access ports, and a configuration port522. The configurable logic can be programmed or configured by loading configuration data onto the configurable hardware platform510. For example, the configuration port522can be used for loading the configuration data. As one example, configuration data can be stored in a memory (such as a Flash or EEPROM memory) accessible by the configuration port522and the configuration data can be automatically loaded during an initialization sequence (such as during a power-on sequence) of the configurable hardware platform510. Additionally, the configuration port522can be accessed using an off-chip processor or an interface within the configurable hardware platform510. The configurable logic can be programmed to include host logic520and application logic530. 
In multi-tenant implementations, the host logic520can shield the interfaces of at least some of the hard macros from the end-users so that the end-users have limited access to the hard macros and to the physical interconnect. For example, the host logic can restrict access of the user host partitions560to only their associated configurable logic partition(s) within the application logic530. In a PCIe context, this can be implemented by assigning different user host partitions to different memory address ranges by configuring the base address registers (BARs) to reserve certain memory address ranges for certain combinations of host partitions and configurable logic partitions. The application logic530can include both hard macros and configurable logic. The application logic530can be partitioned into two or more portions, and each of the portions can be assigned to one or more of the user host partitions. Each of the configurable logic partitions is excluded by the host logic520from accessing other partitions of the configurable hardware platform. The host logic520can further be coupled to the mailbox logic521, the configuration port522, the host interface514, and the application logic530. The host interface logic514can include circuitry (e.g., hard macros and/or configurable logic) for signaling on the physical interconnect and implementing a communications protocol. The communications protocol specifies the rules and message formats for communicating over the interconnect. In alternative examples, the application logic530is configured to communicate with its associated user host partitions560without communicating through the host logic520. The mailbox logic521can include one or more buffers and one or more control registers. For example, a given control register can be associated with a particular buffer and the register can be used as a semaphore to synchronize between the management partition550and the user partition560.
As a specific example, if a partition can modify a value of the control register, the partition can write to the buffer. The buffer and the control register are accessible from the host logic520. In alternative examples, the buffer and the control register are accessible from both the host logic520and the application logic530. When a message is written to the buffer, another control register (e.g., the message ready register) can be written to indicate the message is complete. The message ready register can be polled by the partitions to determine if a message is present, or an interrupt can be generated and transmitted to the partitions in response to the message ready register being written. In other examples, the mailbox logic521is replaced or augmented by messages transmitted between the application logic530and the host logic520via the programmable logic service provider110, the FPGAd service process551, or both the service provider and the service process. By requiring messages to be sent via the programmable logic service provider110, additional security features (e.g., message authentication, authorization, or other security features) can be applied by a service executing separately from the configurable hardware platform510(and in certain cases, the host server computer540). The user partition560can include a software stack for interfacing an end-user application executing within the user partition to the configurable hardware platform510. The application software stack can include functions for communicating with the control plane and the data plane. However, the user partitions560may be restricted from accessing the configuration port522. For example, the user partitions may be restricted from reading or writing data from the configuration port. In some examples, the user partitions560may be granted limited read access to the configuration port.
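The control-register/message-ready handshake described above can be modeled as a small state machine: the control register acts as a semaphore guarding the buffer, and the message ready register signals the peer that a complete message is waiting. This is a behavioral sketch only, not actual mailbox hardware.

```python
class Mailbox:
    """Behavioral sketch of the mailbox handshake: a control register
    gates writes to the buffer, and a message ready register tells the
    peer a complete message is present."""

    def __init__(self):
        self.control = 1   # 1: buffer free for writing (semaphore)
        self.ready = 0     # 1: complete message present
        self.buffer = b""

    def write(self, sender_can_modify: bool, message: bytes) -> bool:
        # Only a partition permitted to modify the control register may
        # write, and only while the buffer is free.
        if not sender_can_modify or self.control == 0:
            return False
        self.control = 0
        self.buffer = message
        self.ready = 1     # peer observes this by polling or interrupt
        return True

    def read(self) -> bytes:
        if self.ready == 0:
            return b""
        message = self.buffer
        self.buffer, self.ready, self.control = b"", 0, 1
        return message
```

A second write before the peer reads is refused, which is the synchronization the semaphore provides between the management partition and the user partition.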
The application software stack can include a CL-Application API564for providing the end-user application executing within the user partition560with access to the configurable hardware platform510. The CL-Application API564can include a library of methods or functions for communicating with the configurable hardware platform510and the management partition550. For example, the end-user application561can send a command or data to the configurable application logic530by using an API of the CL-Application API564. In particular, the API of the CL-Application API564can interface with the application logic (AL) data plane driver563, which can generate a transaction targeted to the application logic530, which in turn can communicate with the targeted partition. In this manner, the end-user application561can cause the configurable application logic530to receive, process, and/or respond with data to potentially accelerate tasks of the end-user application561. As another example, the end-user application561can send a command or data to the management partition550by using an API of the CL-Application API564. In particular, the API of the CL-Application API564can interface with the AL management driver562, which can generate a transaction targeted to the application logic530, which in turn can communicate with the mailbox logic521. In this manner, the end-user application561can cause the management partition550to provide operational data or metadata about the configurable hardware platform510. FIG.6is a sequence diagram600illustrating an example of messages passed between system components during system initialization, as can be performed in certain examples of the disclosed technology. For example, the system500discussed above regardingFIG.5can be used to implement the disclosed operations. At message610, a supervisor level process executing within the management partition550submits a request to create a compute instance. This request can include, for example, an instance ID and slot number.
The programmable logic service provider110can provide a mapping to a particular compute instance metadata identifier, which identifies an image to load on the compute instance. The message610is sent to the programmable logic services provider110, which creates the instance and returns a status message615indicating whether the operation was completed successfully. The programmable logic services provider110in turn sends a request620with an encoded identifier (e.g., a machine image identifier, a product code, or an identifier of a physical or virtual compute instance) to the storage resources150,155in order to retrieve uncached bitstreams621identified using the identifier. In some examples, a compute instance identifier can be mapped to a reconfigurable resource identifier, which identifies configuration data that can be used to program the reconfigurable resources. In some cases, the compute instance identifier may be matched to multiple different reconfigurable device identifiers, depending on available reconfigurable hardware resources, which can vary based on the reconfigurable logic device's type, manufacturer, size, capability, or other suitable parameters. Responsive to sending the request with the compute instance identifier, the storage150,155returns625a metadata file including a bitstream identifier, a bitstream uniform resource identifier (URI), the state of the request, and a timestamp. The programmable logic services provider110analyzes the response and, if the identified configuration data is acceptable, sends a request message630to the storage150,155containing the bitstream URI. Responsive to receiving this request, the storage returns635configuration data, for example the identified bitstream. As the bitstream is received by the programmable logic services provider110, a file system write message640is sent to the bitstream cache546.
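The mapping from a compute instance identifier to one of possibly several device-specific bitstreams can be sketched as a catalog lookup that returns a metadata record (state plus bitstream URI), loosely mirroring the metadata file returned at message625. All names and the catalog shape here are hypothetical.

```python
def resolve_bitstream(instance_id: str, device_type: str, catalog: dict) -> dict:
    """Look up the bitstream matching both the compute instance and the
    reconfigurable device actually present (sketch only)."""
    candidates = catalog.get(instance_id, {})
    uri = candidates.get(device_type)
    if uri is None:
        return {"state": "not-found", "instance": instance_id}
    return {"state": "ok", "bitstream_uri": uri, "device": device_type}
```

One instance identifier can map to several device identifiers, so the lookup is keyed by both, matching the "multiple different reconfigurable device identifiers" case described above.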
There, configuration data including FPGA bitstreams can be temporarily stored in local storage at the computing instance on which the reconfigurable hardware will be programmed and executed. The programmable logic services provider 110 then sends a load bitstream request message 650 to the FPGAd service process. Responsive to receiving the request, the service process sends a request to one or more reconfigurable logic devices of the compute instance to load the bitstream 660, and receives a status response 670 from the reconfigurable logic devices indicating whether bitstream loading was successful. If programming the reconfigurable logic devices is successful, the FPGAd service process sends a status message 675 to the programmable logic service provider indicating whether loading the bitstream was successful.

FIG. 7 is a sequence diagram 700 outlining a series of actions and messages performed when loading and programming a bitstream for one or more reconfigurable logic devices, as can be performed in certain examples of the disclosed technology. The user partition 560 initiates loading of the bitstream by sending a load request 710 to the FPGAd service process 551. Responsive to receiving the request, the service process sends a get bitstream message 715, including an indication of the bitstream type, a compute instance identifier, a bitstream identifier, and an FPGA slot identifier, to the programmable logic services provider 110. For example, the user operating the previously allocated compute instance can decide to load a bitstream on their local computing hardware, and send a request over a computer network to a programmable logic service provider located at another server, including servers hosted in a computing cloud. The programmable logic services provider 110 in turn submits a request 720 to database and/or networked storage 150, 155 and receives a response message 725 indicating the bitstream identifier, a bitstream URI, the status of the request, and a time stamp.
The programmable logic services provider 110 authenticates this response and, if the bitstream is authorized for use by the requesting compute instance user, submits a request 730 to the storage 150, 155 containing the bitstream URI. Responsive to receiving the request message 730, the storage sends response message 735 including the requested configuration data 721, such as FPGA bitstreams. After transmission of the configuration data begins, the programmable logic services provider 110 sends a write message 740 to the bitstream cache 546. As, or after, the bitstream is cached, the services provider sends a load bitstream request message 750 to the FPGAd service process 551. Responsive to receiving the load bitstream request, the service process 551 sends a load bitstream command 760 including the bitstream data to one or more of the configurable logic devices and receives a status message 770 once loading the bitstreams and programming the reconfigurable logic devices has completed. The service process 551 then sends a message 780 to the user partition 560 indicating whether the bitstream was successfully loaded, and then sends another message 785 to the programmable logic service provider 110 indicating whether programming of FPGAs with the indicated bitstreams has completed.

FIG. 8 is a sequence diagram 800 outlining messages that can be sent as part of a register access operation, according to certain examples of the disclosed technology. As shown, a remote user application 810 can initiate the register access transaction. For example, a remote user can initiate a request to access one or more registers of the FPGA using an application transport layer (e.g., using HTTP requests). This message 820 is transmitted via a computer network to a programmable logic services provider 110. The programmable logic services provider 110 maps the request 830 to the associated compute instance and transmits the request to the FPGAd service process 551.
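The fetch-cache-program flow of FIG. 7 described above can be sketched as a small routine that consults a local bitstream cache before fetching configuration data by URI from networked storage. The class and function names are illustrative only, not the patent's actual API:

```python
# Minimal sketch, assuming a dict-backed cache and caller-supplied
# fetch/program callbacks standing in for storage and device access.
class BitstreamCache:
    def __init__(self):
        self._store = {}

    def get(self, bitstream_id):
        return self._store.get(bitstream_id)

    def put(self, bitstream_id, data):
        self._store[bitstream_id] = data

def load_bitstream(bitstream_id, uri, cache, fetch, program_device):
    data = cache.get(bitstream_id)
    if data is None:
        data = fetch(uri)              # request/response 730/735
        cache.put(bitstream_id, data)  # write message 740
    return program_device(data)        # load command 760, status 770
```

On a second load of the same bitstream, the network fetch is skipped entirely, which is the bandwidth/latency benefit the caching step is aiming at.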
The service process 551 sends a request 840 to read the requested registers to one or more of the reconfigurable logic devices and receives response message 845, indicating whether the register read requests were successful and, if the request was successful, one or more values produced as a result of the read operation. The FPGAd service process 551 sends a response message 850 indicating the status and any read register values to the programmable logic services provider 110, which in turn sends a message 860 to the remote user app 810. Thus, users located at arbitrary locations within a computing network, including over the Internet or other suitable computing networks, can access FPGA data such as register values. The illustrated sequence diagram 800 can similarly be adapted in order to write data to the FPGA registers, using different message commands and FPGA commands.

FIG. 9 is a flowchart 900 outlining an example method of programming reconfigurable logic resources using a networked programmable logic service provider, as can be performed in certain examples of the disclosed technology. For example, systems such as those described above regarding FIGS. 1, 3, and 5 can be used to implement the illustrated method. At process block 910, a request is received via a computer network to create a computing instance that includes reconfigurable logic resources. For example, a user can send a request to a programmable logic service provider hosted on a network server provided by a computing cloud. The programmable logic service provider can implement domain logic for authenticating and controlling access to configuration data and compute hardware containing reconfigurable logic devices. At process block 920, configuration data is produced for programming the reconfigurable logic resources. In some examples, the configuration data is produced prior to launching the requested compute instance.
In some examples, the producing includes authenticating the request to determine whether the request authorizes an associated user to access the requested configuration data. In some examples, the request is received from a first party user and the configuration data is received from a third party user different than the first party user. In some examples, a financial transaction associated with the request is processed prior to providing the configuration data. The configuration data is provided if, and only if, the financial transaction is successfully processed. In other examples, usage of compute resources (e.g., including usage of reconfigurable logic resources) is metered and a financial transaction is processed at a later point in time based on the metered usage. In some examples, the configuration data is provided without an additional fee. Thus, configuration data including bitstreams can be sold or leased to other users from third party providers. In some examples, producing the configuration data further includes mapping a machine image indicator to a set of configuration data and selecting configuration data to produce based on the mapping. For example, a machine image indicator for a particular type or class of computing instance may be matched to one, or more than one, configuration data indicators, and one of a plurality of configuration data can be selected based on the target computing host. For example, computing instances in the environment may have access to different types, manufacturers, or sizes of reconfigurable logic devices. In some examples, producing configuration data includes retrieving a bitstream URI from storage that is sent to the programmable logic service provider, and the provider in turn selects one of the indicated bitstreams to request and then sends it to the computing instance.
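The "if, and only if" gating of configuration data on a successful financial transaction could be sketched as below. The function names and request shape are hypothetical; the payment and fetch steps are caller-supplied stand-ins:

```python
def provide_configuration_data(request, process_payment, fetch_config):
    """Release configuration data only when the associated financial
    transaction clears; otherwise the data is withheld."""
    if not process_payment(request):
        return None  # transaction failed: no configuration data is provided
    return fetch_config(request["config_id"])
```

The metered-usage variant described above would instead skip the up-front check and record usage for later billing.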
In some examples, producing the configuration data is performed by compiling source code indicated by the request to create a programming file for at least a portion of the configuration data. For example, source code expressed in a hardware description language such as SystemVerilog, SystemC, C, or other suitable source code can be provided by the requesting user and compiled using the programmable logic service provider. In some examples, a library or API is provided that maps function calls to accelerator functions implemented using configurable hardware resources. Thus, the programmable logic service provider provides an encapsulated tool chain for converting the source code into bitstreams that can be loaded onto reconfigurable logic devices of the computing instance. The requesting user thus need not have access to low level implementation details such as netlists, FPGA place and route data, or other such data. Further, access to the FPGA can be provided as a web service instead of requiring the use of a command line interface to run a series of tools in sequence. Thus, a web service can provide a robust interface that hides complexity from the user, thereby providing a user-friendly environment for implementing tasks such as function accelerators using reconfigurable logic devices. In some examples, the programmable logic service provider further performs operations associated with purchasing and/or licensing machine instance identifiers and their associated reconfigurable logic identifiers. In some examples, configuration data can be produced from a bitstream cache local to the computing instance, for example when reinitializing the compute instance with a previously used set of configuration data that is obtained from network storage. In some examples, the configuration data is a predefined set of configuration data that can then have a portion of the data reprogrammed for a particular user.
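The encapsulated tool chain described above, where the user submits source and receives only an opaque handle while netlists and place-and-route artifacts stay behind the service boundary, might look like this facade. Every stage here is a toy stand-in for a real synthesis tool, and the identifier format is invented:

```python
import hashlib

def compile_accelerator(source_code: str) -> str:
    """Facade over a hypothetical tool chain: accepts HDL/C source, returns
    only an opaque bitstream identifier. The intermediate artifacts never
    cross the service boundary."""
    netlist = "netlist:" + source_code            # stands in for synthesis
    placed_and_routed = "pnr:" + netlist          # stands in for place & route
    digest = hashlib.sha256(placed_and_routed.encode()).hexdigest()
    return "bs-" + digest[:8]                     # only the handle is exposed
```

A caller sees a stable identifier for the same source, which is what lets the web service hide the multi-tool command line workflow.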
In such examples, a generic configuration image can be cached at a compute instance, and customized in a shorter period of time than required to produce and load a complete set of bitstreams. In some examples, reprogramming of the FPGA can be implemented multiple times per compute instance session. This can be particularly useful in cases where a user of the compute instance is performing debugging operations of an accelerator function implemented using a reconfigurable logic device. At process block 930, a compute instance is launched. Launching the instance includes executing a supervisor privilege level process and at least one user process using a general-purpose processor on the compute instance host. For example, a service process such as an FPGAd service process can be used to control management and configuration of the reconfigurable logic resources. The user processes can interact with the FPGAd service process and/or the programmable logic service provider to receive configuration data and provide requests to the service process. In some examples, the compute instance is completely cleared before initiating a new compute instance. In other examples, some of the compute instance state is preserved and the compute instance is partially reset. For example, the existing service process and/or user processes can maintain their state while the reconfigurable logic devices are reset and reprogrammed. In some examples, only a portion of the reconfigurable logic devices, such as static logic, reconfigurable logic, host logic, and/or customer logic, are reprogrammed and/or reinitialized. In some examples, the configuration data is produced prior to launching the compute instance, and the launching includes programming the reconfigurable logic resources with the produced configuration data prior to providing the compute instance to the requester, such as a requesting user.
At process block 940, the reconfigurable logic resources are programmed with the configuration data. For example, an FPGAd service process can manage application of configuration data to one or more FPGAs of the computing instance and return status messages indicating success or failure of the reprogramming operation.

FIG. 10 outlines an example method 1000 of programming FPGAs in a web-based service environment, as can be performed in certain examples of the disclosed technology. For example, the systems discussed above regarding FIGS. 1-5 can be used to implement the outlined method. At process block 1010, a request is received to implement application logic at one or more FPGAs. For example, the user can submit a request using an API via the Internet to a computing cloud. In some examples, the request is received from a first party that is different from the third party that will provide the configuration data for performing the outlined method. In some examples, the request includes an indicator of a machine image to be used for launching the requested compute instance. The machine image indicator can be mapped to one or more sets of configuration data, and one of the sets of configuration data can be selected for programming the computing instance. At process block 1020, a computing instance can be allocated comprising the requested FPGAs. For example, a programmable logic services provider can identify available compute resources and allocate one or more computing hosts as a computing instance for implementing the requested application logic. At process block 1030, the request is authenticated and configuration information is produced for programming the FPGAs. In some examples, this includes executing domain logic to authenticate and process financial transactions for buying, leasing, or licensing configuration data images.
At process block 1040, the configuration information that was authenticated and produced at process block 1030 is sent to the computing instance that was allocated at process block 1020. In some examples, at least a portion of the configuration information can be received from a bitstream cache. For example, previously used or default configuration data associated with the computing instance image can be stored in a local bitstream cache, thereby avoiding transferring bitstreams to the computing instance and thus improving network bandwidth usage and response time. At process block 1050, the requested FPGAs are programmed using the configuration information. For example, a service process executed on the computing host can apply the configuration data to one or more configuration ports of the FPGA in order to program the associated FPGAs.

FIG. 11 is a computing system diagram of a network-based compute service provider 1100 that illustrates one environment in which examples described herein can be used. By way of background, the compute service provider 1100 (e.g., a cloud services provider) is capable of delivery of computing and storage capacity as a service to a community of end recipients. In some examples, the compute service provider can be established for an organization by or on behalf of the organization. That is, the compute service provider 1100 may offer a “private cloud environment.” In another example, the compute service provider 1100 supports a multi-tenant environment, wherein a plurality of customers operate independently (e.g., a public cloud environment). Generally speaking, the compute service provider 1100 can provide the following models: Infrastructure as a Service (“IaaS”), Platform as a Service (“PaaS”), and/or Software as a Service (“SaaS”). Other models can be provided. For the IaaS model, the compute service provider 1100 can offer computers as physical or virtual machines and other resources.
The virtual machines can be run as guests by a hypervisor, as described further below. The PaaS model delivers a computing platform that can include an operating system, programming language execution environment, database, and web server. Application developers can develop and run their software solutions on the compute service provider platform without the cost of buying and managing the underlying hardware and software. Additionally, application developers can develop and run their hardware solutions on configurable hardware of the compute service provider platform. The SaaS model allows installation and operation of application software in the compute service provider. In some examples, end users access the compute service provider 1100 using networked client devices, such as desktop computers, laptops, tablets, smartphones, etc. running web browsers or other lightweight client applications. Those skilled in the art will recognize that the compute service provider 1100 can be described as a “cloud” environment. The particular illustrated compute service provider 1100 includes a plurality of server computers 1102A-1102C. While only three server computers are shown, any number can be used, and large centers can include thousands of server computers. The server computers 1102A-1102C can provide computing resources for executing software instances 1106A-1106C. In one example, the software instances 1106A-1106C are virtual machines. As known in the art, a virtual machine is an instance of a software implementation of a machine (i.e., a computer) that executes applications like a physical machine. In the example of a virtual machine, each of the servers 1102A-1102C can be configured to execute a hypervisor 1108 or another type of program configured to enable the execution of multiple software instances 1106 on a single server. Additionally, each of the software instances 1106 can be configured to execute one or more applications.
It should be appreciated that although the examples disclosed herein are described primarily in the context of virtual machines, other types of instances can be utilized with the concepts and technologies disclosed herein. For instance, the technologies disclosed herein can be utilized with storage resources, data communications resources, and with other types of computing resources. The examples disclosed herein might also execute all or a portion of an application directly on a computer system without utilizing virtual machine instances. The server computers 1102A-1102C can include a heterogeneous collection of different hardware resources or instance types. Some of the hardware instance types can include configurable hardware that is at least partially configurable by a user of the compute service provider 1100. One example of an instance type can include the server computer 1102A, which is in communication with configurable hardware 1104A. Specifically, the server computer 1102A and the configurable hardware 1104A can communicate over a local interconnect such as PCIe. Another example of an instance type can include the server computer 1102B and configurable hardware 1104B. For example, the configurable logic 1104B can be integrated within a multi-chip module or on the same die as a CPU of the server computer 1102B. Yet another example of an instance type can include the server computer 1102C without any configurable hardware. Thus, hardware instance types with and without configurable logic can be present within the resources of the compute service provider 1100. One or more server computers 1120 can be reserved for executing software components for managing the operation of the server computers 1102 and the software instances 1106. For example, the server computer 1120 can execute a management component 1122. A customer can access the management component 1122 to configure various aspects of the operation of the software instances 1106 purchased by the customer.
For example, the customer can purchase, rent, or lease instances and make changes to the configuration of the software instances. The configuration information for each of the software instances can be stored as a machine image (MI) 1142 on the network-attached storage 1140. Specifically, the MI 1142 describes the information used to launch a VM instance. The MI can include a template for a root volume of the instance (e.g., an OS and applications), launch permissions for controlling which customer accounts can use the MI, and a block device mapping which specifies volumes to attach to the instance when the instance is launched. The MI can also include a reference to a configurable hardware image (CHI) 1142 which is to be loaded on configurable hardware 1104 when the instance is launched. The CHI includes configuration data for programming or configuring at least a portion of the configurable hardware 1104. The MI 1142 and the CHI can be referenced by software using a machine image identifier (MII) and a configurable hardware image identifier (CHII), respectively. The MII and CHII may uniquely identify their respective images. In some examples, a programmable logic service provider or logic repository service assigns an identifying number to the images. In some examples, the identifier may include a hash value generated from other aspects of the image (e.g., an MD5 or SHA hash value of the images). The customer can also specify settings regarding how the purchased instances are to be scaled in response to demand. The management component can further include a policy document to implement customer policies. An auto scaling component 1124 can scale the instances 1106 based upon rules defined by the customer. In one example, the auto scaling component 1124 allows a customer to specify scale-up rules for use in determining when new instances should be instantiated and scale-down rules for use in determining when existing instances should be terminated.
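Deriving an MII or CHII from a hash of the image contents, as mentioned above, can be sketched in a few lines. The `kind-digest` string format and the truncation to twelve hex characters are illustrative choices, not part of the disclosure:

```python
import hashlib

def image_identifier(image_bytes: bytes, kind: str) -> str:
    """Derive an MI/CHI identifier from the image contents, in the spirit
    of the MD5/SHA hash-based identifiers described above."""
    return kind + "-" + hashlib.sha256(image_bytes).hexdigest()[:12]
```

Because the identifier is a function of the bytes, identical images always map to the same identifier and distinct images are overwhelmingly likely to differ.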
The auto scaling component 1124 can consist of a number of subcomponents executing on different server computers 1102 or other computing devices. The auto scaling component 1124 can monitor available computing resources over an internal management network and modify resources available based on need. A deployment component 1126 can be used to assist customers in the deployment of new instances 1106 of computing resources. The deployment component can have access to account information associated with the instances, such as who is the owner of the account, credit card information, country of the owner, etc. The deployment component 1126 can receive a configuration from a customer that includes data describing how new instances 1106 should be configured. For example, the configuration can specify one or more applications to be installed in new instances 1106, provide scripts and/or other types of code to be executed for configuring new instances 1106, provide cache logic specifying how an application cache should be prepared, and other types of information. The deployment component 1126 can utilize the customer-provided configuration and cache logic to configure, prime, and launch new instances 1106. The configuration, cache logic, and other information may be specified by a customer using the management component 1122 or by providing this information directly to the deployment component 1126. The instance manager can be considered part of the deployment component. Customer account information 1128 can include any desired information associated with a customer of the multi-tenant environment. For example, the customer account information can include a unique identifier for a customer, a customer address, billing information, licensing information, customization parameters for launching instances, scheduling information, auto-scaling parameters, previous IP addresses used to access the account, a listing of the MI's and CHI's accessible to the customer, etc.
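The customer-defined scale-up and scale-down rules described above amount to threshold checks against current load, bounded by a minimum and maximum fleet size. A minimal sketch, with hypothetical default thresholds that are not values from the disclosure:

```python
def autoscale_decision(load, instance_count, scale_up_at=0.8,
                       scale_down_at=0.3, min_instances=1, max_instances=10):
    """Evaluate toy scale-up/scale-down rules against the current load
    (a 0..1 utilization figure) and return the target instance count."""
    if load > scale_up_at and instance_count < max_instances:
        return instance_count + 1   # scale-up rule fires
    if load < scale_down_at and instance_count > min_instances:
        return instance_count - 1   # scale-down rule fires
    return instance_count           # steady state
```

A real auto scaling component would evaluate such rules periodically against monitored metrics rather than on each call.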
One or more server computers 1130 can be reserved for executing software components for managing the download of configuration data to configurable hardware 1104 of the server computers 1102. For example, the server computer 1130 can execute a programmable logic service provider and/or a logic repository service comprising an ingestion component 1132, a library management component 1134, and a download component 1136. The ingestion component 1132 can receive host logic and application logic designs or specifications and generate configuration data that can be used to configure the configurable hardware 1104. The library management component 1134 can be used to manage source code, user information, and configuration data associated with the logic repository service. For example, the library management component 1134 can be used to store configuration data generated from a user's design in a location specified by the user on the network-attached storage 1140. In particular, the configuration data can be stored within a configurable hardware image 1142 on the network-attached storage 1140. Additionally, the library management component 1134 can manage the versioning and storage of input files (such as the specifications for the application logic and the host logic) and metadata about the logic designs and/or the users of the logic repository service. The library management component 1134 can index the generated configuration data by one or more properties such as a user identifier, an instance type, a marketplace identifier, a machine image identifier, and a configurable hardware identifier, for example. The download component 1136 can be used to authenticate requests for configuration data and to transmit the configuration data to the requestor when the request is authenticated. For example, agents on the server computers 1102A-B can send requests to the download component 1136 when the instances 1106 are launched that use the configurable hardware 1104.
As another example, the agents on the server computers 1102A-B can send requests to the download component 1136 when the instances 1106 request that the configurable hardware 1104 be partially reconfigured while the configurable hardware 1104 is in operation. The network-attached storage (NAS) 1140 can be used to provide storage space and access to files stored on the NAS 1140. For example, the NAS 1140 can include one or more server computers used for processing requests using a network file sharing protocol, such as Network File System (NFS). The NAS 1140 can include removable or non-removable media, including magnetic disks, storage area networks (SANs), redundant arrays of independent disks (RAID), magnetic tapes or cassettes, CD-ROMs, DVDs, or any other medium which can be used to store information in a non-transitory way and which can be accessed over the network 1150. In some examples, the NAS 1140 can be replaced or supplemented with a database system. The network 1150 can be utilized to interconnect the server computers 1102A-1102C, the server computers 1120 and 1130, and the storage 1140. The network 1150 can be a local area network (LAN) and can be connected to a Wide Area Network (WAN) 1160 so that end users can access the compute service provider 1100. It should be appreciated that the network topology illustrated in FIG. 11 has been simplified and that many more networks and networking devices can be utilized to interconnect the various computing systems disclosed herein.

FIG. 12 illustrates in further detail management components 1206 that can be used in the multi-tenant environment of the compute service provider 1100. In order to access and utilize instances (such as instances 1106 of FIG. 11), a client device can be used. The client device 1210 can be any of a variety of computing devices, mobile or otherwise, including a cell phone, smartphone, handheld computer, Personal Digital Assistant (PDA), desktop computer, etc.
The client device 1210 can communicate with the compute service provider 1100 through an end point 1212, which can be a DNS address designed to receive and process API requests. In particular, the end point 1212 can be a web server configured to expose an API. Using the API requests, a client 1210 can make requests to implement any of the functionality described herein. Other services 1215, which can be internal to the compute service provider 1100, can likewise make API requests to the end point 1212. Other general management services that may or may not be included in the compute service provider 1100 include an admission control 1214, e.g., one or more computers operating together as an admission control web service. The admission control 1214 can authenticate, validate, and unpack the API requests for service or storage of data within the compute service provider 1100. The capacity tracker 1216 is responsible for determining how the servers need to be configured in order to meet the need for the different instance types by managing and configuring physical inventory in terms of forecasting, provisioning, and real-time configuration and allocation of capacity. The capacity tracker 1216 maintains a pool of available inventory in a capacity pool database 1218. The capacity tracker 1216 can also monitor capacity levels so as to know whether resources are readily available or limited. An instance manager 1250 controls launching and termination of instances in the network. When an instruction is received (such as through an API request) to launch an instance, the instance manager pulls resources from the capacity pool 1218 and launches the instance on a decided-upon host server computer. Similar to the instance manager are the storage manager 1222 and the network resource manager 1224. The storage manager 1222 relates to initiation and termination of storage volumes, while the network resource manager 1224 relates to initiation and termination of routers, switches, subnets, etc.
A network of partitions 1240 is described further in relation to FIG. 13 and includes a physical layer upon which the instances are launched. A health monitoring service 1260 can provide monitoring for resources and the applications customers run on the compute service provider 1100. System administrators can use the monitoring service 1260 to collect and track metrics, and gain insight into how applications are running. For example, the monitoring service 1260 can allow system-wide visibility into application performance and operational health. Metrics generated by the health monitoring service 1260 can be stored in the metrics database 1262.

FIG. 13 illustrates the network of partitions 1240 and the physical hardware associated therewith. The network of partitions 1240 can include a plurality of data centers, such as data center 1310, coupled together by routers 1316. The routers 1316 read address information in a received packet and determine the packet's destination. If the router decides that a different data center contains a host server computer, then the packet is forwarded to that data center. If the packet is addressed to a host in the data center 1310, then it is passed to a network address translator (NAT) 1318 that converts the packet's public IP address to a private IP address. The NAT also translates private addresses to public addresses that are bound outside of the data center 1310. Additional routers 1320 can be coupled to the NAT to route packets to one or more racks of host server computers 1330. Each rack 1330 can include a switch 1332 coupled to multiple host server computers. A particular host server computer is shown in an expanded view at 1340. Each host 1340 has underlying hardware 1350 including one or more CPUs, memory, storage devices, reconfigurable hardware, etc. Running a layer above the hardware 1350 is a hypervisor or kernel layer 1360. The hypervisor or kernel layer can be classified as a type 1 or type 2 hypervisor.
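The two-way address rewriting performed by the NAT described above can be sketched as a table lookup in each direction. The class name and addresses are made up for illustration:

```python
class Nat:
    """Toy network address translator in the spirit of the NAT at the
    data center boundary: rewrites public addresses to private ones for
    inbound packets, and the reverse for outbound packets."""
    def __init__(self, table):
        self._to_private = dict(table)                       # public -> private
        self._to_public = {v: k for k, v in table.items()}   # private -> public

    def inbound(self, public_ip):
        return self._to_private.get(public_ip)

    def outbound(self, private_ip):
        return self._to_public.get(private_ip)
```

A packet addressed to an unknown public address yields no translation, which a real NAT would drop or reject.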
A type 1 hypervisor runs directly on the host hardware 1350 to control the hardware and to manage the guest operating systems. A type 2 hypervisor runs within a conventional operating system environment. Thus, in a type 2 environment, the hypervisor can be a distinct layer running above the operating system, and the operating system interacts with the system hardware. Different types of hypervisors include Xen-based, Hyper-V, ESXi/ESX, Linux, etc., but other hypervisors can be used. A management layer 1370 can be part of the hypervisor or separated therefrom and generally includes device drivers needed for accessing the hardware 1350. The partitions 1380 are logical units of isolation by the hypervisor. Each partition 1380 can be allocated its own portion of the hardware layer's memory, CPU allocation, storage, etc. Additionally, each partition can include a virtual machine and its own guest operating system. As such, each partition is an abstract portion of capacity designed to support its own virtual machine independent of the other partitions. Any applications executing on the instances can be monitored using the management layer 1370, which can then pass the metrics to the health monitoring service 1260 for storage in the metrics database 1262. Additionally, the management layer 1370 can pass to the monitoring service 1260 the number of instances that are running, when they were launched, the operating system being used, the applications being run, etc. All such metrics can be used for consumption by the health monitoring service 1260 and stored in database 1262.

FIG. 14 depicts a generalized example of a suitable computing environment 1400 in which the described innovations may be implemented. The computing environment 1400 is not intended to suggest any limitation as to scope of use or functionality, as the innovations may be implemented in diverse general-purpose or special-purpose computing systems.
For example, the computing environment 1400 can be any of a variety of computing devices (e.g., desktop computer, laptop computer, server computer, tablet computer, etc.). With reference to FIG. 14, the computing environment 1400 includes one or more processing units 1410, 1415 and memory 1420, 1425. In FIG. 14, this basic configuration 1440 is included within a dashed line. The processing units 1410, 1415 execute computer-executable instructions. A processing unit can be a general-purpose central processing unit (CPU), a processor in an application-specific integrated circuit (ASIC), or any other type of processor. In a multi-processing system, multiple processing units execute computer-executable instructions to increase processing power. For example, FIG. 14 shows a central processing unit 1410 as well as a graphics processing unit or co-processing unit 1415. The tangible memory 1420, 1425 may be volatile memory (e.g., registers, cache, RAM), non-volatile memory (e.g., ROM, EEPROM, flash memory, etc.), or some combination of the two, accessible by the processing unit(s). The memory 1420, 1425 stores software 1480 implementing one or more innovations described herein, in the form of computer-executable instructions suitable for execution by the processing unit(s). A computing system may have additional features. For example, the computing environment 1400 includes storage 1440, one or more input devices 1450, one or more output devices 1460, and one or more communication connections 1470. An interconnection mechanism (not shown) such as a bus, controller, or network interconnects the components of the computing environment 1400. Typically, operating system software (not shown) provides an operating environment for other software executing in the computing environment 1400, and coordinates activities of the components of the computing environment 1400.
The tangible storage 1440 may be removable or non-removable, and includes magnetic disks, magnetic tapes or cassettes, CD-ROMs, DVDs, or any other medium which can be used to store information in a non-transitory way and which can be accessed within the computing environment 1400. The storage 1440 stores instructions for the software 1480 implementing one or more innovations described herein. The input device(s) 1450 may be a touch input device such as a keyboard, mouse, pen, or trackball, a voice input device, a scanning device, or another device that provides input to the computing environment 1400. The output device(s) 1460 may be a display, printer, speaker, CD-writer, or another device that provides output from the computing environment 1400. The communication connection(s) 1470 enable communication over a communication medium to another computing entity. The communication medium conveys information such as computer-executable instructions, audio or video input or output, or other data in a modulated data signal. A modulated data signal is a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media can use an electrical, optical, RF, or other carrier.

Although the operations of some of the disclosed methods are described in a particular, sequential order for convenient presentation, it should be understood that this manner of description encompasses rearrangement, unless a particular ordering is required by specific language set forth below. For example, operations described sequentially may in some cases be rearranged or performed concurrently. Moreover, for the sake of simplicity, the attached figures may not show the various ways in which the disclosed methods can be used in conjunction with other methods.
Any of the disclosed methods can be implemented as computer-executable instructions stored on one or more computer-readable storage media (e.g., one or more optical media discs, volatile memory components (such as DRAM or SRAM), or non-volatile memory components (such as flash memory or hard drives)) and executed on a computer (e.g., any commercially available computer, including smart phones or other mobile devices that include computing hardware). The term computer-readable storage media does not include communication connections, such as signals and carrier waves. Any of the computer-executable instructions for implementing the disclosed techniques as well as any data created and used during implementation of the disclosed examples can be stored on one or more computer-readable storage media. The computer-executable instructions can be part of, for example, a dedicated software application or a software application that is accessed or downloaded via a web browser or other software application (such as a remote computing application). Such software can be executed, for example, on a single local computer (e.g., any suitable commercially available computer) or in a network environment (e.g., via the Internet, a wide-area network, a local-area network, a client-server network (such as a cloud computing network), or other such network) using one or more network computers. For clarity, only certain selected aspects of the software-based implementations are described. Other details that are well known in the art are omitted. For example, it should be understood that the disclosed technology is not limited to any specific computer language or program. For instance, the disclosed technology can be implemented by software written in C, C++, Java, Perl, JavaScript, Adobe Flash, or any other suitable programming language. Likewise, the disclosed technology is not limited to any particular computer or type of hardware. 
Certain details of suitable computers and hardware are well known and need not be set forth in detail in this disclosure. It should also be well understood that any functionality described herein can be performed, at least in part, by one or more hardware logic components, instead of software. For example, and without limitation, illustrative types of hardware logic components that can be used include Field-programmable Gate Arrays (FPGAs), Application-specific Integrated Circuits (ASICs), Application-specific Standard Products (ASSPs), System-on-a-chip systems (SOCs), Complex Programmable Logic Devices (CPLDs), etc. Furthermore, any of the software-based examples (comprising, for example, computer-executable instructions for causing a computer to perform any of the disclosed methods) can be uploaded, downloaded, or remotely accessed through a suitable communication means. Such suitable communication means include, for example, the Internet, the World Wide Web, an intranet, software applications, cable (including fiber optic cable), magnetic communications, electromagnetic communications (including RF, microwave, and infrared communications), electronic communications, or other such communication means. The disclosed methods, apparatus, and systems should not be construed as limiting in any way. Instead, the present disclosure is directed toward all novel and nonobvious features and aspects of the various disclosed examples, alone and in various combinations and subcombinations with one another. The disclosed methods, apparatus, and systems are not limited to any specific aspect or feature or combination thereof, nor do the disclosed examples require that any one or more specific advantages be present or problems be solved. 
In view of the many possible examples to which the principles of the disclosed technology may be applied, it should be recognized that the illustrated examples are only preferred examples and should not be taken as limiting the scope of the claims. Rather, the scope of the claimed subject matter is defined by the following claims. We therefore claim as our invention all that comes within the scope of these claims.
11863407

It is to be noted that the drawings presented are intended solely for the purpose of illustration and that they are, therefore, neither desired nor intended to limit the disclosure to any or all of the exact details of construction shown, except insofar as they may be deemed essential to the claimed disclosure.

DETAILED DESCRIPTION

In describing the exemplary embodiments of the present disclosure, as illustrated in FIGS. 1-9, specific terminology is employed for the sake of clarity. The present disclosure, however, is not intended to be limited to the specific terminology so selected, and it is to be understood that each specific element includes all technical equivalents that operate in a similar manner to accomplish similar functions. The claimed invention may, however, be embodied in many different forms and should not be construed to be limited to the embodiments set forth herein. The examples set forth herein are non-limiting examples, and are merely examples among other possible examples. In order to understand the present disclosure, certain variables need to be defined. As will be appreciated by one of skill in the art, the present disclosure may be embodied as a method, data processing system, or computer program product. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product on a computer-readable storage medium having computer-readable program code means embodied in the medium. Any suitable computer-readable medium may be utilized, including hard disks, ROM, RAM, CD-ROMs, electrical, optical, or magnetic storage devices, solid-state drives (SSDs), and the like.
The present disclosure is described below with reference to flowchart illustrations of methods, apparatus (systems) and computer program products according to embodiments of the present disclosure. It will be understood that each block or step of the flowchart illustrations, and combinations of blocks or steps in the flowchart illustrations, can be implemented by computer program instructions or operations. These computer program instructions or operations may be loaded onto a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions or operations, which execute on the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart block or blocks/step or steps. These computer program instructions or operations may also be stored in a computer-usable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions or operations stored in the computer-usable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart block or blocks/step or steps. The computer program instructions or operations may also be loaded onto a computer or other programmable data processing apparatus (processor) to cause a series of operational steps to be performed on the computer or other programmable apparatus (processor) to produce a computer implemented process such that the instructions or operations which execute on the computer or other programmable apparatus (processor) provide steps for implementing the functions specified in the flowchart block or blocks/step or steps. 
Accordingly, blocks or steps of the flowchart illustrations support combinations of means for performing the specified functions, combinations of steps for performing the specified functions, and program instruction means for performing the specified functions. It should also be understood that each block or step of the flowchart illustrations, and combinations of blocks or steps in the flowchart illustrations, can be implemented by special purpose hardware-based computer systems, which perform the specified functions or steps, or combinations of special purpose hardware and computer instructions or operations. Computer programming for implementing the present disclosure may be written in various programming languages, database languages, and the like. However, it is understood that other source or object-oriented programming languages, and other conventional programming languages, may be utilized without departing from the spirit and intent of the present disclosure.

Referring now to FIG. 1, there is illustrated a block diagram of a computer system 10 that provides a suitable environment for implementing embodiments of the present disclosure. The computer architecture shown in FIG. 1 is divided into two parts: motherboard 100 and the input/output (I/O) devices 200. Motherboard 100 preferably includes subsystems or a processor to execute instructions, such as central processing unit (CPU) 102, a memory device such as random access memory (RAM) 104, input/output (I/O) controller 108, and a memory device such as read-only memory (ROM) 106, also known as firmware, which are interconnected by bus 110. A basic input output system (BIOS) containing the basic routines that help to transfer information between elements within the subsystems of the computer is preferably stored in ROM 106, or operably disposed in RAM 104.
Computer system 10 further preferably includes I/O devices 202, such as main storage device 214 for storing operating system 204 and instructions or application program(s) 206, display 208 for visual output, and other I/O devices 212 as appropriate. Main storage device 214 preferably is connected to CPU 102 through a main storage controller (represented as 108) connected to bus 110. Network adapter 210 allows the computer system to send and receive data through communication devices or any other network adapter capable of transmitting and receiving data over a communications link that is either a wired, optical, or wireless data pathway. It is recognized herein that central processing unit (CPU) 102 performs instructions, operations or commands stored in ROM 106 or RAM 104. Many other devices or subsystems or other I/O devices 212 may be connected in a similar manner, including but not limited to devices such as a microphone, speakers, flash drive, CD-ROM player, DVD player, printer, main storage device 214, such as hard drive, and/or modem, each connected via an I/O adapter. Also, although preferred, it is not necessary for all of the devices shown in FIG. 1 to be present to practice the present disclosure, as discussed below. Furthermore, the devices and subsystems may be interconnected in different configurations from that shown in FIG. 1, or may be based on optical or gate arrays, or some combination of these elements that is capable of responding to and executing instructions or operations. The operation of a computer system such as that shown in FIG. 1 is readily known in the art and is not discussed in further detail in this application, so as not to overcomplicate the present discussion.

Referring now to FIG. 2, there is illustrated a diagram depicting an exemplary communication system 201 in which concepts consistent with the present disclosure may be implemented. Examples of each element within the communication system 201 of FIG. 2 are broadly described above with respect to FIG. 1.
In particular, the server system 260 and user system 220 have attributes similar to computer system 10 of FIG. 1 and illustrate one possible implementation of computer system 10. Communication system 201 preferably includes one or more user systems 220, 222, 224, one or more server systems 260, and network 250, which could be, for example, the Internet, a public network, a private network, or a cloud. User systems 220-224 each preferably include a computer-readable medium, such as random-access memory, coupled to a processor. The processor, CPU 102, executes program instructions or operations stored in memory. Communication system 201 typically includes one or more user systems 220. For example, user system 220 may include one or more general-purpose computers (e.g., personal computers), one or more special purpose computers (e.g., devices specifically programmed to communicate with each other and/or the server system 260), a workstation, a server, a device, a digital assistant or a "smart" cellular telephone or pager, a digital camera, a component, other equipment, or some combination of these elements that is capable of responding to and executing instructions or operations. Similar to user system 220, server system 260 preferably includes a computer-readable medium, such as random-access memory, coupled to a processor. The processor executes program instructions stored in memory. Server system 260 may also include a number of additional external or internal devices, such as, without limitation, a mouse, a CD-ROM, a keyboard, a display, a storage device, and other attributes similar to computer system 10 of FIG. 1. Server system 260 may additionally include a secondary storage element, such as database 270 for storage of data and information. Server system 260, although depicted as a single computer system, may be implemented as a network of computer processors. Memory in server system 260 contains one or more executable steps, program(s), algorithm(s), or application(s) 206 (shown in FIG. 1).
For example, the server system 260 may include a web server, information server, application server, one or more general-purpose computers (e.g., personal computers), one or more special purpose computers (e.g., devices specifically programmed to communicate with each other), a workstation or other equipment, or some combination of these elements that is capable of responding to and executing instructions or operations. Communication system 201 is capable of delivering and exchanging data between user system 220 and a server system 260 through communications link 240 and/or network 250. Through user system 220, users can preferably communicate over network 250 with each other user system 220, 222, 224, and with other systems and devices, such as server system 260, to electronically transmit, store, print and/or view multidimensional digital master image(s) 303 (see FIG. 3). Communications link 240 typically includes network 250 making a direct or indirect communication between the user system 220 and the server system 260, irrespective of physical separation. Examples of a network 250 include the Internet, cloud, analog or digital wired and wireless networks, radio, television, cable, satellite, and/or any other delivery mechanism for carrying and/or transmitting data or other information, such as to electronically transmit, store, print and/or view multidimensional digital master image(s) 303. The communications link 240 may include, for example, a wired, wireless, cable, optical or satellite communication system or other pathways. It is contemplated herein that RAM 104, main storage device 214, and database 270 may be referred to herein as storage device(s) or memory device(s).

Referring again now to FIG. 3, by way of example, and not limitation, therein is illustrated various subscriber handsets within a telecom network, simplified to better illustrate and describe the activities thereof.
Starting at the right-hand side, subscriber devices 320a, 320b may be closest to or otherwise coordinated to receive and transmit data wirelessly to and from antenna A1. Clockwise, subscriber devices 322a, 322b may be closest to or otherwise coordinated to receive and transmit data wirelessly to and from antenna A2. Finally, subscriber devices 324a, 324b may be closest to or otherwise coordinated to receive and transmit data wirelessly to and from antenna A3. As may be noted and observed by those skilled in the art of telecom infrastructure design and implementation, each of subscriber devices 320a-b, 322a-b, and 324a-b are representative only and may in fact represent hundreds, thousands, or millions of subscriber devices, each connected to various antennas throughout a mobile telecommunications infrastructure. From each of antenna A1, antenna A2, and antenna A3 may run telecommunications lines L1, L2, and L3, respectively, which may reach telecom computing machine 360 for receipt and intake/storage/processing by the company using its human and machine infrastructure. As is also clear to those having ordinary skill in the art, such telecom computing machine 360 may represent one machine or, more likely, many machines at one or more locations. Furthermore, such a telecom computing machine 360 may be implemented in a cloud computing or distributive environment. Subsequent to receipt, data from each subscriber may arrive simultaneously or in quick succession as incoming data stream 401. Further processing of incoming data is described below.

Having described the basics of the structure and function of example methods of computing, networks, and mobile telecommunications, incoming data stream 401 and the exemplary methods and systems for its use may be further illustrated and described below. Starting at FIG. 4, illustrated therein and described herein is a flowchart showing the disclosed method steps of initial incoming data receipt, handling, and processing.
Starting at incoming data stream 401, large-scale data may be arriving at a machine within a system of the disclosure as herein described. In a potentially preferred embodiment of the systems and methods of the disclosure, incoming data stream 401 may be an 8-bit octet-encoded character stream where every character consumes 1 byte of space. For many fields, including, by way of example and not limitation, Mobile Station Integrated Services Digital Network (MSISDN) numbers, other device identifiers, duration, Internet Protocol (IP) address, the like, and/or combinations thereof, some, part, or all of the character strings may be wasted space if the purpose of storing/processing this information is to quickly act upon observations about incoming data stream 401. Therefore, some, part, or all of those character strings may be truncated, ignored, or otherwise disposed of prior to data stream tokenization step 402. At data stream tokenization step 402, the incoming data stream 401 may be tokenized from this 8-bit octet-encoded character stream and subsequently divided into low density and high density at sorting step 403. Having tokenized the incoming data stream 401, a master list of field-legal values may be developed and compiled such that common values appear on the master list, but less common and/or unique values or value sets for fields do not. While any threshold for density may be chosen, in a potentially preferred embodiment, 25% may be chosen as a threshold. If a given token has 25% or less of its values appearing on the compiled master list, that token may be considered "Low-Density" and separated from "High-Density" tokens, i.e., those having more than 25% of characters from the master list. At sorting step 403, High-Density tokens may be sorted into High-Density bins and Low-Density tokens may be sorted into Low-Density bins.
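The density-sorting step described above may be sketched as follows. This is a hypothetical Python illustration only; the field names, the sample master list, and the treatment of the 25% threshold as a fraction of token values are assumptions drawn from the description, not the patented implementation.

```python
# Illustrative sketch of sorting step 403: tokens whose values mostly appear
# on a compiled master list of field-legal values are "High-Density"; the
# rest are "Low-Density". The 25% threshold follows the description above.

DENSITY_THRESHOLD = 0.25

def classify_token(values, master_list):
    """Return 'high' if more than 25% of a token's values appear on the
    master list, otherwise 'low'."""
    if not values:
        return "low"
    hits = sum(1 for v in values if v in master_list)
    return "high" if hits / len(values) > DENSITY_THRESHOLD else "low"

def sort_into_bins(tokens, master_list):
    """Split named tokens into High-Density and Low-Density bins."""
    bins = {"high": [], "low": []}
    for name, values in tokens.items():
        bins[classify_token(values, master_list)].append(name)
    return bins

# Hypothetical fields: a protocol field with common values vs. unique IPs.
master = {"GET", "POST", "200", "404"}
tokens = {
    "http_method": ["GET", "GET", "POST", "GET"],  # all on the master list
    "ip_address": ["10.0.0.1", "172.16.4.9"],      # none on the master list
}
print(sort_into_bins(tokens, master))
# {'high': ['http_method'], 'low': ['ip_address']}
```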
At re-mapping step 404, Low-Density tokens may be assigned a new alphabet (a master list of symbols), in line with field-legal characters, and may then be encoded using as many bits as are necessary for full encoding. By way of example and not limitation, an alphabet comprising [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, . . . ] may be used for a Low-Density field, such as IP addresses, and it may be encoded using 4 bits per character instead of the default 8 bits from the incoming data stream 401. Since direct mapping may take far less time than performing lookups may consume, the record building may work as a pipeline where the next record is picked for direct mapping even when it contains fields which are being looked up for the current record. Then, turning to parallel processing first stage 406, an asynchronous parallel pipeline may facilitate low-latency turnover for records entering the stream. Records which have all their sections mapped may be pushed or steered ahead to the next stage, and those that do not have all their sections mapped, for whatever reason, may be suspended or set aside to "make way" for those that do. By handling records in such a way, the likelihood of tardy or late decision-making may be reduced. Then at parallel processing second stage 407, other techniques may be employed which may be best described as "inertia boosting": that is, keeping fast processes fast, and making them faster, and keeping slow processes slow, perhaps even slowing them to make way for the faster processes. The systems and methods of the disclosure at this step may push ahead those records or record types that are being parsed sooner than others such that they arrive sooner in the parallel pipeline. This may be accomplished incrementally. While computing algorithms may have average and best core times deduced under asymptotic analyses, each individual entry may still end up consuming a different time. That may be true of any tree-based/non-uniform hash-based construct.
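The 4-bit re-mapping of a Low-Density field may be illustrated as follows. This is a minimal sketch under stated assumptions: the alphabet (digits plus a dot, suitable for an IPv4 address), the nibble packing order, and the zero-padding are illustrative choices, not details taken from the claims.

```python
# Illustrative sketch of re-mapping step 404: map a Low-Density field onto a
# small field-legal alphabet and pack it at 4 bits per character instead of
# the 8-bit octets of the incoming stream.

ALPHABET = "0123456789."            # 11 symbols, so each fits in 4 bits
CODE = {ch: i for i, ch in enumerate(ALPHABET)}

def pack_4bit(text):
    """Encode each character as a 4-bit code, two codes per output byte."""
    codes = [CODE[ch] for ch in text]
    if len(codes) % 2:
        codes.append(0)             # pad the final nibble to a whole byte
    return bytes((codes[i] << 4) | codes[i + 1] for i in range(0, len(codes), 2))

def unpack_4bit(data, length):
    """Recover the original text; `length` discards any padding nibble."""
    chars = []
    for b in data:
        chars.append(ALPHABET[b >> 4])
        chars.append(ALPHABET[b & 0x0F])
    return "".join(chars[:length])

ip = "192.168.0.1"
packed = pack_4bit(ip)
assert unpack_4bit(packed, len(ip)) == ip
print(len(ip), "->", len(packed), "bytes")   # 11 -> 6 bytes
```

Halving the per-character width in this way is consistent with the description's goal of acting quickly on stream observations rather than preserving the full octet encoding.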
The systems and methods of the disclosure instead do not attempt to equalize or otherwise force data processing to be accomplished at a steady or even pace, but rather push ahead those records capable of completing very quickly to achieve the greatest speed for processing those records. Those which consume, or are observed to consume, greater time and resources are pushed behind. This enables fast-processing records to finish even quicker in order to enable further decision-making to be based on those that can be processed very quickly. The reason may be that with time-sensitive data and decision-making thereon, late or delayed processing may be equivalent to not processing whatsoever, and the resources would therefore be wasted on such tasks. Further details and benefits of the disclosed system and method will be recognized by those having ordinary skill in the art following additional review of the remaining Drawings and related Detailed Description below.

Turning now to FIG. 5, illustrated therein is a block diagram illustrating an exemplary physical memory unit of the disclosure. Starting at HTD memory unit 541, a pre-formatted memory layout with optimal data diversity is illustrated thereon. A block or HTD memory unit 541 may be related to a particular subscriber and may contain a self-organizing sequence of Hour to Day (HTD) and Hour of the Day (HOD) counters. Each HTD counter may be observed at the front of HTD memory unit 541, and HTD memory unit 541 may be exactly 17 bytes wide. As illustrated by way of example and not limitation, the HTD counter ID may consume 3 bytes, a timestamp may consume 2 bytes, a download volume may consume 4 bytes, total volume may consume 4 bytes, and total duration may consume 4 bytes, for a total of exactly 17 bytes for the HTD memory unit 541. Each HOD counter may further precede or appear at the front of each HOD memory unit 542, and HOD memory unit 542 may be exactly 41 bytes wide.
Within HOD memory unit 542, as illustrated in FIG. 5 by way of example and not limitation, the HOD counter ID may consume 3 bytes, a timestamp may consume 2 bytes, a first bin may consume 12 bytes, a second bin may consume 12 bytes, and a third bin may consume 12 bytes, for a total of exactly 41 bytes. As may be understood by those having ordinary skill in the art, one or more bins, illustrated in FIG. 5 as second HOD bin 543, may be sub-divided into, by way of example and not limitation, download volume consuming 4 bytes, total volume consuming 4 bytes, and total duration consuming 4 bytes. It may be further observable to those having ordinary skill in the art that HTD memory unit 541 may be organized similarly or identically to exemplary second HOD bin 543 such that each has three 4-byte sections containing download volume, total volume, and total duration. In use, free space tracking may be done efficiently without using any additional bits or flags, given this pre-formatted arrangement for HTD memory unit 541 and HOD memory unit 542. For instance, if the Counter ID for any HTD memory unit 541 or HOD memory unit 542 equals zero (0), it may be assumed to be free space. Contrarily, if the Counter ID for any HTD memory unit 541 or HOD memory unit 542 equals any non-zero number or character, said respective memory unit may be assumed to be occupied. This way, HTD memory unit 541 and HOD memory unit 542 memory marking may proceed smoothly, without using additional bits and/or flags, relying instead solely on bitwise operators and equality comparison. In practice, or in a preferred embodiment of the disclosed system, a record parsing phase may occur upon receipt of incoming data stream 401 where the entire stream of bytes representing one record is captured as an 86-digit number, with certain digits in lower words mapped directly from incoming records and many digits in upper words deduced in parallel, as they involve look-up and additional computation, as illustrated in FIG. 4.
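The fixed 17-byte HTD layout and the zero-counter-ID free-space convention described above can be sketched as follows. This is an assumed encoding for illustration: the field order and widths follow the description (3-byte counter ID, 2-byte timestamp, three 4-byte metrics), but the big-endian byte order is a choice of this sketch.

```python
import struct

# Sketch of one 17-byte HTD counter: 3-byte counter ID, 2-byte timestamp,
# then 4-byte download volume, total volume, and total duration.

def pack_htd(counter_id, timestamp, download_mb, total_mb, duration_s):
    cid = counter_id.to_bytes(3, "big")   # struct has no 3-byte integer code
    rest = struct.pack(">HIII", timestamp, download_mb, total_mb, duration_s)
    return cid + rest                     # 3 + 2 + 4 + 4 + 4 = 17 bytes

def unpack_htd(unit):
    cid = int.from_bytes(unit[:3], "big")
    timestamp, dl, total, dur = struct.unpack(">HIII", unit[3:])
    return cid, timestamp, dl, total, dur

def is_free(unit):
    """A counter ID of zero marks the slot as free space, with no extra flags."""
    return int.from_bytes(unit[:3], "big") == 0

unit = pack_htd(counter_id=38, timestamp=1049, download_mb=120,
                total_mb=150, duration_s=900)
assert len(unit) == 17
assert not is_free(unit)
assert is_free(bytes(17))            # an all-zero slot reads as free
print(unpack_htd(unit))              # (38, 1049, 120, 150, 900)
```

A 41-byte HOD unit would follow the same pattern, with the three 12-byte bins each holding the same three 4-byte cells.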
Then, a weight assignment strategy may be implemented for category and sub-category domain values. Prime numbers may be assigned to category values, and sub-category line items under a particular category may be assigned square-free numbers, which may be a product of the main category weight and another prime number serving as a local weight. By using such a schema, each subcategory weight may have exactly four factors: 1, itself, the category weight, and the local weight. Within each HTD memory unit 541 and each HOD memory unit 542, which consume 17 bytes and 41 bytes, respectively, each counter may capture accumulated internet data traffic statistics for a particular kind of traffic determined by the counter ID. The details may be captured into bins of 3 cells, as herein illustrated, where the cells contain metrics such as download volume, total volume, and total duration. These may each be normalized to meaningful business values, such as megabytes for download and total volume and seconds for total duration. HTD memory unit 541, in a potentially preferred embodiment of the disclosure, may store traffic statistics since midnight of the current day until the current time, and HOD memory unit 542, in such an embodiment, may store traffic statistics since the beginning of the current hour until the present moment in time. For organization optimization and access optimization purposes, each HOD memory unit 542 may, as illustrated herein, capture incoming data stream 401 into these 3 bins (e.g., download volume, total volume, and total duration), similar to the HTD memory unit 541, but may represent three subunits of time in minutes, each subunit being a factor of 60. For example and not limitation, one such bin in HOD memory unit 542 may represent cumulative traffic statistics for the preceding 2 minutes, while the other two bins may represent cumulative statistics for the preceding 12 minutes and preceding 20 minutes, respectively.
Then, each HOD memory unit 542, for its 3 bins or subunits, may capture subunits drawn from the 12 possible factors of the number 60: 1 minute, 2 minutes, 3 minutes, 4 minutes, 5 minutes, 6 minutes, 10 minutes, 12 minutes, 15 minutes, 20 minutes, 30 minutes, and 60 minutes (each being a minutes-factor of an hour, or 60 minutes). Then the assigned weights may be included within the counter ID as described above. By way of example and not limitation, one such bin in HTD memory unit 541 with a corresponding counter ID of, for example, 38, for a date/time such as May 20, 2021 at 17:29, may capture accumulated download megabytes, upload megabytes, and total duration spent browsing, for instance, FACEBOOK® by an individual subscriber between 00:00 of that day and 17:29 that day. Business rules may then be applied to any given data set on any given HTD memory unit 541 and/or HOD memory unit 542. For instance, a business may be very interested in the overall consumption of health and fitness browsing time based upon a hypothetical promotion. A rule may then be desired within the system of the disclosure such that if such browsing time exceeds 18 minutes and/or 500 MB, as an example, an alert is made to the business and/or the subscriber. Even in examples where several hundred HTD memory units 541 and/or HOD memory units 542 exist for a given subscriber, a rule evaluation may complete a simple single pass through all counter IDs and filter only those counter IDs which yield no remainder when divided by the derived rule divisor, obtained using numerical functions such as product, greatest common divisor (GCD), least common multiple (LCM), other simple numerical functions, the like, and/or combinations thereof on individual category weights.
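The single-pass, divisibility-based rule evaluation described above may be sketched as follows. The category weights and the sample counter IDs are illustrative assumptions (built as products of a category prime and a local prime, per the weighting scheme above); deriving the rule divisor via LCM is one of the simple numerical functions the description names.

```python
from math import gcd

# Sketch of rule evaluation: counter IDs carry category weights as prime
# factors, so one pass with a divisibility test selects matching counters.

CATEGORY_WEIGHT = {"social": 2, "gaming": 3, "health": 5,
                   "lifestyle": 7, "fitness": 11}   # assumed weights

def lcm(a, b):
    return a * b // gcd(a, b)

def matching_counters(counter_ids, categories):
    """Return counter IDs divisible by every named category weight."""
    divisor = 1
    for c in categories:
        divisor = lcm(divisor, CATEGORY_WEIGHT[c])
    return [cid for cid in counter_ids if cid % divisor == 0]

# Hypothetical counter IDs, each a category prime times a local prime,
# e.g. 55 = health (5) x 11, and 77 = lifestyle (7) x fitness (11).
counters = [34, 38, 46, 55, 77, 85]
print(matching_counters(counters, ["health"]))               # [55, 85]
print(matching_counters(counters, ["lifestyle", "fitness"]))  # [77]
```

Because the test is a single modulo per counter, the pass stays cheap even when hundreds of counters exist per subscriber.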
Additionally, it may be a feature or important benefit of the disclosure that counter IDs on any given HTD memory unit 541 and/or HOD memory unit 542 retain their traits even when being continually merged with other counter types, owing to their algebraic and arithmetic characteristics. Using prime numbers and square-free numbers may yield an unambiguous scheme for determining the behavior profile of an individual despite retaining only the aggregated counter numbers, as may be more easily understood with a thorough understanding of the above and below. Put simply, a ratio of powers of prime factors may be indicative of ratios of the underlying traffic types. Thus, for highly specific business queries or analyses, it may not be necessary to complete a prime factorization; instead, repeated division by the prime number representing the desired traits, for as long as the number remains cleanly divisible (i.e., leaves no remainder), may be sufficient to obtain the power, which may be valuable when output into the business CRM system. Then, as may be better understood by those having ordinary skill in the art, an example of a simplified system of the disclosure as described herein may be useful to further understanding of prime and square-free number weight assignment, its exemplary methods, and its benefits. In such a simplified embodiment, categories of user activities may be assigned prime number weights. For instance, by way of example and not limitation, Social Media might be assigned 2, Gaming might be assigned 3, Health might be assigned 5, Lifestyle might be assigned 7, Fitness might be assigned 11, and so on. Then specific activity within each category may be assigned square-free numbers. In this simplified example, YouTube® might be classified as Social Media and be assigned 17, FACEBOOK® might be classified similarly and receive a 19 assignment, and WhatsApp® might be similarly classified and receive a 23 assignment.
The product of each may be important to later processing, and would be 2 times the square-free number selected. In this example, that may be 34 for YouTube®, 38 for FACEBOOK®, and 46 for WhatsApp®. Essentially, these prime numbers, which may be assigned to category values, and these square-free numbers, which may be assigned to sub-category line items under a particular category, may be multiplied to form what may be understood herein as a local-weight. Using such a scheme/schema may guarantee that each of the subcategory weights has exactly 4 factors: 1, itself, the category weight, and the local-weight, which may prove useful for later accessing/processing. Turning to understanding such later accessing/processing, as may be useful in many business queries, divisibility and prime factorization may be utilized to develop various understandings, processes, and observations as to users and their data, which may be accomplished at much greater speeds across much more voluminous data, thanks to the method of assembly/storage of incoming data stream 401. Using the example above, where various categories and specific subcategories were chosen to obtain various local-weights, divisibility and prime factorization and their respective utility may be better understood. In an example where Fitness received 11 and Lifestyle received 7 (the product of which would be 77) and the WhatsApp® subcategory received a 46 local-weight, those may be multiplied and stored according to the systems and methods of the disclosure. If further activity, such as FACEBOOK® having the local-weight of 38, were then appended, the total product across all such activity may be 134,596, which factorizes to 2²×7×11×23×19. Note that 2 is raised to a power of 2 because two user activities fall under Social Media (WhatsApp® and FACEBOOK®).
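The worked example above, together with the repeated-division trick for recovering a prime's power without full factorization, may be sketched as follows. The weight values are the ones given in the simplified example; the function name is an assumption.

```python
# Reproducing the simplified worked example: category primes and square-free
# sub-category weights multiply into local-weights, and the running product
# of a subscriber's activity factorizes unambiguously.
SOCIAL, LIFESTYLE, FITNESS = 2, 7, 11
FACEBOOK, WHATSAPP = 19, 23

whatsapp_lw = SOCIAL * WHATSAPP            # 46
facebook_lw = SOCIAL * FACEBOOK            # 38
total = LIFESTYLE * FITNESS * whatsapp_lw * facebook_lw   # 134,596

def prime_power(n, p):
    """Power of prime p in n, by repeated division (no full factorization)."""
    k = 0
    while n % p == 0:
        n //= p
        k += 1
    return k

social_power = prime_power(total, SOCIAL)   # 2: two Social Media activities
fitness_power = prime_power(total, FITNESS) # 1: one Fitness weight present
```

The 2:1 ratio of these powers is exactly the social-media-to-fitness usage ratio a pointed business query would look for, obtained with only division and modulo.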
Then, if a business wanted to determine who among its subscribers uses social media and fitness at a ratio of 2:1, this could easily be obtained, and such a user could be matched to such a query. Those having ordinary skill in the art may understand that counters may retain their traits or underlying "essence" of traffic types even when they continue to be merged with other counter types. Choosing weights using prime numbers and square-free numbers as described herein may yield an unambiguous scheme/schema for determining the behavioral profile of an individual despite retaining only the aggregated counter numbers (or their products). A ratio of powers of prime factors may then be indicative of a ratio of underlying traffic types. For pointed business queries, it may not be necessary to do a complete prime factorization; instead, repeated division by the prime number representing the desired traits, for as long as the number is divisible without a remainder, is sufficient to get the power. Since computing systems can complete such mathematical procedures without significant resource expenditure, such a scheme/schema allows basic computing mathematics not only to save on processing/storing incoming data stream 401, but also to yield a significant benefit when performing business-relevant queries upon such data at limited resource expenditure. Given this method of organization of incoming data stream 401, including those described and illustrated in FIG. 5 as they relate to pre-formatting of said incoming data stream 401, further details and benefits of the disclosed system and method will be recognized by those having ordinary skill in the art following additional review of the remaining Drawings and related Detailed Description below. Turning to FIG. 6, illustrated therein is a block diagram illustrating a simplified exemplary efficient incoming data management/parsing.
Given the pre-formatted and self-organizing principles described above, the memory layout may be further optimized according to the principles illustrated in FIG. 6 and described herein. A legend has been provided such that an "O" may denote HTD memory unit 541, an "A" may denote HOD memory unit 542 (each described in detail in FIG. 5), and an "X" may denote unformatted memory unit 540. In a potentially preferred embodiment, actual positioning of counters and their related memory units may be based on the most common traffic-type, behavioral profiles, and/or the like. An example seed blueprint 610 is illustrated in FIG. 6, which may indicate two HTD counters, two HOD counters, another two HTD counters, another two HOD counters, followed by four unformatted regions. Example profiles of Alice 640a and Tom 650b are further provided having a similar layout. One example, initial Alice 640a, may be a subscriber having HTD YOUTUBE® counter 644, followed by HTD social media counter 645, and then followed by other HOD counters, HTD counters, and unformatted counters. Another example, initial Tom 650a, may be a subscriber having HTD POKEMON® counter 654, followed by HTD gaming counter 655, and then followed by other HOD counters, HTD counters, and unformatted counters. As will be observed by those having ordinary skill in the art, each blueprint may be initially only partially consumed, such that several unformatted memory units 540 may be present; then, at fact-based assignment step 660, these may be formatted if a given expectation, based on any variety of factors, indicates that upcoming arriving data from incoming data stream 401 and associated with either Alice 640 or Tom 650 may be of HOD or HTD type. Then, fact-based processed Alice may be assigned HOD memory unit 542 at each unformatted memory unit 540, and Tom may be assigned HTD memory unit 541 for each unformatted memory unit 540.
Additionally, various HTD memory units 541 and HOD memory units 542 may be assigned a given traffic type, using predictive algorithmic techniques based on previous user behavior, such that when a future incoming data stream 401, and the data associated therewith, is associated with either Alice 640 or Tom 650, it may be pre-assigned into those memory units without further machine effort dedicated toward assigning traffic types. Then, at deduction step 670, further deductions may occur. In this example, both Alice 640 and Tom 650 may begin with the same seed blueprint 610. If, in this non-limiting example, the first record for Alice 640 is Alice YOUTUBE® 640a and the first record for Tom 650 is Tom POKEMON® 650b, Alice 640 may be known to be currently browsing YOUTUBE® and streaming video from that service, and Tom 650 may be known to be currently playing POKEMON® on his mobile device, each mobile device connected to a mobile telecommunications network. Such knowledge may be obtained via said mobile telecommunications data network and received as incoming data stream 401. Given the known data consumption behavior of Alice 640 and Tom 650, each having the same seed blueprint 610, the first counters may occupy respective traffic types. Counters towards the head may be more efficient for read and write, as bytes after them may be simply skipped and not parsed. Given these initial steps, a persona for users may be generated such that these advantages inherent in memory allocation may be leveraged to increase the speed at which later observations may be made, such as pre-allocation and/or pre-determination of unformatted memory units 540 into formatted spaces based on, for instance, persona categories. Further details and benefits of such a self-organization and preformatting system and method will be recognized by those having ordinary skill in the art following additional review of the remaining Drawings and related Detailed Description below.
Turning now to FIG. 7, therein illustrated and herein described is a block diagram illustrating a proposed exemplary subscriber classification mechanism. Hypothetical persona matrix 700 is used herein as an exemplary method of determining a persona for a mobile telecommunications data user in order to implement the systems and methods of the disclosure. Hypothetical persona matrix 700 may comprise observations of users, based on observed behaviors during a user's interaction(s) with the mobile telecommunications infrastructure as incoming data stream 401. By way of example and not limitation, these may include the opposite pairs of dispersed behavior versus focused behavior and intense behavior versus sporadic behavior. These behaviors may each be determined, based on the user activity, and assigned into one of intense/dispersed behavior category (ID) 701, sporadic/dispersed behavior category (SD) 702, intense/focused behavior category (IF) 703, and finally sporadic/focused behavior category (SF) 704. In total, these may comprise four (4) personas within the hypothetical persona matrix 700, such that any particular user may, for at least a period of time, be classified into one category. In such an organization of hypothetical persona matrix 700, beginning with intense behavior, those users assigned that behavior category may include those who use the internet in dedicated chunks of time, and when they do so, they routinely use it for approximately an hour. Consumption of data may occur during specific times during the day, but may not consistently occur throughout the day. If one were to chart an intense behavior user's data consumption throughout the day, one may observe periods of great data consumption and periods having none or very little.
A sporadic behavior may include internet users who consume a more consistent amount of data throughout the day, but do so in smaller time-period "chunks", such that the consumption chart may reflect smaller and narrower peaks, but with a more consistent amount of consumption on average throughout a given day. A focused behavior may include users who consume data in one or a few ways (e.g., games, music streaming, etc.), and a dispersed behavior may include users who consume data in more varying ways throughout a given day. As it relates to the systems and methods of the disclosure, intense behavior users may need fewer HOD counters but benefit from more fine-grained HOD counters. Sporadic behavior users instead may need more HOD counters and benefit from coarse grains. By implementing the above system in order to establish various seed blueprints 610 as provided herein, the personas of ID 701, SD 702, IF 703, and SF 704 may each have their own of four pre-designed seed blueprints 610 based on these various traits, and the seed blueprint 610 may in fact change based on behavioral changes. Furthermore, it may be observed by those having ordinary skill in the art that users with focused behavior may benefit from having many related traffic-types placed in close proximity to each other, while users having sporadic behavior may benefit from having the top traffic-type toward the head of HTD memory unit 541. Now, having characterized an exemplary hypothetical persona matrix 700, the processes by which such a persona may be processed and handled within HTD memory units 541 and HOD memory units 542 may be understood by those having ordinary skill in the art through review of the remaining Drawings and related Detailed Description below. Turning now to FIG. 8, therein illustrated is a flowchart of a proposed exemplary subscriber profile assessment cycle.
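The two-axis persona assignment described above may be sketched as follows. The input signals and thresholds are illustrative assumptions only; the disclosure does not fix particular cut-off values.

```python
# Hypothetical classifier for the 2x2 persona matrix 700: intensity from how
# concentrated usage is in time, dispersion from how many traffic types the
# user touches. Thresholds below are assumptions chosen for the sketch.
def classify_persona(peak_share, distinct_traffic_types):
    intensity = "I" if peak_share > 0.6 else "S"              # intense vs. sporadic
    dispersion = "D" if distinct_traffic_types >= 5 else "F"  # dispersed vs. focused
    return intensity + dispersion                             # "ID", "SD", "IF", "SF"

# A user with 80% of traffic in one time chunk across 7 traffic types:
persona_a = classify_persona(peak_share=0.8, distinct_traffic_types=7)   # "ID"
# A user with spread-out traffic across only 2 traffic types:
persona_b = classify_persona(peak_share=0.3, distinct_traffic_types=2)   # "SF"
```

Each resulting persona would then select one of the four pre-designed seed blueprints 610.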
At new profile step 801, information is received in incoming data stream 401 related to a user within, for instance, a mobile telecommunication network, but for whom the system of the disclosure has not yet implemented the profile information as herein described. Upon receipt, allocation step 802 is immediately performed, and a seed persona or seed blueprint 610 is assigned based on the characteristics of user behavior as described above. In a potentially preferred embodiment of the disclosed system, those may be ID 701, SD 702, IF 703, and SF 704 as described in detail above. Once a seed blueprint 610 has been chosen at allocation step 802, various milestones may be used in order to determine whether to re-assess. Milestone step 803 may occur, for instance, after 500 MB of data has been consumed by the user, after the user has transitioned among 3 various traffic-types, or after 3 hours have passed. Once milestone step 803 has occurred by achieving one or more of the indicated milestones, reassessment step 804 may occur. Various bases may exist to inform whether re-allocation is needed at reassessment step 804, and re-allocation may occur based on accrued knowledge 870. After an assessment has occurred at reassessment step 804, the systems of the disclosure may determine that a change is needed at change step 805. If no changes are needed, unformatted sections which have arrived via incoming data stream 401 related to the user may be formatted based on traffic types determined to be better suited based on clustered data at formatting step 806, and allocated as such. Even in situations where behavior remains constant, resources may be dedicated to re-cycling the user through incremental re-assessment step 807, by simply cycling to reassessment step 804. Additionally, this re-assessment step 807 may occur incrementally, for instance, at every 5 GB of data consumption and/or every day.
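The milestone test at step 803 may be sketched as follows, using the example thresholds given above (500 MB, 3 traffic-type transitions, 3 hours); the function itself is an illustrative assumption.

```python
# Sketch of milestone step 803: reassessment is triggered once ANY one of the
# example thresholds from the text is crossed. Threshold values per the text;
# the function shape is an assumption for illustration.
def milestone_reached(mb_consumed, traffic_type_transitions, hours_elapsed):
    return (mb_consumed >= 500
            or traffic_type_transitions >= 3
            or hours_elapsed >= 3)

# 510 MB consumed shortly after allocation -> reassess (step 804)
trigger = milestone_reached(mb_consumed=510, traffic_type_transitions=0,
                            hours_elapsed=0.5)
# Light, steady usage -> keep cycling without reassessment
no_trigger = milestone_reached(mb_consumed=100, traffic_type_transitions=1,
                               hours_elapsed=1.0)
```

When the milestone fires, the cycle proceeds to reassessment step 804 and, if needed, change step 805 and formatting step 806.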
In general, the cycling of persona may be based on fact- and/or rule-based decisioning, heuristics-based decisioning, and/or knowledge/learning-based decisioning, as may be known to those having ordinary skill in the art. In order to continuously re-calibrate each subscriber and/or user to stay aligned with an optimal layout, the decisioning itself may be based around receiving and/or implementing the optimal traffic-types around the head of HTD memory unit 541 and/or HOD memory unit 542. Additionally, pre-formatting the optimal expected traffic-types for each user/subscriber based on a hypothetical persona matrix 700 may be important. Finally, achieving the optimal density between HTD memory unit 541 and/or HOD memory unit 542 may achieve further benefits in a system of the disclosure. Finally, turning to FIG. 9, therein illustrated and herein described may be a diagram illustrating an exemplary switching process to designate certain modes of operation of the system of the disclosure. For illustrative purposes only, the term Socialist/Capitalist Switch 900 has been coined and is therein illustrated. Certain concepts are described herein as background to the development of such a proposed switch. A system of the disclosure may contain such a Socialist/Capitalist Switch 900 in order to track certain metrics that assist the system of the disclosure in assessing the effectiveness of tracking certain traffic-types and certain users and/or personas. These may include monthly bills having thousands of records/month per user or subscriber, monthly bills having traffic types tracked, processed record volumes and/or events tripped, number of offers activated, and entropy scores. An entropy score may measure the minimum number of distinct traffic types that consume, for instance, 70% of a particular subscriber's usage over a time period or, alternatively, while at a specific location or general region.
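The entropy score just defined may be sketched as follows; the function name and input shape are assumptions, but the computation follows the definition above directly.

```python
# Illustrative entropy score per the definition above: the minimum number of
# distinct traffic types whose combined usage covers 70% of the total.
def entropy_score(usage_by_type, threshold=0.70):
    total = sum(usage_by_type.values())
    covered, count = 0.0, 0
    # Greedily take the largest traffic types first, so the count is minimal.
    for volume in sorted(usage_by_type.values(), reverse=True):
        covered += volume
        count += 1
        if covered >= threshold * total:
            return count
    return count

focused = {"gaming": 80, "social": 10, "video": 10}          # one dominant type
random_user = {"a": 20, "b": 20, "c": 20, "d": 20, "e": 20}  # uniform usage
score_focused = entropy_score(focused)        # 1 -> low entropy
score_random = entropy_score(random_user)     # 4 -> high entropy
```

Subscribers with very random access patterns thus score high, as the next paragraph notes.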
Subscribers that exhibit very random access patterns or user behavior may receive a high entropy value because many traffic types may be needed to reach that level of total user consumption (e.g., 70%). If, by default, every user is treated equally and all traffic types are also treated equally, Socialist mode 930 may be said to be occurring and/or implemented. In Socialist mode 930, fairness among users and traffic types may occur with unbiased, uniform tracking. From a business outcome perspective, Socialist mode 930 may not be ideal, but resources need not be dedicated specifically to re-assessment, memory rearrangement, etc. Instead, it may be beneficial in some circumstances to target users or subscribers who have a greater propensity to act upon a specific marketing promotion or offer, transmitted directly to the user via the mobile device upon which they consume data. A switch from a potentially preferred default, or Socialist mode 930, may occur to transition or switch into Capitalist mode 920. This may occur automatically when metrics fall or rise to a certain threshold. Capitalist mode 920 may be thought of by those having ordinary skill in the art as a focus on return on investment (ROI) and can assist in reducing waste of computing resources by selectively targeting only a subset of subscribers likely to act upon an offer or promotion, and doing so with fewer resources and at a faster response time. Putting these disclosed concepts, machines, systems, methods, and other features of the disclosure into practice may further assist those having ordinary skill in the art in implementing such a system and method upon large datasets. Essentially, a real-time streaming aggregation and event check framework may be developed using these disclosed concepts, machines, systems, methods, and other features of the disclosure.
By implementing such concepts, machines, systems, methods, and other features of the disclosure on a zero-message queue (ZMQ or 0MQ) asynchronous messaging platform, facilitating consumption of telecommunications-grade deep packet inspection (DPI) data in real time, and aggregating the data on-the-fly, many benefits may be observable to those having ordinary skill in the art. Such benefits follow from consuming DPI data over industry-standard messaging frameworks at input incoming data stream 401, performing traffic categorization based on pre-configured data-classification rules, and aggregating the data on-the-fly to different levels of granularity, such as 1 minute (m), 2 m, 5 m, 10 m, 30 m, and 60 m, across different traffic types, such as streaming, social media, and gaming, as varying degrees of granularity may be valuable depending on traffic type. Additionally, redundancy in marketing or other offers may be avoided in order to prevent subscribers from being bothered with inapplicable offers. For example, someone without any social media activity would not be contacted with such offers, and may only receive those related to gaming, if such behavior and/or persona is relevant to such a subscriber. This may support real-time monitoring of IP traffic by type to assist in reaching out to audience(s) consuming specific data and/or content and meeting certain additional criteria.
This may also be thought of by those having ordinary skill in the art as a holistic data analysis and monitoring system that consumes DPI data by parsing and translating it into internal 86-digit numbers; labels such traffic using mapping rules and weights that allow unambiguous traffic-type determination; accumulates such real-time information into efficiently laid out HTD memory units 541 and/or HOD memory units 542 and the counter IDs thereon; performs composite event checks using single modulo operation(s); passes results over to other business modules which may be capable of directly soliciting subscribers for campaign management systems and, in potentially preferred embodiments, generating real-time offers that are highly relevant to individual subscribers' or subscriber groups' contextual behavior; and finally communicates the same to either the business and/or the subscriber. In other aspects, the disclosed system and method for efficient numerical data model, memory management, and streaming aggregation cumulative event check in large semi-structured datasets may model the incoming data stream 401 as a large hexadecimal number, thus facilitating simple math/arithmetic on related fields of the records, realized using numerical operations on groups of digits of the modelled number. By assigning such weights to categories of labelled traffic using prime numbers and to subcategories using square-free numbers, users of the disclosed system may be empowered to easily determine the lineage of any individual counter ID unambiguously from a plurality of counter ID number(s) using basic rules of divisibility. By organizing accumulated traffic statistics into an efficient in-memory layout formed using HOD memory unit(s) 542 and HTD memory unit(s) 541 having stored thereon counter IDs with embedded free-space tracking and memory management, further benefits may be achieved.
The layout(s) of counter IDs stored upon HOD memory unit(s) 542 may allow, enable, or make simple the tracking of any 3 factors of 60 minutes cohesively in a single counter with 3 bins each having 3 cells. By assimilating behavioral profiles for subscribers, realized using multiplication of constituent counter face values or weights, compact encoding may be achieved and may make concrete abstract behavior into numbers, which can in turn rely on numerical methods, such as divisibility and the like, in order to return to a business unit, and/or approach directly using a connected system, the subscribers relevant to a specific business-desired promotion/offer. Thus, even otherwise hidden behavioral traits may be observed simply by analyzing powers of prime factors of counter IDs at any particular point in time, as well as the change in patterns over time. Having implemented the disclosed methods upon a system according to the disclosure, many benefits have been achieved that may be notable to those having ordinary skill in the art. First, a highly efficient and fast data organization using custom and/or proprietary formatting techniques for coding and memory management has been enabled across many business-user CRM systems. Fast aggregation using two's complement addition for incoming data stream 401 has been achieved for efficient messaging and reduced end-to-end response times with highly efficient memory management logic. In one such implementation, 63 billion incoming data stream 401 records have been processed daily at an average speed of more than 800,000 records per second. Real-time traffic has been aggregated into more than 30 traffic-types to assist telecommunication companies with direct solicitation of marketing offers to subscribers, with highly relevant offers delivered prior to the completion of a given browsing session.
Parsing cumulative labelling speeds of more than 300,000 records per second per core have been achieved, with aggregation cumulative event trip speeds of 60,000 records per second per core on an INTEL® XEON® 2.5 GHz core system. As a result, processing speeds of 750,000 records per second may be sustained to process over 60 billion streaming records using only 2 units of INTEL® XEON® E5-2690 at 2.70 GHz with 8 cores and 256 GB RAM. The foregoing description and drawings comprise illustrative embodiments. Having thus described exemplary embodiments, it should be noted by those skilled in the art that the within disclosures are exemplary only, and that various other alternatives, adaptations, and modifications may be made within the scope of the present disclosure. Merely listing or numbering the steps of a method in a certain order does not constitute any limitation on the order of the steps of that method. Many modifications and other embodiments will come to mind to one skilled in the art to which this disclosure pertains having the benefit of the teachings presented in the foregoing descriptions and the associated drawings. Although specific terms may be employed herein, they are used in a generic and descriptive sense only and not for purposes of limitation. Accordingly, the present disclosure is not limited to the specific embodiments illustrated herein, but is limited only by the following claims.
11863408 | DETAILED DESCRIPTION Throughout the description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present techniques described herein. It will be apparent to one skilled in the art, however, that the present invention may be practiced without some of these specific details. In other instances, well-known structures and devices are shown in block diagram form to avoid obscuring the underlying principles of embodiments of the invention. 1.0. General Overview 1.1. Event-Based Data Storage Systems Generally, a data-processing system may perform data operations on data stored in one or more data repositories. Depending on the type of data-processing system, the data operations may range from simple operations such as storing and retrieving the data to more complex operations such as calculating statistics from the data, or arranging or formatting the data. One example of a data-processing system is a relational database system, in which data is stored in highly structured tables and accessed through rigid data storage rules (e.g., data storage and retrieval “schemas”). Another example of a data-processing system is a file system, such as a Network File System (NFS) server. Yet another example of a data-processing system is a web application server. A data-processing system may also include an event-based system, such as the SPLUNK® ENTERPRISE system produced and sold for on-premise and cloud use by Splunk Inc. of San Francisco, CA. In some event-based systems, data is derived from lines or rows of unstructured time-series data, such as data from web logs and/or machine logs. Each row and/or group of rows is generally associated with a timestamp and one or more associated data points or parameter-value pairs. A timestamp may be any sequence of characters or encoded information that identifies the time at which a certain event is recorded. 
For example, a timestamp may provide the date, hour, minute, and/or second at which an application is initialized on a computer system. Based on the timestamps, data structures representing events may be derived from the associated data and include some or all of the associated data. A variety of event types may be derived from such data. For example, in the context of web logs, events may be derived from errors, specific user inputs, navigation events, and so forth. As used herein, the term “events” may refer to anything that occurs and carries information in an event-based system. Some event-based systems feature flexible data storage and retrieval schemas that may be redefined as needed and applied after the associated data is stored in a database or other memory structure of the data storage system. For example, the schemas may be applied upon receiving a request to perform an operation on such data. Such schemas may indicate how to extract one or more pieces of data from data associated with an event. In addition, in connection-oriented network communications systems, a “data stream” generally refers to a sequence of encoded signals (e.g., in network packets) used to transmit or receive information over a network. 1.2. Remote Capture Agent Architecture One or more embodiments include a network architecture for capturing network data in one or more networks using a configuration server working in combination with a set of remote capture agents distributed throughout the network(s). The remote capture agents may capture network packets from multiple sources (e.g., hosts, servers, etc.) and analyze the network packets to determine the packets' contents. The remote capture agents may then generate one or more events from the network packets and communicate the events to the configuration server over one or more additional networks. 
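A minimal sketch of deriving timestamped events from unstructured log rows, with a late-binding schema applied at read time rather than at storage time, may look as follows. The log format, field names, and regex-based schema are illustrative assumptions, not any vendor's actual format.

```python
import re
from datetime import datetime

# Hypothetical unstructured web-log rows (format assumed for this sketch).
LOG = [
    '2021-05-20 17:29:01 ERROR user=alice msg="login failed"',
    '2021-05-20 17:29:05 INFO user=tom msg="page view"',
]

def extract_events(lines, schema):
    """Derive timestamped event dicts, applying the schema at read time."""
    events = []
    for line in lines:
        ts = datetime.strptime(line[:19], "%Y-%m-%d %H:%M:%S")
        fields = {name: m.group(1)
                  for name, pattern in schema.items()
                  if (m := re.search(pattern, line))}
        events.append({"timestamp": ts, **fields})
    return events

# The "schema" is just a set of extraction rules applied after the data is
# stored, so it can be redefined as needed without re-ingesting anything.
schema = {"level": r"\d{2}:\d{2}:\d{2} (\w+)", "user": r"user=(\w+)"}
events = extract_events(LOG, schema)
```

Redefining `schema` and re-running `extract_events` over the same stored lines illustrates the flexible, late-applied retrieval schema described above.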
In one or more embodiments, the configuration server includes configuration information used to determine how remote capture agents capture network data and build events therefrom. The remote capture agents may obtain the configuration information from the configuration server (e.g., using a push or pull mechanism) and use the configuration information to generate event data containing a series of timestamped events from the network data. The event data may be included in an event stream that is transmitted to additional network elements within the distributed network for additional processing and/or storage. In this manner, both network traffic between the remote capture agents and other network elements and subsequent processing of the network traffic by the other network elements may be drastically reduced because capturing and pre-processing of the network data may be performed at the remote capture agents. For example, the remote capture agents may transmit events in lieu of network packets from which the events were generated to one or more centralized servers for further processing, indexing, and/or storage. 1.3. Dynamically Configurable Remote Capture Agents Remote capture agents may be dynamically configured based on configuration information stored at the configuration server. For example, the remote capture agents may be configured in real-time as events are processed by the remote capture agents. The remote capture agents may be dynamically configured during runtime with: (1) events (or types of events) to be included in event streams for use by other components of the remote capture agent architecture, (2) fields to be included in each of the events streams, and (3) additional parameters associated with generation of the events and/or event streams. The configuration information may be modified on-demand by users (e.g., administrators) at the configuration server and/or at a network component in communication with the configuration server. 
The configuration information may also be dynamically updated during processing of event streams by one or more applications running on separate servers in communication with the configuration server, such as one or more data storage servers in communication with the configuration server. Events may then be generated from the captured network packets based on the configuration information and/or any updates to the configuration information. When changes are made to the configuration information at the configuration server, logic in the remote capture agents may be automatically updated in response. In one embodiment, the remote capture agents poll the configuration server at periodic intervals to determine if there have been any changes to the configuration information stored therein. If changes to the configuration information have been made, the remote capture agents may pull this configuration information from the configuration server. Alternatively, changes to the configuration information may be pushed from the configuration server to the remote capture agents at periodic intervals. Such propagation of updates to the configuration information to the remote capture agents may allow the remote capture agents to be dynamically configured to store different types of network data in events, generate different types of events, aggregate event data, and/or send event data to other network components at different times and/or intervals. 1.4. Transforming Event Data at the Remote Capture Agents The configuration information may also be used by the remote capture agents to perform higher-level processing of the events before communicating the events to the configuration server. More specifically, the remote capture agents may use some or all of the configuration information to transform (e.g., aggregate, process, clean, filter, etc.) events into one or more sets of transformed event data. 
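The pull mechanism described above, in which agents poll the configuration server and reconfigure only when the stored configuration has changed, may be sketched as follows. The server representation and field names are assumptions for illustration.

```python
# Illustrative sketch of a remote capture agent's poll-and-reconfigure step:
# the agent compares a version marker and pulls configuration only on change.
class RemoteCaptureAgent:
    def __init__(self):
        self.config_version = None
        self.config = {}

    def poll(self, server):
        if server["version"] != self.config_version:
            self.config_version = server["version"]
            self.config = dict(server["config"])   # pull updated configuration
            return True                            # agent reconfigured
        return False                               # nothing changed, no work

agent = RemoteCaptureAgent()
server = {"version": 1, "config": {"event_types": ["http"], "interval_s": 60}}
first = agent.poll(server)       # initial pull -> reconfigured
second = agent.poll(server)      # unchanged -> skipped
server["version"] = 2            # administrator updates the configuration
server["config"]["interval_s"] = 30
third = agent.poll(server)       # change detected -> pulled again
```

In a push-based variant, the server would instead transmit the new configuration to each agent at periodic intervals, with the same version comparison on the agent side.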
The remote capture agents may provide the transformed event data to the configuration server and/or other network components, in lieu of or in addition to the events. The network components may further process the transformed event data and/or store the transformed event data (e.g., in a data storage server). In one or more embodiments, some or all of the configuration information related to transforming events is specified by applications running on other servers or systems and communicated to the configuration server. For example, the applications may run on a data-processing system such as the SPLUNK® ENTERPRISE system. Users may use the applications to perform queries and/or visualizations related to event data from the remote capture agents. The applications may provide the configuration server with information regarding the events (or types of events) the application is adapted to receive, along with information related to subsequent processing and/or transformation of those events. The configuration server may obtain the information from the applications for propagation to the remote capture agents, and the remote capture agents may use the information to configure or reconfigure the creation and processing of event data accordingly. In one embodiment, the applications include data storage applications running on a data storage server to facilitate optimizing data storage and retrieval operations.

1.5. Graphical Interface for Configuring Event Streams

A graphical user interface (GUI) may facilitate the configuration of the remote capture agents and/or other network components in generating and/or processing event streams containing event data. The GUI may provide a visual way to create, manage, and/or process event streams based on configuration information associated with each event stream. The GUI may be provided by the configuration server and/or by a network element in communication with the configuration server.
The GUI may display representations of one or more components associated with creating and/or processing event streams generated from network traffic. The components may be configured or reconfigured using various icons and/or other user-interface elements in the GUI.

2.0. Structural Overview

2.1. Operating Environment

The data processing techniques described herein are suitable for use by systems deployed in a variety of operating environments. FIG.1 depicts an example block diagram embodiment of a data-processing system100for capturing and processing network data in a distributed network environment. In the illustrated embodiment, system100includes a set of configuration servers120in communication with a set of remote capture agents151-153over one or more networks190. Although only three configuration servers120and three remote capture agents151-153are depicted in FIG.1, any number of configuration servers120and/or remote capture agents151-153may be configured to operate and/or communicate with one another within the data-processing system. For example, a single physical and/or virtual server may perform the functions of configuration servers120. Alternatively, multiple physical and/or virtual servers or network elements may be logically connected to provide the functionality of configuration servers120. The configuration server(s) may direct the activity of multiple distributed remote capture agents151-153installed on various client computing devices across one or more networks. In turn, remote capture agents151-153may be used to capture network data from multiple remote network data sources. Further, embodiments described herein can be configured to capture network data in a cloud-based environment, such as cloud140depicted in the illustrated embodiment, and to generate events such as clickstream events and/or business transactions out of the network data.
Remote capture agents151-153may capture network data originating from numerous distributed network servers, whether they are physical hardware servers or virtual machines running in cloud140. In cloud-based implementations, remote capture agents151-153will generally only have access to information that is communicated to and received from machines running in the cloud-based environment. This is because, in a cloud environment, there is generally no access to any of the physical network infrastructure, as cloud computing may utilize a “hosted services” delivery model where the physical network infrastructure is typically managed by a third party. Embodiments further include the capability to separate the data capture technology into a standalone component that can be installed directly on client servers, which may be physical servers or virtual machines residing on a cloud-based network (e.g., cloud140), and used to capture and generate events for all network traffic that is transmitted in and out of the client servers. This eliminates the need to deploy and connect physical hardware to network TAPS or SPAN ports, thus allowing users to configure and change their data capture configuration on-the-fly rather than in fixed formats. In the illustrated embodiment, remote capture agents152-153are in communication with network servers130residing in cloud140, and remote capture agent151is located in cloud140. Cloud140may represent any number of public and private clouds, and is not limited to any particular cloud configuration. Network servers130residing in cloud140may be physical servers and/or virtual machines in cloud140, and network traffic to and from network servers130may be monitored by remote capture agent151and/or other remote capture agents connected to network servers130. Further, remote capture agents152-153may also run in cloud140on physical servers and/or virtual machines. 
Those skilled in the art will appreciate that any number of remote capture agents may be included inside or outside of cloud140. Remote capture agents151-153may analyze network packets received from the network(s) to which remote capture agents151-153are connected to obtain network data from the network packets and generate a number of events from the network data. For example, each remote capture agent151-153may listen for network traffic on network interfaces available to the remote capture agent. Network packets transmitted to and/or from the network interfaces may be intercepted by the remote capture agent and analyzed, and relevant network data from the network packets may be used by the remote capture agent to create events related to the network data. Such events may be generated by aggregating network data from multiple network packets, or each event may be generated using the contents of only one network packet. A sequence of events from a remote capture agent may then be included in one or more event streams that are provided to other components of system100. Configuration servers120, data storage servers135, and/or other network components may receive event data (e.g., event streams) from remote capture agents151-153and further process the event data before the event data is stored by data storage servers135. In the illustrated embodiment, configuration servers120may transmit event data to data storage servers135over a network101such as a local area network (LAN), wide area network (WAN), personal area network (PAN), virtual private network, intranet, mobile phone network (e.g., a cellular network), WiFi network, Ethernet network, and/or other type of network that enables communication among computing devices. The event data may be received over a network (e.g., network101, network190) at one or more event indexers (see FIG.10) associated with data storage servers135.
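The packet-to-event aggregation described above, in which network data from multiple packets may be combined into a single timestamped event, can be sketched as follows. The packet fields and the flow key are hypothetical, chosen only to illustrate the idea.

```python
def packets_to_events(packets):
    """Aggregate captured packets by (src, dst) flow into timestamped events.

    Each event records the flow, the timestamp of the first packet seen for
    that flow, the packet count, and the total bytes -- one illustrative way
    a remote capture agent might turn raw network data into event data.
    """
    flows = {}
    for pkt in packets:
        key = (pkt["src"], pkt["dst"])
        if key not in flows:
            flows[key] = {"src": pkt["src"], "dst": pkt["dst"],
                          "timestamp": pkt["ts"], "packets": 0, "bytes": 0}
        flows[key]["packets"] += 1
        flows[key]["bytes"] += pkt["len"]
    # Emit events ordered by first-seen timestamp, like a timestamped stream.
    return sorted(flows.values(), key=lambda e: e["timestamp"])


events = packets_to_events([
    {"src": "10.0.0.1", "dst": "10.0.0.2", "ts": 100, "len": 60},
    {"src": "10.0.0.1", "dst": "10.0.0.2", "ts": 101, "len": 1500},
    {"src": "10.0.0.3", "dst": "10.0.0.2", "ts": 99,  "len": 40},
])
```

An agent generating one event per packet would simply skip the grouping step; both behaviors could be selected by configuration.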
In addition, system100may include functionality to determine the types of network data collected and/or processed by each remote capture agent151-153to avoid data duplication at the indexers, data storage servers135, and/or other components of system100. For example, remote capture agents152-153may process network traffic from the same network. However, remote capture agent152may generate page view events from the network traffic, and remote capture agent153may generate request events (e.g., of HyperText Transfer Protocol (HTTP) requests and responses) from the network traffic. In one or more embodiments, configuration servers120include configuration information that is used to configure the creation of events from network data on remote capture agents151-153. In addition, such configuration may occur dynamically during event processing (e.g., at runtime). Conversely, because most conventional network capture technologies target specific end uses, they have been designed to operate in a fixed way and generally cannot be dynamically or easily modified to address different and changing business needs. At least certain embodiments described herein are adapted to provide a distributed remote capture platform in which the times at which events are communicated to the configuration servers120and the fields to be included in the events are controlled by way of user-modifiable configuration rather than by “hard coding” fixed events with pre-determined fields for a given network capture mechanism. The remote configuration capability described herein also enables additional in-memory processing (e.g., filtering, transformation, normalization, aggregation, etc.) on events at the point of capture (e.g., remote capture agents151-153) before the events are transmitted to other components of system100. 
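The duplication-avoidance arrangement described earlier in this section, in which agents processing the same network traffic are assigned disjoint event types, could be enforced with a simple configuration check before propagation. All agent names and event types below are illustrative.

```python
def find_duplicated_types(agent_configs):
    """Return event types assigned to more than one agent on the same network.

    agent_configs maps agent name -> (network, set of event types to generate).
    Overlapping assignments on one network would produce duplicate event data
    downstream, so a configuration server could flag them before propagation.
    """
    seen = {}          # (network, event_type) -> first agent assigned
    duplicates = set()
    for agent, (network, types) in agent_configs.items():
        for t in types:
            if (network, t) in seen:
                duplicates.add(t)
            else:
                seen[(network, t)] = agent
    return duplicates


dups = find_duplicated_types({
    "agent152": ("net_a", {"page_view"}),
    "agent153": ("net_a", {"http_request"}),
    "agent154": ("net_a", {"page_view"}),   # overlaps with agent152
})
```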
Configuration information stored at each configuration server120may be created and/or updated manually at the configuration server and/or at a network element in communication with the configuration server. For example, a user may upload a configuration file containing configuration information for a remote capture agent to one or more configuration servers120for subsequent propagation to the remote capture agent. Alternatively, the user may use a GUI to provide the configuration information, as described in further detail below with respect toFIGS.8-9. The configuration information may further be provided by one or more applications running on a separate server or network element, such as data storage servers135. Remote capture agents151-153may then use the configuration information to generate events from captured network packets. When changes in the configuration information at the configuration server are detected at the remote capture agents, logic in the remote capture agents may be automatically reconfigured in response. This means the remote capture agents may be configured dynamically to produce different events, transform the events, and/or communicate event streams to different components of system100. To detect changes in configuration information at configuration servers120, remote capture agents151-153may poll configuration servers120at periodic intervals for updates to the configuration information. The updates may then be pulled from configuration servers120by remote capture agents151-153. Conversely, updates to the configuration information may be pushed from configuration servers120to remote capture agents151-153at periodic intervals and/or when changes to the configuration information have been made. In one embodiment, configuration servers120include a list of event streams generated by remote capture agents151-153, as well as the configuration information used to generate the event streams at remote capture agents151-153. 
The configuration information may include a unique identifier for each event stream, the types of events to be included in the event stream, one or more fields to be included in each event, and/or one or more filtering rules for filtering events to be included in the event stream. Configuration information for dynamically modifying network data capture by remote capture agents (e.g., remote capture agents151-153) is described in further detail below with respect toFIG.2. The configuration information may also specify transformations of network data and/or events into transformed events. Such transformations may include, for example, aggregations of network data and/or events, generation of statistics and/or metrics from the network data or events, and/or cleaning and/or filtering of the network data and/or events. As with other event streams, event streams containing transformed event data may be transmitted from remote capture agents151-153to configuration servers120, data storage servers135, and/or other components of system100for further processing, storage, and/or use. Configuration information associated with transformed events may be obtained from end users and/or applications running on various network elements that receive the events. For example, an application executing on a data storage server (e.g., data storage servers135) may provide statistics associated with network usage in cloud140. To reduce overhead associated with real-time processing of event data by the application into the statistics, the application may provide configuration information for generating some or all of the statistics at one or more remote capture agents (e.g., remote capture agents151-153) connected to cloud140. The configuration information may be transmitted to configuration servers120and subsequently propagated to the relevant remote capture agents. 
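A minimal sketch of how per-stream filtering rules like those described above might be applied at an agent follows; the rule format (field name plus predicate) is an assumption for illustration, not taken from any actual product.

```python
def event_passes(event, rules):
    """Return True if the event satisfies every filtering rule.

    Each rule is a (field, predicate) pair; an event is included in the
    stream only when all predicates hold for its fields.
    """
    return all(pred(event.get(field)) for field, pred in rules)


# Hypothetical rules: keep only HTTP events with an error status (>= 400).
rules = [
    ("protocol", lambda v: v == "http"),
    ("status",   lambda v: v is not None and v >= 400),
]

stream = [e for e in [
    {"protocol": "http", "status": 200},
    {"protocol": "http", "status": 500},
    {"protocol": "dns",  "status": None},
] if event_passes(e, rules)]
```

Packet-level or protocol-level filtering would work the same way, only with predicates applied earlier in the capture pipeline.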
In turn, the remote capture agents may use the configuration information to generate transformed events containing statistics associated with events captured by the remote capture agents, and the transformed events may be provided to the application to enable access to the statistics by users of the application without requiring the application to calculate the statistics at query time. Such use of distributed remote capture agents151-153may offload processing tasks from configuration servers120and/or other components of system100to remote capture agents151-153(e.g., similar to parallelizing a network), while avoiding overloading client network servers at remote networks with the full functionality of configuration servers120. System100may further reduce network traffic between remote capture agents151-153and the other components of system100because remote capture agents151-153convert a potentially large volume of raw network traffic into a smaller volume of events and further filter the event data as directed by the configuration information before transmitting the event data to other components of system100. Another advantage is that the work performed by system100may be distributed among multiple remote capture agents151-153on one or more networks. Remote capture agents151-153may occupy small footprints on remote client servers, thus mitigating resource usage by remote capture agents151-153on the client servers. For example, remote capture agents151-153may execute as background processes on physical and/or virtualized servers. On the other hand, configuration servers120may execute from one or more centralized locations and/or on one or more sets of dedicated resources because the operation of configuration servers120may require significantly more computing resources than the operation of remote capture agents151-153. As depicted in FIG.1, system100further includes one or more data storage servers135.
Data storage servers135may be general or special-purpose computers configured to process and manipulate data within one or more data repositories. As depicted, data storage servers135may be coupled to data storage devices155using any suitable mechanism, such as a Fiber Channel network, a Serial ATA (SATA) link, a Universal Serial Bus (USB) connection, an Infiniband link, an Ethernet connection, and/or other type of interface. Data storage servers135can be configured to communicate input/output (I/O) requests to storage devices155. These I/O requests may be communicated via messages in protocols such as Server Message Block protocol, Network File System (NFS) protocol, Small Computer System Interface (SCSI) protocol, and/or Fibre Channel. In response to the requests, data storage servers135may read and write data structures such as data blocks, files, tables, and/or result sets from storage devices155. In an embodiment, data storage servers135may include some or all of storage devices155. Instructions for processing and manipulating data (e.g., event data) may be executed by data storage servers135. For example, data storage servers135may perform data operations with respect to one or more data repositories. Data operations supported by these processes may include relatively simple operations such as adding or retrieving lines or rows of data from the data storage devices. The supported data operations may further include operations such as filtering the contents of retrieved data and/or performing transformations (e.g., aggregations, calculations, processing, cleaning, filtering, etc.) of the retrieved data. In one or more embodiments, data storage servers135and/or configuration servers120provide one or more transformation servers that perform additional processing of event data from remote capture agents151-153. 
Conversely, one or more configuration servers120and/or data storage servers135may be installed within a transformation server and/or execute independently from transformation servers in the data-processing system100. The transformation servers may be used to aggregate, filter, format, query, transform, store, and/or otherwise manipulate event data, as described in further detail below with respect toFIG.8. In another embodiment, data storage servers135may constitute one or more conventional database servers, such as a relational database server. These processes need not necessarily support the entire functionality of a database server or operate on conventional database structures. Data repositories accessed by data storage servers135may be stored on data storage devices155. Data storage devices155may be, for instance, non-volatile computer-readable media such as hard disk drives, flash/SSD drives, non-volatile memory, optical storage devices, disk arrays, storage area network devices, networked-attached storage devices, and/or file server devices. Storage devices155may store the data repositories in any suitable underlying form(s), such as disk blocks, file structures, or database tables. If multiple storage devices155are used in system100, different portions of a data repository may be stored on different storage devices155. Optionally, certain storage devices155may be configured to store some or all portions of a data repository redundantly, using any suitable backup or synchronization mechanism(s). In an embodiment, each storage device155is equally accessible to each data storage server135, and thus any data storage server135may perform operations on any data stored within the data repositories. In other embodiments, each data storage server135is assigned to only some or even one of the data storage devices155, and is only configured to perform operations on data storage device(s)155to which it is assigned. 
System100is only one example of the many types of operating environments in which the techniques described herein may be practiced. Other suitable operating environments may include additional or fewer elements, in varying arrangements. For instance, some or all data storage servers135may be replaced by virtual computing environments (e.g., virtual machines), some or all of which may execute on a single computing device. System100further utilizes data repositories provided by storage devices155. The data repositories may include one or more data collections, and each data collection may be a collection of data structures having a variety of forms. For example, a data collection may include a collection of time-based event data structures (e.g., one or more event streams), a group of data rows, a relational database, a relational database table, a set of Extensible Markup Language (XML) elements, and/or one or more files. Different data collections within the same repository may support different data structure types. In an embodiment, a data collection containing any of the foregoing data structures is augmented with system-defined or user-defined variables that can be updated to describe certain characteristics of the data stored in the data collection. Examples of such variables may include counters or metrics. In an embodiment, each data collection is stored redundantly on multiple data storage devices155, and synchronized therebetween. In an embodiment, each data collection is found on only some or even one of the data storage devices155. FIG.2 depicts an example block diagram embodiment of a remote capture agent250. In the illustrated embodiment, remote capture agent250is adapted to receive configuration information from one or more configuration servers120over network101. Remote capture agent250may be installed at a customer's premises on one or more of the customer's computing resources.
For example, remote capture agent250may be installed on a physical server and/or in a virtual computing environment (e.g., virtual machine) that is distributed across one or more physical machines. Remote capture agent250includes a network communications component203configured to communicate with network elements on one or more networks (e.g., network101) and send and receive network data (e.g., network packets) over the network(s). As depicted, network communications component203may communicate with configuration servers120over network101. Network communications component203may also communicate with one or more sources of network data, such as network servers130ofFIG.1. Network data received at network communications component203may be captured by a capture component205coupled with network communications component203. Capture component205may capture some or all network data from network communications component203. For example, capture component205may capture network data based on the sources and/or destinations of the network data, the types of network data, the protocol associated with the network data, and/or other characteristics of the network data. In addition, the network data may be captured based on configuration information stored in a configuration component204of remote capture agent250. As mentioned above, the configuration information may be received from configuration servers120over network101. The configuration information may then be used to dynamically configure or reconfigure remote capture agent250in real-time. For example, newly received configuration information in configuration component204may be used to configure the operation of remote capture agent250during processing of events from network data by remote capture agent250. To dynamically configure remote capture agent250, configuration information received by configuration component204from configuration servers120may be provided to other components of remote capture agent250. 
More specifically, remote capture agent250includes an events generator207that receives network data from network data capture component205and generates events from the network data based on configuration information from configuration component204. Using configuration information provided by configuration servers120, remote capture agent250can be instructed to perform any number of event-based processing operations. For example, the configuration information may specify the generation of event streams associated with network (e.g., HTTP, Simple Mail Transfer Protocol (SMTP), Domain Name System (DNS)) transactions, business transactions, errors, alerts, clickstream events, and/or other types of events. The configuration information may also describe custom fields to be included in the events, such as values associated with specific clickstream terms. The configuration information may include additional parameters related to the generation of event data, such as an interval between consecutive events and/or the inclusion of transactions and/or errors matching a given event in event data for the event. An events transformer209may further use the configuration information to transform some or all of the network data from capture component205and/or events from events generator207into one or more sets of transformed events. In one or more embodiments, transformations performed by events transformer209include aggregating, filtering, cleaning, and/or otherwise processing events from events generator207. Configuration information for the transformations may thus include a number of parameters that specify the types of transformations to be performed, the types of data on which the transformations are to be performed, and/or the formatting of the transformed data. 
For example, configuration information for generating an event stream from network data (e.g., at events generator207) may include the following JavaScript Object Notation (JSON) data:

{
  "id": "trans_class",
  "name": "auto-classified transactions",
  "streamType": "trans_class"
}

The JSON data may include a unique identifier (e.g., "id") of "trans_class" for the event stream, a descriptive name (e.g., "name") of "auto-classified transactions" for the event stream, and an event stream type (e.g., "streamType") of "trans_class." Event data in the event stream may be identified by the identifier and/or descriptive name. The "trans_class" event stream type may indicate that events in the event stream represent automatically classified transactions such as user logins and logouts, shopping cart checkouts, new user signups, and/or file transfers, with a new event generated per automatically classified transaction. In addition, the event may include a unique identifier for the classified transaction type, as well as a Uniform Resource Identifier (URI) stem, a query string, a host name, and/or a page title for the transaction.
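Reading such a configuration entry and using it to label generated events might look like the following sketch. Only the three keys shown in the configuration above are assumed; the event fields are invented for illustration.

```python
import json

config_json = ('{"id": "trans_class", '
               '"name": "auto-classified transactions", '
               '"streamType": "trans_class"}')


def label_event(event, stream_config):
    """Attach the stream's identifier and type to an event so downstream
    components can recognize which configured event stream it belongs to."""
    labeled = dict(event)
    labeled["stream_id"] = stream_config["id"]
    labeled["stream_type"] = stream_config["streamType"]
    return labeled


cfg = json.loads(config_json)
event = label_event({"transaction": "user-login", "ts": 1}, cfg)
```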
In another example, configuration information for performing transformations on events from events generator207(e.g., at events transformer209) may include the following JSON data:

{
  "id": "trans_metrics",
  "name": "transaction metrics aggregated by id",
  "streamType": "agg_trans",
  "fields": [
    {"name": "sessions", "desc": "total number of visitor sessions", "term": "clickstream.new-session", "aggType": "sum"},
    {"name": "hits", "desc": "total number of HTTP transactions", "term": "clickstream.page-hits", "aggType": "sum"},
    {"name": "cs_bytes", "desc": "total octets from client to server (ingress)", "term": "clickstream.cs-bytes", "aggType": "sum"},
    {"name": "sc_bytes", "desc": "total octets from server to client (egress)", "term": "clickstream.sc-bytes", "aggType": "sum"},
    {"name": "total_time", "desc": "total clock time from start to end of the transaction (microsec)", "term": "clickstream.page-load", "aggType": "sum"},
    {"name": "redirect_time", "desc": "total clock time spent processing HTTP redirects (microsec)", "term": "clickstream.page-load-redirect", "aggType": "sum"},
    {"name": "base_time", "desc": "total clock time spent loading the base HTML file (microsec)", "term": "clickstream.page-load-base", "aggType": "sum"},
    {"name": "content_time", "desc": "total clock time spent loading everything else (microsec)", "term": "clickstream.page-load-content", "aggType": "sum"},
    {"name": "time_taken", "desc": "sum of measurements from start to end of each HTTP transaction (microsec)", "term": "clickstream.time-taken", "aggType": "sum"},
    {"name": "client_rtt_sum", "desc": "sum of round trip time measurements between client & agent (microsec)", "term": "clickstream.cp-rtt-sum", "aggType": "sum"},
    {"name": "client_rtt_count", "desc": "count of round trip time measurements between client & agent", "term": "clickstream.cp-rtt-packets", "aggType": "sum"},
    {"name": "server_rtt_sum", "desc": "sum of round trip time measurements between server & agent (microsec)", "term": "clickstream.ps-rtt-sum", "aggType": "sum"},
    {"name": "server_rtt_count", "desc": "count of round trip time measurements between server & agent", "term": "clickstream.ps-rtt-packets", "aggType": "sum"},
    {"name": "refused", "desc": "total number of HTTP transactions that were refused by the server", "term": "clickstream.refused", "aggType": "sum"},
    {"name": "canceled", "desc": "total number of HTTP transactions that were canceled by the client", "term": "clickstream.canceled", "aggType": "sum"},
    {"name": "cached", "desc": "total number of HTTP transactions that had cached responses", "term": "clickstream.cached", "aggType": "sum"}
  ]
}

The JSON data may include a unique identifier (e.g., "id") of "trans_metrics" for the set of transformed events and a descriptive name (e.g., "name") of "transaction metrics aggregated by id" for the transformed events. The JSON data may also provide an event stream type (e.g., "streamType") of "agg_trans," indicating that the configuration relates to transformations that aggregate transactions from other event data, such as event data generated using the "trans_class" configuration above. The JSON data may additionally include a list of custom fields (e.g., "fields") that specify the types of data to be aggregated, such as numbers of visitor sessions or HTTP transactions, octets between clients and servers, clock times associated with page loads, and/or round-trip time (RTT) measurements between various network components. Each field may include a name (e.g., "name") for the corresponding aggregation, a description (e.g., "desc") of the aggregation, a clickstream term (e.g., "term") representing the data to be aggregated, and an aggregation type (e.g., "aggType"). While the exemplary configuration information above shows an aggregation type of "sum" (e.g., summing of values represented by "term" across all events within an aggregation interval) for all aggregations, other aggregation types may be supported by remote capture agent250.
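Interpreting the "fields" list in the configuration above, an agent would sum each configured clickstream term across the events in an aggregation interval. A condensed sketch, with events and terms invented for illustration:

```python
def aggregate_events(events, field_specs):
    """Apply 'sum'-type aggregations from a stream configuration.

    Each field spec names an output field and the clickstream term whose
    values are summed across all events in the aggregation interval.
    """
    out = {}
    for spec in field_specs:
        if spec["aggType"] == "sum":
            out[spec["name"]] = sum(e.get(spec["term"], 0) for e in events)
    return out


specs = [
    {"name": "sessions", "term": "clickstream.new-session", "aggType": "sum"},
    {"name": "hits",     "term": "clickstream.page-hits",   "aggType": "sum"},
]
interval_events = [
    {"clickstream.new-session": 1, "clickstream.page-hits": 4},
    {"clickstream.page-hits": 2},
]
metrics = aggregate_events(interval_events, specs)
```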
Such aggregation types may include, for example, a key (e.g., hash) for each set of aggregated values, statistics (e.g., mean, median, variance, standard deviation, minimum value, maximum value, etc.) associated with the aggregated values, a uniqueness count for each unique value within an aggregation interval, and/or calculations used to aggregate values from two or more fields. A rules comparison engine208in remote capture agent250may receive events from event generator207and compare one or more fields from the events to a set of filtering rules in the configuration information to determine whether to include the events in an event stream. For example, the configuration information may specify packet-level, protocol-level, and/or application-level filtering of event data from event streams generated by remote capture agent250. Finally, a data enrichment component211may further transform event data to a different form or format based on the configuration information from configuration component204. For example, data enrichment component211may use the configuration information to normalize the data so that multiple representations of the same value (e.g., timestamps, measurements, etc.) are converted into the same value in transformed event data. Data can be transformed by data enrichment component211in any number of ways. For example, remote capture agent250may reside on a client server in Cupertino, California, where all the laptops associated with the client server have been registered with the hostname of the client server. Remote capture agent250may use the registration data to look up an Internet Protocol (IP) address in a look-up table (LUT) that is associated with one or more network elements of the client server's local network. Remote capture agent250may then resolve a user's IP address into the name of the user's laptop, thereby enabling inclusion of the user's laptop name in transformed event data associated with the IP address. 
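The normalization behavior mentioned above, collapsing multiple representations of the same value into one canonical form, might look like this sketch. The supported timestamp formats are chosen only for illustration.

```python
from datetime import datetime, timezone


def normalize_timestamp(value):
    """Convert several timestamp representations to one canonical form:
    seconds since the Unix epoch (UTC). Handles epoch numbers and a couple
    of common string formats; unrecognized values raise ValueError."""
    if isinstance(value, (int, float)):
        return float(value)
    for fmt in ("%Y-%m-%dT%H:%M:%S", "%d/%b/%Y:%H:%M:%S"):
        try:
            dt = datetime.strptime(value, fmt).replace(tzinfo=timezone.utc)
            return dt.timestamp()
        except ValueError:
            pass
    raise ValueError("unrecognized timestamp: %r" % (value,))


# Three representations of the same instant normalize to the same value.
a = normalize_timestamp(1400000000)
b = normalize_timestamp("2014-05-13T16:53:20")
c = normalize_timestamp("13/May/2014:16:53:20")
```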
The transformed event data may then be communicated to configuration servers120and/or a central transformation server residing in San Francisco for further processing, indexing, and/or storage. A further advantage of the techniques described herein relates to the transformation of network data on at least two distinct levels: at the remote capture agents during generation of the events, and at the configuration server and/or other components during subsequent processing of event data. FIG.3 depicts an example block diagram embodiment of a configuration server320. As shown in the illustrated embodiment, configuration server320is in communication with multiple remote capture agents350over network190, and remote capture agents350are distributed throughout network190and cloud140. Configuration server320includes a network communications component303that receives events from remote capture agents350over networks190and/or140. Communications component303may also communicate with one or more data storage servers, such as data storage servers135of FIG.1. Configuration server320also includes a configuration component304that stores configuration information for remote capture agents350. As described above, the configuration information may specify the types of events to produce, data to be included in the events, and/or transformations to be applied to the data and/or events to produce transformed events. Some or all of the transformations may be specified in a set of filtering rules321that may be applied to event data at remote capture agents350to determine a subset of the event data to be included in one or more event streams that are sent to configuration server320and/or other components. Configuration server320also includes a data processing component311that performs additional processing of the event streams based on configuration information from configuration component304.
As discussed in the above example with respect to FIG. 2, event data may be transformed at a remote capture agent (e.g., remote capture agent 250) during resolution of the user's IP address into the name of the user's laptop. The transformed event data may be sent to configuration server 320 and/or a transformation server for additional processing and/or transformation, such as taking the host name from the transformed event data, using an additional LUT to obtain a user identifier (user ID) of the person to which the laptop is registered, and further transforming the event data by including the user ID in the event data before forwarding the event data to a third server (e.g., a transformation server) for another round of processing. Configuration server 320 may also provide a GUI 325 that can be used to configure or reconfigure the information contained in configuration component 304. The operation of GUI 325 is discussed in further detail below with respect to FIGS. 7-9.

3.0. Functional Overview

3.1. Remote Capture Agent Architecture

The techniques described in this section can be performed by the data processing system for capturing and processing network data in a distributed network environment as shown in FIG. 1. FIG. 4 shows a flowchart illustrating the processing of network data. More specifically, FIG. 4 shows a flowchart of network data capture and processing in accordance with the disclosed embodiments. In one or more embodiments, one or more of the steps may be omitted, repeated, and/or performed in a different order. Accordingly, the specific arrangement of steps shown in FIG. 4 should not be construed as limiting the scope of the embodiments. Initially, one or more event streams are obtained from one or more remote capture agents on one or more networks (operation 402). The event streams may include event data that is generated from network data (e.g., network packets) captured by the remote capture agent(s) on the network(s).
For example, the event streams may include a series of sequentially timestamped events, with each event generated from data in one or more network packets related to the event. As a result, event data for the event may include information such as an identifier, a transaction type (e.g., for an HTTP transaction and/or business transaction), a timestamp, and/or any errors associated with the event. In addition, the event data may be associated with (e.g., represent) clickstream data, transactions, business transactions, errors, and/or alerts. The event streams may additionally include transformed event data generated from the network data and/or event data by the remote capture agent(s). For example, the event streams may include transformed event data that is obtained by performing aggregations, calculations, filtering, normalization, and/or formatting of the network data and/or event data at the remote capture agent(s). Next, one or more transformations are applied to the event stream(s) to obtain transformed event data from the event data (operation 404). As with any transformations already applied at the remote capture agent(s), the transformation(s) may include aggregations, calculations, filtering, normalization, and/or formatting of the network data and/or event data. Moreover, the transformation(s) may be applied on top of previous transformations performed by the remote capture agent(s), so that one round of transformations may initially be applied at the remote capture agent(s) during generation of the event streams and another round after the event streams are received from the remote capture agent(s). Such transformation(s) may be performed by one or more reactors on one or more transformation servers, as described in further detail below with respect to FIG. 7. The transformation(s) may also be used to store the event data and/or transformed event data (operation 406).
For example, the transformation(s) may be used to store the event data and/or transformed event data in a database and/or log file. Finally, querying of the transformed event data is enabled (operation 408). For example, the transformed event data may be indexed, and queries may be executed on the indexed, transformed event data. The queries may further be performed in parallel on different subsets of the transformed event data. For example, a set of indexers may be used to index mutually exclusive time spans of the transformed event data and query the transformed event data using a map-reduce technique that operates on the time spans in parallel, as described in further detail below with respect to FIGS. 10-12. Similarly, capturing of the network data may be divided among the remote capture agents to avoid data duplication. In addition, the remote capture agents may execute in and/or capture the network data from one or more virtual machines running in a cloud-based environment. This avoids the necessity of using a network TAP or SPAN port connection for access to and/or capturing of network data from physical network infrastructure.

3.2. Dynamically Configurable Remote Capture Agents for Capturing Network Data

FIG. 5 shows a flowchart illustrating the process of facilitating the processing of network data. More specifically, FIG. 5 shows a flowchart of configuring a remote capture agent in accordance with the disclosed embodiments. In one or more embodiments, one or more of the steps may be omitted, repeated, and/or performed in a different order. Accordingly, the specific arrangement of steps shown in FIG. 5 should not be construed as limiting the scope of the embodiments. First, configuration information for a remote capture agent is obtained at the remote capture agent from a configuration server (operation 502). The remote capture agent may be located on a separate network from that of the configuration server.
For example, the remote capture agent may be installed on a physical and/or virtual machine on a remote network and/or cloud. As discussed above, the remote capture agent and other remote capture agents may be used to capture network data from a set of remote networks in a distributed manner. The captured network data may then be converted into event data that is included in a number of event streams by the remote capture agent(s). For example, a remote capture agent may generate an event to be included in an event stream by identifying one or more network packets associated with the event and using the network data from the network packet(s) to generate event data corresponding to the event. The configuration information may include a unique numeric or string identifier for each event stream to be generated by the remote capture agent. The configuration information may also include a description and/or a descriptive name of the event stream. The configuration information may further specify an event stream type that identifies the type of event data (e.g., clickstream events, HTTP transactions, business transactions, errors, alerts, classified transactions, etc.) to be included in the event stream. Finally, the configuration information may include a list of custom fields (e.g., for including specific pieces of network data in the events) and/or one or more additional parameters associated with generating the event data (e.g., time interval between events, maximum number of cached and/or aggregated events, inclusion of matching transactions or errors in the event data, types of events used by the event stream, etc.). Next, the configuration information is used to configure the generation of event data from network data (e.g., from network packets) at the remote capture agent (operation 504).
For example, the configuration information may be used to configure the remote capture agent to identify certain types of network packets, extract network data from the network packets, and/or include the network data in the event data. The configuration information may also be used to configure the transformation of event data or network data into transformed event data at the remote capture agent (operation 506). For example, the configuration information may specify that the event data and/or network data be aggregated into a sum, statistic (e.g., mean, median, minimum, maximum, etc.), and/or uniqueness count (e.g., number of times a unique value is found in an aggregation interval). To aggregate the event data and/or network data, a time interval associated with aggregation of the event data and/or network data may be obtained, and the event data and/or network data within the time interval may be aggregated into an event count, statistic, and/or uniqueness count. The configuration information may also specify a calculation (e.g., mathematical function, mathematical formula, etc.) to be performed on the network data and/or event data to produce the transformed event data. The configuration information may further provide a filter (e.g., regular expression, range of values, exact value, etc.) for removing a subset of the event data and/or network data to produce the transformed event data. The configuration information may additionally specify a normalization that is used to transform different representations of the same value (e.g., timestamp, host name, resource name, location, etc.) into the same normalized value. Finally, the configuration information may provide a formatting that may be applied to the event data and/or network data to generate transformed event data that adheres to a specific format.
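The interval-based aggregation described for operation 506 can be sketched as follows. The event format and field names are illustrative assumptions:

```python
from statistics import mean

# Sketch: aggregate events within a time interval into an event count,
# a statistic (mean round-trip time), and a uniqueness count (distinct IPs).
def aggregate_interval(events, start, end):
    window = [e for e in events if start <= e["ts"] < end]
    rtts = [e["rtt"] for e in window]
    return {
        "event_count": len(window),
        "mean_rtt": mean(rtts) if rtts else None,
        "unique_ips": len({e["c_ip"] for e in window}),
    }

events = [
    {"ts": 0, "rtt": 10, "c_ip": "10.1.2.7"},
    {"ts": 30, "rtt": 30, "c_ip": "10.1.2.7"},
    {"ts": 90, "rtt": 50, "c_ip": "10.1.2.9"},  # falls outside the interval below
]
summary = aggregate_interval(events, 0, 60)
```

A single summary record like this replaces many raw events, which is the point of performing the aggregation at the capture agent rather than downstream.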
After the remote capture agent is configured, one or more event streams containing the event data and/or transformed event data from the remote capture agent are provided to one or more transformation servers for further transformation of the event data and/or transformed event data by the transformation server(s) (operation 508). For example, the event stream(s) may be transmitted over one or more networks to the transformation server(s), and the transformation server(s) may perform additional aggregations, calculations, filtering, normalization, and/or formatting associated with the event data and/or transformed event data. An update to the configuration information may be received (operation 512) by the remote capture agent. For example, the update may be detected by the remote capture agent after polling the configuration server and determining that the version of configuration information at the configuration server is newer than the version at the remote capture agent. The remote capture agent may then pull the update from the configuration server. Alternatively, the update may be pushed from the configuration server to the remote capture agent. If no update is received, the remote capture agent may continue to be used (operation 516) to capture network data as-is. If an update to the configuration information is received, the update is used to reconfigure the generation and/or transformation of event data and/or network data at the remote capture agent during runtime of the remote capture agent (operation 514). For example, the remote capture agent may be reconfigured to generate and/or transform the event data and/or network data while the remote capture agent continues to generate event streams containing event data and/or network data according to the old configuration. The remote capture agent may continue to be used (operation 516) to capture network data with or without reconfiguring the remote capture agent using updates to the configuration information.
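The version-based update check in operations 512-516 can be sketched as follows. The configuration fields mirror those described earlier in this section, but the structure, class, and method names are illustrative assumptions:

```python
# Sketch of an agent pulling newer configuration from a configuration server.
class ConfigurationServer:
    def __init__(self, config):
        self.config = config

    def get_config(self):
        return self.config

class RemoteCaptureAgent:
    def __init__(self, config):
        self.config = config

    def poll_for_update(self, server):
        """Pull the server's configuration if it is newer than the local copy."""
        latest = server.get_config()
        if latest["version"] > self.config["version"]:
            self.config = latest  # reconfigure during runtime
            return True
        return False

server = ConfigurationServer({
    "version": 2,
    "id": "stream-42",
    "name": "Home Page Requests",
    "event_type": "clickstream.http-event",
    "fields": ["clickstream.c-ip", "clickstream.host", "clickstream.uri-stem"],
})
agent = RemoteCaptureAgent({"version": 1})
updated = agent.poll_for_update(server)
```

In this sketch the agent keeps generating events under its current configuration between polls, and simply swaps in the newer configuration when one appears, which is the essence of reconfiguration during runtime.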
If the remote capture agent is to be used, one or more event streams from the remote capture agent are continually provided to one or more transformation servers for further transformation by the transformation server(s) (operation 508), and any updates to the configuration information are used to reconfigure the operation of the remote capture agent (operations 512-514) during generation of the event stream(s). Capture of network data by the remote capture agent may continue until the remote capture agent is no longer used to generate event data and/or transformed event data from network data at the network to which the remote capture agent is connected. In one or more embodiments, some or all of the configuration information is provided to the configuration server by an application used to access the transformed event data. The application may be designed around one or more specific use cases associated with network data captured by the remote capture agent, such as managing virtual machines, assessing network security, performing web analytics, and/or managing web application performance. The application may also execute on the SPLUNK® ENTERPRISE platform and have access to both the configuration server and event data generated by the remote capture agent. To offload processing of the event data at the application (e.g., during real-time querying and/or visualization of the event data), the application may provide configuration information for performing the processing at the remote capture agent to the configuration server, and the configuration server may propagate the configuration information to the remote capture agent. In turn, the remote capture agent may use the configuration information to perform the processing as the event data is generated and/or transformed, instead of requiring the application to perform significant processing of the event data in real-time.
In other words, subsequent real-time processing of event data by the application and the associated overhead associated with such processing may be reduced by providing configuration information that causes the remote capture agent to transform event data into a form that can be used by the application. This may integrate better with a late-binding schema, such as the late-binding schema implemented by Splunk Inc. of San Francisco, California, because significant resources may be required to aggregate, format, and/or otherwise transform event data and extract fields at runtime. The term “late-binding schema” refers to a system, such as SPLUNK® ENTERPRISE, where the schema need not be defined at index time, as with database technology. Rather, in a system involving late-binding schema, the schema can be developed on an ongoing basis up until a query, during execution, applies (binds) the schema to data to evaluate the data. As a user learns more about the data in stored events, in a late-binding schema, he/she can continue to develop the schema up until the next time it is needed for a query. Because SPLUNK® ENTERPRISE maintains the underlying raw data and enables application of a late-binding schema, SPLUNK® ENTERPRISE may have greater capability to enable deep exploration of the data to solve problems reflected in the data and answer questions about the data than conventional databases or data-processing systems that merely store summaries or portions of data. For example, a security application monitoring login attempts on a web application may use incorrect password entries by users during the login attempts to assess the security of the web application. The security application may provide configuration information for generating event data corresponding to login failures, with the event data containing usernames, IP addresses, timestamps, and/or passwords entered for the login failures. 
Because the security application may receive events only when failed login attempts occur, the security application may not be required to filter the event data for failed login attempts. Continuing with the above example, the configuration information may specify the aggregation of failed login attempts into failed login attempts per minute. Thus, instead of receiving an event every time a failed login attempt occurs, the security application may receive event data every minute that indicates the number of failed login attempts for the last minute.

3.3. Operation of Configuration Server

FIG. 6 shows a flowchart illustrating the process of facilitating data capture. In particular, FIG. 6 shows a flowchart illustrating the process of operating a configuration server in accordance with the disclosed embodiments. In one or more embodiments, one or more of the steps may be omitted, repeated, and/or performed in a different order. Accordingly, the specific arrangement of steps shown in FIG. 6 should not be construed as limiting the scope of the embodiments. First, configuration information for a set of remote capture agents on a set of networks is obtained at the configuration server (operation 602). The configuration information may be obtained from a user (e.g., an administrator) and/or an application used to access event data generated by the remote capture agents. Next, the configuration server is used to provide the configuration information to the remote capture agents (operation 604). For example, the configuration server may use a push and/or pull mechanism to transmit the configuration information to the remote capture agents. The configuration information may then be used by the remote capture agents to configure the generation and/or transformation of event data, as described above. An update to the configuration information may be obtained (operation 606).
For example, an update to the configuration information may be obtained to enable the generation of new event streams at one or more of the remote capture agents for use with one or more new use cases associated with network data capture by the remote capture agent(s). If an update to the configuration information is obtained, the configuration server is used to provide the update to the remote capture agents (operation 608), and the update is used to reconfigure the generation and/or transformation of the event data at the remote capture agents during runtime of the remote capture agents. If no update is received, no additional configuration information may be transmitted between the configuration server and remote capture agents. The remote capture agents may continue to be configured (operation 610) using configuration information from the configuration server. If the remote capture agents are to be configured using the configuration server, any updates to the configuration information are transmitted from the configuration server to the remote capture agents (operations 606-608) to enable reconfiguration of the remote capture agents. Such transmission of updates to the configuration information to the remote capture agents may continue until the configuration server is no longer used to dynamically configure the remote capture agents.

3.4. GUI for Configuring Event Streams

FIG. 7 shows a flowchart illustrating the process of facilitating the processing of data. More specifically, FIG. 7 shows a flowchart of using a GUI to obtain configuration information for managing event streams in accordance with the disclosed embodiments. In one or more embodiments, one or more of the steps may be omitted, repeated, and/or performed in a different order. Accordingly, the specific arrangement of steps shown in FIG. 7 should not be construed as limiting the scope of the embodiments.
Initially, the GUI is provided for obtaining configuration information for configuring the generation of event data from network data obtained from network packets at one or more remote capture agents (operation 702). The configuration information may be obtained using a configuration dialog of the GUI, as discussed in further detail below with respect to FIG. 9. Next, use of the GUI in configuring the connection of one or more event streams containing the event data to one or more reactors for subsequent processing of the event data by the reactor(s) is enabled (operation 704). For example, graphical representations of the event stream(s) and reactor(s) may be displayed in the GUI, and directed edges for connecting the graphical representations may be provided by the GUI. A directed edge from one component (e.g., event stream or reactor) to another may thus represent the passing of output from the first component as input to the second component. Using GUIs to connect event streams and reactors is described in further detail below with respect to FIG. 8. Use of the GUI in configuring the subsequent processing of the event data by the reactor(s) is also enabled (operation 706). For example, the GUI may provide a separate configuration dialog for configuring each type of reactor used to process event streams. Finally, the configuration information is provided to the remote capture agent(s), where the configuration information is used to configure the generation of the event data at the remote capture agent(s) during runtime of the remote capture agent(s). In one or more embodiments, reactors are provided by one or more transformation servers that transform the event data after the event data is created and/or initially transformed at the remote capture agent(s). As noted above, configuration servers may be transformation servers. Alternatively, a configuration server may be included within a transformation server and/or execute independently from the transformation server.
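One simple way to model the directed-edge connections described above is as an adjacency list, where each entry maps a component to the components that consume its output. This is a sketch; the component names and dictionary format are assumptions:

```python
# Sketch: stream-to-reactor wiring as a directed graph (illustrative names).
PIPELINE = {
    "stream-1": ["filter-reactor"],
    "filter-reactor": ["python-reactor"],
    "stream-2": ["cleansing-reactor"],
    "cleansing-reactor": ["filter-reactor-2", "aggregator-reactor"],
}

def downstream(component, graph):
    """All components that eventually receive output from `component`."""
    seen, stack = set(), [component]
    while stack:
        for nxt in graph.get(stack.pop(), []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen
```

A traversal like `downstream` answers the natural configuration question of which reactors a given event stream ultimately feeds.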
The reactors may include collection reactors that collect event and/or network data, processing reactors that process event and/or network data, and/or storage reactors that store event and/or network data. Within the GUI, the reactors may be represented by icons and/or other user-interface elements that may be selected to configure the operation of the reactors. FIG. 8 depicts an example screen shot of an embodiment of a GUI 800 that is adapted to display configurable components within a distributed data capture and processing system. GUI 800 may be provided by a configuration server, such as configuration servers 120 of FIG. 1. In the illustrated embodiment, GUI 800 includes two stream icons 801 and 802 that correspond to graphical representations of two event streams. Icon 801 is connected to a filter reactor icon 803 using a directed edge, which is further connected to a python reactor icon 806 using another directed edge. Filter reactor icon 803 may be a graphical representation of a filter reactor that filters event streams provided as input to the filter reactor according to one or more filtering rules (e.g., regular expressions, network data types, event types, time spans, etc.) and outputs the filtered event streams. Python reactor icon 806 may be a graphical representation of a python reactor that creates, processes, or stores events using the Python programming language. As a result, event data from the event stream represented by stream icon 801 may be filtered by the filter reactor before being processed by the python reactor. Another series of directed edges in GUI 800 may connect stream icon 802 to a cleansing transformation reactor icon 804, which in turn is connected to both a filter reactor icon 805 and an aggregator reactor icon 807. Cleansing transformation reactor icon 804 may be a graphical representation of a cleansing transformation reactor that normalizes different representations of the same value into the same normalized value.
For example, the cleansing transformation reactor may convert different timestamp formats into the same normalized timestamp format. Aggregator reactor icon 807 may be a graphical representation of an aggregator reactor that aggregates event data for multiple events received during a time interval and produces new events representing the aggregated information. The new events may include event counts, statistics, and/or uniqueness counts related to the aggregated information. For example, the aggregated event data may include total page views, average numbers of requests, minimum RTT, and/or counts of requests for uniquely named resources. Other examples of reactors usable with the techniques described herein include:

Collection Reactors

LogInputReactor: Uses a Codec to read events from log files.
SnifferReactor: Passively sniffs network packets, reassembles TCP, and decrypts SSL/TLS. Protocol plugins allow events to be generated from any type of network traffic.

Processing Reactors

AggregateReactor: Aggregates information across multiple events received during an interval of time. Produces new events representing the aggregated information. Can also store historical information into external database tables and produce real-time reports.
ClickstreamReactor: Sessionizes a stream of HTTP request events (or clickstream hits) by grouping them into page views and sessions. Appends additional session attributes to the request events and produces two new types of events, one each for page views and sessions.
ContentHashReactor: Performs a hashing algorithm on a content field and uses the result to populate another field. This Reactor controls which content is stored in the Stream Replay database.
FilterReactor: Uses configurable rules to detect new events, sequences, or patterns. Delivers events to the reactors it is connected to only when these occur.
FissionReactor: Used to generate multiple events derived from a single source event. Primarily used to extract RSS and Atom content from individual HTTP requests.
PythonReactor: Allows fully-featured Reactors that can create, process, or store events to be built using the Python programming language.
ScriptReactor: Executes a shell script to process each event it receives.
SessionFilterReactor: Uses rules to detect patterns within visitor sessions. Events for a session are queued in memory until a match is found. If a match is found, all the session's events are passed through as output to other Reactors. If no match is found, the events are discarded.
SQLReactor: Uses Database plugins to perform real-time SQL queries derived from the events that it receives. The results of the queries can be used to add additional information to the original event.
TransformReactor: Creates new events which are derived from the events that it receives. This can be used to create entirely new types of complex events (for example, to signify that a pattern has been detected) or to derive new attributes which are based on attributes in existing events (e.g., assign a new attribute of "Internet Explorer" if an existing attribute contains "MSIE").

Storage Reactors

DatabaseOutputReactor: Stores events directly into database tables using Database plugins.
GoogleAnalyticsReactor: Replicates website page tags by delivering real-time clickstream events to Google Analytics using their HTTP interface.
HTTPOutputReactor: Converts incoming events into HTTP requests.
LogOutputReactor: Uses a Codec to store events into log files.
MultiDatabaseReactor: Stores events into a collection of partitioned database tables. Used by Stream Replay to store traffic into an embedded database.
OmnitureAnalyticsReactor: Replicates website page tags by delivering real-time clickstream events to Omniture using their XML/HTTP data insertion API.
UnicaAnalyticsReactor: Replicates website page tags by delivering real-time clickstream events to Unica using their on-demand HTTP API.
WebtrendsReactor: Replicates website page tags by delivering real-time clickstream events to Webtrends Analytics using their On Demand HTTP API.

GUI 800 may thus provide a visual mechanism for configuring event streams that are generated from network traffic. Users may connect graphical representations of event streams and reactors to allow filtering, cleaning, aggregating, transforming, and/or other processing of events in the event streams. Output from the reactors may then be provided to other reactors using connections (e.g., directed edges) specified in GUI 800 for further processing. In addition, selecting (e.g., double-clicking) on stream icons 801-802 may invoke the configuration dialog for the corresponding event stream, which allows users to configure the generation of event data in the event stream. FIG. 9 depicts an example screen shot of an embodiment of a configuration dialog 901 for obtaining configuration information for configuring the generation of event data from network data at one or more remote capture agents. In the illustrated embodiment, configuration dialog 901 includes a section 902 for specifying a descriptive stream name (e.g., "Home Page Requests") and an event type (e.g., "clickstream.http-event") associated with the event stream. Another section 903 may be used to provide terms (e.g., for clickstream data) to be included in event data for the event stream. For example, section 903 may display a list of terms (e.g., "clickstream.c-ip," "clickstream.host," "clickstream.uri-stem") to be included in the event data, as well as a mechanism 904 for adding a new term to the list.
Configuration dialog 901 further includes a section 905 that enables the definition of one or more filtering rules. For example, section 905 may include a filtering rule that requires an exact match between a URI stem of an event and the value "/index.html." Section 905 may also include a mechanism 906 for adding new filtering rules for the event stream.

4.0. Implementation Mechanisms

4.1. Exemplary Systems for Storing and Retrieving Events

As noted above, the visualization techniques described herein can be applied to a variety of types of events, including those generated and used in SPLUNK® ENTERPRISE. Further details of the underlying architecture of SPLUNK® ENTERPRISE are now provided. FIG. 10 depicts an example block diagram of an embodiment of a time-based data storage architecture that includes a late-binding schema. Generally, the system includes one or more forwarders 1010 that collect data from a variety of different data sources 1005 and forward the data to one or more data indexers 1015. In one embodiment, forwarders 1010 and indexers 1015 can be implemented in one or more hardware servers. Moreover, the functionality of one or more forwarders 1010 may be implemented by one or more remote capture agents (e.g., remote capture agents 151-153 of FIG. 1) and/or transformation servers. For example, event data from a set of remote capture agents may be sent over a network to a set of transformation servers and/or reactors (e.g., collection reactors, processing reactors, storage reactors) that implement the indexing, storage, and querying functionality of SPLUNK® ENTERPRISE. The data typically includes streams of time-series data. Time-series data refers to any data that can be associated with a time stamp. The data can be structured, unstructured, or semi-structured and come from files or directories. Unstructured data may be data that is not organized to facilitate extraction of values for fields from the data, as is often the case with machine data and web logs.
The data indexers 1015 may provide the time-stamped data for storage in one or more data stores 1020. FIG. 11 illustrates a flowchart of an example embodiment of a process for storing collected data in a data storage architecture that includes a late-binding schema. FIG. 11 depicts a process that indexers 1015 may use to process, index, and store data received from the forwarders 1010. At operation 1105, an indexer 1015 receives data from a forwarder 1010. At operation 1110, the data is segmented into events. The events can be broken at event boundaries, which can include character combinations and/or line breaks. In some instances, the software discovers event boundaries automatically, and in other instances the event boundaries may be configured by the user. A time stamp is determined for each event at operation 1115. The time stamp can be determined by extracting the time from data in an event or by interpolating the time based on time stamps from other events. In alternative embodiments, a time stamp may be determined from the time the data was received or generated. The time stamp is associated with each event at operation 1120. For example, the time stamp may be stored as metadata for the event. At operation 1125, the data included in a given event may be transformed. Such a transformation can include such things as removing part of an event (e.g., a portion used to define event boundaries) or removing redundant portions of an event. A client data processing system may specify a portion to remove using a regular expression or any similar method. Optionally, a keyword index can be built to facilitate fast keyword searching of events. To build such an index, in operation 1130, a set of keywords contained in the events is identified. At operation 1135, each identified keyword is included in an index, which associates with each stored keyword pointers to each event containing that keyword (or locations within events where that keyword is found).
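The keyword-index construction of operations 1130-1135 can be sketched as follows. This is a minimal illustration; the event and index structures are assumptions:

```python
# Sketch: build an inverted index mapping each keyword to the positions of
# the events that contain it (positions stand in for pointers to events).
def build_keyword_index(events):
    index = {}
    for pos, event in enumerate(events):
        for keyword in set(event.split()):
            index.setdefault(keyword, []).append(pos)
    return index

events = [
    "failed login from 10.1.2.7",
    "successful login from 10.1.2.9",
]
index = build_keyword_index(events)
# A keyword search now consults the index instead of scanning every event.
matches = index.get("login", [])
```

Looking up a keyword in the resulting dictionary returns the matching events directly, which is why such an index accelerates keyword searches.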
When a keyword-based query is received by an indexer, the indexer may then consult this index to quickly find those events containing the keyword without having to examine each individual event again, thereby greatly accelerating keyword searches. The events are stored in a data store at operation1140. The data can be stored in working, short-term and/or long-term memory in a manner retrievable by query. The time stamp may be stored along with each event to help optimize searching the events by time range. In some instances, the data store includes a plurality of individual storage buckets, each corresponding to a time range. An event can then be stored in a bucket associated with a time range inclusive of the event's time stamp. This not only optimizes time-based searches, but it can allow events with recent time stamps that may have a higher likelihood of being accessed to be stored at preferable memory locations that lend to quicker subsequent retrieval (such as flash memory instead of hard-drive memory). As shown inFIG.10, data stores1020may be distributed across multiple indexers, each responsible for storing and searching a subset of the events generated by the system. By distributing the time-based buckets among them, the indexers may find events responsive to a query from a search engine1025in parallel using map-reduce techniques, each returning their partial responses to the query to a search head that combines the results together to answer the query. This query handling is illustrated inFIG.12. FIG.12illustrates a flowchart of an example embodiment of a process for generating a query result in a data storage architecture that includes a late-binding schema. At operation1205, a search head receives a query from a search engine. At operation1210, the search head distributes the query to one or more distributed indexers. These indexers can include those with access to data stores having events responsive to the query.
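The bucketed storage and parallel query handling just described can be sketched as follows: events live in time-range buckets spread across indexers, each indexer computes a partial result over the slice it owns, and the search head combines the partials (operations1205-1220 ofFIG.12, in miniature). The one-hour bucket width, the event shape, and substring keyword matching are assumptions of this sketch, not the actual architecture.

```python
BUCKET_SECONDS = 3600  # one bucket per hour (an assumption of this sketch)

def bucket_id(ts):
    return ts // BUCKET_SECONDS

class Indexer:
    def __init__(self):
        self.buckets = {}

    def store(self, event):
        # Store each event in the bucket covering its time stamp.
        self.buckets.setdefault(bucket_id(event["time"]), []).append(event)

    def search(self, start, end, keyword):
        """Partial result: only buckets overlapping [start, end] are read."""
        hits = []
        for bid in range(bucket_id(start), bucket_id(end) + 1):
            for event in self.buckets.get(bid, []):
                if start <= event["time"] <= end and keyword in event["raw"]:
                    hits.append(event)
        return hits

def search_head(indexers, start, end, keyword):
    """Combine the partial results from every indexer into a final result."""
    final = []
    for indexer in indexers:  # in the real system, this fan-out runs in parallel
        final.extend(indexer.search(start, end, keyword))
    return sorted(final, key=lambda e: e["time"])

idx1, idx2 = Indexer(), Indexer()
idx1.store({"time": 100, "raw": "error disk full"})
idx2.store({"time": 7300, "raw": "error timeout"})
idx2.store({"time": 7400, "raw": "login ok"})

hits = search_head([idx1, idx2], 0, 10800, "error")
# -> the two "error" events, in time order
```

Because each query only opens buckets whose time range overlaps the query window, a search over recent data never touches old buckets at all.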
For example, the indexers can include those with access to events with time stamps within part or all of a time period identified in the query. At operation1215, each of one or more indexers to which the query was distributed searches its data store for events responsive to the query. To determine events responsive to the query, a searching indexer finds events specified by the criteria in the query. These criteria can include that the events have particular keywords or contain a specified value or values for a specified field or fields (because this employs a late-binding schema, extraction of values from events to determine those that meet the specified criteria occurs at the time this query is processed). It should be appreciated that, to achieve high availability and to provide for disaster recovery, events may be replicated in multiple data stores, in which case indexers with access to the redundant events would not respond to the query by processing the redundant events. The indexers1015may either stream the relevant events back to the search head or use the events to calculate a partial result responsive to the query and send the partial result back to the search head. At operation1220, the search head combines all the partial results or events received from the parallel processing together to determine a final result responsive to the query. Data intake and query system145and the processes described with respect toFIGS.10-12are further discussed and elaborated upon in Carasso, David. Exploring Splunk: Search Processing Language (SPL) Primer and Cookbook. New York: CITO Research, 2012, and in Ledion Bitincka, Archana Ganapathi, Stephen Sorkin, and Steve Zhang. Optimizing Data Analysis with a Semi-structured Time Series Database. In SLAML, 2010. Each of these references is hereby incorporated by reference in its entirety for all purposes. 4.2.
Hardware Overview FIG.13depicts an example data processing system upon which the embodiments described herein may be implemented. As shown inFIG.13, the data processing system1301includes a system bus1302, which is coupled to a processor1303, a Read-Only Memory (“ROM”)1307, a Random Access Memory (“RAM”)1305, as well as other nonvolatile memory1306, e.g., a hard drive. In the illustrated embodiment, processor1303is coupled to a cache memory1304. System bus1302can be adapted to interconnect these various components together and also interconnect components1303,1307,1305, and1306to a display controller and display device1308, and to peripheral devices such as input/output (“I/O”) devices1310. Types of I/O devices can include keyboards, modems, network interfaces, printers, scanners, video cameras, or other devices well known in the art. Typically, I/O devices1310are coupled to the system bus1302through I/O controllers1309. In one embodiment, the I/O controller1309includes a Universal Serial Bus (“USB”) adapter for controlling USB peripherals or other type of bus adapter. RAM1305can be implemented as dynamic RAM (“DRAM”), which requires power continually in order to refresh or maintain the data in the memory. The other nonvolatile memory1306can be a magnetic hard drive, magnetic optical drive, optical drive, DVD RAM, or other type of memory system that maintains data after power is removed from the system. WhileFIG.13shows the nonvolatile memory1306as a local device coupled with the rest of the components in the data processing system, it will be appreciated by skilled artisans that the described techniques may use a nonvolatile memory remote from the system, such as a network storage device coupled with the data processing system through a network interface such as a modem or Ethernet interface (not shown). 5.0.
Extensions and Alternatives With these embodiments in mind, it will be apparent from this description that aspects of the described techniques may be embodied, at least in part, in software, hardware, firmware, or any combination thereof. It should also be understood that embodiments can employ various computer-implemented functions involving data stored in a computer system. The techniques may be carried out in a computer system or other data processing system in response to executing sequences of instructions stored in memory. In various embodiments, hardwired circuitry may be used independently or in combination with software instructions to implement these techniques. For instance, the described functionality may be performed by specific hardware components containing hardwired logic for performing operations, or by any combination of custom hardware components and programmed computer components. The techniques described herein are not limited to any specific combination of hardware circuitry and software. Embodiments herein may also be implemented in computer-readable instructions stored on an article of manufacture referred to as a computer-readable medium, which is adapted to store data that can thereafter be read and processed by a computer. Computer-readable media is adapted to store these computer instructions, which when executed by a computer or other data processing system such as data processing system1301, are adapted to cause the system to perform operations according to the techniques described herein. Computer-readable media can include any mechanism that stores information in a form accessible by a data processing device such as a computer, network device, tablet, smartphone, or any device having similar functionality.
Examples of computer-readable media include any type of tangible article of manufacture capable of storing information thereon including floppy disks, hard drive disks (“HDDs”), solid-state devices (“SSDs”) or other flash memory, optical disks, digital video disks (“DVDs”), CD-ROMs, magnetic-optical disks, ROMs, RAMs, erasable programmable read only memory (“EPROMs”), electrically erasable programmable read only memory (“EEPROMs”), magnetic or optical cards, or any other type of media suitable for storing instructions in an electronic format. Computer-readable media can also be distributed over a network-coupled computer system so that the computer-readable instructions are stored and executed in a distributed fashion. Throughout the foregoing description, for the purposes of explanation, numerous specific details were set forth in order to provide a thorough understanding of the invention. It will be apparent, however, to persons skilled in the art that these embodiments may be practiced without some of these specific details. Although various embodiments incorporating the teachings of the present invention have been shown and described in detail herein, those skilled in the art can readily devise many other varied embodiments that still incorporate these techniques. Embodiments of the invention may include various operations as set forth above, fewer operations, or more operations; or operations in an order which is different from the order described herein. Accordingly, the scope and spirit of the invention should be judged in terms of the claims that follow as well as the legal equivalents thereof.
11863409 | DETAILED DESCRIPTION OF THE DISCLOSURE The present disclosure relates to systems and methods for monitoring, analyzing, and improving digital user experience. The systems and methods provide experience monitoring in the context of Software-as-a-Service (SaaS) and the cloud, including end user experience monitoring, network/server/endpoint monitoring, cloud application performance monitoring (e.g., Azure, AWS, GCP), SaaS application performance monitoring (GCP, Office 365, Salesforce, Skype), Voice over Internet Protocol (VOIP) and other real-time application performance monitoring, Web performance monitoring, etc. The systems and methods include a digital experience monitoring platform which does not require new hardware or software in the network. Rather, the digital experience monitoring platform leverages an existing cloud infrastructure, namely a distributed security cloud, lightweight connectors at the edge for access to applications, and an application at endpoints such as user devices. Such components are already in place in Zscaler's distributed security cloud. Also, these components perform inline processing, enabling a real-time collection of data for the digital experience monitoring platform. Advantageously, by leveraging existing infrastructure, the digital experience monitoring platform provides real-time data which can be used for remediation and requires no additional equipment. For example, the digital experience monitoring platform can enable an intelligent path selection in real-time for a user. Thus, the digital experience monitoring platform is proactive, not reactive. Aspects of the digital experience monitoring platform include monitoring Internet traffic, destination monitoring, tunnel monitoring, health monitoring for the cloud, etc. 
This can include endpoint metrics, Service Layer Agreement (SLA) monitoring, Anomaly detection/Security Operations Center (SOC) Integration, topology mapping, packet captures and flow-based monitoring, User Experience (UEX) Score, Infrastructure-as-a-Service (IaaS) monitoring/integration, change monitoring, Autonomous System (AS) monitoring, third-party network monitoring, etc. The objective here is proactive, not reactive, monitoring of end users to detect, as early as possible, issues that impact true user experience and productivity such as to identify root cause of performance issues with actionable insights for remediation. This is performed by correlating user performance in the context of network metrics, application metrics, and endpoint device metrics. § 1.0 Example High-Level System Architecture—Cloud-Based Security System FIG.1is a block diagram of a distributed security system100. The system100may, for example, be implemented as an overlay network in a Wide Area Network (WAN), such as the Internet, a Local Area Network (LAN), or the like. The system100includes Processing Nodes (PN)110, that proactively detect and preclude the distribution of security threats, e.g., malware, spyware, viruses, email spam, Data Loss Prevention (DLP), content filtering, etc., and other undesirable content sent from or requested by an external system. The processing nodes110can also log activity and enforce policies, including logging changes to the various components and settings in the system100. Example external systems may include an enterprise or external system200, a computer device220, and a mobile device230, or other network and computing systems communicatively coupled to the system100including Internet of Things (IoT) devices. 
In an embodiment, each of the processing nodes110may include a decision system, e.g., data inspection engines that operate on a content item, e.g., a web page, a file, an email message, or some other data or data communication that is sent from or requested by one of the external systems. In an embodiment, all data destined for or received from the Internet is processed through one of the processing nodes110. In another embodiment, specific data specified by each external system, e.g., only email, only executable files, etc., is processed through one of the processing nodes110. Each of the processing nodes110may generate a decision vector D=[d1, d2, . . . , dn] for a content item of one or more parts C=[c1, c2, . . . , cm]. Each decision vector may identify a threat classification, e.g., clean, spyware, malware, undesirable content, innocuous, spam email, unknown, etc. For example, the output of each element of the decision vector D may be based on the output of one or more data inspection engines. In an embodiment, the threat classification may be reduced to a subset of categories, e.g., violating, non-violating, neutral, unknown. Based on the subset classification, the processing node110may allow distribution of the content item, preclude distribution of the content item, allow distribution of the content item after a cleaning process, or perform threat detection on the content item. In an embodiment, the actions taken by one of the processing nodes110may be determined by the threat classification of the content item and by a security policy of the external system to which the content item is being sent or from which the content item is being requested. A content item is violating if, for any part C=[c1, c2, . . .
, cm] of the content item, at any of the processing nodes110, any one of the data inspection engines generates an output that results in a classification of “violating.” Each of the processing nodes110may be implemented by one or more computer and communications devices, e.g., server computers, gateways, routers, switches, etc., such as the server300described inFIG.3. In an embodiment, the processing nodes110may serve as an access layer150. The access layer150may, for example, provide external system access to the security system100. In an embodiment, each of the processing nodes110may include Internet gateways and one or more servers, and the processing nodes110may be distributed through a geographic region, e.g., throughout a country, region, campus, etc. According to a service agreement between a provider of the system100and an owner of an external system, the system100may thus provide security protection to the external system at any location throughout the geographic region. Data communications may be monitored by the system100in a variety of ways, depending on the size and data requirements of the external system. For example, an enterprise200may have multiple routers, switches, etc. that are used to communicate over the Internet, and the routers, switches, etc. may be configured to establish communications through the nearest (in traffic communication time, for example) processing node110. A mobile device230may be configured to communicate to the nearest processing node110through any available wireless access device, such as an access point, or a cellular gateway. A single computer device220, such as a consumer's personal computer, may have its browser and email program configured to access the nearest processing node110, which, in turn, serves as a proxy for the computer device220. Alternatively, an Internet provider may have all of its customer traffic processed through the processing nodes110. 
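The decision-vector classification described above can be sketched concretely: each data inspection engine classifies each part c_i of a content item, the per-inspection decisions form the vector D, and a single "violating" decision from any engine makes the whole item violating. The two toy engines below are stand-ins invented for this sketch, not the system's actual inspection engines.

```python
def keyword_engine(part):
    # Hypothetical engine: flags a part containing a known-bad marker.
    return "violating" if "malware" in part else "clean"

def length_engine(part):
    # Hypothetical engine: treats unusually long parts as unknown.
    return "unknown" if len(part) > 50 else "clean"

ENGINES = [keyword_engine, length_engine]

def decision_vector(parts):
    """D = [d1, ..., dn]: one decision per (engine, part) inspection."""
    return [engine(part) for engine in ENGINES for part in parts]

def classify(parts):
    d = decision_vector(parts)
    if "violating" in d:
        return "violating"      # any single violating decision is decisive
    if "unknown" in d:
        return "unknown"        # candidate for further threat detection
    return "non-violating"

verdict = classify(["<html>", "malware.example payload"])  # -> "violating"
```

Reducing the full set of threat classifications to the smaller subset (violating, non-violating, unknown) is what lets the node pick among allow, preclude, clean, or threat-detect actions.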
In an embodiment, the processing nodes110may communicate with one or more authority nodes (AN)120. The authority nodes120may store policy data for each external system and may distribute the policy data to each of the processing nodes110. The policy may, for example, define security policies for a protected system, e.g., security policies for the enterprise200. Example policy data may define access privileges for users, websites and/or content that is disallowed, restricted domains, etc. The authority nodes120may distribute the policy data to the processing nodes110. In an embodiment, the authority nodes120may also distribute threat data that includes the classifications of content items according to threat classifications, e.g., a list of known viruses, a list of known malware sites, spam email domains, a list of known phishing sites, etc. The distribution of threat data between the processing nodes110and the authority nodes120may be implemented by push and pull distribution schemes described in more detail below. In an embodiment, each of the authority nodes120may be implemented by one or more computer and communication devices, e.g., server computers, gateways, switches, etc., such as the server300described inFIG.3. In some embodiments, the authority nodes120may serve as an application layer170. The application layer170may, for example, manage and provide policy data, threat data, and data inspection engines and dictionaries for the processing nodes110. Other application layer functions may also be provided in the application layer170, such as a user interface (UI) front-end130. The user interface front-end130may provide a user interface through which users of the external systems may provide and define security policies, e.g., whether email traffic is to be monitored, whether certain websites are to be precluded, etc. Another application capability that may be provided through the user interface front-end130is security analysis and log reporting. 
The underlying data on which the security analysis and log reporting functions operate are stored in logging nodes (LN)140, which serve as a data logging layer160. Each of the logging nodes140may store data related to security operations and network traffic processed by the processing nodes110for each external system. In an embodiment, the logging node140data may be anonymized so that data identifying an enterprise is removed or obfuscated. For example, identifying data may be removed to provide an overall system summary of security processing for all enterprises and users without revealing the identity of any one account. Alternatively, identifying data may be obfuscated, e.g., provide a random account number each time it is accessed, so that an overall system summary of security processing for all enterprises and users may be broken out by accounts without revealing the identity of any one account. In another embodiment, the identifying data and/or logging node140data may be further encrypted, e.g., so that only the enterprise (or user if a single user account) may have access to the logging node140data for its account. Other processes of anonymizing, obfuscating, or securing logging node140data may also be used. Note, as described herein, the systems and methods for tracking and auditing changes in a multi-tenant cloud system can be implemented in the data logging layer160, for example. In an embodiment, an access agent180may be included in the external systems. For example, the access agent180is deployed in the enterprise200. The access agent180may, for example, facilitate security processing by providing a hash index of files on a client device to one of the processing nodes110, or may facilitate authentication functions with one of the processing nodes110, e.g., by assigning tokens for passwords and sending only the tokens to a processing node so that transmission of passwords beyond the network edge of the enterprise is minimized. 
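The log anonymization options described above can be sketched in two flavors: strip the identifying field entirely for an all-enterprise summary, or obfuscate it so per-account breakdowns remain possible without revealing the account. The salted-hash pseudonym used for obfuscation here is an assumption of this sketch (the text also mentions random account numbers as one option), and the record fields are illustrative.

```python
import hashlib

def anonymize(record):
    """Remove identifying data for an overall system summary."""
    return {k: v for k, v in record.items() if k != "enterprise_id"}

def obfuscate(record, salt):
    """Replace the identifier with a stable pseudonym so per-account rollups still line up."""
    out = dict(record)
    token = hashlib.sha256((salt + record["enterprise_id"]).encode()).hexdigest()
    out["enterprise_id"] = token[:12]
    return out

log = {"enterprise_id": "acme-corp", "action": "policy_change", "node": "PN-7"}
a = obfuscate(log, salt="s3cret")
b = obfuscate(log, salt="s3cret")
# Same input and salt yield the same pseudonym, so counts per account stay
# consistent, but the original identifier never appears in the stored log.
```

Encrypting the identifier instead, as the text also contemplates, would additionally let the enterprise itself recover its own records.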
Other functions and processes may also be facilitated by the access agent180. In an embodiment, the processing node110may act as a forward proxy that receives user requests to external servers addressed directly to the processing node110. In another embodiment, the processing node110may access user requests that are passed through the processing node110in a transparent mode. A protected system, e.g., enterprise200, may, for example, choose one or both of these modes. For example, a browser may be configured either manually or through the access agent180to access the processing node110in a forward proxy mode. In the forward proxy mode, all accesses are addressed to the processing node110. In an embodiment, an enterprise gateway may be configured so that user requests are routed through the processing node110by establishing a communication tunnel between the enterprise gateway and the processing node110. For establishing the tunnel, existing protocols such as generic routing encapsulation (GRE), layer two tunneling protocol (L2TP), Internet Protocol Security (IPSec), Datagram Transport Layer Security (DTLS), or other tunneling and encapsulation techniques designed for an Internet Protocol (IP)-based underlay data plane may be used. In another embodiment, the processing nodes110may be deployed at Internet service provider (ISP) nodes. The ISP nodes may redirect subject traffic to the processing nodes110in a transparent proxy mode. Protected systems, such as the enterprise200, may use a multiprotocol label switching (MPLS) class of service for indicating the subject traffic that is to be redirected. For example, within the enterprise, the access agent180may be configured to perform MPLS labeling. In another transparent proxy mode embodiment, a protected system, such as the enterprise200, may identify the processing node110as a next hop router for communication with the external servers.
Generally, the distributed security system100may generally refer to a cloud-based security system. Other cloud-based security systems and generalized cloud-based systems are contemplated for the systems and methods for tracking and auditing changes in a multi-tenant cloud system. Cloud computing systems and methods abstract away physical servers, storage, networking, etc. and instead offer these as on-demand and elastic resources. The National Institute of Standards and Technology (NIST) provides a concise and specific definition which states cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. Cloud computing differs from the classic client-server model by providing applications from a server that are executed and managed by a client's device, with no installed client version of an application required. Centralization gives cloud service providers complete control over the versions of the browser-based applications provided to clients, which removes the need for version upgrades or license management on individual client computing devices. The phrase “Software as a Service” (SaaS) is sometimes used to describe application programs offered through cloud computing. A common shorthand for a provided cloud computing service (or even an aggregation of all existing cloud services) is “the cloud.” The distributed security system100is illustrated herein as one embodiment of a cloud-based system, and those of ordinary skill in the art will recognize the tracking and auditing systems and methods contemplate operation on any cloud-based system. 
An example of the distributed security system100is the Zscaler cloud where the processing nodes110are referred to as Zscaler Enforcement Nodes (ZEN) and the authority nodes120are referred to as Central Authority (CA) nodes. In a practical embodiment, there can be many more processing nodes110relative to the authority nodes120. § 2.0 Example Detailed System Architecture and Operation FIG.2is a block diagram of various components of the distributed security system100in more detail. AlthoughFIG.2illustrates only one representative component processing node110, authority node120and logging node140, those of ordinary skill in the art will appreciate there may be many of each of the component nodes110,120and140present in the system100. A wide area network (WAN)101, such as the Internet, or some other combination of wired and/or wireless networks, communicatively couples the processing node110, the authority node120, and the logging node140to one another. The external systems200,220and230likewise communicate over the WAN101with each other or other data providers and publishers. Some or all of the data communication of each of the external systems200,220and230may be processed through the processing node110. FIG.2also shows the enterprise200in more detail. The enterprise200may, for example, include a firewall (FW)202protecting an internal network that may include one or more enterprise servers216, a Lightweight Directory Access Protocol (LDAP) server212, and other data or data stores214. Another firewall203may protect an enterprise subnet that can include user computers206and208(e.g., laptop and desktop computers). The enterprise200may communicate with the WAN101through one or more network devices, such as a router, gateway, switch, etc. The LDAP server212may store, for example, user login credentials for registered users of the enterprise200system. Such credentials may include user identifiers, login passwords, and a login history associated with each user identifier. 
The other data stores214may include sensitive information, such as bank records, medical records, trade secret information, or any other information warranting protection by one or more security measures. In an embodiment, a client access agent180amay be included on a client computer206. The client access agent180amay, for example, facilitate security processing by providing a hash index of files on the user computer206to a processing node110for malware, virus detection, etc. Other security operations may also be facilitated by the access agent180a. In another embodiment, a server access agent180bmay facilitate authentication functions with the processing node110, e.g., by assigning tokens for passwords and sending only the tokens to the processing node110so that transmission of passwords beyond the network edge of the enterprise200is minimized. Other functions and processes may also be facilitated by the server access agent180b. The computer device220and the mobile device230may also store information warranting security measures, such as personal bank records, medical information, and login information, e.g., login information to the computers206of the enterprise200, to a server216of the enterprise200, or to some other secure data provider server. § 2.1 Example Processing Node Architecture In an embodiment, the processing nodes110are external to network edges of the external systems200,220and230. Each of the processing nodes110stores security policy data113received from the authority node120and monitors content items requested by or sent from the external systems200,220and230.
In an embodiment, each of the processing nodes110may also store a detection process filter112and/or threat data114to facilitate the decision of whether a content item should be processed for threat detection. A processing node manager118may manage each content item in accordance with the security policy data113, and the detection process filter112and/or threat data114, if stored at the processing node110, so that security policies for a plurality of external systems in data communication with the processing node110are implemented external to the network edges for each of the external systems200,220and230. For example, depending on the classification resulting from the monitoring, the content item may be allowed, precluded, or threat detected. In general, content items that are already classified as “clean” or not posing a threat can be allowed, while those classified as “violating” may be precluded. Those content items having an unknown status, e.g., content items that have not been processed by the system100, may be threat detected to classify the content item according to threat classifications. The processing node110may include a state manager116A. The state manager116A may be used to maintain the authentication and the authorization states of users that submit requests to the processing node110. Maintenance of the states through the state manager116A may minimize the number of authentication and authorization transactions that are necessary to process a request. The processing node110may also include an epoch processor116B. The epoch processor116B may be used to analyze authentication data that originated at the authority node120. The epoch processor116B may use an epoch ID to validate further the authenticity of authentication data. The processing node110may further include a source processor116C. The source processor116C may be used to verify the source of authorization and authentication data. 
The source processor116C may identify improperly obtained authorization and authentication data, enhancing the security of the network. Collectively, the state manager116A, the epoch processor116B, and the source processor116C operate as data inspection engines. Because the amount of data being processed by the processing nodes110may be substantial, the detection processing filter112may be used as the first stage of an information lookup procedure. For example, the detection processing filter112may be used as a front-end to a look-up of the threat data114. Content items may be mapped to index values of the detection processing filter112by a hash function that operates on an information key derived from the information item. The information key is hashed to generate an index value (i.e., a bit position). A value of zero in a bit position in the guard table can indicate, for example, the absence of information, while a one in that bit position can indicate the presence of information. Alternatively, a one could be used to represent absence, and a zero to represent presence. Each content item may have an information key that is hashed. For example, the processing node manager118may identify the Uniform Resource Locator (URL) address of URL requests as the information key and hash the URL address; or may identify the file name and the file size of an executable file information key and hash the file name and file size of the executable file. Hashing an information key to generate an index and checking a bit value at the index in the detection processing filter112generally requires less processing time than actually searching threat data114. The use of the detection processing filter112may improve the failure query (i.e., responding to a request for absent information) performance of database queries and/or any general information queries. 
Because data structures are generally optimized to access information that is present in the structures, failure query performance has a greater effect on the time required to process information searches for very rarely occurring items, e.g., the presence of file information in a virus scan log or a cache where many or most of the files transferred in a network have not been scanned or cached. However, the worst-case additional cost of using the detection processing filter112is only on the order of one, and thus its use for most failure queries saves on the order of m log m, where m is the number of information records present in the threat data114. The detection processing filter112thus improves the performance of queries where the answer to a request for information is usually positive. Such instances may include, for example, whether a given file has been virus scanned, whether content at a given URL has been scanned for inappropriate (e.g., pornographic) content, whether a given fingerprint matches any of a set of stored documents, and whether a checksum corresponds to any of a set of stored documents. Thus, if the detection processing filter112indicates that the content item has not been processed, then a worst-case null lookup operation into the threat data114is avoided, and a threat detection can be implemented immediately. The detection processing filter112thus complements the threat data114that capture positive information. In an embodiment, the detection processing filter112may be a Bloom filter implemented by a single hash function. The Bloom filter may be a sparse table, i.e., the table includes many zeros and few ones, and the hash function is chosen to minimize or eliminate false negatives, which are, for example, instances where an information key is hashed to a bit position, and that bit position indicates that the requested information is absent when it is actually present.
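The single-hash, Bloom-style guard table described above can be sketched as follows: hash an information key (e.g., a URL, or a file name plus file size) to a bit position; a 0 bit proves the item was never processed, so the expensive lookup into the threat data can be skipped. The table size and SHA-256-based hash are assumptions of this sketch, not the system's actual hash function.

```python
import hashlib

TABLE_BITS = 1024  # table size is an illustrative assumption

def bit_position(info_key):
    # Single hash function mapping an information key to an index (bit position).
    digest = hashlib.sha256(info_key.encode()).digest()
    return int.from_bytes(digest[:4], "big") % TABLE_BITS

def mark_processed(table, info_key):
    table[bit_position(info_key)] = 1

def maybe_processed(table, info_key):
    """False means definitely not processed (skip the threat-data lookup);
    True means the item may have been processed, so consult threat data."""
    return table[bit_position(info_key)] == 1

guard = [0] * TABLE_BITS
mark_processed(guard, "http://example.com/page")   # URL as information key
mark_processed(guard, "setup.exe|482133")          # file name + file size as key

present = maybe_processed(guard, "http://example.com/page")  # no false negatives
unseen = maybe_processed(guard, "http://never-seen.example/x")
# "unseen" is almost always False, avoiding the worst-case null lookup,
# though a hash collision can occasionally make it True (a false positive).
```

Note the asymmetry this sketch illustrates: the filter can return a false positive (a collision sends the query on to the threat data), but never a false negative, which is exactly the property the text requires of the hash function.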
§ 2.2 Example Authority Node Architecture

In general, the authority node120includes a data store that stores master security policy data123for each of the external systems200,220and230. An authority node manager128may be used to manage the master security policy data123, e.g., receive input from users of each of the external systems defining different security policies, and may distribute the master security policy data123to each of the processing nodes110. The processing nodes110then store a local copy of the security policy data113. The authority node120may also store a master detection process filter122. The detection processing filter122may include data indicating whether content items have been processed by one or more of the data inspection engines116in any of the processing nodes110. The authority node manager128may be used to manage the master detection processing filter122, e.g., receive updates from processing nodes110when the processing node110has processed a content item and update the master detection processing filter122. For example, the master detection processing filter122may be distributed to the processing nodes110, which then store a local copy of the detection processing filter112. In an embodiment, the authority node120may include an epoch manager126. The epoch manager126may be used to generate authentication data associated with an epoch ID. The epoch ID of the authentication data is a verifiable attribute of the authentication data that can be used to identify fraudulently created authentication data. In an embodiment, the detection processing filter122may be a guard table. The processing node110may, for example, use the information in the local detection processing filter112to quickly determine the presence and/or absence of information, e.g., whether a particular URL has been checked for malware; whether a particular executable has been virus scanned, etc. The authority node120may also store master threat data124.
The master threat data124may classify content items by threat classifications, e.g., a list of known viruses, a list of known malware sites, spam email domains, a list of known or detected phishing sites, etc. The authority node manager128may be used to manage the master threat data124, e.g., receive updates from the processing nodes110when one of the processing nodes110has processed a content item and update the master threat data124with any pertinent results. In some implementations, the master threat data124may be distributed to the processing nodes110, which then store a local copy of the threat data114. In another embodiment, the authority node120may also monitor the health of each of the processing nodes110, e.g., the resource availability in each of the processing nodes110, detection of link failures, etc. Based on the observed health of each of the processing nodes110, the authority node120may redirect traffic among the processing nodes110and/or balance traffic among the processing nodes110. Other remedial actions and processes may also be facilitated by the authority node120.

§ 2.3 Example Processing Node and Authority Node Communications

The processing node110and the authority node120may be configured according to one or more push and pull processes to manage content items according to security policy data113and/or123, detection process filters112and/or122, and the threat data114and/or124. In a threat data push implementation, each of the processing nodes110stores policy data113and threat data114. The processing node manager118determines whether a content item requested by or transmitted from an external system is classified by the threat data114. If the content item is determined to be classified by the threat data114, then the processing node manager118may manage the content item according to the security classification of the content item and the security policy of the external system.
If, however, the content item is determined not to be classified by the threat data114, then the processing node manager118may cause one or more of the data inspection engines116to perform the threat detection processes to classify the content item according to a threat classification. Once the content item is classified, the processing node manager118generates a threat data update that includes data indicating the threat classification for the content item from the threat detection process and transmits the threat data update to an authority node120. The authority node manager128, in response to receiving the threat data update, updates the master threat data124stored in the authority node data store according to the threat data update received from the processing node110. In an embodiment, the authority node manager128may automatically transmit the updated threat data to the other processing nodes110. Accordingly, threat data for new threats is automatically distributed to each processing node110as the new threats are encountered. Upon receiving the new threat data from the authority node120, each of the processing node managers118may store the updated threat data in the locally stored threat data114. In a threat data pull and push implementation, each of the processing nodes110stores policy data113and threat data114. The processing node manager118determines whether a content item requested by or transmitted from an external system is classified by the threat data114. If the content item is determined to be classified by the threat data114, then the processing node manager118may manage the content item according to the security classification of the content item and the security policy of the external system. If, however, the content item is determined not to be classified by the threat data, then the processing node manager118may request responsive threat data for the content item from the authority node120.
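The threat data push flow just described can be condensed into a sketch. The class names, the dictionary-based threat data, and the callable standing in for the data inspection engines are illustrative assumptions; the actual nodes exchange far richer updates over a network.

```python
# Hypothetical sketch of the threat data push implementation: a
# processing node classifies unknown content, reports the result to the
# authority node, and the authority pushes the update to every node.

class AuthorityNode:
    def __init__(self):
        self.master_threat_data = {}   # content key -> classification
        self.processing_nodes = []

    def receive_threat_update(self, key, classification):
        # Update the master threat data, then push it to all nodes.
        self.master_threat_data[key] = classification
        for node in self.processing_nodes:
            node.threat_data[key] = classification


class ProcessingNode:
    def __init__(self, authority, inspect):
        self.threat_data = {}          # local copy of threat data
        self.authority = authority
        self.inspect = inspect         # stand-in data inspection engine
        authority.processing_nodes.append(self)

    def handle(self, key):
        if key not in self.threat_data:
            # Not classified locally: run the inspection engines, then
            # report so the authority can distribute the classification.
            classification = self.inspect(key)
            self.authority.receive_threat_update(key, classification)
        return self.threat_data[key]
```

Once one node has classified a content item, every other node's local threat data already contains the result, which is the point of the push model.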
Because processing a content item may consume valuable resources and time, in some implementations the processing node110may first check with the authority node120for threat data114before committing such processing resources. The authority node manager128may receive the responsive threat data request from the processing node110and may determine if the responsive threat data is stored in the authority node data store. If responsive threat data is stored in the master threat data124, then the authority node manager128provides a reply that includes the responsive threat data to the processing node110so that the processing node manager118may manage the content item in accordance with the security policy data113and the classification of the content item. Conversely, if the authority node manager128determines that responsive threat data is not stored in the master threat data124, then the authority node manager128may provide a reply that does not include the responsive threat data to the processing node110. In response, the processing node manager118can cause one or more of the data inspection engines116to perform the threat detection processes to classify the content item according to a threat classification. Once the content item is classified, the processing node manager118generates a threat data update that includes data indicating the threat classification for the content item from the threat detection process and transmits the threat data update to an authority node120. The authority node manager128can then update the master threat data124. Thereafter, any future requests related to responsive threat data for the content item from other processing nodes110can be readily served with responsive threat data. In a detection process filter and threat data push implementation, each of the processing nodes110stores a detection process filter112, policy data113, and threat data114.
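The pull-and-push exchange described above can be sketched as follows. The function signature and the dictionary stand-ins for the local and master threat data are assumptions for illustration only.

```python
# Hypothetical sketch of the threat data pull flow: before spending
# inspection resources, the processing node asks the authority node
# whether responsive threat data already exists; only on a miss does it
# inspect locally and push the result back.

def handle_with_pull(node_threat_data, master_threat_data, inspect, key):
    if key in node_threat_data:
        return node_threat_data[key]
    # Pull: request responsive threat data from the authority node.
    classification = master_threat_data.get(key)
    if classification is None:
        # Not known anywhere: run the inspection engines locally,
        # then push the update so future requests are served directly.
        classification = inspect(key)
        master_threat_data[key] = classification
    node_threat_data[key] = classification
    return classification
```

The pull step is what lets the node avoid committing inspection resources to content another node has already classified.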
The processing node manager118accesses the detection process filter112to determine whether the content item has been processed. If the processing node manager118determines that the content item has been processed, it may determine if the content item is classified by the threat data114. Because the detection process filter112has the potential for a false positive, a lookup in the threat data114may be implemented to ensure that a false positive has not occurred. The initial check of the detection process filter112, however, may eliminate many null queries to the threat data114, which, in turn, conserves system resources and increases efficiency. If the content item is classified by the threat data114, then the processing node manager118may manage the content item in accordance with the security policy data113and the classification of the content item. Conversely, if the processing node manager118determines that the content item is not classified by the threat data114, or if the processing node manager118initially determines through the detection process filter112that the content item has not been processed, then the processing node manager118may cause one or more of the data inspection engines116to perform the threat detection processes to classify the content item according to a threat classification. Once the content item is classified, the processing node manager118generates a threat data update that includes data indicating the threat classification for the content item from the threat detection process and transmits the threat data update to one of the authority nodes120. The authority node manager128, in turn, may update the master threat data124and the master detection process filter122stored in the authority node data store according to the threat data update received from the processing node110.
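The filter-then-threat-data check just described can be condensed into a sketch. Here a Python set stands in for the bit table and a dictionary for the threat data; both are simplifying assumptions, and with a real bit table a filter hit could be a false positive, which is exactly why the confirming threat-data lookup is kept.

```python
# Hypothetical sketch of the combined detection-process-filter and
# threat-data lookup: the filter screens out content that has definitely
# not been processed, and a filter hit is confirmed against the threat
# data before the stored classification is trusted.

def classify(filter_bits, threat_data, inspect, key):
    """Return (classification, inspected) for a content item."""
    if key in filter_bits and key in threat_data:
        # Filter hit confirmed by the threat data: reuse the result.
        return threat_data[key], False
    # Filter miss (or a false positive not backed by threat data):
    # run the inspection engines and record the result in both stores.
    classification = inspect(key)
    threat_data[key] = classification
    filter_bits.add(key)
    return classification, True
```

The second element of the return value makes it easy to see, in the test below, that the expensive inspection runs only once per content item.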
In an embodiment, the authority node manager128may automatically transmit the updated threat data and detection processing filter to other processing nodes110. Accordingly, threat data and the detection processing filter for new threats are automatically distributed to each processing node110as the new threats are encountered, and each processing node110may update its local copy of the detection processing filter112and threat data114. In a detection process filter and threat data pull and push implementation, each of the processing nodes110stores a detection process filter112, policy data113, and threat data114. The processing node manager118accesses the detection process filter112to determine whether the content item has been processed. If the processing node manager118determines that the content item has been processed, it may determine if the content item is classified by the threat data114. Because the detection process filter112has the potential for a false positive, a lookup in the threat data114can be implemented to ensure that a false positive has not occurred. The initial check of the detection process filter112, however, may eliminate many null queries to the threat data114, which, in turn, conserves system resources and increases efficiency. If the processing node manager118determines that the content item has not been processed, it may request responsive threat data for the content item from the authority node120. Because processing a content item may consume valuable resources and time, in some implementations the processing node110may first check with the authority node120for threat data114before committing such processing resources. The authority node manager128may receive the responsive threat data request from the processing node110and may determine if the responsive threat data is stored in the authority node120data store.
If responsive threat data is stored in the master threat data124, then the authority node manager128provides a reply that includes the responsive threat data to the processing node110so that the processing node manager118can manage the content item in accordance with the security policy data113and the classification of the content item, and further update the local detection processing filter112. Conversely, if the authority node manager128determines that responsive threat data is not stored in the master threat data124, then the authority node manager128may provide a reply that does not include the responsive threat data to the processing node110. In response, the processing node manager118may cause one or more of the data inspection engines116to perform the threat detection processes to classify the content item according to a threat classification. Once the content item is classified, the processing node manager118generates a threat data update that includes data indicating the threat classification for the content item from the threat detection process and transmits the threat data update to an authority node120. The authority node manager128may then update the master threat data124. Thereafter, any future requests related to responsive threat data for the content item from other processing nodes110can be readily served with responsive threat data. The various push and pull data exchange processes provided above are example processes by which the threat data and/or detection process filters may be updated in the system100ofFIGS.1and2. Other update processes, however, are contemplated herein. The data inspection engines116, processing node manager118, authority node manager128, user interface manager132, logging node manager148, and authority agent180may be realized by instructions that upon execution cause one or more processing devices to carry out the processes and functions described above.
Such instructions can, for example, include interpreted instructions, such as script instructions, e.g., JavaScript or ECMAScript instructions, or executable code, or other instructions stored in a non-transitory computer-readable medium. Other processing architectures can also be used, e.g., a combination of specially designed hardware and software, for example.

§ 3.0 Example Server Architecture

FIG.3is a block diagram of a server300which may be used in the system100, in other systems, or standalone. Any of the processing nodes110, the authority nodes120, and the logging nodes140may be formed through one or more servers300. Further, the computer device220, the mobile device230, the servers208,216, etc. may include the server300or similar structure. The server300may be a digital computer that, in terms of hardware architecture, generally includes a processor302, input/output (I/O) interfaces304, a network interface306, a data store308, and memory310. It should be appreciated by those of ordinary skill in the art thatFIG.3depicts the server300in an oversimplified manner, and a practical embodiment may include additional components and suitably configured processing logic to support known or conventional operating features that are not described in detail herein. The components (302,304,306,308, and310) are communicatively coupled via a local interface312. The local interface312may be, for example, but not limited to, one or more buses or other wired or wireless connections, as is known in the art. The local interface312may have additional elements, which are omitted for simplicity, such as controllers, buffers (caches), drivers, repeaters, and receivers, among many others, to enable communications. Further, the local interface312may include address, control, and/or data connections to enable appropriate communications among the aforementioned components. The processor302is a hardware device for executing software instructions.
The processor302may be any custom made or commercially available processor, a central processing unit (CPU), an auxiliary processor among several processors associated with the server300, a semiconductor-based microprocessor (in the form of a microchip or chip set), or generally any device for executing software instructions. When the server300is in operation, the processor302is configured to execute software stored within the memory310, to communicate data to and from the memory310, and to generally control operations of the server300pursuant to the software instructions. The I/O interfaces304may be used to receive user input from and/or for providing system output to one or more devices or components. User input may be provided via, for example, a keyboard, touchpad, and/or a mouse. System output may be provided via a display device and a printer (not shown). I/O interfaces304may include, for example, a serial port, a parallel port, a small computer system interface (SCSI), a serial ATA (SATA), a Fibre Channel, InfiniBand, iSCSI, a PCI Express interface (PCI-x), an infrared (IR) interface, a radio frequency (RF) interface, and/or a universal serial bus (USB) interface. The network interface306may be used to enable the server300to communicate over a network, such as the Internet, the WAN101, the enterprise200, and the like. The network interface306may include, for example, an Ethernet card or adapter (e.g., 10BaseT, Fast Ethernet, Gigabit Ethernet, 10 GbE) or a wireless local area network (WLAN) card or adapter (e.g., 802.11a/b/g/n). The network interface306may include address, control, and/or data connections to enable appropriate communications on the network. A data store308may be used to store data. The data store308may include any of volatile memory elements (e.g., random access memory (RAM, such as DRAM, SRAM, SDRAM, and the like)), nonvolatile memory elements (e.g., ROM, hard drive, tape, CDROM, and the like), and combinations thereof.
Moreover, the data store308may incorporate electronic, magnetic, optical, and/or other types of storage media. In one example, the data store308may be located internal to the server300such as, for example, an internal hard drive connected to the local interface312in the server300. Additionally, in another embodiment, the data store308may be located external to the server300such as, for example, an external hard drive connected to the I/O interfaces304(e.g., SCSI or USB connection). In a further embodiment, the data store308may be connected to the server300through a network, such as, for example, a network attached file server. The memory310may include any of volatile memory elements (e.g., random access memory (RAM, such as DRAM, SRAM, SDRAM, etc.)), nonvolatile memory elements (e.g., ROM, hard drive, tape, CDROM, etc.), and combinations thereof. Moreover, the memory310may incorporate electronic, magnetic, optical, and/or other types of storage media. Note that the memory310may have a distributed architecture, where various components are situated remotely from one another, but can be accessed by the processor302. The software in memory310may include one or more software programs, each of which includes an ordered listing of executable instructions for implementing logical functions. The software in the memory310includes a suitable operating system (O/S)314and one or more programs316. The operating system314essentially controls the execution of other computer programs, such as the one or more programs316, and provides scheduling, input-output control, file and data management, memory management, and communication control and related services. The one or more programs316may be configured to implement the various processes, algorithms, methods, techniques, etc. described herein.

§ 4.0 Example Mobile Device Architecture

FIG.4is a block diagram of a mobile device400, which may be used in the system100or the like.
The mobile device400can be a digital device that, in terms of hardware architecture, generally includes a processor402, input/output (I/O) interfaces404, a radio406, a data store408, and memory410. It should be appreciated by those of ordinary skill in the art thatFIG.4depicts the mobile device400in an oversimplified manner, and a practical embodiment may include additional components and suitably configured processing logic to support known or conventional operating features that are not described in detail herein. The components (402,404,406,408, and410) are communicatively coupled via a local interface412. The local interface412can be, for example, but not limited to, one or more buses or other wired or wireless connections, as is known in the art. The local interface412can have additional elements, which are omitted for simplicity, such as controllers, buffers (caches), drivers, repeaters, and receivers, among many others, to enable communications. Further, the local interface412may include address, control, and/or data connections to enable appropriate communications among the aforementioned components. The processor402is a hardware device for executing software instructions. The processor402can be any custom made or commercially available processor, a central processing unit (CPU), an auxiliary processor among several processors associated with the mobile device400, a semiconductor-based microprocessor (in the form of a microchip or chip set), or generally any device for executing software instructions. When the mobile device400is in operation, the processor402is configured to execute software stored within the memory410, to communicate data to and from the memory410, and to generally control operations of the mobile device400pursuant to the software instructions. In an embodiment, the processor402may include an optimized mobile processor such as one optimized for power consumption and mobile applications.
The I/O interfaces404can be used to receive user input from and/or for providing system output. User input can be provided via, for example, a keypad, a touch screen, a scroll ball, a scroll bar, buttons, barcode scanner, and the like. System output can be provided via a display device such as a liquid crystal display (LCD), touch screen, and the like. The I/O interfaces404can also include, for example, a serial port, a parallel port, a small computer system interface (SCSI), an infrared (IR) interface, a radio frequency (RF) interface, a universal serial bus (USB) interface, and the like. The I/O interfaces404can include a graphical user interface (GUI) that enables a user to interact with the mobile device400. Additionally, the I/O interfaces404may include an imaging device, i.e., a camera, video camera, etc. The radio406enables wireless communication to an external access device or network. Any number of suitable wireless data communication protocols, techniques, or methodologies can be supported by the radio406, including, without limitation: RF; IrDA (infrared); Bluetooth; ZigBee (and other variants of the IEEE 802.15 protocol); IEEE 802.11 (any variation); IEEE 802.16 (WiMAX or any other variation); Direct Sequence Spread Spectrum; Frequency Hopping Spread Spectrum; Long Term Evolution (LTE); cellular/wireless/cordless telecommunication protocols (e.g., 3G/4G, etc.); wireless home network communication protocols; paging network protocols; magnetic induction; satellite data communication protocols; wireless hospital or health care facility network protocols such as those operating in the WMTS bands; GPRS; proprietary wireless data communication protocols such as variants of Wireless USB; and any other protocols for wireless communication. The data store408may be used to store data.
The data store408may include any of volatile memory elements (e.g., random access memory (RAM, such as DRAM, SRAM, SDRAM, and the like)), nonvolatile memory elements (e.g., ROM, hard drive, tape, CDROM, and the like), and combinations thereof. Moreover, the data store408may incorporate electronic, magnetic, optical, and/or other types of storage media. The memory410may include any of volatile memory elements (e.g., random access memory (RAM, such as DRAM, SRAM, SDRAM, etc.)), nonvolatile memory elements (e.g., ROM, hard drive, etc.), and combinations thereof. Moreover, the memory410may incorporate electronic, magnetic, optical, and/or other types of storage media. Note that the memory410may have a distributed architecture, where various components are situated remotely from one another, but can be accessed by the processor402. The software in memory410can include one or more software programs, each of which includes an ordered listing of executable instructions for implementing logical functions. In the example ofFIG.4, the software in the memory410includes a suitable operating system (O/S)414and programs416. The operating system414essentially controls the execution of other computer programs and provides scheduling, input-output control, file and data management, memory management, and communication control and related services. The programs416may include various applications, add-ons, etc. configured to provide end-user functionality with the mobile device400. For example, the programs416may include, but are not limited to, a web browser, social networking applications, streaming media applications, games, mapping and location applications, electronic mail applications, financial applications, and the like. In a typical example, the end user uses one or more of the programs416along with a network such as the system100.
§ 5.0 Example General Cloud System

FIG.5is a block diagram of a cloud system500for implementing the systems and methods described herein. The cloud system500includes one or more cloud nodes (CN)502communicatively coupled to the Internet504. The cloud nodes502may include the processing nodes110, the server300, or the like. That is, the cloud system500may include the distributed security system100or another implementation of a cloud-based system, such as a system providing functionality other than security. In the cloud system500, traffic from various locations (and various devices located therein) such as a regional office510, headquarters520, various employees' homes530, mobile laptop540, and mobile device542communicates to the cloud through the cloud nodes502. That is, each of the locations510,520,530,540,542is communicatively coupled to the Internet504through the cloud nodes502. For security, the cloud system500may be configured to perform various functions such as spam filtering, uniform resource locator (URL) filtering, antivirus protection, bandwidth control, data loss prevention, zero-day vulnerability protection, web 2.0 features, and the like. In an embodiment, the cloud system500and the distributed security system100may be viewed as Security-as-a-Service through the cloud. In general, the cloud system500can be configured to perform any function in a multi-tenant environment. For example, the cloud system500can provide content, collaboration between users, storage, application hosting, and the like. In conjunction with the cloud system500and/or the distributed security system100, various techniques can be used for monitoring, which can be described on a sliding scale from always inline to never inline. First, in an always inline manner, all user traffic is between inline proxies such as the processing nodes110or the cloud nodes502without exception.
Second, in a somewhat always inline manner, all user traffic except for certain business partners or third parties is between inline proxies such as the processing nodes110or the cloud nodes502. Third, in an inline manner for most traffic, high bandwidth applications can be configured to bypass the inline proxies such as the processing nodes110or the cloud nodes502. Example high bandwidth applications can include content streaming such as video (e.g., Netflix, Hulu, YouTube, etc.) or audio (e.g., Pandora, etc.). Fourth, in a mixed manner, inline monitoring can be used for “interesting” traffic as determined by security policy with other traffic being direct. Fifth, in an almost never inline manner, simple domain-level URL filtering can be used to determine what is monitored inline.

§ 6.0 Unified Agent Application

FIG.6is a network diagram of a unified agent application600and associated connectivity and functionality in a security cloud602. The unified agent application600is executed on a mobile device604. The unified agent application600dynamically learns all available services, adapts to changing network environments, and provides seamless and secure network resource access to Internet- and darknet-hosted applications. This is achieved through dynamic evaluation of network conditions, enrollment to individual services, learning individual service protocols, creating a link-local network on the device604, and establishing multiple secure tunnels to cloud services over this local network. The unified agent application600is communicatively coupled to an agent manager cloud606and a security cloud608. Note that the security cloud608can be the distributed security system100, the cloud system500, etc. The unified agent application600enables communication to enterprise private resources612via the security cloud608and to the Internet504via the security cloud608.
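The five inline-monitoring modes enumerated in § 5.0 above amount to a per-flow routing decision, which can be sketched as follows. The mode names, the flow attributes, and the application list are illustrative assumptions, not part of the described system.

```python
# Hypothetical sketch of the sliding-scale inline-monitoring policy:
# each mode decides whether a given flow goes through an inline proxy
# (a processing node or cloud node) or direct.

HIGH_BANDWIDTH_APPS = {"netflix", "hulu", "youtube", "pandora"}


def route_inline(mode, app, is_partner=False, interesting=False,
                 url_blocked=False):
    """Return True if the flow should traverse an inline proxy."""
    if mode == "always":
        return True                    # all traffic inline, no exception
    if mode == "almost_always":
        return not is_partner          # partners/third parties bypass
    if mode == "most":
        return app not in HIGH_BANDWIDTH_APPS  # streaming bypasses
    if mode == "mixed":
        return interesting             # only policy-flagged traffic
    if mode == "almost_never":
        return url_blocked             # simple domain-level URL filtering
    raise ValueError(f"unknown mode: {mode}")
```

Each branch corresponds to one of the five enumerated modes, from always inline down to almost never inline.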
The agent manager cloud606can communicate with enterprise asset management614, an enterprise Security Assertion Markup Language (SAML) Identity provider (IDP)616, and an enterprise Certificate Authority (CA)618. The device604and the unified agent application600can perform a registration/identity620process through the agent manager cloud606where the user identity, the user's certificates, and a device fingerprint can uniquely identify the device604. Once registered, the unified agent application600has an identity622which can include the user, certificates, device posture, etc., and which is shared with the security cloud608. The unified agent application600operates on a client-server model where an IT admin enables appropriate services for end users at a Cloud Administration Server (CAS) which can be part of an agent manager cloud606, namely the enterprise asset management614. Every client can make a unicast request to the agent manager cloud606(e.g., CAS) to discover all enabled services. On acknowledging the response, the client issues a request to authenticate to each service's cloud Identity Providers, the enterprise SAML IDP616. Authentication can be multi-factor depending upon the nature of the service. On successful authentication, the server contacts a Mobile Device Management (MDM) or inventory management provider to define access control rights for the device604. Post authorization, the device604is successfully enrolled into the agent manager cloud606which tracks and monitors all behavior of the device604. Post-enrollment, the device604creates a link local network with a specific IP configuration, opens a virtual network interface to read and write packets and opens multiple listening sockets at custom ports to create secure tunnels to available services through the security cloud608.
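The discover-authenticate-enroll sequence above can be condensed into a sketch. The data shapes and function names are assumptions; the real exchange involves unicast discovery requests, SAML flows, and MDM checks rather than simple dictionaries and callables.

```python
# Hypothetical sketch of the unified agent enrollment sequence: the
# client discovers all enabled services from the administration server,
# authenticates to each service's identity provider, and is enrolled
# into each service it successfully authenticates to.

def enroll(cas_services, authenticate):
    """cas_services: service name -> identity provider (discovery reply).
    authenticate(service, idp) -> bool (possibly multi-factor)."""
    enrolled = []
    for service, idp in cas_services.items():   # discovery response
        if authenticate(service, idp):          # per-service authentication
            enrolled.append(service)            # authorized and enrolled
    return enrolled
```

After enrollment, the device would set up its link-local network and open tunnels for each enrolled service, as the surrounding text describes.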
On network changes, the device604dynamically evaluates reachability to preconfigured domains and, depending upon the result, appropriately transitions all network tunnels, thus providing a seamless experience to the end user. Further, the device604also intelligently learns the conditions which are appropriate for setting up network tunnels to cloud services depending upon several network heuristics such as reachability to a particular cloud service.

§ 6.1 Unified Agent Application—Functionality

The unified agent application600enables a user to connect to multiple cloud services through the dynamic discovery of available services followed by authentication and access as exposed in the corresponding service protocol. The unified agent application600addresses the unmanageable growth of mobility and cloud-based services, which has led to a proliferation of individual applications for access to individual services. The unified agent application600can be implemented through a mobile application (“app”) which overcomes the hassle of deploying and managing several applications across a gamut of mobile devices, operating systems, and mobile networks to gain secure access to cloud-based Internet or intranet resources. The mobile application can uniquely perform dynamic evaluation of network and service discovery, unified enrollment to all services, application-dependent service enablement, service protocol learning, service availability through secure network traffic forwarding tunnels, and the like. Again, enterprises have a strong need to provide secure access to cloud services to their end users. The growth of mobility and cloud in the IT enterprise has made it impossible for IT admins to deploy individual applications for individual services. The mobile app associated with the systems and methods overcomes these limitations through the dynamic discovery of available services to the end user, followed by authentication and access to individual services.
Further, the mobile app insightfully learns the protocol for each service and establishes a secure tunnel to the service. In essence, the mobile app is one app that an enterprise may use to provide secure connectivity to the Internet and diversified internal corporate applications. At the time of user enrollment, the mobile app will discover all services provided by the enterprise cloud and will enroll the user to all of those services. It will then set up secure tunnels for each application depending upon whether the application is Internet bound or internal to the corporate network (intranet). The mobile app will also discover all applications provided within the enterprise cloud along with a Global Virtual Private Network (GVPN) service and show the available services to the end user. Endpoint applications today provide one service for a specific network function (such as a Virtual Private Network (VPN) to a corporate network, web security, or antivirus to access the Internet). The mobile app can be used to enable all these services with a single enrollment. The mobile app will provide services to darknet applications along with securing the Internet traffic. The mobile app can set up a local network on the mobile device. Generally, the unified agent application 600 supports two broad functional categories: 1) dynamic service discovery and access controls and 2) service availability. The dynamic service discovery and access controls include service configuration by the administrator, service discovery by the device 604, service acknowledgment and authentication, service authorization and enrollment, and the like. For service configuration by the administrator, the IT admin can provide cloud service details at a centralized knowledge server, such as part of the agent manager cloud 606, the enterprise asset management 614, etc.
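The service configuration and discovery steps above can be illustrated with a small sketch. The record fields (service type, protocol, identity provider, server, port, access controls) follow the enumeration in the text; the concrete service names, hosts, and group-based access check are hypothetical.

```python
# Hypothetical service-configuration records as an IT admin might define them
# at the Cloud Administration Server (CAS); all names and hosts are illustrative.
SERVICE_CATALOG = [
    {
        "name": "web-security",
        "service_type": "internet",      # Internet-bound or intranet service
        "protocol": "https",
        "identity_provider": "https://idp.example.com/saml",
        "server": "gateway.example-cloud.net",
        "port": 443,
        "access_controls": {"groups": ["all-employees"]},
    },
    {
        "name": "intranet-apps",
        "service_type": "intranet",
        "protocol": "tls-tunnel",
        "identity_provider": "https://idp.example.com/saml",
        "server": "connector.example.com",
        "port": 8443,
        "access_controls": {"groups": ["engineering"]},
    },
]

def discover_services(catalog, user_groups):
    """Return the services enabled for a user, as the CAS might answer a
    discovery request from the device."""
    groups = set(user_groups)
    return [s for s in catalog if groups & set(s["access_controls"]["groups"])]
```

A device's unicast discovery request would then be answered with the subset of records the user is entitled to, after which the device authenticates to each service's identity provider.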
The cloud service details include the service type (e.g., Internet/intranet), network protocol, identity provider, server address, port, access controls, etc. For service discovery by the device 604, the device 604 can issue a network request to a known Cloud Administrative Server (CAS) in the agent manager cloud 606 to discover all enabled services for a user. If a specific cloud server is not known a priori, the device 604 can broadcast the request to multiple clouds, e.g., through the agent manager cloud 606 communicating to the enterprise asset management 614, the enterprise SAML IDP 616, and the enterprise CA 618. For the service acknowledgment and authentication, the device 604 acknowledges the response of service discovery and initiates the authentication flow. The device 604 learns the authentication protocol through the service discovery configuration and performs authentication of a configured nature at the enterprise SAML IDP 616. For the service authorization and enrollment, post successful authentication, the CAS authorizes the device 604 and fetches the access control information by contacting an MDM/inventory solutions provider. Depending upon the user context and the nature of access, the CAS enrolls the device 604 into several cloud services and informs the cloud services that the user has been enrolled for access. The service availability includes link local network setup, a traffic interceptor, and dynamic traffic forwarding tunnels to authorized services. For the link local network setup, post enrollment, the device 604 creates a local network on the device 604 itself to manage various networking functionalities. For the traffic interceptor, the device 604 intercepts and evaluates all Internet traffic. Allowed traffic is tunneled to the cloud services such as in the security cloud 608, whereas the rest of the traffic is denied as per enterprise policies.
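A minimal sketch of the traffic interceptor decision described above: allowed traffic is forwarded into a tunnel toward a cloud service, and the rest is denied per policy. The classifier rules, destinations, and tunnel names are illustrative assumptions; a real interceptor would apply policies pushed from the enterprise.

```python
# Hypothetical per-category policy table; real policies would come from the CAS.
POLICY = {
    "internet": "tunnel:web-security",
    "intranet": "tunnel:private-access",
    "blocked": "deny",
}

def classify(destination):
    """Toy classifier: corporate suffix -> intranet, known-bad -> blocked,
    everything else -> Internet-bound."""
    if destination.endswith(".corp.example.com"):
        return "intranet"
    if destination in {"malware.example.net"}:
        return "blocked"
    return "internet"

def intercept(destination):
    """Return the forwarding decision for a destination host."""
    return POLICY[classify(destination)]
```

Under this sketch, intranet destinations ride the private-access tunnel, ordinary Internet traffic rides the web-security tunnel, and disallowed destinations are dropped at the device.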
For the dynamic traffic forwarding tunnels to authorized services, depending upon the evaluation, the device 604 splits the traffic into different tunnels to individual cloud services such as in the security cloud 608. The unified agent application 600 is a single application that provides secure connectivity to the Internet 504 and darknet hosted applications, such as the enterprise private resources 612. The unified agent application 600 communicates securely to the agent manager cloud 606, which is controlled by an IT admin. The unified agent application 600 learns available services and authenticates with each service. Post proper enrollment, the unified agent application 600 securely connects to cloud services by means of network tunnels. § 7.0 Virtual Private Access FIG. 7 is a network diagram of a virtual private access network 700 using the security cloud 602. Of note, while described with reference to the security cloud 602, virtual private access is also contemplated in the distributed security system 100, the cloud system 500, or any other distributed system. The virtual private access network 700 includes users 702 with an application 600 on their associated user devices (phones, tablets, laptops, etc.). The users 702 can be remote users, partners, contractors, etc., i.e., anyone who needs remote access to the cloud file shares and applications 706 and/or the enterprise file shares and applications 708. The file shares and applications 706, 708 can be the private applications and can be generally referred to as resources. The cloud file shares and applications 706 are located in the cloud, such as in the data center 610, whereas the enterprise file shares and applications 708 are located within an enterprise's internal network. Note, while described as file shares and applications 706, 708, each could only be file shares or applications, i.e., these are generalized to denote something accessible by users.
Again, conventional access techniques rely on VPNs to the data center 610 or the enterprise's internal network, with all of the resulting issues previously discussed. Also, the virtual private access network 700 includes a central authority 710 for policy configuration and the like. The virtual private access network 700 further includes lightweight connectors 712 at the file shares and applications 706, 708. The virtual private access is a new technique for the users 702 to access the file shares and applications 706, 708 without the cost, hassle, or security risk of VPNs, which extend network access to deliver app access. The virtual private access decouples private internal applications from the physical network to enable authorized user access to the file shares and applications 706, 708 without the security risk or complexity of VPNs. That is, virtual private access takes the "Network" out of VPNs. In the virtual private access network 700, the users 702, the file shares and applications 706, 708, and the central authority 710 are communicatively coupled to the security cloud 602 (or the distributed security system 100, the cloud system 500, etc.), such as via the Internet 104 or the like. On the client side, at the users 702, the applications 600 provision both secure remote access and optionally accessibility to the security cloud 602. The application 600 establishes a connection to the closest cloud node 102 in the security cloud 602 at startup and may not accept incoming requests. At the file shares and applications 706, 708, the lightweight connectors 712 sit in front of the applications. The lightweight connectors 712 become the path to the file shares and applications 706, 708 behind them, and connect only to the security cloud 602. The lightweight connectors 712 can be a lightweight, ephemeral binary, such as deployed as a virtual machine, to establish a connection between the file shares and applications 706, 708 and the security cloud 602, such as via the closest cloud node 102.
The lightweight connectors 712 do not accept inbound connections of any kind, dramatically reducing the overall threat surface. The lightweight connectors 712 can be enabled on a standard VMware platform; additional lightweight connectors 712 can be created in less than 5 seconds to handle additional application instances. By not accepting inbound connections, the lightweight connectors 712 make the file shares and applications 706, 708 "dark," removing a significant threat vector. Policy is established and pushed by policy engines in the central authority 710 (e.g., the authority node 120), such as via a distributed cluster of multi-tenant policy engines that provide a single interface for all policy creation. Also, no data of any kind transits the policy engines. The cloud nodes 102 in the security cloud stitch connections together, between the users 702 and the file shares and applications 706, 708, without processing traffic of any kind. When the user 702 requests an application in the file shares and applications 706, 708, the policy engine delivers connection information to the application 600 and app-side cloud nodes 102, which includes the location of a single cloud node 102 to provision the client/app connection. The connection is established through the cloud nodes 102 and is encrypted with a combination of the customer's client and server-side certificates. While the cloud nodes 102 provision the connection, they do not participate in the key exchange, nor do they have visibility into the traffic flows. Advantageously, the virtual private access provides increased security in that the file shares and applications 706, 708 are visible only to the users 702 that are authorized to access them; unauthorized users are not able to even see them. Because application access is provisioned through the security cloud 602, rather than via a network connection, the virtual private access makes it impossible to route back to applications.
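The outbound-only posture of the lightweight connectors 712 can be sketched as follows. The class shape and method names are assumptions chosen to illustrate the dial-out model, in which the connector initiates every connection to the security cloud and never listens for inbound ones.

```python
class LightweightConnector:
    """Sketch of a connector that only dials out to the security cloud.

    The connector opens an outbound connection to the closest cloud node and
    serves requests relayed back over that connection; it never binds a
    listening socket, so the application behind it accepts no inbound
    connections and stays "dark" to scanners.
    """

    def __init__(self, cloud_node, app_backend):
        self.cloud_node = cloud_node      # e.g., "node1.security-cloud.example"
        self.app_backend = app_backend    # callable that handles relayed requests

    def dial_out(self):
        # A real connector would establish a TLS session here; this sketch
        # only records that the connection is client-initiated.
        return {"direction": "outbound", "peer": self.cloud_node}

    def handle_relayed(self, request):
        # Requests arrive solely over the connection the connector initiated.
        return self.app_backend(request)
```

Because the cloud node stitches the user-side and app-side legs together, the connector can serve authorized users without the protected application ever being addressable from outside.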
The virtual private access is enabled using the application 600, without the need to launch or exit VPN clients. The application access just works in the background, enabling application-specific access to individual contractors, business partners, or other companies, i.e., the users 702. § 8.0 Cloud System for Digital Experience Monitoring FIG. 8 is a network diagram of a cloud system 800 for digital experience monitoring. The cloud system 800 brings aspects of FIGS. 1-8 into a single architecture that is leveraged by the systems and methods to provide real-time, continuous digital experience monitoring, as opposed to conventional approaches. A key aspect of the architecture of the cloud system 800 is the inline monitoring. This means data is accessible in real-time for individual users from end-to-end. Accordingly, digital experience monitoring can include monitoring, analyzing, and improving the digital user experience. The cloud system 800 includes a cloud service 802 that connects users (e.g., the regional office 510, the headquarters 520, a mobile device 604 with the application 600, etc.) to the applications 706, 708, services, the Internet 504, etc. The cloud service 802 can include the distributed security system 100, the cloud system 500, the security cloud 602, etc. The cloud service 802 can be a proxy or firewall. End users can be located in the headquarters 520, at the regional office 510, mobile via the mobile device 604, etc. As described herein, an end user has a user device (e.g., a laptop, mobile device, tablet, desktop computer, etc.) that is used to access digital applications or services over the Internet 504 or hosted in public or private infrastructure, such as the applications 706, 708 via connectivity to the lightweight connector 712 through the cloud service 802. The applications 706, 708 can be hosted on private or separate infrastructure not directly accessible over the Internet 504.
The cloud service 802 can have cloud edges that are a network service connecting a fixed location where end users work (e.g., the regional office 510, the headquarters 520) to the cloud service 802 to provide security, monitoring, and network services. For example, the cloud edges can include a gateway, a processing node 110, a cloud node 502, etc. As described herein, the application 600 is deployed on the end user's device 604 for connectivity to the cloud service 802 and to provide security, monitoring, and network services. Finally, the cloud system 800 can include logging and analytics 804 either part of or connected to the cloud service 802. As described herein, a key aspect of the cloud system 800 is the inline, end-to-end visibility of all users. This enables digital experience monitoring. The cloud system 800 has the ability to monitor, diagnose, generate alerts, and perform remedial actions with respect to network endpoints, network components, network links, etc. The network endpoints can include servers, virtual machines, containers, storage systems, or anything with an IP address, including Internet of Things (IoT), cloud, and wireless endpoints. In the cloud system 800, the network endpoints can include the cloud edge or gateway at the headquarters 520, the application 600 in the user device 604, the lightweight connector 712, etc. With these components, these network endpoints can be monitored directly in combination with a network perspective. Further, network components in the cloud service 802, etc., including routers, switches, and other network devices including Virtualized Network Functions (VNFs), can be monitored along with the network links therebetween. The monitoring here can use various probe techniques to measure availability, latency, and quality. Also, the applications 706, 708 can be monitored for performance, etc.
Thus, the cloud system 800 provides a unique architecture that can enable digital experience monitoring, network application monitoring, infrastructure component interactions, etc. Of note, these various monitoring aspects require no additional components; the cloud system 800 leverages the existing infrastructure to provide this service. Again, digital experience monitoring includes the capture of data about how end-to-end application availability, latency, and quality appear to the end user from a network perspective. This is limited to the network traffic visibility and not within components, such as what application performance monitoring (APM) is able to accomplish. Networked application monitoring provides the speed and overall quality of networked application delivery to the user in support of key business activities. Infrastructure component interactions include a focus on infrastructure components as they interact via the network, as well as the network delivery of services or applications. This includes the ability to provide network path analytics. The cloud system 800 can enable real-time performance and behaviors for troubleshooting in the current state of the environment, historical performance and behaviors to understand what occurred or what is trending over time, predictive behaviors by leveraging analytics technologies to distill and create actionable items from the large dataset collected across the various data sources, and the like. The cloud system 800 includes the ability to directly ingest any of the following data sources: network device generated health data; network device generated traffic data, including flow-based data sources inclusive of NetFlow and IPFIX; raw network packet analysis to identify application types and performance characteristics; HTTP request metrics; etc. The cloud system 800 can operate at 10 gigabit (10G) Ethernet and higher at full line rate and support a rate of 100,000 flows per second or higher.
§ 8.1 Digital Experience Monitoring The applications 706, 708 and the SaaS can include enterprise applications, Office 365, Salesforce, Skype, internal applications, etc. These are critical business applications where user experience is important. The objective here is to collect various data points so that user experience can be quantified for a particular user, at a particular time, for purposes of analyzing the experience as well as improving the experience. In an embodiment, the monitored data can be from different categories including application-related, network-related, device-related (also can be referred to as endpoint-related), protocol-related, etc. Data can be collected at the application 600 or the cloud edge to quantify user experience for specific applications, i.e., the application-related and device-related data. The cloud system 800 can further collect the network-related and the protocol-related data (e.g., Domain Name System (DNS) response time).

Application-related data: Page Load Time, Page Response Time, Document Object Model (DOM) Load Time, Total Downloaded bytes, Page error count (#), Page element count by category (#), Redirect count (#), Throughput (bps), Total size (bytes), App availability (%).

Network-related data: HTTP Request metrics, Server response time, Ping packet loss (%), Ping round trip, Packet loss (%), Latency, Bandwidth, Jitter, Trace Route, DNS lookup trace, GRE/IPSec tunnel monitoring, MTU and bandwidth measurements.

Device-related data (endpoint-related data): System details, Central Processing Unit (CPU), Memory (RAM), Network (interfaces), Network (config), Disk, Processes, Applications.

An example of HTTP Request metrics includes CONNECT, time to first byte/first 10 bytes, time to last byte, Secure Sockets Layer (SSL) handshake time, etc. For example, HTTP can be used to send probes to take measurements as described in commonly-assigned U.S. patent application Ser. No. 16/043,250, filed Jun.
24, 2018, and entitled "Cloud services management systems utilizing in-band communication conveying situational awareness," the contents of which are incorporated by reference herein. For example, browser triggered data can include collection when a user visits a domain or subnet. The page load performance data can be sampled using the W3C standard HTTP Archive format (HAR). For each session or sample, the agent application 600 can collect a device fingerprint profile including: 1) IP/DNS configuration (private/public IP, gateway, etc.), 2) wired or Wi-Fi connection (link speed, signal quality, Service Set Identifier (SSID), Basic SSID (BSSID), etc.), 3) VPN config if possible (from routing table, VPN service), 4) proxy config (cloud or other, parsed Proxy Auto Config (PAC) files), and 5) system metrics (CPU, memory, swap, bytes in/out, etc.). The device fingerprint profile can also include test probes such as a Ping (Internet Control Message Protocol (ICMP)) to a discovered gateway, destination, VPN, and/or proxy, and a Traceroute (ICMP/Transmission Control Protocol (TCP)) to a discovered gateway, destination, VPN, and/or proxy. Metrics could be combined. For example, device health can be based on a combination of CPU, memory, etc. Network health could be a combination of Wi-Fi/LAN connection health, latency, etc. Application health could be a combination of response time, page loads, etc. The cloud service 800 can generate service health as a combination of CPU, memory, and the load time of the service while processing a user's request. The network health could be based on the number of network path(s), latency, packet loss, etc. The lightweight connector 712 can also generate similar metrics for the applications 706, 708. In an embodiment, the metrics can be collected while a user is accessing specific applications for which user experience monitoring is desired.
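The per-session device fingerprint profile enumerated above can be assembled into a single structure. The field names and the `probes` input are illustrative assumptions; the actual agent would gather these values from the operating system and from its ping/traceroute test probes.

```python
def collect_fingerprint(probes):
    """Assemble a device fingerprint profile for one session/sample.

    `probes` supplies measured values (e.g., from the OS and test probes);
    the keys mirror the categories listed in the text and are illustrative.
    """
    return {
        "ip_dns": {"private_ip": probes.get("private_ip"),
                   "public_ip": probes.get("public_ip"),
                   "gateway": probes.get("gateway")},
        "link": {"type": probes.get("link_type", "wifi"),
                 "ssid": probes.get("ssid"),
                 "link_speed_mbps": probes.get("link_speed_mbps")},
        "vpn": {"active": probes.get("vpn_active", False)},
        "proxy": {"pac_url": probes.get("pac_url")},
        "system": {"cpu_pct": probes.get("cpu_pct"),
                   "mem_pct": probes.get("mem_pct")},
        "tests": {"ping_gateway_ms": probes.get("ping_gateway_ms"),
                  "ping_destination_ms": probes.get("ping_destination_ms")},
    }
```

Each sample would then be tagged with metadata (user, time, app) and shipped to the logging and analytics service alongside the HAR page-load data.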
In another embodiment, the metrics can be enriched by triggering synthetic measurements in the context of an inline transaction by the application 600 or cloud edge. The metrics can be tagged with metadata (user, time, app, etc.) and sent to the logging and analytics 804 service for aggregation, analysis, and reporting. Further, network administrators can get UEX reports from the cloud service 802. The synthetic measurements can include probes from the agent application 600, the lightweight connector 712, etc. The probes can include HTTP/HTTPS probes, network probes, Voice over IP (VoIP) related probes (e.g., Session Initiation Protocol (SIP), Real Time Protocol (RTP), etc.), DNS probes, proxy probes, etc. The HTTP/HTTPS probes can configure the URL and interval where the probe is run; it is undesirable to have every device running tests. This can include a configured timeout, website authentication (basic, cert, NTLM), HTTP method (POST, GET, etc.), SSL, custom headers, and a configured expected HTTP status code and content (string or Regex). Due to the inline nature and the fact that the cloud system 800 is an overlay (in-between users and services/applications), the cloud system 800 enables the ability to continuously capture user experience metric data and to historically log such data in the logging and analytics 804 service. As such, a network administrator can have a long-term detailed view of the network and associated user experience. § 8.2 Process for Digital Experience Monitoring FIG. 9 is a flowchart of a process 820 for digital experience monitoring utilizing the cloud system 800.
The process 820 includes performing inline monitoring of network access between one or more users, each with an associated user device executing an agent application, the Internet, and one or more cloud applications and private applications accessible via lightweight connectors (step 821); responsive to a user executing a specific application, obtaining device and application metrics for the user from the associated user device related to usage of the specific application (step 822); obtaining network metrics from the cloud system related to network performance of the specific application (step 823); and providing the device and application metrics and the network metrics to a logging and analytics system for quantifying digital user experience of the specific application (step 824). The process 820 can further include tagging the device and application metrics and the network metrics with metadata for the logging and analytics system to aggregate, analyze, and report. The process 820 can further include obtaining private application metrics related to performance of the private application via the lightweight connector. The agent application can be configured to detect the specific application and cause metric generation based thereon. The cloud system can be a distributed security system with the inline monitoring of all traffic associated with the one or more users such that the cloud system is an overlay network. The process 820 can further enrich the inline monitoring by performing periodic synthetic measurements with inline monitored traffic context between the one or more users, the Internet, and the one or more cloud applications and private applications. § 8.3 Digital Experience Analyzing With the various device, application, and network-related metrics, such as in the logging and analytics 804, it is possible to aggregate these metrics to provide a User Experience (UEX) score.
The UEX score can be based on the metrics collected by the application 600, the cloud edge, the cloud service 800, the lightweight connectors 712, etc. The UEX score captures the digital experience and can be based on a given application with associated device, application, and network-related metrics. For example, the UEX score can be determined based on some weighted combination of the device, application, and network-related metrics for a given application, and the UEX score can be normalized within a range, e.g., 0 to 100. Again, the given application can be a core business critical application where UEX is important (e.g., Office 365, Salesforce, an internal inventory app, etc.) or any other designated application. The UEX scores can be determined at fixed time epochs (e.g., 15 minute increments, hour increments, etc.) and normalized. Scores can be aggregated for a group of users (e.g., department, location) or for the whole organization. Administrators are provided UEX score reports over time based on user, department, locations, etc. via a Graphical User Interface (GUI). Drilldown reporting capabilities via the GUI allow administrators to identify where there is a problem. For example, administrators can set alerts when a UEX score falls below a threshold. UEX scores for common applications across organizations can be used for peer comparisons and isolating common application issues affecting multiple organizations. FIG. 10 is a flowchart of a process 850 for analyzing digital user experience.
The process 850 includes performing inline monitoring of network access between one or more users, each with an associated user device executing an agent application, the Internet, and one or more cloud applications and private applications accessible via lightweight connectors (step 851); based on user experience metrics collected by the inline monitoring and stored in a logging analysis system, obtaining user experience metrics for one or more users for a given time epoch and for a given application (step 852); determining a user experience score for the one or more users for the given time epoch and for the given application based on the obtained user experience metrics (step 853); and providing a graphical user interface displaying data related to various user experience scores for various users over various time epochs with various applications (step 854). The process 850 can further include generating and displaying an alert responsive to any user's, group of users', location's, or organization's user experience score falling below a threshold for a particular time epoch. The process 850 can further include aggregating the user experience for users into groups of users, locations, and organizations, and providing a graphical user interface displaying data related to the groups of users, the locations, and the organizations. The user experience score captures digital experience and is based on a given application with associated device, application, and network-related metrics. The user experience score can be utilized for a specific application for peer comparison, and the process 850 can further include displaying associated user experience scores for the specific application for any users, groups of users, locations, and organizations for comparison, and updating the display based on input while a user performs a drill down to remediate poor user experience scores.
The process 850 can further include providing additional data, including metrics, based on input from a user in the graphical user interface. The various metrics are collected from multiple sources and correlated in the logging and analytics 804 service to come up with a composite UEX score. Again, the sources of the metrics can include application HTTP/S traffic, browser page load times or app-specific metrics provided by app vendors' APIs/logs; network measurements provided by traceroute tools such as MTR; user device system metrics (CPU, memory, etc.); cloud tunnel metrics to provide a network hops trace between the user device and the cloud node 502 (inside the tunnel); the lightweight connectors 712; etc. Again, the UEX score is determined in the context of a specific application. For example, a computation can include a point system, e.g., 0-10 (10 being the worst). The points can be allocated based on where the user falls within a percentile threshold (e.g., p80), p100 being the worst UEX. Metrics can be weighted, e.g., Latency=4 pts., % CPU=1 pt. For an application and location, an average score is calculated based on the users that are using the application at the location. The overall score is computed based on the average UEX score across all users. For example, in the score card below, on a scale of 0 (best) to 10 (worst), John's score is 2.5 (or 75/100).
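The point-based computation above can be reproduced with a short routine. The per-metric scaling rule, earned points growing linearly from zero at the threshold percentile to the full allocation at p100, is inferred from the numbers in the example rather than stated explicitly, so treat it as an assumption.

```python
def earned_points(points, user_percentile, threshold=80):
    """Points earned toward the UEX total (0 is best) for one weighted metric.

    Inferred rule: nothing is earned at or below the threshold percentile;
    points scale linearly from the threshold up to p100 (full allocation).
    """
    over = max(0, user_percentile - threshold)
    return points * over / (100 - threshold)

def uex_points(metrics, threshold=80):
    """Sum earned points across (allocated points, user percentile) pairs."""
    return sum(earned_points(p, pct, threshold) for p, pct in metrics)

# John Doe: Pageload (4 pts, p90), Latency (3 pts, p70), % CPU (1 pt,
# median, i.e., ~p50), Metric X (2 pts, p85); total allocation is 10 points.
john = [(4, 90), (3, 70), (1, 50), (2, 85)]
score = uex_points(john)                      # 2 + 0 + 0 + 0.5 = 2.5
normalized = round((10 - score) / 10 * 100)   # 75 on the 0-100 (best) scale
```

Only the Pageload and Metric X percentiles exceed the p80 threshold, so only they contribute earned points, matching the 2.5/10 total in the score card.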
Salesforce.com: threshold p80, score card for John Doe:

Metric | User Percentile | Points | Earned
Pageload | p90 | 4 | 2
Latency | p70 | 3 | 0
% CPU | Median | 1 | 0
Metric X | p85 | 2 | 0.5
Total | | 10 | 2.5 <= UEX

§ 8.4 Digital Experience GUI FIGS. 11-24 are various screenshots of a Graphical User Interface (GUI) associated with the analysis service to display, report, and provide a drill down of the User Experience (UEX) scores. FIG. 11 illustrates a GUI listing locations broken down showing an average score of all users at a location. FIG. 12 illustrates a GUI listing a specific location showing users, their UEX scores, a change in UEX score (e.g., over given time epochs), and impacted applications. FIG. 13 illustrates a graph of a specific user's UEX score over time. FIG. 14 illustrates a graph of a specific location's aggregate UEX score over time. Note, a user can drill down on the graph to display data at particular times when the score is low for troubleshooting. FIG. 15 is a GUI of a global dashboard for the cloud system 800. Here, the aggregate UEX score is displayed (all users). There is a listing of application alerts (e.g., threshold crossings), mobile devices, desktop devices, etc. A map displays the global UEX score using color codes for visual indication of locations with good, okay, and poor UEX scores. Again, this visualization can be used for drill down and remediation. FIG. 16 is a GUI of tiles in the global dashboard displaying top impacted users, top impacted applications, active alert distribution, and user distribution by UEX score. FIG. 17 is a GUI of a graph of UEX score over time. FIGS. 18 and 19 are a GUI of a dashboard for an individual user. Specifically, the UEX score, location, bandwidth, latency, packet loss, response time, and availability are displayed, as are graphs of the UEX score over time and bandwidth for the user in FIG. 18. FIG. 19 includes a graph of various performance metrics over time. Note, the lower performance metrics correlate to a lower UEX score. FIG. 20 is a GUI of a network dashboard.
This provides a network availability metric similar to the UEX score, a total number of network devices, a network device health score, which can be similar to the UEX score providing a view of the average network device health, and a total number of network users. The network dashboard can also include network path trace criteria which specify endpoints, destination, users, frequency, metrics, and threshold criteria ("alert in case"). Also, the network dashboard can include a real-time path trace view that illustrates a selected user to a selected application where real-time monitoring occurs, which specifies endpoints, destination, users, frequency, metrics, and threshold criteria ("alert in case"). For example, the availability metric can be: 100% is GREEN, <100% is RED; response time: >5 sec is RED, 3-5 sec is AMBER, <3 sec is GREEN. FIG. 21 is a GUI of an alerts dashboard. This includes a number of high severity alerts and a number of application, network, and device alerts. The alerts dashboard further includes a visualization of active alert distribution, a listing of high severity alerts, and a list of the most recent active alerts. FIG. 22 is a GUI of a performance dashboard. This includes the overall UEX score, an indication of the most impacted location and application, a map of global UEX score, and a graph of UEX score over time. FIG. 23 is a GUI of a user dashboard illustrating a single user. FIG. 24 is a GUI of an application dashboard illustrating a single application. § 8.5 Improving Digital Experience With digital user experience monitored and analyzed, it is possible to improve digital user experience in the cloud system 800, in real-time. The objective here is to take the monitored metrics and analyzed UEX score and use them for actionable insights that can improve operation of the cloud system 800 for the purpose of improving the UEX scores, i.e., remedial actions.
Here, an analytics service can operate in conjunction with the monitoring service and the analysis service to provide updates to improve the UEX scores in the cloud system 800. For example, these services (the monitoring service, the analysis service, and the analytics service) can operate in the cloud service 800 as one service or as combined services. The analytics service can include an Artificial Intelligence (AI)/Machine Learning (ML) anomaly detection engine that can isolate common factors affecting the UEX score. For example, Wi-Fi network coverage could be poor in a location, DNS resolution could be taking too long, there could be network congestion between two Internet Service Provider (ISP) peering points, authentication for an application could be taking an abnormally long time, etc. With the logging and analytics 804, it is possible to review historical data to train the AI/ML anomaly detection engine for ongoing detection. The analytics service can provide policy-based actions to be taken based on the UEX score by the cloud service 802 and/or the organization's IT. For integration with the organization's IT, examples include i) if the UEX score falls below a threshold, open a service ticket with detailed metrics and reports captured, ii) enable granular analysis with packet captures on the application 600 based on certain conditions, iii) change the tunnel from an office to different cloud service providers to improve the network path, iv) enable bandwidth controls to provide QoS for a business critical application, etc. Example actions that could be taken by the cloud service 802 include auto scaling cloud service resources to improve a performance bottleneck, using the cloud edge to choose a better network path, etc. FIG. 25 is a flowchart of a process 870 for improving digital user experience.
The process 870 includes performing inline monitoring of network access between one or more users each with an associated user device executing an agent application, the Internet, and one or more cloud applications and private applications accessible via lightweight connectors (step 871); obtaining user experience scores for any of a user, a group of users, a location, and an organization from the inline monitoring or from the logging and analytics system (step 872); responsive to a low user experience score, analyzing the low user experience score to determine one or more likely factors (step 873); and causing one or more remedial actions to address the low user experience score based on the one or more likely factors (step 874). The process 870 can further include analyzing user experience scores on one or more of an ongoing basis and a historical basis; determining likely factors in the cloud system, on the associated user device, and in the one or more cloud applications and private applications that cause low user experience scores; and utilizing the determined likely factors in analysis of the low user experience score. The process 870 can further include analyzing user experience scores on one or more of an ongoing basis and a historical basis; and utilizing the analyzed user experience scores to train a machine learning algorithm. The one or more remedial actions can include any of opening a service ticket with detailed metrics and reports included, causing granular analysis on a user device via the agent application, changing one or more tunnels in the cloud system, and configuring bandwidth controls to adjust the priority of a corresponding application. The cloud system can include a plurality of tunnels, and tunnels are selected based on the user experience scores for specific users for specific applications. The user experience score captures digital experience and is based on a given application with associated device, application, and network-related metrics.
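The monitor-score-analyze-remediate cycle of process 870 can be sketched as below. This is an illustrative sketch only: the function names are hypothetical, the scoring, analysis, and remediation steps are passed in as callables, and the low-score threshold is an assumed parameter rather than a value given in the text.

```python
def run_experience_cycle(metrics, score_fn, analyze_fn, remediate_fn,
                         low_score_threshold=66):
    """One cycle of process 870: obtain a user experience score (step 872),
    and if it is low, determine likely factors (step 873) and cause remedial
    actions based on those factors (step 874)."""
    score = score_fn(metrics)                                 # step 872
    actions = []
    if score < low_score_threshold:                           # low user experience
        likely_factors = analyze_fn(metrics)                  # step 873
        actions = [remediate_fn(f) for f in likely_factors]   # step 874
    return score, actions
```

In practice the remediation callable would map a factor such as network congestion to an action such as opening a service ticket or switching tunnels.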
§ 8.6 Tunnels and Path Selection

As described herein, connectivity between the end user, the cloud system 800, the Internet, and the applications 706, 708 can be via tunnels, such as using the various protocols described herein. One aspect of remediation for poor UEX scores can include tunnel selection or switching. FIG. 26 is a network diagram of selecting a best path from a cloud node to a customer network. Here, a ZEN node is any of the processing node 110 or cloud node 502. The cloud system 800, via the analytics service, can select different Autonomous Systems (AS) to connect based on the user experience scores.

FIG. 27 is a network diagram of selecting a best path between a cloud node and a user utilizing the agent application 600. FIG. 28 is a network diagram of a detailed path analysis that is displayed in a GUI. Clicking on a segment of the flow will open a zoomed view for that segment, and the zoomed view will indicate hops and other devices in that path.

§ 8.7 Agent Application Integration and User Workflow

FIG. 29 is a flow diagram of a user workflow with the agent application 600. Here, the agent application 600 ("Z-App") is installed, a User Performance Monitoring (UPM) browser extension can be installed, and data is collected. FIG. 30 is a screenshot of a Web browser illustrating the UPM browser extension.

§ 8.8 Administrator Workflow

FIG. 31 is a flow diagram of an administrator workflow with the GUI.

§ 8.9 Monitoring Techniques

In an embodiment, a process includes tracing network path hops encapsulated inside a proxy tunnel by performing tracing from a client computer and routing the trace traffic data into a concentrator system where the hop data is analyzed.
In another embodiment, a process to perform synthetic network probes from an end user client endpoint in the context of inline real-time traffic monitoring at large scale includes deploying randomization techniques to break the stride of probe traffic against a target destination, so as to avoid being flagged and blacklisted by the destination computer. In a further embodiment, a process to calculate a web page load time outside a web browser includes detecting a web page document within inline network traffic, tracking all page sub-requests, recording load timings for the main request and sub-requests, and then forwarding these to an analytics system to reassemble the page and sub-page request timings and compute an overall page time.

§ 8.10 Use Cases

A first use case can be: what is the real user experience accessing key SaaS business applications? This can be determined by measuring performance from the user browser, when the user visits actual pages of key SaaS applications (e.g., Office365, salesforce, workday, etc.). The UEX score can be based on page load timings, network delays, and system metrics during the user session time frame. The UEX score metrics can be aggregated by geographic location and application to highlight problems based on default or pre-configured thresholds and metric trends (e.g., 90th percentile, mean), and to provide an ability to share or save an interactive snapshot of the problem as part of a service escalation. A second use case can be: are there any high network latencies or delays to key destinations from user devices—with and without the cloud? This can include scheduling ICMP tests to periodically measure network performance to a discrete network or application domain and reporting My Traceroute (MTR) style metrics (min/max/avg latency, jitter, % loss).
This can be measured with and without any proxy and used to display a topology flow graph with latency at each hop, to aggregate performance metrics by geographic location and application, and to highlight problems based on default or pre-configured thresholds, providing an ability to share an interactive snapshot of the problem as part of a service escalation. A third use case can be: are there any high response times to my key web applications from my user devices—with and without the cloud? This can include scheduling a monitor to periodically measure HTTP/S target server response to a specific IP address or domain. This includes an ability to provide authentication login parameters and GET/POST parameters to interact with the application (e.g., login, load email), an ability to produce page waterfall timings, etc. This can be measured with and without the cloud or proxy, and performance metrics can be aggregated by geographic location and application to highlight problems based on default or pre-configured thresholds and metric trends. A fourth use case can be: wanting to see user device details to troubleshoot system performance and correlate with application and network metrics. This can include scheduling a monitor to periodically collect system performance metrics on the user device, aggregating performance metrics by slower devices, and overlaying user application and network performance with device performance (what were the device's CPU and memory usage at the time the user experienced slowness?). This can highlight problems based on default or pre-configured thresholds to provide the ability to share an interactive snapshot of the problem as part of a service escalation.

§ 9.0 Alerts for Monitoring and Responding to Specific Events

Again, the cloud system 800 has the ability to monitor, diagnose, generate alerts, and perform remedial actions with respect to network endpoints, network components, network links, etc.
The alerts generated can be with regard to any aspect of the inline monitoring disclosed herein and can be based on rules establishing specific criteria and conditions that, when fulfilled, trigger the alert(s). Alerts can be triggered when results meet a specific condition defined by an alert rule. An alert can also have an action, such as a notification of the alert, associated therewith, such that when alerts are triggered, the action is also triggered. For example, an email can be sent, webhooks can be triggered, or messages can be sent via third-party integrations such as ServiceNow, PagerDuty, Slack, etc. Actions can be triggered when the alert becomes active, i.e., only at the start, and can also be triggered once the alert is cleared. Further, when multiple alerts are active simultaneously, the data can be grouped into a single action/notification (such as an email) to reduce noise in the system. Alerts and the actions triggered therefrom can be customizable by the user. Any combination of rules can be established relative to the inline monitoring and the data and metrics collected therewith. For example, the rules can include one or more of the following: (1) when a UX score degrades by a certain threshold percentage over a predetermined or selected period of time; and (2) any of the metrics of the inline monitoring meets a predetermined or selected threshold for the respective metric, such as a UX score, network latency (per leg), percentage of packet loss (per leg), total hop count, per-leg hop count, incomplete traceroutes, DNS time, page fetch time, availability (HTTP errors), and device health metrics. More particularly, the rules can trigger an alert if: the overall UX score in any location is less than 70% in the past 24 hours; if there are more than 5% or 100 devices seeing a 500 error for web monitors; a particular device has not sent any data for a predetermined or selected amount of time (e.g.,
in the last 4 hours); a traceroute probe did not complete; a user has a predetermined or selected number of failed web monitor requests within a predetermined or selected period of time, for example, 3 failed web monitor requests in 10 minutes (alerts can be throttled so that the alert triggers only once in an hour for failed logins from the same user); a web application has more than a predetermined or selected number of errors and/or a page fetch time greater than a predetermined or selected time more than a predetermined or selected number of times in a row; a UX score in a specific location falls below average for that region; and a UX score within a setup geofence degrades from good to okay, such as below 66%. Triggering events based on the alert rules can be checked in real time or can be checked on a predetermined or selected interval, for example, every five minutes. The predetermined or selected interval can be aligned and synchronized with the monitors/inline monitoring described above. In order to reduce noise caused by sending multiple notifications for the same or similar alerts, alert criteria can be included to limit the notifications. For example, alerts can be limited to when the alert event occurs a predetermined or selected number of times in a row, or if the alert events impact a predetermined or selected number or percentage of devices in a particular location, group, department, operating system version, and the like. Furthermore, a repeating alert event can be throttled so that only one notification of the alert is sent or the alert is only triggered once. For example, if the same alert event starts/stops multiple times within 1 minute, the alert can be throttled to send only one notification.
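The rule and throttling behavior described above can be sketched as follows. This is a simplified illustration under stated assumptions: the function and class names are hypothetical, and only two of the described criteria are shown — the UX-score-degradation rule and a throttle that requires a number of consecutive occurrences and fires at most once per time window.

```python
def ux_degradation_rule(previous_score, current_score, threshold_pct):
    """Rule (1): trigger when the UX score degrades by more than a threshold
    percentage over the observation period."""
    if previous_score <= 0:
        return False
    drop_pct = (previous_score - current_score) / previous_score * 100.0
    return drop_pct > threshold_pct

class AlertThrottle:
    """Reduce notification noise: fire only after the triggering condition has
    occurred min_in_a_row consecutive times, and at most once per window_s."""
    def __init__(self, min_in_a_row, window_s):
        self.min_in_a_row = min_in_a_row
        self.window_s = window_s
        self.streak = 0
        self.last_fired = None

    def observe(self, condition_met, now_s):
        """Record one check interval; return True if a notification should fire."""
        self.streak = self.streak + 1 if condition_met else 0
        if self.streak < self.min_in_a_row:
            return False
        if self.last_fired is not None and now_s - self.last_fired < self.window_s:
            return False
        self.last_fired = now_s
        return True
```

For example, an `AlertThrottle(3, 3600)` would mirror the "3 failed requests in a row, at most once an hour" style of criteria described in the text.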
Again, as shown in FIG. 21, an alerts dashboard can include a number of high severity alerts and a number of application, network, and device alerts, a visualization of active alert distribution, a listing of high severity alerts, and a listing of the most recent active alerts, which can be available under an alerts tab.

FIG. 32 is a GUI of a rules tab of an alerts dashboard. The rules tab can include an alert rule list that lists the rules established for triggering alerts. The list can include a rule name, a state of whether the rule is active, muted, or inactive, a date and time the rule was last triggered, the type of monitoring associated with the rule, and an application associated with the rule.

FIGS. 33-39 are screens of a GUI for alert rule configuration. The alert rule can be configured under a new tab of the GUI under a configuration called Alerts. The alert rule can be associated with an application and one or more monitors. As can be seen in FIG. 33, the rule can be configured with an alert name, a severity level, and a type of alert, and can be enabled or disabled at creation. This can be accomplished in the configure rule screen of the GUI, such as the screen shown in FIG. 33. The severity level can be selected from a plurality of severities, which can be predetermined, defined by an administrator, and the like. As illustrated in FIG. 33, the severity level can be selectable from a drop-down menu. For example, severity levels can be identified as: high, where a critical incident with outage impact occurs, such as when a key application is down for all users; medium, where a critical incident with significant impact occurs, such as when a key application is not accessible for a subset of users; and low, where a minor inconvenience to users occurs, such as when usable performance degradation occurs. As can be seen in FIG. 34, the alert rule can further be configured by selecting an application and/or a monitor. Filters can also be selected.
As shown in FIG. 35, these filters can include one or more locations, groups, departments, users, geolocations, and devices. Other filters can also be selected, such as operating systems and operating system versions. This can be accomplished in the filters screen of the GUI, such as the screen/expanded screen shown in FIGS. 34 and 35. As can be seen in FIG. 36, the alert rule can further be configured by selecting which metrics apply, such as: a ZDX Score; web (fetch time, DNS time, errors, etc.); traceroute (latency, percent loss, number of hops, incomplete traceroute); device health (CPU, memory, etc.); any other criteria disclosed herein; and the like. The threshold or conditions of those metrics that will trigger the alert can also be set. These conditions can be set where any condition met will trigger the alert or where all of the conditions need to be met to trigger the alert. The alert criteria can include percentages, durations, number of times in a row that the condition occurs, etc. An operator can indicate whether the criterion is less than (<), greater than (>), equal to (=), less than or equal to (≤), greater than or equal to (≥), not equal to (!=), and the like. For example, for a web monitor, HTTP code >=500; for a traceroute, percent loss ≥1%; for SharePoint, score <50%; and for a device, percent CPU >80% 10 times in a row. This can be accomplished via a criteria screen of the GUI. The determination of when an alert is raised can be based on baselined data. For example, one vendor can compare the metric value to a weighted average of all historical metric data, and if the value is greater than the average value by a predetermined or selected number of standard deviations, then the alert is raised. As can be seen in FIG. 37, the action taken when the alert is triggered can also be configured.
This can include selecting a throttling value for a number of times the event needs to occur in a row, how many devices are impacted based on either a number or percentage of devices, and the type of notification that is sent, such as email, webhook, etc. This can be accomplished via an action screen of the GUI. As can be seen in FIG. 38, a preview of the notification, such as an email preview, can be provided for review. Further, as can be seen in FIG. 39, a review of all of the alert settings can be provided, such as in a review screen of the GUI, such that all of the configurations, filters, criteria, and actions can be reviewed simultaneously.

Referring again to FIG. 21, the GUI can display a list of alerts that occurred within a predetermined or selected timeframe, such as 24 hours, 48 hours, 3 days, 7 days, 30 days, etc. The alerts displayed can be filtered by a global filter, by application, location, geolocation, user(s), device type, operating system, operating system version, and the like. The alert display can include a chart that shows UX scores and numbers of alerts. The list view can also be filtered by clicking on or selecting the alerts number. The list view can include a number of columns providing information with regard to each alert. The list view can include columns for an alert identification, an alert name, an alert status (active, cleared, disabled, muted), an alert start time, an alert duration (no end time if the alert is still active), a monitor name (source of the alert), metric values that caused the alert to trigger, the action taken (email) with a link to the alert action, impacted geolocations with a score, impacted locations with a score, impacted applications with a score, a number of impacted users (with a link to the users), impacted groups, impacted departments, impacted device types, and the like. The list view of the alerts can be configured to auto refresh on a predetermined or selected time interval, such as once a minute.
The list view can also be sorted based on the columns, and administrators can disable, suppress, and clear single or multiple alerts in a single action. Identifiers, such as color coding, can be included in the list view of alerts to quickly identify certain aspects of the alerts, such as whether the alerts are active, cleared, muted, disabled, or enabled. For example, an active alert can be shown as red for high severity and orange for a warning severity, a cleared alert can be shown as green, a muted alert can be shown as blue, and a disabled alert can be shown as gray.

FIG. 40 is a GUI of an alert detail page. The alert detail page can be reached, for example, by selecting a link from a notification or by selecting the alert from the list view of alerts. The alert detail page can include a map illustrating the impacted geolocations along with a number of events that occurred at each location, which can be an overlay on the map. The alert detail page can also include a list view of the locations (such as defined fences) with the number of users impacted and can include a list view of the users impacted. The list view of the users impacted can include columns for a user identification, the device, the department, the location, the geolocation, the operating system, the operating system version, and the like. The alert detail page can also list the departments, geolocations, and locations impacted with a number of events that occurred for each department, geolocation, and location. The alert detail page can further list the rules (expression triggers) that define the events that trigger the alert. Again, alert actions can include sending email notifications to one user or a list of users, sending one or more webhooks, and sending one or more notifications via third-party integration, such as by sending alert data to an external event, incident management, or operations center system such as ServiceNow, Slack, PagerDuty, and the like.
Email notifications can rely on the email configurations of the cloud system 800 and thus may not require configuration of the email server and authentication within the GUI for configuration of the alerts. Establishing an alert for email notifications can simply require the one or more email addresses that the alert will be sent to. The webhooks can make an HTTP POST request on a configured URL and can pass formatted alert data in the POST request body. The alert data can include an alert ID, a callback URL to the alert information (GET URL), an alert owner, an alert type, an impacted application, a number of impacted users, and the like. The data for the webhook can be XML, JSON, CSV, and the like. The GUI can include screens to create the webhook. The webhook configuration screen can include a name, a URL, and an authentication type, such as basic authentication (user/password fields), bearer token (token field), and the like. The webhook configuration screen can also provide a section for testing the webhook, which upon testing the webhook provides an indication of whether the test was successful or failed.

FIG. 41 is a GUI of a UX dashboard. The UX dashboard can include an alert event and impacted user volume on all UX trends and metrics charts with the ability to drill down to the alerts views or to the user view. Further, the geolocations map can include an alerts view that, when selected, can display a heatmap of the alerts across geolocations. A size of the alert circle, along with a number displayed therein, can identify the number of alerts at the geolocation. An indicator, such as a color, can indicate a severity of the alerts. As can be seen in FIG. 41, upon scrolling over or selecting an alert circle on the map, further information including a name of the geolocation, the number of high severity alerts, and a total number of alerts can be displayed. Selecting the further information can link to a detailed view of the alerts at the corresponding geolocation.
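The webhook action described above — an HTTP POST on a configured URL with formatted alert data in the request body — can be sketched as follows. The field names follow the list in the text (alert ID, callback URL, alert owner, alert type, impacted application, number of impacted users), but the exact payload schema, the helper name, and the use of JSON with a bearer token are illustrative assumptions.

```python
import json

def build_webhook_post(url, alert, bearer_token=None):
    """Assemble the HTTP POST request a webhook action would make on the
    configured URL, passing formatted alert data in the request body.
    Returns a plain dict rather than sending the request."""
    headers = {"Content-Type": "application/json"}
    if bearer_token:
        # Bearer-token authentication is one of the configurable auth types.
        headers["Authorization"] = "Bearer " + bearer_token
    body = json.dumps({
        "alert_id": alert["id"],
        "callback_url": alert["callback_url"],   # GET URL to alert information
        "alert_owner": alert["owner"],
        "alert_type": alert["type"],
        "impacted_application": alert["application"],
        "impacted_users": alert["impacted_users"],
    })
    return {"method": "POST", "url": url, "headers": headers, "body": body}
```

The returned dict could then be handed to any HTTP client; the same payload could equally be serialized as XML or CSV, as the text notes.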
Alerts can be created as private, where only the administrator that created the alert, or administrators given permission, can view the alert, or can be created as public, where the alert is shared across and viewable by all administrators. The GUI can also include a suppression option, which allows administrators to suppress an alert for a predetermined or selected period of time. The GUI can also include a testing option. Under the testing option, a screen is displayed which allows an alert, being created or already created, to be tested against historic data of the cloud system 800. The historic data can be filtered by timeframe, location, geolocation, and the like.

It will be appreciated that some embodiments described herein may include one or more generic or specialized processors ("one or more processors") such as microprocessors; Central Processing Units (CPUs); Digital Signal Processors (DSPs); customized processors such as Network Processors (NPs) or Network Processing Units (NPUs), Graphics Processing Units (GPUs), or the like; Field Programmable Gate Arrays (FPGAs); and the like, along with unique stored program instructions (including both software and firmware) for control thereof to implement, in conjunction with certain non-processor circuits, some, most, or all of the functions of the methods and/or systems described herein. Alternatively, some or all functions may be implemented by a state machine that has no stored program instructions, or in one or more Application Specific Integrated Circuits (ASICs), in which each function or some combinations of certain of the functions are implemented as custom logic or circuitry. Of course, a combination of the aforementioned approaches may be used.
For some of the embodiments described herein, a corresponding device such as hardware, software, firmware, and a combination thereof can be referred to as "circuitry configured or adapted to," "logic configured or adapted to," etc., to perform a set of operations, steps, methods, processes, algorithms, functions, techniques, etc. as described herein for the various embodiments. Moreover, some embodiments may include a non-transitory computer-readable storage medium having computer readable code stored thereon for programming a computer, server, appliance, device, processor, circuit, etc., each of which may include a processor to perform functions as described and claimed herein. Examples of such computer-readable storage mediums include, but are not limited to, a hard disk, an optical storage device, a magnetic storage device, a ROM (Read Only Memory), a PROM (Programmable Read Only Memory), an EPROM (Erasable Programmable Read Only Memory), an EEPROM (Electrically Erasable Programmable Read Only Memory), Flash memory, and the like. When stored in the non-transitory computer-readable medium, software can include instructions executable by a processor or device (e.g., any type of programmable circuitry or logic) that, in response to such execution, cause a processor or the device to perform a set of operations, steps, methods, processes, algorithms, functions, techniques, etc. as described herein for the various embodiments. Although the present disclosure has been illustrated and described herein with reference to preferred embodiments and specific examples thereof, it will be readily apparent to those of ordinary skill in the art that other embodiments and examples may perform similar functions and/or achieve like results. All such equivalent embodiments and examples are within the spirit and scope of the present disclosure, are contemplated thereby, and are intended to be covered by the following claims.
11863410

DETAILED DESCRIPTION

Embodiments of the disclosure are directed to a system configured to provide operational visibility of a network that spans one or more cloud computing environments. According to one embodiment, the system may include a software instance that is operating in one or more cloud computing resources and is configured to collect information and render a graphic user interface (GUI) that provides an interactive, visual rendering of the connectivity between constructs of a network spanning multiple (two or more) cloud computing environments (hereinafter, a "multi-cloud computing environment" or a "multi-cloud network"). In other embodiments, the system includes the software instance and a controller configured to manage constructs deployed in one or more cloud computing environments, such as within a multi-cloud environment, and communicate with the software instance. As will be discussed below in further detail, the software instance may query the controller for information using one or more Application Programming Interface (API) calls to retrieve information stored by the controller detailing status information of each construct managed by the controller. The controller obtains such information from one or more gateways deployed within a multi-cloud network, where the gateway(s) are configured to transmit this information to the controller on a periodic (or aperiodic) basis. It should be understood that, as discussed herein, the term "multi-cloud networks" refers to a plurality of cloud networks, where each cloud network may constitute a public cloud network provided by a different cloud computing environment resource provider (hereinafter, "cloud provider"). As is known in the art, a controller may be configured to program each gateway to control routing of network traffic, such as by providing instructions to gateways as to how network traffic is routed among various gateways.
As illustrative examples, the controller may instruct a gateway as to whether a virtual machine (VM) from one subnetwork (hereinafter, "subnet") may communicate directly with a VM from another subnet, or how network traffic will flow from a source to a destination within the cloud computing environment managed by the controller. In addition, embodiments of the disclosure discuss instructions provided by the software instance to the controller, which are then transmitted to one or more gateways by the controller and include instructions to transmit network data from the gateway to a routable address (e.g., an Internet Protocol "IP" address, etc.) of the software instance. Therefore, as a general embodiment, the software instance may query the controller for data indicating a status and metadata of each construct managed by the controller and also receive network data from one or more gateways. The software instance includes logic that, upon execution by one or more processors (e.g., being part of the cloud computing resources), generates various visualizations that are a combination of the construct status and metadata (collectively, "construct metadata") and the network data. The visualizations may be interactive and provided to users such as network administrators, information technology (IT) professionals, or the like. Additionally, the visualizations may be configured to receive user input, which causes the logic of the software instance ("topology system logic") to alter the visualizations.
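The combination step described above — joining the construct metadata queried from the controller with the network data received from gateways into visualization-ready records — can be sketched as below. The function name, record shapes, and keying by construct identifier are illustrative assumptions, not the actual controller API.

```python
def combine_for_visualization(construct_metadata, network_data):
    """Join per-construct status/metadata (from controller API queries) with
    per-construct network data (reported by gateways), keyed by construct id,
    producing the records a visualization would render."""
    combined = {}
    for construct_id, metadata in construct_metadata.items():
        combined[construct_id] = {
            "metadata": metadata,
            # A construct with no gateway-reported data gets an empty record.
            "network": network_data.get(construct_id, {}),
        }
    return combined
```

The topology system logic could then render each combined record as a node in the topology mapping, with links drawn from the network data.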
As discussed below and illustrated in the accompanying drawings, the visualizations may include, but are not limited or restricted to, a dashboard view providing overall status and health of the network as well as specific network parameters; a dynamic topology mapping that provides a visual rendering of each construct and links that identify communications between the constructs; and a network flow visualization providing various illustrations detailing how network traffic is flowing (or has flowed) through the cloud computing environment managed by the controller. Each of the visualizations may provide data spanning a multi-cloud network. In some embodiments, responsive to the user input, the topology system logic may generate tags for one or more of the constructs via the topology mapping visualization and store those tags for searching. For example, further user input may be received causing the topology system logic to search the numerous constructs managed by the controller and display the tagged constructs, as well as any links therebetween, via the topology mapping. In yet other embodiments, responsive to received user input including one or more tags as search items, the topology system logic may generate visualizations illustrating the network flow of the corresponding tagged construct(s). By querying the controller for construct metadata and receiving network data from one or more gateways, the topology system logic may generate the exemplary visualizations described above, and those shown in the accompanying drawings, that illustrate the flow of network traffic associated with one or more tagged constructs. As is noted throughout, the illustrated flow of network traffic may correspond to constructs deployed in multiple cloud networks.
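The tag-and-search behavior described above can be sketched as follows; the in-memory tag store and its method names are hypothetical stand-ins for however the topology system logic actually persists tags.

```python
class TagStore:
    """Store user-assigned tags per construct and search constructs by tag,
    as the topology mapping does when filtering the displayed constructs."""
    def __init__(self):
        self._tags = {}  # construct id -> set of tag labels

    def tag(self, construct_id, label):
        """Attach a meaningful label (e.g., from user input) to a construct."""
        self._tags.setdefault(construct_id, set()).add(label)

    def search(self, label):
        """Return the construct ids carrying the given tag; the caller would
        then display these constructs and any links between them."""
        return sorted(cid for cid, tags in self._tags.items() if label in tags)
```

Because the constructs may reside in different public cloud networks, a single tag search can surface gateways across the whole multi-cloud network.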
Such operability provides numerous advantages to users over the current art by enabling users to tag one or more gateways residing in different public cloud networks with meaningful tags and search for construct parameters, construct status, link status, and the flow of network traffic corresponding to that tag. An additional functionality of the topology system logic is the generation of visualizations that illustrate changes to aspects of the network managed by the controller over time. For example and as discussed below, the topology system logic may store the received data pertaining to the network (the network data and the construct metadata) for given points in time, e.g., t1→ti (where i>1). Upon receiving user input corresponding to a request to display the changes between two points in time, e.g., t1 and t2, the topology system logic compares the stored data for t1 and t2, and generates a visual that highlights the change(s) between the network at t1 and t2. The term "highlight" may refer to any visual indicator or combination of visual indicators, such as color-coding constructs having changed parameters, varying the size of constructs having changed parameters, displaying a graphic (e.g., a ring) around constructs having changed parameters, displaying a window or other image that lists the detected changes in the state of the network, which may span multiple public cloud networks, between time t1 and time t2, or other types of visual indicators.

I. Terminology

In the following description, certain terminology is used to describe features of the invention. In certain situations, the term "logic" is representative of hardware, firmware, and/or software that is configured to perform one or more functions. As hardware, the logic may include circuitry having data processing or storage functionality.
Examples of such circuitry may include, but are not limited or restricted to a microprocessor, one or more processor cores, a programmable gate array, a microcontroller, an application specific integrated circuit, wireless receiver, transmitter and/or transceiver circuitry, semiconductor memory, or combinatorial logic. Alternatively, or in combination with the hardware circuitry described above, the logic may be software in the form of one or more software modules. The software module(s) may include an executable application, an application programming interface (API), a subroutine, a function, a procedure, an applet, a servlet, a routine, source code, a shared library/dynamic load library, or one or more instructions. The software module(s) may be stored in any type of a suitable non-transitory storage medium, or transitory storage medium (e.g., electrical, optical, acoustical or other form of propagated signals such as carrier waves, infrared signals, or digital signals). Examples of non-transitory storage medium may include, but are not limited or restricted to a programmable circuit; a semiconductor memory; non-persistent storage such as volatile memory (e.g., any type of random access memory “RAM”); persistent storage such as non-volatile memory (e.g., read-only memory “ROM”, power-backed RAM, flash memory, phase-change memory, etc.), a solid-state drive, hard disk drive, an optical disc drive, or a portable memory device. As firmware, the executable code may be stored in persistent storage. The term “computerized” generally represents that any corresponding operations are conducted by hardware in combination with software and/or firmware. The term “construct” may be construed as a virtual or physical logic directed to a particular functionality such as a gateway, virtual private cloud network (VPC), sub-network, or the like. 
For instance, as an illustrative example, the construct may correspond to virtual logic in the form of software (e.g., a virtual machine), which may be assigned a device-specific address (e.g., a Media Access Control "MAC" address) and/or an IP address within an IP address range supported by a particular IP subnet. Alternatively, in some embodiments, the construct may correspond to physical logic, such as an electronic device that is communicatively coupled to the network and assigned the MAC and/or IP address(es). Examples of electronic devices may include, but are not limited or restricted to, a personal computer (e.g., desktop, laptop, tablet or netbook), a mobile phone, a standalone appliance, a sensor, a server, or an information routing device (e.g., a router, bridge router ("brouter"), etc.). It is contemplated that each construct may constitute at least logic residing as part of a public network, although certain constructs may be deployed as part of an "on-premises" (or local) network. The term "gateway" may refer to a software instance deployed within a public cloud network, or within a virtual private cloud network deployed within the public cloud network, that controls the flow of data traffic within and from the public cloud network (e.g., to one or more remote sites including computing devices that may process, store and/or continue the routing of data). Herein, each gateway may operate as a "transit gateway" or "spoke gateway," which are gateways having similar architectures but are identified differently based on their location/configurations within a cloud computing environment. For instance, a "spoke" gateway is configured to interact with targeted instances while a "transit" (or "hub") gateway is configured to further assist in the propagation of data traffic (e.g., one or more messages) directed to a spoke gateway or a computing device within an on-premises network.
The term "network traffic metrics" may refer to measurements of network traffic transmission, including amount, frequency and/or latency. In some embodiments, network traffic metrics may include identification of a source and/or destination (e.g., IP address, originating/destination gateway, originating/destination VPC, originating/destination geographic region, etc.). Further, in some embodiments, network traffic metrics may also refer to analyses performed on, and/or filtering of, measurements of network traffic transmission. The term "controller" may refer to a software instance deployed within a cloud computing environment (e.g., resources of a public cloud network) that manages operability of certain aspects of one or more cloud computing environments spanning across different public cloud networks (multi-cloud network). For instance, a controller may be configured to collect information pertaining to each VPC and/or each gateway instance and to configure one or more routing tables associated with one or more VPCs and/or gateway instances spanning a multi-cloud network to establish communication links (e.g., logical connections) between different sources and destinations. These sources and/or destinations may include, but are not restricted or limited to, on-premises computing devices, gateway instances or other types of cloud resources. The term "message" generally refers to information in a prescribed format and transmitted in accordance with a suitable delivery protocol. Hence, each message may be in the form of one or more packets, frames, or any other series of bits having the prescribed format. The term "link" may be generally construed as a physical or logical communication path between two or more constructs. For instance, as a physical communication path, wired and/or wireless interconnects in the form of electrical wiring, optical fiber, cable, bus trace, or a wireless channel using infrared or radio frequency (RF) may be used.
A logical communication path includes any communication scheme that enables information to be exchanged between multiple constructs. Finally, the terms "or" and "and/or" as used herein are to be interpreted as inclusive, meaning any one or any combination. As an example, "A, B or C" or "A, B and/or C" mean "any of the following: A; B; C; A and B; A and C; B and C; A, B and C." An exception to this definition will occur only when a combination of elements, functions, steps or acts are in some way inherently mutually exclusive. As this invention is susceptible to embodiments of many different forms, it is intended that the present disclosure is to be considered as an example of the principles of the invention and not intended to limit the invention to the specific embodiments shown and described.
II. General Architecture—Topology System
Referring to FIG. 1, a diagram of an exemplary embodiment of a distributed cloud management system 100 is shown, where the cloud computing system features a controller 102 for managing constructs residing in multiple cloud networks and a software instance 138 to visualize the managed constructs (hereinafter, "topology system logic"). More specifically, the controller 102 is configured to manage multiple constructs spanning multiple cloud networks, such as cloud (network) A 104 and cloud (network) B 106. In the exemplary illustration, cloud A 104 provides computing resources ("resources") for a transit gateway 114 in communication with gateways 1181-1182 associated with virtual networks (VNETs) 1161-1162. Cloud B 106 provides resources for a transit gateway 120 in communication with gateways 1241-1242 associated with virtual private clouds (VPCs) 1221-1222. Cloud B 106 further provides resources for a native transit hub 126 in communication with VPCs 128 and 130. According to this embodiment of the disclosure, as shown in FIG. 1, the transit gateways 114, 120 and the native transit hub 126 are in communication with each other.
Thus, it should be clearly understood that the controller 102 is managing several constructs, such as the illustrated gateways, that span multiple cloud networks. Specifically, a first grouping of constructs 108 is deployed within Cloud A 104, and second and third groupings of constructs 110, 112 are deployed within Cloud B 106. The controller 102 utilizes a set of APIs to provide instructions to, and receive data (status information) associated with, each of these constructs, as well as status information pertaining to each connection between these constructs (link state). The construct metadata returned by a construct may depend on the type of construct (e.g., regions, VPCs, gateways, subnets, instances within the VPCs, etc.), where examples of construct metadata may include, but are not limited or restricted to, one or more of the following construct parameters (properties): construct name, construct identifier, encryption enabled, properties of the VPC associated with that construct (e.g., VPC name, identifier and/or region, etc.), cloud properties in which the construct is deployed (e.g., cloud vendor in which the construct resides, cloud type, etc.), or the like. Additionally, the cloud management system 100 includes topology system logic 138 processing on cloud computing resources 136. In some embodiments, the topology system logic 138 may be logic hosted on a user's Infrastructure as a Service (IaaS) cloud or multi-cloud environment. As one example, the topology system logic 138 may be launched as an instance within the public cloud networks (e.g., as an EC2® instance in AWS®). As an alternative example, the topology system logic 138 may be launched as a virtual machine in AZURE®. When launched, the topology system logic 138 is assigned a routable address, such as a static IP address, for example.
As shown, the topology system logic 138 is in communication with the controller 102 via, for example, an API that enables the topology system logic 138 to transmit queries to the controller 102 via one or more API calls. The topology system logic 138, upon execution by the cloud computing resources 136, performs operations including querying the controller 102 via API calls for construct metadata in response to a particular event. The particular event may be in accordance with a periodic or aperiodic interval, or may be a triggering event such as a user request for a visualization via user input. In some embodiments, in response to receiving a query via an API call from the topology system logic 138, the controller 102 accesses data stored on or by the controller 102 and returns the requested data via the API to the topology system logic 138. For example, the topology system logic 138 may initiate one or more queries to the controller 102 to obtain topology information associated with the constructs managed by the controller 102 (e.g., a list of all gateways managed by the controller 102, a list of all VPCs or VNETs managed by the controller 102, or other data gathered from database tables) along with status information associated with each construct as described above. Upon receiving the requested construct metadata, the topology system logic 138 performs one or more analyses and determines whether any additional construct metadata needs to be requested. For example, the topology system logic 138 may provide a first query to the controller 102 requesting a list of all gateways managed by the controller 102. In response to receiving the requested construct metadata, the topology system logic 138 determines the interconnections between the gateways listed. Subsequently, the topology system logic 138 may provide a second query to the controller 102 requesting a list of all VPCs managed by the controller.
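The two-phase query sequence described above — gateways first, then VPCs, with the results joined — can be sketched as follows. The `StubController` class and its method names are purely illustrative assumptions standing in for the controller's API; a real deployment would issue API calls over the network instead.

```python
# Sketch: query the controller for gateways, then for VPCs, and associate
# each VPC with its gateways. The ControllerClient-style interface below is
# hypothetical; it stands in for the controller's actual API.

class StubController:
    """Stand-in for the controller API, used purely for illustration."""
    def list_gateways(self):
        return [{"name": "spoke-gw-1", "vpc_id": "vpc-a"},
                {"name": "transit-gw-1", "vpc_id": "vpc-b"}]

    def list_vpcs(self):
        return [{"id": "vpc-a", "cidr": "10.0.0.0/16"},
                {"id": "vpc-b", "cidr": "10.1.0.0/16"}]

def build_vpc_associations(controller):
    """Query gateways, then VPCs, and map each VPC id to its gateway names."""
    gateways = controller.list_gateways()   # first query: all gateways
    vpcs = controller.list_vpcs()           # second query: all VPCs
    assoc = {vpc["id"]: [] for vpc in vpcs}
    for gw in gateways:
        assoc.setdefault(gw["vpc_id"], []).append(gw["name"])
    return assoc

assoc = build_vpc_associations(StubController())
```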
In response to receiving the requested construct metadata, the topology system logic 138 determines the associations between each VPC and a corresponding gateway. For example, in some embodiments, the received construct metadata provides detailed information for each gateway, enabling the topology system logic 138 to generate a data object, e.g., a database table of the construct metadata, that represents a gateway. The data objects representing the multiple gateways are cross-referenced to build out a topology mapping based on the parameters of each gateway, which may include, inter alia: cloud network user account name; cloud provider name; VPC name; gateway name; VPC region; sandbox IP address; gateway subnet identifier; gateway subnet CIDR; gateway zone; name of associated cloud computing account; VPC identifier; VPC state; parent VPC name; VPC CIDR; etc. Similarly, the construct metadata is also utilized to generate a data object representing each VPC object and each subnet object. Additionally, in order to determine whether a connection within the network is between two transit gateways, a separate API call may be utilized by the topology system logic 138 to query the controller 102 for a listing of all transit gateways. Thus, the topology system logic 138 is then able to determine whether a connection between a first gateway and a second gateway is between two transit gateways. In some embodiments, as will be discussed below, the connections between transit gateways and the connections between a spoke gateway and a transit gateway may be represented visually in two distinct ways. In addition to receiving the construct metadata from the controller 102, the topology system logic 138 may also receive network data from one or more gateways managed by the controller 102.
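The cross-referencing step described above — checking each link's endpoints against a separately queried list of transit gateways — can be sketched as follows; the data shapes are illustrative assumptions.

```python
# Sketch: classify a connection as transit-to-transit by cross-referencing
# the link endpoints against a separately queried list of transit gateways,
# so the two link types can later be drawn differently.

def classify_links(links, transit_gateways):
    """Label each (gw_a, gw_b) link as 'transit-transit' or 'spoke-transit'."""
    transit = set(transit_gateways)
    labeled = []
    for a, b in links:
        kind = "transit-transit" if a in transit and b in transit else "spoke-transit"
        labeled.append((a, b, kind))
    return labeled

links = [("transit-gw-1", "transit-gw-2"), ("spoke-gw-1", "transit-gw-1")]
labeled = classify_links(links, ["transit-gw-1", "transit-gw-2"])
```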
For example, the network data may include, for each network packet, but is not limited or restricted to: an ingress interface, a source IP address, a destination IP address, an IP protocol, a source port for UDP or TCP, a destination port for UDP or TCP, a type and code for ICMP, an IP "Type of Service," etc. In one embodiment, the network data may be transmitted to the topology system logic 138 from a gateway using an IP protocol, for example, UDP. In some embodiments, the network data is collected and exported via the NetFlow network protocol. In order to configure a gateway to transmit the network data to the topology system logic 138, the topology system logic 138 may provide instructions to the controller 102, which in turn provides the instructions to each gateway managed by the controller 102. The instructions provide the IP address of the topology system logic 138, which is used as the destination IP address for addressing the transmission of the network data. As will be discussed in detail below, the topology system logic 138 may generate a visualization platform comprising one or more interactive display screens. These display screens may include a dashboard, a topology mapping and a network flow visualization. Additionally, the visualization platform may be configured to receive user input that causes filtering of the displayed data. For example, and still with reference to FIG. 1, the topology system logic 138 may generate a topology mapping visualization of the connections linking the constructs detected by the controller 102, which are illustrated by the constructs within a logical region 132 represented by Cloud A 104 and Cloud B 106.
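A minimal representation of the per-packet fields listed above might look like the sketch below. The comma-separated text layout is an assumption made for illustration only; actual NetFlow export uses a binary wire format defined by that protocol.

```python
# Sketch: a minimal record for the per-packet network data fields listed above
# (ingress interface, source/destination address and port, protocol). The
# text layout parsed here is an assumption; real NetFlow export is binary.

from dataclasses import dataclass

@dataclass
class FlowRecord:
    ingress_if: str
    src_ip: str
    dst_ip: str
    protocol: str       # e.g. "TCP", "UDP", "ICMP"
    src_port: int
    dst_port: int

def parse_line(line):
    """Parse one comma-separated record line into a FlowRecord."""
    iface, src, dst, proto, sport, dport = line.split(",")
    return FlowRecord(iface, src, dst, proto, int(sport), int(dport))

rec = parse_line("eth0,10.0.0.5,10.1.0.9,TCP,443,51712")
```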
Additionally, the topology system logic 138 may generate various graphical user interfaces (GUIs) that illustrate network traffic flows, traffic flow heat maps, packet capture, network health, link latency, encryption, firewalls, etc., of network traffic flowing between, to and from constructs managed by the controller 102, as illustrated by a second logical region 134. Embodiments of the disclosure offer numerous advantages over current systems that merely provide a dashboard illustrating parameters of a controller, as current systems do not provide the ability to visualize connections between constructs deployed across multiple cloud networks, the state of resources and connections between resources for multiple clouds, or the flow of network data through constructs spanning multiple clouds. As one example, an enterprise network may utilize resources deployed in a plurality of cloud networks, and an administrator of the enterprise network may desire to obtain a visualization of the status of all constructs and connections associated with these resources. However, because the enterprise network spans multiple cloud networks, conventional systems fail to provide such a solution. By merely obtaining a textual representation of a status of each construct within a single cloud (e.g., through a command line interface), an administrator is unable to obtain a full view of the constructs, the connections therebetween and the status of each for the entire enterprise network. Further, anomalous or malicious network traffic patterns may not be detectable in the manner provided by current systems. As used herein, a visualization (or visual display) of the constructs, the connections therebetween and the status of each is referred to as a topology mapping.
Current systems fail to provide a topology mapping across multiple cloud networks and fail to allow an administrator to search across multiple cloud networks or visualize how changes in a state of a construct or connection in a first cloud network affect the state of a resource or connection in a second cloud network. In some embodiments, the topology mapping may automatically change as a state of a construct or connection changes, or upon receipt of construct metadata updates in response to certain events such as at periodic time intervals (e.g., a "dynamic topology mapping"). In some embodiments, a network may be deployed across multiple cloud networks using a plurality of controllers to manage operability of the network. In some such embodiments, each controller may gather the information from the network and constructs which it manages, and a single controller may obtain all such information, thereby enabling the visualization platform to provide visibility across a network (or networks) spanning multiple controllers. Referring to FIG. 2A, an exemplary illustration of a logical representation of the controller 102 deployed within the cloud management system 100 is shown in accordance with some embodiments. The controller 102, as noted above, may be a software instance deployed within the cloud network to assist in managing operability of constructs within multiple public cloud networks. According to this embodiment, the controller 102 may be configured with certain logic modules, including a VPC gateway creation logic 200, a communication interface logic 202 and a data retrieval logic 204. The controller 102 may also include a routing table database 206. In some embodiments, the gateway creation logic 200 performs operations to create a gateway within a VPC, including creating a virtual machine within the VPC, providing configuration data to the virtual machine, and prompting initialization of the gateway based on the configuration data.
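The three gateway-creation steps just described — launch a virtual machine in the VPC, provide configuration data, then prompt initialization — can be sketched as below. The provider interface and all names are hypothetical; this is not the actual gateway creation logic.

```python
# Sketch: the three gateway-creation steps attributed to the VPC gateway
# creation logic. The RecordingProvider class is a hypothetical stand-in
# that records each step so the sequence can be inspected.

class RecordingProvider:
    """Stand-in cloud provider used purely for illustration."""
    def __init__(self):
        self.steps = []
    def launch_vm(self, vpc_id, image):
        self.steps.append(("launch", vpc_id, image))
        return "vm-123"
    def push_config(self, vm_id, config):
        self.steps.append(("configure", vm_id, config["gateway_name"]))
    def init_gateway(self, vm_id):
        self.steps.append(("initialize", vm_id))

def create_gateway(provider, vpc_id, image, config):
    """Create a gateway in a VPC following the three steps described above."""
    vm_id = provider.launch_vm(vpc_id, image)   # 1. create the virtual machine
    provider.push_config(vm_id, config)         # 2. provide configuration data
    provider.init_gateway(vm_id)                # 3. prompt initialization
    return vm_id

provider = RecordingProvider()
vm_id = create_gateway(provider, "vpc-a", "gateway-image",
                       {"gateway_name": "spoke-gw-1"})
```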
In one embodiment in which the cloud computing resources utilized are AWS®, the VPC gateway creation logic 200 launches a virtual machine within a VPC, the virtual machine being an AMAZON® EC2 instance. The virtual machine is launched using a pre-configured virtual machine image published by the controller 102. In this particular embodiment, the virtual machine image is an Amazon Machine Image (AMI). When launched, the virtual machine is capable of receiving and interpreting instructions from the controller 102. The communication interface logic 202 may be configured to communicate with the topology system logic 138 via an API. The controller 102 may receive queries from the topology system logic 138 via one or more API calls and respond with requested data via the API. The data retrieval logic 204 may be configured to access each construct managed by the controller 102 and obtain construct metadata therefrom. Alternatively, or in addition, the data retrieval logic 204 may receive such construct metadata that is transmitted (or "pushed") from the constructs without the controller 102 initiating one or more queries (e.g., API calls). The routing table database 206 may store VPC routing table data. For example, the controller 102 may configure a VPC routing table associated with each VPC to establish communication links (e.g., logical connections) between a transit gateway and cloud instances associated with a particular instance subnet. A VPC routing table is programmed to support communication links between different sources and destinations, such as an on-premises computing device, a cloud instance within a particular instance subnet, or the like. Thus, the controller 102 obtains and stores information that reveals certain properties of resources (e.g., constructs such as gateways, subnets, VPCs, instances within VPCs, etc.) within the purview of the controller 102, as well as status information pertaining to the connections (communication links) between these resources.
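A VPC routing table of the kind stored in the routing table database can be sketched as routes mapping destination CIDRs to next hops, resolved by longest-prefix match. The table layout and hop names are illustrative assumptions, not the controller's actual schema.

```python
# Sketch: a VPC routing table mapping destination CIDRs to next hops (e.g.,
# a transit gateway), with longest-prefix-match lookup. Illustrative only.

import ipaddress

def resolve_next_hop(routing_table, ip):
    """Return the next hop for the most specific route covering `ip`."""
    addr = ipaddress.ip_address(ip)
    best = None
    for cidr, hop in routing_table.items():
        net = ipaddress.ip_network(cidr)
        if addr in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, hop)
    return best[1] if best else None

table = {"10.1.0.0/16": "transit-gw-1",   # reach peer VPC via transit gateway
         "10.1.2.0/24": "spoke-gw-2",     # more specific subnet route
         "0.0.0.0/0": "igw-1"}            # default route

hop = resolve_next_hop(table, "10.1.2.7")
```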
Referring to FIG. 2B, an exemplary illustration of a logical representation of the topology system logic 138 deployed within a cloud computing platform is shown in accordance with some embodiments. The topology system logic 138 may be a software instance deployed using the cloud computing resources 136 and is configured to communicate with the controller 102 and each of the gateways managed by the controller 102. The topology system logic 138 is configured with certain logic modules, including a tagging logic 208, a tags database 210, an interface generation logic 212, a communication interface logic 214, and a topology snapshot logic 216. Additionally, the topology system logic 138 may include a snapshot database 218, a construct metadata database 220 and a network data database 222. In some embodiments, the tagging logic 208, upon execution by one or more processors, performs operations as discussed below with respect to FIGS. 4A-5A and 7A-7B. In some embodiments, the tags generated by the tagging logic 208 may be stored in the tags database 210. In some embodiments, the interface generation logic 212, upon execution by one or more processors, performs operations as discussed below that cause generation of exemplary interactive user interfaces as illustrated in FIGS. 4A-5G. In some embodiments, the communication interface logic 214, upon execution by one or more processors, performs operations as discussed herein pertaining to querying a controller for construct metadata, receiving the requested construct metadata and receiving the network data from one or more gateways managed by the controller. In some embodiments, the received construct metadata and network data may be stored in the construct metadata database 220 and the network data database 222 (which may be separate or a combined database).
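The tagging behavior attributed to the tagging logic — attaching user-defined tags to gateways across clouds and searching constructs by tag — might be sketched as follows. The dictionary-of-sets storage is an assumption standing in for the tags database.

```python
# Sketch: tag gateways residing in different clouds with meaningful tags
# and search constructs by tag. Storage layout is an illustrative stand-in
# for the tags database.

def tag_construct(tags_db, construct_id, tag):
    """Associate a tag with a construct id."""
    tags_db.setdefault(tag, set()).add(construct_id)

def find_by_tag(tags_db, tag):
    """Return the ids of all constructs carrying `tag`, across clouds."""
    return sorted(tags_db.get(tag, set()))

tags_db = {}
tag_construct(tags_db, "aws-gw-1", "production")
tag_construct(tags_db, "azure-gw-3", "production")
tag_construct(tags_db, "gcp-gw-2", "staging")

prod = find_by_tag(tags_db, "production")
```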
In some embodiments, the topology snapshot logic 216, upon execution by one or more processors, performs operations as discussed below with respect to FIGS. 4G-4H and 8. In some embodiments, the snapshots (recorded data) generated by the topology snapshot logic 216 may be stored in the snapshot database 218.
III. Exemplary User Interfaces—Topology System Visualization Platform
The exemplary user interfaces illustrated in FIGS. 3A-5G may be configured by the topology system logic 138 to be rendered and displayed on various display screens and via various applications. For example, each of the user interfaces illustrated in FIGS. 3A-5G may be configured to be displayed through a web browser on a computer display screen, a laptop, a mobile device, or any other network device that includes a web browser. Additionally, each of the user interfaces illustrated in FIGS. 3A-5G may be configured to be displayed through a dedicated software application installed and configured to be executed on any of the network devices described above. For example, the topology system logic 138 may be configured to provide the data and user interfaces described herein to a software application (known in the art as an "app") that may be installed and configured to be executed by one or more processors of a network device. Thus, upon execution, the app causes the user interfaces described herein to be rendered on the display screen of the network device (or an associated display screen).
1. Dashboard
Referring now to FIGS. 3A-3C, graphical user interface (GUI) screens (or "interface screens") displaying portions of a dashboard of a Topology System visualization platform ("visualization platform"), with each portion configured to illustrate information obtained or determined by the Topology System, are shown according to some embodiments.
The interface screens of FIGS. 3A-3C may collectively comprise a "dashboard" 300 that displays various attributes pertaining to a network that is deployed across one or more cloud providers, and notably across multiple cloud providers. For example, the dashboard 300 as shown in FIG. 3A includes several display portions 302, 306, and 308. The navigation panel 304 is also shown as part of the visualization platform generated by the topology system logic 138. The display portion 302 displays information pertaining to constructs managed by a controller, e.g., the controller 102 of FIG. 1, with the constructs deployed in one or more cloud networks. The information displayed may include, but is not limited or restricted to, the number of gateways deployed, the number of current virtual private network (VPN) users, the number of user accounts, the number of transit gateways (TGWs), the number of network connections (optionally filtered according to cloud computing service), etc. The display portion 306 of FIG. 3A includes a listing of virtual data centers comprising resources of the network, again optionally spanning multiple cloud networks. Specifically, the display portion 306 includes user input fields (e.g., checkboxes) configured to receive user input indicating whether data displayed by the dashboard 300 is filtered by one or more particular cloud networks (e.g., AWS®, GOOGLE® CLOUD PLATFORM® (GCP), AZURE®, ORACLE CLOUD INFRASTRUCTURE® (OCI)). In some embodiments, a virtual data center is a pool of cloud computing resources that may be hosted on a public cloud. Further, display portion 308 illustrates a world map including a graphical representation, e.g., such as the icon 309, for each virtual data center listed in the display portion 306, with a position on the world map signifying its geographical location.
The display portion 308 may be filtered in accordance with the selection of "Filter By Cloud" provided in the display portion 306 and may be configured to receive user input to adjust the magnification of the map (e.g., "zoom in" or "zoom out"). The navigation panel 304 includes links to each of the general visualizations provided by the visualization platform, including the dashboard 300, the topology mapping 400 (of FIGS. 4A-4E) and the network flow visualization 500 (of FIGS. 5A-5G). Referring now to FIG. 3B, an illustration of a portion of the dashboard 300 displaying a plurality of graphs and charts is shown through a plurality of display portions 310 and 312. Each of the display portions 310 and 312 displays a distribution of resources throughout a multiple cloud deployment. For instance, as an illustrative embodiment, the display portion 310 features a number of bar graphs illustrating metrics directed to resources managed by the controller; however, as should be understood by review of the drawings accompanying this disclosure, bar graphs are merely one type of illustration that may be utilized to present data, and the disclosure is not intended to be limited to the specific graphical representation types shown.
Display portion 310 illustrates that the data displayed on the dashboard corresponds to constructs and network traffic spanning multiple cloud networks by specifically displaying "Accounts by Cloud," "Gateways by Cloud" and "Transit Gateways by Cloud." Similarly, the display portion 312 provides graphical representations directed toward gateway metrics, including "Gateways by Type," "Gateways by Region" and "Gateways by Size." In some embodiments, the gateway metrics include one or more of a total of gateways deployed, a number of virtual private network (VPN) users, a number of user accounts associated with one or more gateways, a number of transit gateways, a number of gateways deployed by a specific cloud computing resource provider, a number of Border Gateway Protocol (BGP) connections, or a number of transit gateway attachments. Thus, FIGS. 3A-3B illustrate various metrics and characteristics of gateways. Further, one or more metrics may be derived from or based on gateway characteristics, which may include one or more of a cloud computing network in which each gateway is deployed, a type of each gateway, a size of each gateway, or a geographic region in which each gateway is deployed. Referring now to FIG. 3C, an illustration of another graphical representation of network functionality and operability, based on data gathered and processed by the topology system logic 138 and displayed as part of the dashboard 300, is shown.
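The per-cloud and per-type gateway counts behind displays like "Gateways by Cloud" and "Gateways by Type" could be derived from per-gateway metadata as sketched below; the field names are assumptions.

```python
# Sketch: derive "Gateways by Cloud" / "Gateways by Type" style counts from
# per-gateway metadata. Field names are illustrative assumptions.

from collections import Counter

def gateway_counts(gateways, key):
    """Count gateways grouped by a characteristic such as cloud, type, or region."""
    return Counter(gw[key] for gw in gateways)

gateways = [{"cloud": "AWS", "type": "transit", "region": "us-east-1"},
            {"cloud": "AWS", "type": "spoke", "region": "us-east-1"},
            {"cloud": "Azure", "type": "spoke", "region": "eastus"}]

by_cloud = gateway_counts(gateways, "cloud")
by_type = gateway_counts(gateways, "type")
```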
More specifically, according to this illustrative embodiment, the display portion 314 provides a graphical representation of network traffic between resources spanning multiple cloud networks for an adjustable time period (e.g., 24 hours). The time period may be adjusted by the topology system logic 138 based on receipt of user input. For example, user input may be received corresponding to selection by the user of a portion of the graph shown. In response to such received user input, the topology system logic 138 may alter the graphical representation to target the selected portion, which now may be represented by a smaller time interval, e.g., 15 minutes, 30 minutes, one hour, etc. In some embodiments, the dashboard 300 (and the other visualizations discussed in FIGS. 4A-5G) is generated as a result of user input requesting such visualizations. In some embodiments, in response to receiving the request, the topology system logic 138 will request the construct metadata as discussed above, and store the construct metadata and the latest network data received from the gateways in a data store (such as the construct metadata database 220 and/or the network data database 222, which, as noted above, may be a single database). Additionally, the topology system logic 138 then generates the requested visualization based on the stored data. In some embodiments, the topology system logic 138 will automatically update the visualizations (e.g., generate an updated visualization and cause the re-rendering of the display screen) at periodic time intervals (e.g., every 30 seconds, every 1 minute, etc.). In some embodiments, an updated visualization will be generated and displayed upon occurrence of a triggering event, such as receipt of user input requesting a refresh of the display screen. The updated visualizations will be updated based on newly received or obtained construct metadata and/or network data since the previous rendering.
2.
Topology Mapping
Referring now to FIGS. 4A-4E, interface screens displaying portions of a topology mapping 400 of the visualization platform generated by the topology system logic 138 are shown according to some embodiments. Specifically, FIGS. 4A-4E illustrate a plurality of constructs that are deployed in one or more cloud networks managed by the controller 102, and connections between various constructs. Referring to FIG. 4A, an exemplary illustration of a topology mapping 400 generated by the topology system logic 138 is shown in accordance with some embodiments. As shown, the topology mapping 400 includes a graphical representation of the constructs managed by a controller, e.g., the controller 102 of FIG. 1. The topology mapping 400 enables a user to visualize all known constructs, which may be deployed on a single cloud or across multiple cloud networks. In the exemplary illustration, it is seen that the constructs displayed are deployed on a plurality of cloud networks including AZURE®, GCP and AWS®. The topology mapping 400 is an interactive display screen configured to receive various forms of user input (e.g., dragging and repositioning constructs, selecting construct(s) to view construct parameters, inputting text, selecting settings, or activating buttons, etc.). The received user input may reposition constructs, cause a display of construct parameters, apply filters, search for constructs, run diagnostics, apply tag(s), etc. As illustrated in FIG. 4A, the construct 402, e.g., the gateway 402, is shown as being selected. In the embodiment shown, the selection of the gateway 402 results in the display of the parameters of the gateway 402 in the display portion 422.
For instance, the display portion 422 may provide the name of the selected construct, "Gateway" (gateway 402), user input buttons 424 and 426 configured to receive user input, and a listing 428 of construct parameters including whether the construct is encrypted, the cloud provider of the construct, the gateway name associated with the construct, the VPC identifier associated with the construct, the cloud type of the construct, the VPC region of the construct, whether the construct is a transit VPC, etc. It should be noted that the construct parameters may correspond to the construct metadata received by the controller 102 as discussed above and are not limited to the parameters illustrated in the drawings. Instead, the disclosure includes all parameters of the selected construct. As is known in the art, a transit VPC operates to connect multiple VPCs and/or remote networks. The topology mapping 400 also illustrates a plurality of connections between various constructs (e.g., illustrated as nodes or vertices). With reference to the selected gateway 402, several connections (communication links) are illustrated, including but not limited to: the link 412 to the gateway 404, the link 414 to the transit gateway 406, a link 416 indirectly linking the gateway 402 to the transit gateway 410, etc. Additionally, varying graphical indicia may indicate a difference in the link type. For example, in some embodiments, a solid link may indicate a link between two spoke gateways or a gateway-to-transit gateway link. Additionally, in some embodiments, a dotted line may indicate a link between two transit gateways. Further, in some embodiments, the links may be color-coded to provide a visual indication as to the state of the link, e.g., green for active and red for inactive.
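The link-drawing conventions just described — dotted for transit-to-transit links, solid otherwise, green for active and red for inactive — reduce to a small style-selection rule, sketched here purely for illustration:

```python
# Sketch: choose the visual style for one link in the topology mapping from
# the conventions described above. Illustrative only.

def link_style(endpoint_a_is_transit, endpoint_b_is_transit, active):
    """Return the (line, color) pair used to draw one link."""
    line = "dotted" if endpoint_a_is_transit and endpoint_b_is_transit else "solid"
    color = "green" if active else "red"
    return line, color

style = link_style(True, True, active=False)
```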
The topology mapping400also illustrates constructs other than a gateway including subnets, such as the subnet418, and virtual data centers, such as the virtual data centers408,420(e.g., representing an AWS® resource and an AZURE® resource, respectively). Referring toFIG.4B, an exemplary illustration of a topology mapping400generated by the topology system logic138illustrating a diagnostic function is shown in accordance with some embodiments. As shown inFIG.4A, the topology mapping400may display a button426(e.g., labeled "diagnostic") configured to receive user input activating the button426, which causes initiation of a diagnostic procedure to be performed by the topology system logic138. Upon activation of the button426, the topology system logic138causes rendering of the display box430, which includes several input fields432-438each configured to receive user input that dictates aspects of the diagnostic procedure. As shown, the topology system logic138is configured to perform the diagnostic procedure on the selected construct (gateway402); however, in other embodiments, the display box430may include an additional user input field configured to receive a construct on which the diagnostic procedure will be performed (or initiated from). Additionally, the topology system logic138may be configured to provide a value indicating link latency located on or adjacent to one or more of the links illustrated inFIG.4A. The topology system logic138may be configured to automatically send data packets (e.g., a ping) and determine the time the packet spent in transmission by analyzing the time the data packet was sent and the time the data packet was received (included in a response packet from the data packet's destination). The link latency for each link may be updated at periodic intervals, e.g., every 30 seconds, 60 seconds, etc., or in response to a triggering event (e.g., receipt of user input indicating a refresh of the visual).
Although a subset of the links illustrated inFIG.4Aincludes an indication of link latency, such may be provided for all links. Advantageously, a visual of link latencies may be used by network administrators to reposition constructs (e.g., terminate and re-launch a virtual machine in a different subnet) in order to improve link latencies. Additionally, the visual of link latencies may be utilized to assess non-compliance with certain Quality of Service (QoS) levels (e.g., as set forth in a contract). Further, the topology system logic138may set latency thresholds and monitor link latencies such that a notification is generated or the topology mapping400is altered when a link latency meets or exceeds a latency threshold (which may correspond to the certain QoS levels referenced above). Referring toFIG.4B, the input fields432-434are configured to receive user input of: a destination, and an interface, respectively. Buttons436-438are configured to be activated via user input corresponding to selection of a diagnostic procedure of either: a ping, or trace route, respectively. Although not shown, in other embodiments, input fields corresponding to selection of other diagnostic procedures may be provided, including a TCP dump and/or a link latency check, for example. As shown inFIG.4B, in response to user input activating the button436(ping), the topology system logic138initiates a procedure in which a ping is transmitted from the selected construct (gateway402) to the destination address provided in field432(e.g., IP address, 8.8.8.8). The results440of the ping are illustrated in real-time.
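The latency measurement and threshold monitoring described above can be sketched as follows. This is a minimal illustration, not the disclosed implementation: `send_probe` is a hypothetical stand-in for the actual ping transmission, and the link and function names are assumptions for this example.

```python
import time

def measure_latency_ms(send_probe):
    """Send one probe (a ping-like callable) and time the round trip."""
    start = time.monotonic()
    send_probe()  # blocks until the response packet arrives
    return (time.monotonic() - start) * 1000.0

def check_links(links, threshold_ms):
    """Return links whose measured latency meets or exceeds the threshold.

    The threshold may correspond to a QoS level; each returned entry would
    drive a notification or an alteration of the topology mapping.
    """
    alerts = []
    for link, probe in links.items():
        latency = measure_latency_ms(probe)
        if latency >= threshold_ms:
            alerts.append((link, latency))
    return alerts

# Example with a stubbed probe standing in for a real ping (~10 ms round trip).
links = {("gw-402", "gw-404"): (lambda: time.sleep(0.01))}
alerts = check_links(links, threshold_ms=5.0)
```

In a deployment, `check_links` would run at the periodic interval mentioned above (e.g., every 30 or 60 seconds) or on a refresh event.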
Thus, via the diagnostic procedures provided by the topology system logic138through the topology mapping400, a user may be provided with results of various diagnostic procedures performed on constructs spanning multiple cloud networks (e.g., a ping procedure may be performed between a first gateway deployed in a first cloud and a second gateway deployed in a second cloud) with a visual of the results provided via the visualization platform generated by the topology system logic138. Referring toFIG.4C, an exemplary illustration of a topology mapping400′ generated by the topology system logic138illustrating search and filter functions is shown in accordance with some embodiments. Another functionality provided by the topology system logic138is a search within the topology mapping400and a filter of the displayed constructs based on the received user input, e.g., the search term. As shown, the display portion422may include an input field442, e.g., a text box, configured to receive user input corresponding to a search term. In response to receiving user input at the input field442, the topology system logic138performs a search of the constructs displayed in the topology mapping400. The search may be of the stored construct metadata, wherein the search includes one or more queries to the construct metadata database220ofFIG.2B. The data returned from the one or more queries is then used to generate the topology mapping400′, which is a filtered view of the topology mapping400, being a display of only the constructs associated with the search term. For example, the search term may correspond to any of the construct parameters discussed above. The topology system logic138need not receive a specified parameter but may instead just search all construct parameters within the database220for a value corresponding to the search term. It should be understood that the system of the disclosure may be filtered according to multiple search terms or parameters.
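The parameter-agnostic search described above (matching a search term against all stored construct parameters, with support for multiple terms) might be sketched as follows; the data shapes and function names are assumptions for illustration only:

```python
def matches(construct: dict, term: str) -> bool:
    """True if any parameter value of the construct contains the search term."""
    term = term.lower()
    return any(term in str(value).lower() for value in construct.values())

def filter_constructs(constructs, *terms):
    """Keep only constructs matching every supplied search term."""
    return [c for c in constructs if all(matches(c, t) for t in terms)]

# Illustrative construct metadata records (values are hypothetical).
constructs = [
    {"uid": "gw-402", "cloud": "AWS", "region": "us-east-1"},
    {"uid": "gw-404", "cloud": "AZURE", "region": "eastus"},
]
aws_only = filter_constructs(constructs, "aws")
```

The filtered list would then drive generation of a view such as the topology mapping400′, displaying only matching constructs.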
Further, it will be understood that, as discussed throughout, the topology system logic138advantageously stores the construct metadata for constructs spanning multiple cloud networks. Thus, the filtered view provided as a result of the topology system logic138receiving one or more search terms may correspond to a plurality of constructs spanning multiple cloud networks. Referring toFIG.4D, an exemplary illustration of a topology mapping400generated by the topology system logic138illustrating a tagged construct is shown in accordance with some embodiments. As shown, the display portion422includes a user input button424(e.g., labeled "Add tag"), which corresponds to a tagging function performed by the topology system logic138. Responsive to receiving user input activating the button424, a display box444is generated by the topology system logic138and configured to receive further user input corresponding to a tag (e.g., alphanumeric text) that is to be associated with one or more selected constructs. In the illustrative example ofFIG.4D, a single node, i.e., construct, is selected (gateway402). Upon activation of the "add" button via user input included within the display box444, the tag "ccdata" will be generated and associated with the selected construct, the gateway402. The generation and association of the tag with the selected construct includes several operations performed by the topology system logic138. For example, the topology system logic138generates and stores a table, wherein the table may be stored in the tags database210ofFIG.2B. The table includes an association of the tag "ccdata" and a unique identifier of each of the selected constructs, here a unique identifier of the gateway402.
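The tag-to-identifier table described above can be illustrated with the following Python sketch, in which an in-memory dictionary stands in for the tags database210; the helper names are hypothetical:

```python
from collections import defaultdict

# Each tag name maps to the set of unique construct identifiers it labels,
# mirroring the table stored in the tags database.
tag_table = defaultdict(set)

def add_tag(tag: str, *construct_uids: str) -> None:
    """Associate a tag with one or more selected constructs in a single step."""
    tag_table[tag].update(construct_uids)

def constructs_for(tag: str) -> set:
    """Resolve a tag back to unique identifiers, as the search-by-tag query does."""
    return set(tag_table.get(tag, ()))

add_tag("ccdata", "gw-402")
add_tag("Company 1", "gw-402", "gw-406")  # a construct may carry multiple tags
```

Because the table is keyed by tag name, a user searching by tag never needs to recall the long alphanumeric unique identifiers directly.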
Therefore, and as discussed in more detail below, the tagging of a construct enables a user to search the topology mapping400for constructs by their associated tag or tags, which is advantageous to users as they no longer need to remember or search by unique identifiers, which are often long alphanumeric strings. Additionally, when a plurality of constructs are tagged with the same tag, a user may search for the tag, and the topology system logic138will in turn generate a display showing the plurality of constructs associated with the tag provided as a search term. In some embodiments, as shown inFIG.4E, the topology mapping400may be filtered to display only the plurality of constructs associated with the tag provided as a search term. However, in other embodiments, although not shown, the entirety of constructs may be displayed while the plurality of constructs associated with the tag provided as a search term are displayed in a visually-different manner than those constructs not associated with the tag (e.g., highlighted, color-coded, etc.). Referring toFIG.4E, an exemplary illustration of a topology mapping400″ generated by the topology system logic138illustrating the tagging function illustrated inFIG.4Din conjunction with a search by tag function is shown in accordance with some embodiments. As was discussed above, the topology system logic138performs operations to generate a tag based on received user input and associate the tag with one or more selected constructs (e.g., the gateway402as discussed with respect toFIG.4D). In the illustration shown inFIG.4E, it is assumed that a similar set of operations as discussed inFIG.4Dhave been performed and the constructs450, including the gateway402, have been tagged with the same tag, e.g., "ccdata." The topology mapping400″ illustrates a view of the topology mapping400where the displayed constructs have been filtered by a search term, "ccdata," as illustrated by the text448provided to the display box422as user input.
Further, upon receiving the text448as user input, the topology system logic138illustrates the search term in the display box422(e.g., search term446) and further filters the topology mapping400to display the topology mapping400″, which illustrates the constructs associated with the search term446. As referenced above, the grouping of constructs450is assumed to have been tagged with "ccdata"; thus, upon receiving the search term446as user input, the topology system logic138queries the tags database210to retrieve the unique identifier of each construct tagged with "ccdata." Subsequently, the topology system logic138generates and causes display of the topology mapping400″, which includes the grouping of constructs450. In addition,FIG.4Eillustrates a second aspect of the tagging functionality of the topology system logic138, which is to perform operations to tag constructs in a single instance. As shown, the grouping of constructs450appears selected via user input. The topology system logic138is further configured to tag the selected constructs with the user input provided in the display box452, the display of which is a result of the activation of the "add tag" button424, as discussed above. Specifically, the display box452is shown to receive the user input454(Company 1). Therefore, in response to activation of the "add" button within the display box452, the topology system logic138will generate a tag of "Company 1" and associate each of the constructs within the grouping450to the tag of "Company 1." It should be noted that the illustrated example ofFIG.4Ediscloses yet another aspect of the tagging functionality of the topology system logic138, which is that a construct may be associated with multiple tags.
As shown, each construct of the grouping450is associated with at least two tags: "ccdata" and "Company 1." It should be understood that a tag is not merely replacing a construct identifier (such as an IP address) but as multiple resources may have the same tag or tags, tagging constructs allows a user to visualize where a specified subset of constructs is deployed throughout the entire network, which may span across multiple cloud networks. As is illustrated in and will be discussed with respect toFIGS.5A-5G, a search by a tag or tags enables a user to visualize specific network data for constructs associated with the tag or tags. Referring toFIG.4F, an exemplary illustration of a topology mapping400generated by the topology system logic138illustrating an active user tracking function is shown in accordance with some embodiments. The view of the "topology" aspect of the topology mapping400ofFIG.4Fincludes the input button454and a display portion456. The input button454(e.g., labeled "get active users") may be configured to receive user input corresponding to a user request to visualize active users, e.g., associated with a selected gateway such as the gateway452. An active user may be a user that is logged into a virtual private network (VPN) having access to resources provided by a cloud network. The display portion456may be configured to receive user input corresponding to a selection and initiation of a tracking function of a selected active user, e.g., the active user458, and whether the tracking function is to track network traffic of the selected active user with the selected active user being the source or destination. More specifically, as part of the construct metadata, the topology system logic138receives information pertaining to active users utilizing resources managed by the controller102.
Upon activation of the input button454via user input, the topology system logic138may query the construct metadata database220for active users pertaining to a selected gateway. Upon retrieving the active users of the selected gateway, the topology system logic138causes alteration of the topology mapping400by displaying a graphical representation or other indicia of the active users associated with the selected gateway. However, in some embodiments, a gateway need not be selected such that the topology system logic138retrieves active users for each gateway managed by the controller102. In the exemplary embodiment illustrated inFIG.4F, it is assumed that the gateway452is selected and that the input button454has been activated via user input. A plurality of active users are shown logged into a VPN associated with the gateway452including the active user458. Further,FIG.4Fillustrates that the display portion456has received user input corresponding to a selection of tracking network traffic such that network traffic having a destination address of the active user458will be tracked by the topology system logic138. Tracking of the network traffic may include monitoring the source IP address or the destination IP address of each data packet entering or exiting the selected gateway. In some embodiments, the tracked network traffic may be displayed using a graphical representation adjacent and/or connected to the graphical representation of the active user458. As shown, the tracked network traffic having a destination IP address equal to the IP address of the active user458is illustrated via graphical representations4601-4603with each including the source IP address of the incoming data packet. In alternative embodiments, the tracked network traffic may be provided in a separate display portion adjacent to the topology mapping and/or in a log stored by the topology system logic138.
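The per-user traffic tracking described above (filtering gateway packets by whether the selected active user is the source or the destination) can be sketched minimally as follows; the packet shape and IP addresses are illustrative assumptions:

```python
def track_traffic(packets, user_ip, direction="destination"):
    """Filter gateway packets whose source or destination matches the active user."""
    key = "dst" if direction == "destination" else "src"
    return [p for p in packets if p[key] == user_ip]

# Hypothetical packets observed entering/exiting the selected gateway.
packets = [
    {"src": "52.1.2.3", "dst": "10.40.0.5"},
    {"src": "10.40.0.5", "dst": "8.8.8.8"},
    {"src": "34.9.8.7", "dst": "10.40.0.5"},
]
inbound = track_traffic(packets, "10.40.0.5")   # active user as destination
sources = [p["src"] for p in inbound]            # shown next to the user node
```

The collected source addresses correspond to what the graphical representations (e.g.,4601-4603) would display adjacent to the active user.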
Referring toFIGS.4G-4H, exemplary illustrations of the topology mappings400′″ and400″″ generated by the topology system logic138illustrating a replay function are shown in accordance with some embodiments. The topology system logic138may be configured with the functionality to save a state of each construct and connection managed by the controller102at a given time instance, either periodically or when user input is received to indicate such a save operation. The record of the state of each construct and connection for a given time instance may collectively be referred to as a "snapshot." The state of each construct may include a record of each parameter associated with the construct as discussed herein. Further, the topology system logic138may be configured to determine a difference, if applicable, of states for corresponding constructs and corresponding connections between a first snapshot and a second, subsequent snapshot. The topology system logic138may then generate an interface screen that illustrates the differences in state. The illustration ofFIG.4Gillustrates a snapshot of the topology mapping400at time instance t1(the topology mapping400′″) and the illustration ofFIG.4Hillustrates an interface, the topology mapping400″″, displaying a difference between snapshots of the topology mapping400at time instances t1and t2. For example, the topology mapping400″″ provides a visual distinction on constructs466-470indicating a change has taken place between the two time instances. Additionally, the display box464may be included that provides a listing of the differences by construct. 3. Network Flow Visualization Referring now toFIGS.5A-5G, interface screens displaying portions illustrating the flow of network traffic generated by the topology system logic138are shown according to some embodiments.
Specifically,FIGS.5A-5Gillustrate visualizations representing the flow of network traffic ("network flow visualization500") among a plurality of constructs that are deployed in one or more cloud networks managed by the controller102and connections between various constructs. As a brief recap, the dashboard300discussed above is configured to provide visualizations of the cloud computing environment parameters such as the number of active gateways, number of VPN users, details of the virtual data centers associated with the cloud computing environment managed by a controller, locations of the virtual data centers, etc. Additionally, the topology mapping400is configured to provide a visualization of how each construct managed by the controller is connected; thus, providing a visual of interactions of the entire cloud computing environment managed by the controller. Finally, the network flow visualization500is configured to provide visualizations of the network traffic flowing among constructs managed by the controller. Therefore, the visualization platform generated by the topology system logic138that includes the dashboard300, the topology mapping400and the network flow visualization500provides a holistic view of the entire cloud computing environment managed by the controller, which as discussed throughout the disclosure, may span a plurality of cloud networks. i. Overview Referring now toFIG.5A, an exemplary illustration of a visualization of the network flow visualization500generated by the topology system logic138is shown in accordance with some embodiments. As shown, the network flow visualization500includes a plurality of display portions502-508with the display portion508including a plurality of charts5101-510i(wherein i≥1).
Generally, the network flow visualization500is directed to providing various filterable views of how network traffic is flowing (or has flowed) through the cloud computing environment managed by a controller, such as the controller102ofFIG.1. The display portion502represents a header for the network flow visualization500configured to receive user input corresponding to a selection of a redirection to a particular aspect of the network flow visualization500such as: overview, trends, geolocation, flows and records, wherein each will be discussed further below. In particular, the display portion502indicates that the "overview" is the aspect of the network flow visualization500currently being displayed. The display portion504provides several filtering options directed to time periods for which network traffic flow is to be displayed throughout the network flow visualization500. For example, the display portion504includes an input field comprising date selectors configured to receive user input corresponding to a start time and an end time. Additionally, buttons may be provided that enable quick selections via a single click, which upon activation cause the topology system logic138to filter the displayed network traffic flow by a predetermined time period, such as but not limited or restricted to: "last hour," "last day," "last week," "last month," etc. The display portion506is configured to provide additional filter options including filter by a specific category such as, but not limited or restricted to, source address, destination address, flow export (host), source port, destination port, etc., and a corresponding search term ("filter item"). Further, the display portion506displays the active filters, when applicable. In the embodiment shown, the filter446("ccdata") previously discussed inFIG.4Eis currently being applied, which corresponds to a filtering of the data illustrated in the charts5101-510iof the display portion508.
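The combination of a time window (display portion504) and category filters (display portion506) can be sketched as one filtering pass over flow records; the record fields and function name below are assumptions for illustration:

```python
from datetime import datetime, timedelta

def filter_flows(flows, start=None, end=None, **fields):
    """Apply a time window plus category filters (e.g., destination address)."""
    out = []
    for f in flows:
        if start and f["timestamp"] < start:
            continue
        if end and f["timestamp"] > end:
            continue
        # Every supplied category (source/destination address, port, etc.)
        # must match its filter item for the record to be kept.
        if all(f.get(k) == v for k, v in fields.items()):
            out.append(f)
    return out

now = datetime(2023, 1, 2, 12, 0)
flows = [
    {"timestamp": now - timedelta(hours=2), "dst": "10.101.0.52", "bytes": 100},
    {"timestamp": now - timedelta(minutes=10), "dst": "10.101.0.52", "bytes": 250},
    {"timestamp": now - timedelta(minutes=5), "dst": "8.8.8.8", "bytes": 50},
]
# "Last hour" quick selection combined with a destination-address filter.
last_hour = filter_flows(flows, start=now - timedelta(hours=1), dst="10.101.0.52")
```

Multiple filters apply simultaneously, matching the behavior described for the overview aspect.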
Importantly, asFIG.5Aillustrates, the tags generated via the topology mapping400may be utilized as search terms and applied as filters in the network flow visualization500. Therefore, a user may provide user input via the topology mapping400causing the topology system logic138to generate a tag and associate the tag with one or more selected constructs that may be deployed in multiple cloud networks. Further, following the generation and association of the tag, e.g., “ccdata,” the topology system logic138may receive further user input via the network flow visualization500causing the topology system logic138to filter the displayed flow of the network traffic and display only the flow of network traffic among (e.g., between, to and from) the constructs tagged with “ccdata.” In addition, as shown inFIGS.5B-5C, the illustrations of the charts5101-510iof the display portion508may be filtered according to input received via the display portion504, the category selection and search term input fields of display portion506, and/or selection of data displayed in one or more of the charts5101-510i. Referring now toFIG.5B, an exemplary illustration of a visualization of the network flow visualization500generated by the topology system logic138is shown in accordance with some embodiments. As shown, the network flow visualization500is filtered according to the filter514, which corresponds to the selection of a portion of the chart5102ofFIG.5A(e.g., the destination IP address 10.101.0.52). In response to receiving user input corresponding to the selection of the destination IP address 10.101.0.52, the topology system logic138filters the data displayed in each of the charts5101-510ito display network traffic information pertaining to the selection. As shown, the chart5102′ is displayed in an altered visual in comparison to the chart5102ofFIG.5A. It should be understood that multiple filters may be applied simultaneously. ii. 
Trends Referring toFIG.5C, an exemplary illustration of a visualization of the network flow visualization500generated by the topology system logic138directed to illustrating "trends" of the flow of the network traffic is shown in accordance with some embodiments.FIG.5Cillustrates the "trends" aspect of the network flow visualization500, which includes the display portions504-506discussed with respect toFIGS.5A-5B. Additionally, the "trends" aspect may include display portions516-518, which include a graph over time of network traffic in bytes according to destination port name and a chart5181illustrating the network traffic in bytes of a plurality of destination port names. Although not shown, additional graphs and charts similar to those of display portion516may also be displayed for data categories (e.g., those illustrated inFIG.5Bsuch as source IP, destination IP, destination ports and IPs, source ports and IPs, source port, etc.). As shown, the network flow visualization500is filtered according to the filter514, which was generated and applied via the "overview" aspect of the network flow visualization500and discussed above. Thus, the topology system logic138is configured to apply filters that are persistent throughout the network flow visualization500, which means that a filter applied in one aspect, e.g., "overview," will be maintained across other aspects, e.g., "trends," and will filter the data illustrated therein. Referring now toFIG.5D, an exemplary illustration of a visualization of the network flow visualization500generated by the topology system logic138directed to illustrating a filtered view of the graph shown inFIG.5Cis shown in accordance with some embodiments.
As illustrated inFIG.5C, a portion of the graph of the display portion516was selected via the indicators5D-5D.FIG.5Dillustrates that the topology system logic138is configured to receive user input corresponding to selection of a portion of the graph, such as5D-5D, and alter the magnification of the graph (e.g., zoom in) to highlight the selected portion. As the chart5181ofFIG.5Ccorresponds to the graph of the display portion516, the chart5181is shown inFIG.5Das the filtered version5181′ displaying network traffic data according to destination port name filtered by the selection5D-5D. iii. Geolocation Referring toFIG.5E, an exemplary illustration of a visualization of the network flow visualization500generated by the topology system logic138directed to illustrating "geolocation" of the flow of the network traffic is shown in accordance with some embodiments.FIG.5Eillustrates the "geolocation" aspect of the network flow visualization500, which includes at least the display portion506discussed with respect toFIGS.5A-5B(in some embodiments, the display portion504may be included as well). Additionally, the "geolocation" aspect may include display portions520-522. The display portion520illustrates a "heat map," which includes a map of a geographic region, e.g., a world map, that includes visual indicators as to a density of network traffic at various locations on the map. As shown in the illustrative embodiment ofFIG.5E, the heat map520includes visual indicators representing a heat map to illustrate the varying density of network traffic flowing among the constructs managed by the controller102, where the density of network traffic flowing among constructs may comprise heat map information.
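The density computation behind the heat map520can be sketched as a simple aggregation of traffic volume per geolocation; the flow-record shape and locations below are illustrative assumptions, not the disclosed data model:

```python
from collections import Counter

def traffic_density(flows):
    """Aggregate bytes per geolocation to drive the heat-map shading."""
    density = Counter()
    for f in flows:
        density[f["location"]] += f["bytes"]
    return density

# Hypothetical flow records already resolved to (country, city) locations.
flows = [
    {"location": ("US", "Ashburn"), "bytes": 500},
    {"location": ("DE", "Frankfurt"), "bytes": 200},
    {"location": ("US", "Ashburn"), "bytes": 300},
]
density = traffic_density(flows)
hottest = density.most_common(1)[0]  # location shaded most intensely on the map
```

The same aggregation, applied after any active filters, would also feed the per-country/per-city charts of the display portion522.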
Further, the charts5241-5243provide additional graphical representations of the network traffic data shown in the heat map520(e.g., network traffic in bytes per country and/or city, network traffic in bytes per destination port and source port, network traffic in bytes per destination IP and source IP). It should be understood that although three (3) charts are illustrated, an alternative number may be illustrated such as one (1), two (2) or more than three (3). Additionally, heat map information may include results of applying various filters to the network traffic as illustrated in at leastFIG.5E. Specifically, the topology system logic138determines the density of the network traffic flowing among the constructs based on the construct metadata and the network data received from the controller102and the gateways managed by the controller102, respectively. In some embodiments, the illustration shown inFIG.5Emay be updated on a periodic or aperiodic basis (e.g., in response to a triggering event such as received user input initiating a refresh). In a similar manner as was discussed above with respect to the user interfaces ofFIGS.3A-5D, the heat map520and the charts of the display portion522may be filtered in various manners such as through receipt of user input via the display portion506, selection of a portion of any chart of the display portion522, or via a persistent filter applied via a different "aspect" of the network flow visualization500. iv. Flows Referring toFIG.5F, an exemplary illustration of a visualization of the network flow visualization500generated by the topology system logic138directed to illustrating the "flows" aspect of the network traffic is shown in accordance with some embodiments.FIG.5Fillustrates the "flows" aspect of the network flow visualization500, which includes at least the display portion506discussed with respect toFIGS.5A-5B(in some embodiments, the display portion504may be included as well).
Additionally, the "flows" aspect may include display portions526,530and532. The display portion526illustrates a plurality of charts5281-5282, which may be similar to charts of the display portion522but display content such as network traffic in bytes per source address and network traffic in bytes per destination address. It should be understood that although two (2) charts are illustrated, an alternative number may be illustrated such as one (1) or more than two (2). The display portion530may be a visual of the number of source IPs and destination IPs managed by the controller102. Additionally, the display portion532illustrates a graphical representation of network traffic flowing from source IPs to destination IPs. In some embodiments, such as that ofFIG.5F, the graphical representation may display a series of flow lines from a source IP address to a destination IP address, wherein each flow line is illustrated in a visually distinct manner (e.g., different colors for each flow line). The graphical representation may be configured to receive user input selecting a source or destination IP address, and responsive to receiving such user input, the topology system logic138is configured to alter the graphical representation to emphasize (or singularly display) the flow(s) associated with the selected IP address. Additionally, the content displayed in each of the display portions526and530may be filtered and adjusted in accordance with the selected IP address to provide network data corresponding to the selected IP address. v. Records Referring toFIG.5G, an exemplary illustration of a visualization of the network flow visualization500generated by the topology system logic138directed to illustrating the "records" aspect of the network traffic is shown in accordance with some embodiments.FIG.5Gillustrates the "records" aspect of the network flow visualization500, which includes at least the display portions504-506discussed with respect toFIGS.5A-5B.
Additionally, the "records" aspect of the network flow visualization500includes the display portion534, which illustrates a graphical representation, for example, a table format, of details regarding the network traffic flows illustrated inFIG.5F. For example, the table may include columns providing data pertaining to: a timestamp, a host, a destination IP address, a source IP address, the number of bytes in the flow at the time indicated by the timestamp, the direction of the flow of network traffic (ingress or egress), the number of data packets transmitted at the time indicated by the timestamp, etc. IV. Logical Flow Referring now toFIG.6, a flowchart of an exemplary method of communications between the topology system logic, a controller and one or more gateways managed by the controller is shown according to some embodiments. Each block illustrated inFIG.6represents an operation performed in the method600of exchanging communications with a controller and receiving data from one or more gateways managed by the controller. Prior to the initiation of the method600, it may be assumed that a distributed cloud management system, such as that illustrated inFIG.1, has been deployed. The method600is initiated when a topology system logic, such as the topology system logic138ofFIG.1, queries a controller, such as the controller102ofFIG.1, for construct metadata (block602). As discussed above, the queries may be via one or more API calls. Subsequently, the topology system logic138receives the requested construct metadata and stores the received data in a database, such as the construct metadata database220ofFIG.2B(block604). Further, the topology system logic receives network data from one or more gateways that are managed by the controller (block606). The topology system logic proceeds to store the received network data in a database, such as the network data database222ofFIG.2B.
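The collection steps of the method600(blocks602-606: query the controller, store the construct metadata, then receive and store gateway network data) can be sketched as one pass; the class and function names below are hypothetical stand-ins, and dictionaries stand in for the databases220and222:

```python
def run_collection_cycle(controller, gateways, metadata_db, network_db):
    """One pass of the method600 collection flow (blocks602-606)."""
    # Block 602: query the controller for construct metadata (e.g., via API calls).
    metadata = controller.get_construct_metadata()
    # Block 604: persist the construct metadata.
    metadata_db.update(metadata)
    # Block 606: receive and persist network data reported by each managed gateway.
    for gw in gateways:
        network_db.setdefault(gw["uid"], []).append(gw["network_data"])
    return metadata_db, network_db

class StubController:
    """Hypothetical controller returning canned metadata for illustration."""
    def get_construct_metadata(self):
        return {"gw-402": {"cloud": "AWS", "region": "us-east-1"}}

meta, net = run_collection_cycle(
    StubController(),
    gateways=[{"uid": "gw-402", "network_data": {"bytes": 1024}}],
    metadata_db={}, network_db={})
```

The stored metadata and network data would then feed the visualization-generation step that follows.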
Following receipt of the construct metadata and the network data, the topology system logic generates one or more visualizations based on the received data (block608). Exemplary visualizations that may be generated are illustrated inFIGS.3A-5G; however, the visualizations that may be generated by the topology system logic are not limited to those illustrated. Referring toFIGS.7A-7B, a flowchart of methods of tagging and searching performed by the topology system logic and illustrated inFIGS.4A-5Ais shown according to some embodiments. Each block illustrated inFIGS.7A-7Brepresents an operation performed in the method700of tagging and searching operations performed by the topology system logic. Prior to the initiation of the method700, it may be assumed that a distributed cloud management system, such as that illustrated inFIG.1, has been deployed. The method700is initiated when a topology system logic, such as the topology system logic138ofFIG.1, generates a topology mapping visualization and causes rendering via a user interface (block702). Following generation of the topology mapping visualization, the topology system logic receives user input via the topology mapping visualization corresponding to selection of one or more constructs and further indicating a tag name (block704). Responsive to the received user input, the topology system logic generates a table associating the unique identifier of the selected one or more constructs with the tag name (block706). Following the generation of the table, the method700may proceed to either block708and/or block714. Referring to block708, the topology system logic receives further user input indicating the tag name as a search term via the topology mapping visualization.
Responsive to receiving the search term, the topology system logic queries a tag database storing the previously generated table to retrieve a unique identifier of each of the one or more tagged constructs associated with the search term (block 710). Following retrieval of the unique identifiers, the topology system logic performs operations causing alteration of the topology mapping visualization that visually distinguishes the one or more tagged constructs associated with the search term from constructs not associated with the search term (block 712). For example, the alteration may include providing a visualization that only displays graphical representations of the tagged constructs associated with the search term and corresponding network data including links therebetween. However, other alterations have been considered, such as increasing the size of the tagged constructs (and corresponding network data) associated with the search term relative to the other constructs and network data (or decreasing the size of the non-tagged constructs). Referring now to FIG. 7B and block 714, following the generation of the table, the topology system logic generates a visualization of the flow of the network traffic among (between, to and/or from) one or more constructs managed by the controller. It should be noted that the operations of block 714 may be performed prior to the generation of the table in some embodiments. The topology system logic receives further user input via the visualization of the flow of the network traffic that indicates the tag name as a filter term (block 716). Responsive to receiving the filter term, the topology system logic queries a tag database to retrieve the unique identifiers of each of the one or more constructs associated with the filter term (block 718).
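The tag lookup (blocks 710/718) and the visual-distinction step (block 712) reduce to a filter over the tag table followed by restyling. A sketch under assumed data shapes, using the size-based alteration the text describes as one option:

```python
def lookup_tag(tag_db, search_term):
    # Blocks 710/718: resolve a tag name to the set of unique ids it was applied to.
    return {row["construct_id"] for row in tag_db if row["tag"] == search_term}

def restyle(constructs, matched_ids):
    # Block 712, one alteration described in the text: enlarge tagged constructs
    # relative to the rest. "size" is a display-only attribute in this sketch.
    return [dict(c, size="large" if c["uuid"] in matched_ids else "small")
            for c in constructs]

tag_db = [{"tag": "prod", "construct_id": "gw-1"},
          {"tag": "dev",  "construct_id": "gw-2"}]
constructs = [{"uuid": "gw-1"}, {"uuid": "gw-2"}]
styled = restyle(constructs, lookup_tag(tag_db, "prod"))
```

The display-only variant (show only tagged constructs) would be the same lookup followed by a filter instead of a restyle.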
Following retrieval of the unique identifiers, the topology system logic performs operations causing alteration of the visualization of the flow of network traffic to display illustrations of only the flow of network traffic among the one or more constructs associated with the filter term (block 720). For example, an exemplary visualization is shown in FIG. 5A. Referring now to FIG. 8, a flowchart of an exemplary method of the replay function performed by the topology system logic and illustrated in FIGS. 4G-4H is shown according to some embodiments. Each block illustrated in FIG. 8 represents an operation performed in the method 800 of operations performed by the topology system logic comprising a replay functionality. Prior to the initiation of the method 800, it may be assumed that a distributed cloud management system, such as that illustrated in FIG. 1, has been deployed. The method 800 is initiated when a topology system logic, such as the topology system logic 138 of FIG. 1, records, at a first time instance, construct metadata and optionally at least a portion of network data for all constructs managed by a controller (block 802). Further, the topology system logic records, at a second, subsequent time instance, construct metadata and optionally at least a portion of network data for all constructs managed by the controller (block 804). Following recording (e.g., storing in a database such as the snapshot database 218 of FIG. 2B) of the data at the first and second time instances, and in response to user input indicating an initiation of a comparison between the states of the constructs managed by the controller (and links therebetween), the topology system logic performs a comparison between the recorded data at the first time instance and the recorded data at the second time instance (block 806).
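The comparison at block 806 is essentially a keyed diff between two snapshots. A minimal sketch, assuming snapshots are dictionaries keyed by construct unique identifier (the storage format is not specified in the text):

```python
def diff_snapshots(first, second):
    """Sketch of block 806: compare construct metadata recorded at two time
    instances, each keyed by unique identifier. Returns which constructs
    appeared, disappeared, or changed between the instances."""
    added = set(second) - set(first)
    removed = set(first) - set(second)
    changed = {uid for uid in set(first) & set(second) if first[uid] != second[uid]}
    return {"added": sorted(added), "removed": sorted(removed), "changed": sorted(changed)}

# Snapshot at the first time instance vs. the second (block 802 vs. block 804):
t1 = {"gw-1": {"state": "up"},   "vpc-1": {"cidr": "10.0.0.0/16"}}
t2 = {"gw-1": {"state": "down"}, "vpc-2": {"cidr": "10.1.0.0/16"}}
delta = diff_snapshots(t1, t2)
```

The resulting delta is what the topology mapping visualization of the next step would render as the "one or more differences" between the two recorded states.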
In further response to the user input, the topology system logic generates a topology mapping visualization illustrating the one or more differences, if any, between the recorded data at the first and second time instances and causes a rendering via an interactive user interface (block). Exemplary visualizations are shown in FIGS. 4G-4H; however, the visualizations that may be generated are not limited to those shown. In the foregoing description, the invention is described with reference to specific exemplary embodiments thereof. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention as set forth in the appended claims.
11863411

DETAILED DESCRIPTION

The present disclosure is described here in detail with reference to embodiments illustrated in the drawings, which form a part hereof. Other embodiments may be used and/or other changes may be made without departing from the spirit or scope of the present disclosure. The illustrative embodiments described in the detailed description are not meant to be limiting of the subject matter presented here. Reference will now be made to the illustrative embodiments illustrated in the drawings, and specific language will be used here to describe the same. It will nevertheless be understood that no limitation of the scope of the claims or this disclosure is thereby intended. Alterations and further modifications of the inventive features illustrated herein, and additional applications of the principles of the subject matter illustrated herein, which would occur to one ordinarily skilled in the relevant art and having possession of this disclosure, are to be considered within the scope of the subject matter disclosed herein. A network monitoring device for a supercomputer system may incorporate a plurality of network interconnect models. The network interconnect models may include a three-dimensional torus, a global tree, and a global asynchronous signal network. Analysis of real-time data obtained from the network monitoring device allows parallel processing algorithms to exploit these network interconnect models individually or simultaneously, resulting in high performance levels of the operation of the supercomputer system. Additional interactions may derive from the simultaneous use of the multiple processing elements within each supercomputer node of the supercomputer system, which can simultaneously access any or all of these network interconnect models, employing each of the network interconnect models at peak capacity.
A network monitoring device may monitor network activities of a supercomputer system, which is a cluster of parallel, distributed-memory, scalable, and high-performance computer node architectures for achieving high-scale computing at decreased cost, power, and footprint. The network monitoring device may correspond to a software suite that provides an efficient supercomputer system network monitoring tool that enables users, mapping tools, and workload management systems to map software applications onto distributed supercomputer nodes of the supercomputer system so as to minimize cross-node communication between the distributed supercomputer nodes of the supercomputer system while balancing the computational load between the distributed supercomputer nodes. Non-limiting examples of various applications that may utilize the network monitoring tool of the supercomputer system are physical simulations, climate research, financial modeling, data mining, and automotive and aerospace design. A network monitoring device may monitor network activities of a computer node architecture, which allows for a maximum packing density of processing nodes from an interconnect point of view. The network monitoring device may utilize plug-in software modules to provide network monitoring capabilities related to discovering network topologies of the computer nodes, determining network and computing resources that are available for new applications in the computer nodes, collecting network and computing resources that are being used by running software applications in the computer nodes, and monitoring running software applications on the computer nodes. The network monitoring device may further enable third-party tools to access data of the computer nodes that is monitored and collected by the network monitoring device through an API.
A network monitoring device may monitor a supercomputer system by directly tapping into one or more switches of the supercomputer system. An adapter may be developed for each type of switch of the supercomputer system. For example, one or more InfiniBand switches may be utilized by the supercomputer system to build different network topologies, such as fat-tree, 2D mesh, 2D/3D torus, and Dragonfly. The InfiniBand switches may include a management tool, such as Mellanox's Unified Fabric Management (UFM), which may be utilized by the network monitoring device to gather all data needed to enable efficient topology-aware mapping for the supercomputer system. UFM may provide comprehensive monitoring of host and switch parameters to gather data that may include network traffic characteristics, physical information, health counters, and error counters. Such data may be aggregated from multiple supercomputer nodes of the supercomputer system and then correlated to physical or logical objects of the supercomputer system. In some instances, UFM may aggregate data per application, per specific fabric tenant server group, per switch port, or any combination of these. Also, UFM may enable easy integration of the gathered data with existing third-party management tools via a web services API. A network monitoring device may monitor a network of a cluster of supercomputers designed in parallel, in order to execute several tasks and/or applications simultaneously and to attain the highest performance possible with the technologies known at the time of its design, in particular in terms of computing rate. The supercomputer may have rates of several petaflops, where FLOPS (floating point operations per second) is a measurement unit for estimating the processing speed of a computer processor node in the supercomputer.
The network monitoring device may include one or more software and/or hardware modules, such as an application monitoring module, a traffic monitoring module, and a topology mapping module, to monitor the network of the supercomputers. Each of the application monitoring module, the traffic monitoring module, and the topology mapping module may include one or more sub-modules to monitor the network of the supercomputers. The application monitoring module may monitor communication of each application being executed by nodes within the supercomputer to build a virtual topology that displays how processes of the supercomputer communicate with each other. The application monitoring module may further compute a number of messages and the bandwidth that passes via each virtual link interconnecting the nodes of the supercomputer. The information gathered by the application monitoring module may then be stored in a database, so that the information may be used to map a new application onto multiple topologies of the supercomputer. The traffic monitoring module may monitor traffic on each link in the network to determine congestion in the network, and then select the nodes with the lowest traffic to avoid hot spots. The topology mapping module may compute a topology of the network and then display which nodes are currently being used by the running applications within the supercomputer. Using all of the data gathered by its various modules, the network monitoring tool enables viewing of the network topology of the supercomputer, the available bandwidth of the supercomputer, and hot spots within the supercomputer. Such gathered data may also be utilized by topology-aware mapping tools to optimally map applications.
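The application monitoring module's per-virtual-link accounting (message count and bandwidth) can be sketched as a fold over a message log. The log record fields and the fixed observation window are assumptions for the example, not the module's actual data model:

```python
from collections import defaultdict

def virtual_topology(message_log, window_seconds):
    """Sketch of the application monitoring module: fold a log of inter-process
    messages into per-virtual-link message counts and average bandwidth."""
    counts = defaultdict(int)
    bytes_total = defaultdict(int)
    for msg in message_log:
        # Treat the link between two process ranks as undirected.
        link = tuple(sorted((msg["src_rank"], msg["dst_rank"])))
        counts[link] += 1
        bytes_total[link] += msg["bytes"]
    return {link: {"messages": counts[link],
                   "bandwidth_bps": bytes_total[link] * 8 / window_seconds}
            for link in counts}

# Two messages in opposite directions over the same pair of ranks, in a 2-second window:
log = [{"src_rank": 0, "dst_rank": 1, "bytes": 1000},
       {"src_rank": 1, "dst_rank": 0, "bytes": 3000}]
topo = virtual_topology(log, window_seconds=2)
```

Persisting this per-link summary, rather than the raw message log, is what would make it cheap to reuse when mapping a new application onto the supercomputer's topologies.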
Also, such data may provide analysts of the supercomputer with a global view of the network of the supercomputer to monitor and manage the efficient and effective operation of the nodes of the supercomputers while running the one or more applications. FIGS. 1A and 1B illustrate components of an enterprise system 100, according to an exemplary embodiment. The enterprise system 100 may include a cluster of supercomputers 102 (such as supercomputer 102a and supercomputer 102b), network monitoring devices 104, analyst computers 106, and databases 108. The supercomputers 102, the network monitoring devices 104, the analyst computers 106, and the databases 108 are connected to each other through one or more communication network platforms to exchange data. Examples of the communication network platform may include, but are not limited to, private or public LAN, WLAN, MAN, WAN, and the Internet. The communication network platform may include both wired and wireless communications according to one or more standards and/or via one or more transport mediums. The communication over the communication network platform may be performed in accordance with various communication protocols, such as Transmission Control Protocol and Internet Protocol (TCP/IP), User Datagram Protocol (UDP), and IEEE communication protocols. In one example, the communication network platform may include wireless communications according to Bluetooth specification sets, or another standard or proprietary wireless communication protocol. In another example, the communication network platform may also include communications over a cellular network, including, e.g., a GSM (Global System for Mobile Communications), CDMA (Code Division Multiple Access), or EDGE (Enhanced Data for Global Evolution) network.
The enterprise system 100 described herein operates in a cloud-computing environment where the analyst computers 106 are cloud-optimized and transmit a request for monitoring network data associated with one or more processes being executed by the supercomputers 102. The data and application programs of the analyst computers 106 may be stored and executed on the supercomputers 102 accessed over a network cloud. In the cloud-computing environment, a web browser on the analyst computers 106 may interface with an application program and/or a process that is executed on the supercomputers 102 and/or the network monitoring devices 104. Through the browser on the analyst computers 106, an analyst user may generate a request for receiving network data associated with execution of the one or more processes and/or applications, and transmit the request to the network monitoring devices 104 and/or the supercomputers 102 via the application program. In some embodiments, the enterprise system 100 described herein operates in a cloud-computing environment where the analyst computers 106 may transmit to the network monitoring devices 104 and/or the supercomputers 102 a request for receiving the network data associated with the execution of the one or more processes and/or applications. The data and application programs received from the network monitoring devices 104 and/or the supercomputers 102 at the analyst computers 106 may be stored locally in the analyst computers 106 and executed on local computing resources of the analyst computers 106. In operation, a network monitoring device 104 comprising one or more software and/or hardware modules may be directly or indirectly connected to a plurality of nodes of a supercomputer 102 to monitor network data, such as communication messages between a plurality of processes being executed by the plurality of nodes of the supercomputer 102.
Upon analysis of the monitored network data, the network monitoring device 104 may first generate a virtual network topology containing a plurality of virtual communication links between the plurality of processes being executed by the plurality of nodes, and then determine a number of communication messages being transmitted on each of the plurality of virtual communication links and a bandwidth value for each of the plurality of virtual communication links. The network monitoring device 104 may further monitor network traffic in a plurality of communication links interconnecting the plurality of nodes, and then generate a global networking view 122 of the network traffic of the plurality of nodes and the interconnecting plurality of communication links on a graphical user interface (GUI) of an analyst computer 106. On receiving an API call for mapping a new application to the plurality of supercomputer nodes, an analyst operating the analyst computer 106 may view the GUI of the analyst computer 106 displaying the global networking view 122 of the network traffic to determine an optimal and unoccupied subset of physical nodes of the supercomputer 102, determined from an analysis of data in the global networking view 122 of the network traffic, that meets the requirements of the attributes associated with the new application. Then, the analyst may generate and execute a request for an allocation of the determined subset of the physical nodes to the new application, using, for example, a message passing interface (MPI) or a batching system such as the portable batch system (PBS). The GUI of the analyst computer 106 may further be updated to show the modified global networking view 122 displaying the nodes of the supercomputer 102 that are now allocated to the new application.
At any time, the analyst may request reallocation of the nodes if the analyst determines that a better allocation of the nodes of the supercomputers 102 may be possible in the network for the new application or any other application being executed by the nodes of the supercomputer 102. Supercomputers 102 may be any computing and/or telecommunications devices formed by a network of nodes (or supercomputer nodes) interconnected by one or more switches. The network of nodes may be interconnected, in the form of structures such as grids, lattices, or torus configurations, via one or more internal or external networks. In some embodiments, a node may be a computer, a server, or any other computerized terminal comprising a plurality of processors and/or microprocessors, as well as means of data transmission/reception. The nodes allow receiving or transmitting data (e.g., messages, packets, datagrams) by means of one or more network peripherals, such as a network card. The function of the switches is to route the data from or to the nodes to which they are connected. The nodes and the switches comprise a computer network or a graph according to a predetermined topology. The supercomputer 102 may include a thread, which is a part of a program (such as a user application program, an operating system program, or a software development program) that is logically independent from another part of the program and can therefore be executed in parallel with other threads of the program by the nodes of the supercomputer 102. In compiling a program to be run on the supercomputer 102, some compilers of the supercomputer 102 create multiple threads for a program automatically, in addition to those threads that are explicitly identified as portions of the program specifically coded for parallel execution. The supercomputer 102 may include a compiler, which will produce an object code file for each program module.
A program module, such as a program source code file, contains the source code version for all or part of the program. The object code files from different program modules are linked together into an executable file for the program. Such linking of programs together is a common part of building large-scale application programs, which may consist of many program modules. Within the supercomputer 102, the executable form of a multithreaded program consists of multiple threads that can be executed in parallel. In the operating system of the supercomputer 102, the representation of the executable form of a program is a process. A process executes a single thread of a program during a single time period. Multiple processes can each execute a different thread or the same thread of a multithreaded program. When multiple processes executing multiple threads of a multithreaded program are simultaneously executing on multiple processors, then parallel processing of a program is being performed. When multiple processes execute multiple threads of a multithreaded program, the processes may share a process image. A process image may be the representation in the operating system of the resources associated with a process. The process image includes the instructions and data for the process, along with the execution context information for the processor, such as the values in all of the registers, both control registers and data registers (e.g., scalar registers, vector registers, and local registers), and the execution context information for operating system routines called by the process. In the supercomputer 102, the operating system is configured for assigning processes to the different nodes to execute applications, such as physical simulations, climate research, financial modeling, data mining, automotive design, and aerospace design.
Network monitoring devices 104 may be any computing device capable of generating and/or storing network logs, sometimes referred to as log files, corresponding to data associated with a network of nodes of the supercomputers 102. The logs may be stored in any machine-readable format (e.g., TXT, XML, HTML, Access Log, Common Log, W3C Log, WAS Log) and may comprise various node data fields containing node data at various OSI layers from inbound IP packets (e.g., source IP address, source domain name, source MAC address, source device identifier). In some implementations, the network logs may be stored locally in the particular network appliance, the network monitoring device 104, or any other device that generated the network logs, such as network monitoring software applications configured to detect, manage, and track the network data of the enterprise system 100. In some implementations, the network logs may be stored in a database 108 that is accessible to an analyst computer 106 or the supercomputer 102 via a network. In some embodiments, the network monitoring device 104 may be directly or indirectly connected and/or tapped into one or more switches utilized by the plurality of nodes of the supercomputer 102 to monitor network data of the supercomputer 102 and then build one or more supercomputer topologies. The one or more supercomputer topologies may be selected from a group comprising topologies such as a fat-tree, a 2D mesh, a 2D/3D torus, and a Dragonfly. The one or more switches may be connected to each other via one or more adapters, such as an InfiniBand switches adapter 116 and an IP switches adapter 118. In some embodiments, the one or more switches may include a management tool to monitor and aggregate network data associated with parameters of the one or more switches of the supercomputer 102 and the plurality of nodes of the supercomputer 102.
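Reading such a log back is a matter of splitting each record into the node data fields named above. A sketch assuming one delimited layout (timestamp, source IP, source MAC, byte count); the actual field order and format are not defined by the text:

```python
def parse_log_line(line):
    """Sketch of reading one record from a delimited network log. The field
    order (timestamp, source IP address, source MAC address, bytes) is an
    assumed layout for illustration, not a format defined by the patent."""
    ts, src_ip, src_mac, nbytes = line.strip().split(",")
    return {"ts": ts, "src_ip": src_ip, "src_mac": src_mac, "bytes": int(nbytes)}

record = parse_log_line("2024-01-01T00:00:00Z,10.0.0.7,aa:bb:cc:dd:ee:ff,2048")
```

A real implementation would dispatch on the stored format (TXT, XML, W3C Log, etc.) rather than assume a single comma-separated layout.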
In some embodiments, the network monitoring device 104 may include multiple modules to monitor data associated with a network between the plurality of nodes of the supercomputers 102. The modules may be software or hardware modules. In some embodiments, the modules may be a combination of the software modules and the hardware modules. In some embodiments, the modules of the network monitoring device 104 may include an application monitoring module 110, a traffic monitoring module 112, and a topology mapping module 114. Each of these modules of the network monitoring device 104 is configured to perform one or more activities to monitor network data associated with the network between the nodes and the switches of the supercomputers 102. For instance, the application monitoring module 110 is configured to monitor communication between a plurality of processes being executed by the plurality of nodes. During the processing of each of these processes, the processes and/or the plurality of nodes of the supercomputers 102 may communicate with each other. The communication between the processes and/or the plurality of nodes of the supercomputers 102 may include one or more communication messages exchanged between the processes and/or the plurality of nodes of the supercomputers 102. The application monitoring module 110 may further be configured to generate a virtual network topology. The virtual network topology may contain a plurality of virtual communication links between the plurality of processes being executed by the plurality of supercomputer nodes. The application monitoring module 110 may then determine a number of communication messages being transmitted on each of the plurality of virtual communication links and a bandwidth value for each of the plurality of virtual communication links. In some embodiments, a traffic monitoring module 112 may be configured to monitor network traffic in a plurality of communication links interconnecting the plurality of nodes of the supercomputer 102.
The network traffic may correspond to an amount of data moving across the network of the plurality of nodes of the supercomputer 102 at a given point of time. The network data may be encapsulated in network packets, which provide the load in the network of the supercomputer 102. The network traffic data may be used by the traffic monitoring module 112 to generate a global networking view 122 of network data associated with the plurality of nodes and the interconnecting communication links. In some embodiments, the global networking view 122 may include a weighted undirected graph of the network of the nodes of the supercomputer 102, where vertices of the weighted undirected graph represent physical computational nodes of the supercomputer 102 and edges of the weighted undirected graph represent the network links of the supercomputer 102. In some embodiments, the network monitoring device 104 may assign weights to the edges based on the available bandwidth of the associated link within the supercomputer 102. To generate the global networking view 122 of data associated with the plurality of nodes and the interconnecting communication links, the traffic monitoring module 112 may analyze an amount and type of network data traffic measured on a particular network in the supercomputer 102. Upon analyzing the amount and the type of traffic on the particular network, the traffic monitoring module 112 may determine congestion in the network of the supercomputer 102. The congestion information may be used by the traffic monitoring module 112 to identify one or more hot spots within the network of the supercomputer 102. Upon the analysis of the network data of the supercomputer 102, the traffic monitoring module 112 may generate the global networking view 122 (in a tabular or graphical format) of the plurality of nodes and the interconnecting communication links. A topology mapping module 114 may receive an API call for mapping a new application to the plurality of nodes of the supercomputer 102.
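The weighted undirected graph underlying the global networking view can be sketched directly: vertices are physical nodes, edges are links, and the edge weight is the link's available bandwidth. The (capacity, load) link tuples and the hot-spot threshold are assumptions for the example:

```python
def build_weighted_graph(links):
    """Sketch of the global networking view's graph: an undirected graph whose
    edge weight is the link's available bandwidth, here taken to be the link
    capacity minus its measured load (an assumed definition)."""
    graph = {}
    for a, b, capacity, load in links:
        edge = tuple(sorted((a, b)))   # undirected: store each edge once
        graph[edge] = capacity - load
    return graph

def hot_spots(graph, threshold):
    # Flag a link as a hot spot when its available bandwidth drops below
    # the threshold; the threshold itself is an assumed tunable.
    return sorted(edge for edge, avail in graph.items() if avail < threshold)

# Two links: one nearly saturated, one mostly idle (units are illustrative).
g = build_weighted_graph([("n1", "n2", 100, 95), ("n2", "n3", 100, 10)])
spots = hot_spots(g, threshold=20)
```

Keeping weights as available (rather than total) bandwidth means later mapping steps can maximize a subset's score without re-deriving congestion.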
Upon receiving the API call, the topology mapping module 114 may process the data/information presented within the global networking view 122 displaying the current network data and traffic to identify currently available nodes and busy nodes of the supercomputer 102. The topology mapping module 114 may then map the new application to the nodes of the supercomputer 102 that are currently available, determined from an analysis of the information retrieved from the global networking view 122 of the network data of the supercomputer 102. For instance, the topology mapping module 114 may select one or more available nodes of the plurality of nodes having the lowest network traffic to execute the new application such that the bandwidth is maximized and the network latency of the supercomputer 102 is minimized. In some embodiments, upon receiving the API call for mapping the new application to the plurality of nodes, the traffic monitoring module 112 may generate a graphical user interface on an analyst computer 106 to display a global networking view 122 of the network traffic data showing available nodes and currently busy nodes of the supercomputer 102. In some embodiments, upon receiving the API call for mapping the new application to the plurality of nodes, the topology mapping module 114 may generate a graphical user interface on an analyst computer 106 to display a global networking view 122 of the network traffic data showing available nodes and currently busy nodes of the supercomputer 102. The traffic monitoring module 112 or the topology mapping module 114 may also transmit data associated with the new application to the analyst computer 106. An analyst operating the analyst computer 106 may then select the one or more available nodes having the lowest network traffic, based on the analysis of the information retrieved from the global networking view 122 of the network data, to execute the new application such that the bandwidth is maximized and the network latency of the supercomputer 102 is minimized.
In some embodiments, upon receiving the API call for mapping the new application to the plurality of nodes, the topology mapping module 114 may execute one or more functions. The topology mapping module 114 may execute a first function that returns an entire weighted undirected graph. The topology mapping module 114 may use the first function to map the new application onto the nodes of the supercomputer 102 such that the bandwidth is maximized and the network latency of the supercomputer 102 is minimized. In some embodiments, upon receiving the API call for mapping the new application to the plurality of nodes, the topology mapping module 114 may execute a second function, which can be used to request a portion of the network of the nodes of the supercomputer 102. For instance, the topology mapping module 114 may generate instructions to search a weighted undirected graph to find an optimal subset of physical computational nodes from all the nodes of the supercomputer 102 that meets the requirements associated with the new application. When executing the second function, the topology mapping module 114 may enter a number of nodes and a topology that the second function needs to return in response to the request. The topology mapping module 114 may also include specialized search functions for different types of applications and network topologies. For example, the topology mapping module 114 may map a 2D mesh request onto a physical fat-tree network or a hypercube network. The topology mapping module 114 may also leverage one or more algorithms that map topologies to each other in the search process. Thus, a search engine will be able to find an optimal subset of nodes of the supercomputer 102 that meets the requirements of the user request for executing the new application such that the bandwidth is maximized and the network latency of the supercomputer 102 is minimized.
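The "second function" amounts to a search over the weighted undirected graph for the best subset of a requested size. A sketch using exhaustive scoring for clarity; a real mapper would use the topology-aware heuristics described above rather than brute force, and the subset score here (sum of available bandwidth on internal links) is an assumed objective:

```python
import itertools

def best_subset(graph, node_count):
    """Sketch of the 'second function': score every subset of the requested
    size and return the one whose internal links carry the most available
    bandwidth. Exhaustive search is for illustration only."""
    nodes = sorted({n for edge in graph for n in edge})

    def score(subset):
        # Sum of edge weights for links with both endpoints inside the subset.
        return sum(w for (a, b), w in graph.items() if a in subset and b in subset)

    return max(itertools.combinations(nodes, node_count), key=score)

# Edge weights are available bandwidth; the n2-n3 link is by far the best pairing.
g = {("n1", "n2"): 5, ("n2", "n3"): 90, ("n1", "n3"): 10}
subset = best_subset(g, 2)
```

The topology constraint (e.g., a 2D-mesh request against a fat-tree network) would enter as an additional filter on candidate subsets before scoring.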
Analyst computers 106 may be computing devices that analysts may use to monitor data associated with networks between nodes of supercomputers 102. An analyst computer 106 may be any computing device comprising a processor and capable of performing the various tasks and processes described herein. Non-limiting examples of the analyst computer 106 may include laptops, desktops, servers, tablets, and smartphones. The analyst computer 106 may be coupled via one or more internal or external networks to a database 108 and/or the supercomputers 102. Software executed by the analyst computer 106 permits the analyst to select a record of network and/or traffic data from the database 108 and then review or update network and/or traffic data stored in the database 108 for the associated node of the supercomputer 102. The analyst computer 106 GUI 120 (as shown in FIG. 1C) may receive a global networking view 122 indicating the network topology of the supercomputer 102 and network and/or traffic data associated with switches 124 (FIG. 1C shows exemplary switches 124a-124c) and nodes 126 (FIG. 1C shows an exemplary node 126a) of the supercomputer 102. The network and/or traffic data may indicate bandwidth values and hot spots corresponding to each of the plurality of virtual communication links and/or communication links interconnecting the switches 124 and the nodes 126 of the supercomputer 102. Such network and/or traffic data may be used by the analyst computer 106 to measure the performance of topology-aware mapping tools, to debug network problems associated with the nodes of the supercomputer 102, and/or to generate and prioritize alerts associated with the network and/or traffic data. In some embodiments, the analyst computer 106 GUI may receive alerts associated with the network and/or traffic data that are related to the subject matter (e.g., type of the node of the supercomputer 102) or procedural role (e.g., a time-sensitive alert based on hot spots or bandwidth value) of the respective analyst.
In some implementations, an alert associated with the network and/or traffic data may have a data field identifying a nature of the potential traffic risk and another data field indicating a time-sensitive nature or customer-sensitive nature of the potential traffic risk. Based on these data fields, the analyst computer 106 may receive alerts having subject matter or procedural data fields associated with the analyst credentials. For instance, the analyst credentials of an analyst specializing in time-sensitive alerts would indicate to the analyst computer 106 that it should retrieve and present the alerts having a data field indicating that a particular alert is time sensitive. In some implementations, the alerts may be stored in dedicated databases or sub-databases of the database 108, where each sub-database is configured to store certain types of alerts. In such implementations, the analyst computer 106 may be limited to accessing certain sub-databases according to the analyst credentials of the analyst operating the analyst computer 106. Similarly, the analyst computer 106 may receive updates or notification messages that the analyst computer 106 presents on a GUI 120 to the analyst. A node 126a of the supercomputer 102, the database 108, or another server of the system 100 may trigger and transmit the notification to each analyst computer 106 having analyst credentials with access attributes indicating the role of the analyst. For instance, an analyst may have analyst credentials with attributes that indicate the analyst specializes in handling time-sensitive alerts associated with a particular type of node 126a. When a new alert is generated or an existing alert is updated with a data field indicating the alert is time sensitive, the node 126a of the supercomputer 102, the database 108, or another server of the system 100 may transmit a notification message to the analyst computer 106 of the analyst.
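The credential-based routing above can be sketched as a filter over the alerts' subject-matter and time-sensitivity data fields. The field names and the `alerts_for_analyst` helper are hypothetical, chosen only to illustrate the matching rule, not taken from the patent.

```python
def alerts_for_analyst(alerts, credentials):
    """Select alerts whose subject-matter data field or time-sensitive
    flag matches the access attributes in the analyst's credentials.

    `alerts` is a list of dicts with assumed keys "subject" and
    "time_sensitive"; `credentials` holds "subjects" (list) and
    "handles_time_sensitive" (bool).
    """
    return [
        a for a in alerts
        if a.get("subject") in credentials.get("subjects", ())
        or (a.get("time_sensitive") and credentials.get("handles_time_sensitive"))
    ]
```

The same predicate could equally be pushed into the sub-database query so that an analyst computer only ever retrieves the alert records its credentials allow.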
In some implementations, an analyst computer 106 may have a GUI that allows an analyst to mark or tag the alert associated with the network data. A data field in the record of the alert is then updated to reflect the tag inputted by the analyst computer 106. In some instances, the tag reflects an analyst's concern that the alert may contain data fields that could be cross-referenced and found in another alert. The node 126a of the supercomputer 102 or another server of the system 100 may then perform various forms of processing on the data fields, such as identifying which, if any, other alerts contain the same data in corresponding data fields. In some embodiments, the node 126a of the supercomputer 102, the analyst computer 106, or another device of the system 100 may execute various models that indicate to the node 126a of the supercomputer 102 that the alert should be tagged. Alerts may be tagged automatically when data fields in the alert match a threshold number of data fields of a given model.

Databases 108 may be hosted on one or more computing devices such as supercomputers 102, where the database 108 may store data records associated with various aspects of the application services offered to end users and/or analysts operating the supercomputer 102. Non-limiting examples of what may be stored in the database 108 include analyst user records that may comprise data fields describing analyst users, e.g., user data such as user credentials (e.g., usernames, passwords, biometrics, encryption certificates), user account data, user roles, or user permissions; network records that may comprise machine-readable computer files (e.g., word processing files), parsed portions of such computer files, or metadata associated with computer files; and application data that may include software instructions executed by nodes of the supercomputer 102 or data used by such applications executed by the supercomputer 102.
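The automatic tagging rule described above, where an alert is tagged when its data fields match a threshold number of a given model's data fields, can be sketched as follows; the flat field layout and the `should_tag` helper are assumptions for illustration.

```python
def should_tag(alert, model_fields, threshold):
    """Return True when the alert matches at least `threshold` of the
    model's data fields (exact field-value matches, an assumed rule)."""
    matches = sum(1 for key, value in model_fields.items()
                  if alert.get(key) == value)
    return matches >= threshold
```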
The database 108 may be hosted on any number of supercomputers 102 comprising a non-transitory machine-readable storage medium and capable of performing the various tasks described herein. As shown in FIG. 1A, the database 108 may be accessed by the nodes 126a of the supercomputer 102 and/or other servers and devices of the system 100 via one or more networks. The database 108 may be hosted on the same physical computing device functioning as the supercomputer 102 and/or functioning as other servers and devices of the system 100. The databases 108 may include non-transitory machine-readable storage media capable of receiving, storing, and updating network data associated with the nodes 126a of the supercomputer 102. The databases 108 may have a logical construct of data files that are stored in non-transitory machine-readable storage media, such as a hard disk or memory, controlled by software modules of a database program (for example, SQL), and a related database management system (DBMS) that executes the code modules (for example, SQL scripts) for various data queries and other management functions generated by the nodes of the supercomputer 102 and/or analyst computers 106. In some embodiments, a memory of the databases 108 may be a non-volatile storage device for storing alert element data and instructions, to be used by a processor of the nodes 126a of the supercomputer 102. The memory may be implemented with a magnetic disk drive, an optical disk drive, a solid-state device, or an attachment to network storage. The memory may include one or more memory devices to facilitate storage and manipulation of program code, sets of instructions, tasks, data, PDKs, and the like. Non-limiting examples of memory implementations may include, but are not limited to, a random access memory (RAM), a read-only memory (ROM), a hard disk drive (HDD), a secure digital (SD) card, a magneto-resistive read/write memory, an optical read/write memory, a cache memory, or a magnetic read/write memory.
In some embodiments, a memory of the databases 108 may be a temporary memory, meaning that a primary purpose of the memory is not long-term storage. Examples of volatile memories may include dynamic random access memories (DRAM), static random access memories (SRAM), and other forms of volatile memories known in the art. In some embodiments, the memory may be configured to store larger amounts of information than volatile memory. The memory may further be configured for long-term storage of information. In some examples, the memory may include non-volatile storage elements. Examples of such non-volatile storage elements include magnetic hard discs, optical discs, floppy discs, flash memories, or forms of electrically programmable memories (EPROM) or electrically erasable and programmable (EEPROM) memories.

FIG. 2 illustrates a system 200 for monitoring a fat-tree network of a supercomputer, according to an exemplary embodiment. FIG. 2 will be explained in conjunction with FIG. 1. The system 200 is configured to improve load distribution and/or spreading in fat-tree networks or other highly regular switching hierarchies that have multiple paths between nodes (processors) of the supercomputer in the network. Load distribution or load spreading may be a technique by which bandwidth is more effectively utilized among the nodes of the supercomputer and the overall performance of the supercomputer is improved in a network of the supercomputer. The load distribution and load spreading techniques may consider the number of next hops on a shortest path to a given destination node in the network of the supercomputer as well as the overall distribution of traffic between the nodes in the network of the supercomputer. A fat-tree network is a network where the nodes are hierarchically organized into a series of levels. One or more core nodes may reside at a top level of the hierarchy, and several host nodes may reside at a lowest level of the hierarchy.
In the fat-tree network, the bandwidth is allocated among the levels of a tree topology such that the nodes at higher levels in the tree have access to greater amounts of bandwidth for data transmission through the network. Multiple nodes may be used to emulate fat links at the higher levels of a fat-tree network, thus creating multiple paths between the host nodes. By having multiple paths between the host nodes, more bandwidth may be available between the host nodes. In one non-limiting example case, in the fat-tree network, the nodes may be connected to a bottom layer. The nodes may be interconnected to each other via switches 202a-202f (hereinafter 202). For each switch 202 interconnecting the nodes, the number of links going down to its sibling switches 202 is equal to the number of links going up to its parent switch 202 in the upper level. As a result, the links between the nodes get "fatter" towards the top of the fat-tree network, and the switch 202 at the root of the fat-tree network has the most links compared to any other switch below it. The switches 202 may be InfiniBand switches, which are specified by the InfiniBand™ architecture. In some embodiments, the InfiniBand switches 202 may be implemented within a single switching entity, for example, a single switching chip, a physical switching unit, and the like. In some embodiments, the fat-tree network may be built using any number of InfiniBand switches 202, where the InfiniBand switch 202 may be a 24-port Mellanox Anafa-II InfiniBand switch, manufactured by Mellanox Technologies. The present disclosure is not limited to the use of this InfiniBand switch 202, and another type or model of InfiniBand switch may be used and be within the scope of the invention. In some embodiments, each of the plurality of InfiniBand switches 202 may be coupled to the nodes via node ports. For example, the InfiniBand switch 202 may include a plurality of node ports via which the InfiniBand switch 202 may be coupled to one or more of a plurality of nodes.
An adapter 204 (such as an InfiniBand Host Channel Adapter (HCA)) may be connected to the switches 202 (such as InfiniBand switches) to provide a high-performing interconnect solution for the nodes of the supercomputer. The adapter 204 may be a low-latency and high-bandwidth interconnector for the nodes of the supercomputer to achieve significant performance improvements, resulting in reduced completion time and lower cost per operation for parallelized applications of the supercomputer. Management tools (such as the Unified Fabric Management (UFM) software of Mellanox) for the switches 202 may be used to collect network data from the switches 202 of the supercomputer in order to monitor communications which occur in a network of the nodes of the supercomputer, where each communication is effected by a transmission of one or more packets among two or more communicating nodes of the supercomputer. The management tools may passively detect the contents of packets in real time from the supercomputer, and communication information associated with multiple protocols may be derived from the packet contents within the supercomputer. As an illustration of an embodiment of the present disclosure, traffic may traverse the fat-tree network. Traffic (for example, a packet) originating at any node can enter a first InfiniBand switch 202 through a node port, passing through an internal switch link. The packet then proceeds to a second InfiniBand switch 202. The packet crosses through an internal switch link at the second InfiniBand switch 202, and back to the first InfiniBand switch 202 via one of a plurality of links. The packet can then proceed to another node coupled to the first InfiniBand switch 202. In order to monitor network links and application traffic between the nodes of the supercomputer, a network monitoring device may be used within the supercomputer to gather the data that is needed to monitor the network links and the application traffic between the nodes of the supercomputer.
The network monitoring device may use the simple network management protocol (SNMP) to monitor network links and application traffic between the switches 202 (such as InfiniBand switches and IP switches) and the nodes of the supercomputer. SNMP may be supported by the Internet User Datagram Protocol (UDP) and Internet Protocol (IP) over communications environments such as serial links, Ethernet, etc. within the nodes of the supercomputer. The SNMP Network Management Framework consists of three major components: (1) the mechanisms used for describing and naming objects for the purpose of management; (2) the core set of managed objects for the Internet suite of protocols; and (3) the protocol for accessing managed objects to monitor the network links and the application traffic between the switches 202 and the nodes of the supercomputer.

FIG. 3 illustrates a network of nodes of a supercomputer 300, according to an exemplary embodiment. The parallel computing structures referred to as high performance computing (HPC) systems or the supercomputer 300 interconnect large numbers of compute nodes/processors (shown as P0-P7) in structures such as mesh, torus, and tree configurations. The compute nodes/processors (shown as P0-P7) may be interconnected to each other via switches 302. The switches 302 may be implemented within a switching entity, for example, a switching chip, a physical switching unit, and the like. The supercomputer 300 may be capable of achieving petaflop performance with up to a million cores, thousands of nodes, or hundreds of racks, and may be based upon System-on-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC). The ASIC nodes are interconnected by a network that maximizes packet communication throughput and minimizes latency. The network may include a direct memory access network interface.
A network monitoring device may detect, monitor, report, and manage network and congestion data in the supercomputer 300. The network monitoring device may use software modules and/or multi-port switches in the supercomputer 300 with port controllers that collect port traffic statistics and/or network data statistics. The network monitoring device may periodically gather the port statistics and/or the network data statistics, and then process the gathered statistics to identify bandwidth values, hot spots, and congestion at the ports and/or within the network. A database is maintained within the network with an entry for each port that contains counters for the types of network traffic/congestion. The counters for ports in the network that are identified as congested are incremented to reflect the detected traffic congestion. The network monitoring device may further include a management platform that periodically requests copies of the port traffic data from the switches. In some embodiments, the network monitoring device may include a software module such as application monitoring software, which will generate, for each running application on the processors (P0-P7) of the supercomputer 300, tables that summarize the communication between the processors (P0-P7). The tables will display a bandwidth value and a number of messages that the processors (P0-P7) exchanged between them. The network monitoring device may store the generated table in a database so that information within the table may be used by an analyst to map new applications onto different topologies of the supercomputer 300. Table 1 shows an example of a table generated by the application monitoring software displaying a number of messages that an application's processors (P0-P7) exchanged between them.
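The periodic gathering of port statistics described above can be sketched as a counter-delta computation over two samples of the per-port octet counters; the data layout, the `port_bandwidth` name, and the hot-spot threshold are illustrative assumptions rather than the patent's actual statistics pipeline.

```python
def port_bandwidth(prev, curr, interval_s, hot_threshold):
    """Compute per-port bandwidth (bytes/s) from two samples of the
    monotonically increasing octet counters, flagging hot spots.

    `prev` and `curr` map port name -> cumulative octet count; a port is
    marked hot when its rate meets or exceeds `hot_threshold`.
    """
    report = {}
    for port, octets in curr.items():
        rate = (octets - prev.get(port, 0)) / interval_s
        report[port] = {"bandwidth": rate, "hot": rate >= hot_threshold}
    return report
```

In practice the samples would come from the switches' port controllers (e.g., via periodic SNMP polls); the hot-spot flags then feed the congestion counters kept in the per-port database entries.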
TABLE 1

Processor      P0      P1      P2      P3
P0              0    3000    3000    1200
P1           3000       0     600       0
P2           1000     500       0     400
P3           5000       0     200       0

Based on analysis of the information in Table 1, the network monitoring device may specify that the processors P0, P1, and P2 are busy and that processor P3 has limited bandwidth. In some embodiments, an analyst may perform an independent analysis of the information in Table 1 to identify one or more processors from a list of the processors (P0-P7) that are busy or available. In some embodiments, the network monitoring device may generate a global view of a network of the supercomputer 300 in a graphical or tabular format showing a topology of the supercomputer 300, link utilization of the processors (P0-P7), a list of the processors (P0-P7) that are free, a list of the processors (P0-P7) that are busy, available bandwidth between the processors (P0-P7), and a number of hops that separates any two processors (P0-P7). In some embodiments, the network monitoring device may store all information of the global view in the database so that information within the global view may be used by the analyst to map new applications onto different topologies of the supercomputer 300. In some embodiments, the supercomputer 300 may be provided with an application programming interface (API) to allow third-party tools and libraries to access the data available in the global view from the database that is generated by the network monitoring device. In some embodiments, the network monitoring device may analyze the network and congestion data available within the global view, and then determine an optimal number of physical computational processors (P0-P7) to be allocated for each current application running within the supercomputer 300 that maximizes bandwidth and minimizes latency.
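Assuming one plausible parsing of the flattened Table 1 (row = sending processor, column = receiving processor), the busy/available analysis above can be sketched by totaling the messages each processor sent and received; the `traffic_per_processor` helper and any cutoff used to call a processor "busy" are illustrative assumptions.

```python
# Messages exchanged between processors, one plausible reading of Table 1
# (row = sender, column = receiver); the exact values are illustrative.
MESSAGES = {
    "P0": {"P0": 0,    "P1": 3000, "P2": 3000, "P3": 1200},
    "P1": {"P0": 3000, "P1": 0,    "P2": 600,  "P3": 0},
    "P2": {"P0": 1000, "P1": 500,  "P2": 0,    "P3": 400},
    "P3": {"P0": 5000, "P1": 0,    "P2": 200,  "P3": 0},
}

def traffic_per_processor(messages):
    """Total messages each processor exchanged (sent plus received)."""
    totals = {p: 0 for p in messages}
    for src, row in messages.items():
        for dst, count in row.items():
            totals[src] += count
            if dst != src:
                totals[dst] += count
    return totals
```

Ranking the totals gives the monitoring device (or an analyst) the busiest processors to avoid, and the least-loaded ones to offer, when mapping a new application.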
For instance, upon reviewing the global view data of the network, the network monitoring device may determine that processors (P3, P4, P5, and P6) may currently be executing a first application, but based on the analysis of the network and congestion data, the network monitoring device may determine a new combination of the processors (P4, P5, P6, and P7) for execution of the first application instead of the current processors (P3, P4, P5, and P6). The network monitoring device may then generate instructions to replace the processor P3 with the processor P7 for execution of the first application, thereby maximizing bandwidth and minimizing latency of resources of the supercomputer 300.

FIG. 4 shows execution steps of monitoring a network between nodes of a supercomputer, according to an exemplary method 400. The exemplary method 400 shown in FIG. 4 comprises execution steps 402, 404, 406, 408, 410, and 412. However, it should be appreciated that other embodiments may comprise additional or alternative execution steps, or may omit one or more steps altogether. It should also be appreciated that other embodiments may perform certain execution steps in a different order; steps may also be performed simultaneously or near-simultaneously with one another. In addition, the exemplary method 400 of FIG. 4 is described as being executed by a single monitoring tool, referred to as a network monitoring device having one or more processors and/or software modules in this exemplary embodiment. However, one having skill in the art will appreciate that, in some embodiments, steps may be executed by any number of monitoring tools operating in a distributed cloud computing environment. In some cases, a monitoring tool executing one or more steps may be programmed to execute various other, unrelated features, where such a monitoring tool does not need to be operating strictly as the network monitoring device described herein.
At step 402, an application monitoring module of a network monitoring device monitors communication messages between a plurality of processes being executed by a plurality of supercomputer nodes. In some embodiments, each of the plurality of supercomputer nodes may include one or more switches. In some embodiments, each of the plurality of supercomputer nodes may be connected to the one or more switches. In some embodiments, each of the plurality of supercomputer nodes may be wirelessly or physically connected to the one or more switches. The one or more switches may be utilized by the plurality of supercomputer nodes to build one or more network topologies. The one or more network topologies may be selected from a group comprising network topologies such as a fat-tree, a 2D mesh, a 2D/3D torus, and a Dragonfly. In some embodiments, the network monitoring device may be tapped into the one or more switches of the plurality of supercomputer nodes to monitor the network and/or the plurality of processes being executed by the plurality of supercomputer nodes.

At step 404, the application monitoring module generates a virtual network topology. The virtual network topology may contain a plurality of virtual communication links. The plurality of virtual communication links may be between the plurality of processes being executed by the plurality of supercomputer nodes. In some embodiments, a virtual network configuration may be of multiple types. One type of virtual network configuration may remain completely in the cloud, known as a cloud-only configuration, while the other type of virtual network configuration may allow both cloud-based and on-premises nodes to communicate. The cloud-only virtual network may be useful when an entire supercomputer and its various tiers reside in the cloud and there is no need for the supercomputer's virtual nodes to communicate with other supercomputer nodes in different networks.
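The virtual network topology generated in step 404 can be sketched as an aggregation of observed inter-process messages into weighted, undirected virtual communication links; the `(src, dst, bytes)` message-log format and the helper name are assumptions made for illustration.

```python
from collections import defaultdict

def build_virtual_topology(message_log):
    """Aggregate observed (src, dst, bytes) messages into a weighted
    undirected graph of virtual communication links between processes.

    Self-messages carry no inter-process traffic and are skipped.
    """
    links = defaultdict(int)
    for src, dst, nbytes in message_log:
        if src != dst:
            links[tuple(sorted((src, dst)))] += nbytes
    return dict(links)
```

The resulting link weights are exactly the per-link message volumes that step 406 turns into message counts and bandwidth values.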
The cloud-only virtual networks are virtual networks that reside entirely in the cloud. The virtual network reconfiguration may accommodate traffic that changes significantly between the nodes. By reconfiguring the virtual network, the network accommodates the traffic between the nodes even when the traffic pattern between the nodes changes significantly. The reconfiguration may have a large impact on the traffic passing over the reconfigured paths. The number of reconfigured paths may depend on the virtual network topology generated before the reconfiguration.

At step 406, the application monitoring module determines a number of communication messages being transmitted on each of the plurality of virtual communication links and a bandwidth value for each of the plurality of virtual communication links. In some embodiments, the application monitoring module may work in conjunction with tools of the one or more switches to gather data associated with each of the plurality of virtual communication links. For instance, the one or more switches may include a management tool, and the management tool may be configured to monitor and aggregate data associated with parameters of the one or more switches and/or parameters of the plurality of supercomputer nodes. The gathered data may include, but is not limited to, network traffic characteristics, physical information, health counters, and error counters. In some embodiments, the management tool may be configured to aggregate data per application running on the plurality of supercomputer nodes. In some embodiments, the management tool may be configured to aggregate data per specific fabric tenant node group of the plurality of supercomputer nodes. In some embodiments, the management tool may be configured to aggregate data per switch port of the one or more switches of the supercomputer.
Upon analysis of the aggregated data, the application monitoring module may determine a number of communication messages being transmitted on each of the plurality of virtual communication links and a bandwidth value for each of the plurality of virtual communication links.

At step 408, a traffic monitoring module of the network monitoring device monitors network traffic in a plurality of communication links interconnecting the plurality of supercomputer nodes. The network traffic may correspond to an amount of data moving across the network of the plurality of supercomputer nodes at a given point in time. The network data may be encapsulated in network packets, which constitute the load on the network. The network traffic data may be used by a sub-module of the traffic monitoring module, such as a network traffic measurement module, to measure an amount and type of traffic on a particular network. Upon measuring the amount and the type of traffic on a particular network, the traffic monitoring module may then determine congestion in the network. The congestion information may then be used to identify one or more hot spots within the network. The network traffic data may also be used by a sub-module of the traffic monitoring module, such as a network traffic control module, configured for managing, prioritizing, controlling, or reducing the network traffic. For instance, using the network traffic data, the traffic monitoring module may determine one or more supercomputer nodes of the plurality of supercomputer nodes that are currently being utilized by running one or more applications and one or more supercomputer nodes of the plurality of supercomputer nodes that are currently free. The traffic monitoring module may further determine a number of hops separating any two supercomputer nodes of the plurality of supercomputer nodes.
The traffic monitoring module may then reallocate supercomputer nodes for running the one or more applications based on an analysis of the locations of currently utilized and free nodes such that the overall network traffic is reduced and network latency is minimized. The network traffic data may also be used by a sub-module of the traffic monitoring module, such as a network traffic simulation module, configured to measure an efficiency of the communications network based on a current output being produced by the supercomputer in response to utilization of current resources derived from the network traffic data. In some embodiments, the traffic monitoring module may store gathered network traffic data in a database. The traffic monitoring module may query the database to retrieve the data gathered by the traffic monitoring module, and then generate a global networking view of the network traffic of the plurality of supercomputer nodes and the interconnecting plurality of communication links based on the gathered data. In some embodiments, the traffic monitoring module may generate the global networking view in a graphical format or a tabular format showing a topology of the supercomputer, link utilization of the supercomputer nodes, a list of the supercomputer nodes that are free, a list of the supercomputer nodes that are busy, available bandwidth between the supercomputer nodes, and a number of hops that separates any two supercomputer nodes.

At step 410, the network monitoring device receives an API call for mapping a new application to the plurality of supercomputer nodes. Upon receiving the API call, the traffic monitoring module of the network monitoring device may generate a graphical user interface on an analyst computing device to display the global networking view of the current network data and traffic showing currently available and busy supercomputer nodes of the plurality of supercomputer nodes.
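The number of hops that separates any two supercomputer nodes, reported in the global networking view above, can be computed with a breadth-first search over the node interconnect; the adjacency-list encoding and the `hop_counts` name are illustrative assumptions.

```python
from collections import deque

def hop_counts(adjacency, source):
    """Breadth-first search over the node interconnect, counting hops
    from `source` to every reachable node for the global networking view.

    `adjacency` maps node -> iterable of directly linked neighbors.
    """
    hops = {source: 0}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for nbr in adjacency.get(node, ()):
            if nbr not in hops:
                hops[nbr] = hops[node] + 1
                queue.append(nbr)
    return hops
```

Running this once per node fills in the full hop-count matrix of the global view; nodes absent from the result are unreachable from the source.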
At step 412, a topology mapping module of the network monitoring device maps the new application to the plurality of supercomputer nodes that are determined to be currently available from an analysis of the information retrieved from the global networking view of the network data. For instance, the network monitoring device may select one or more available supercomputer nodes of the plurality of supercomputer nodes having the lowest network traffic to execute the new application. In some embodiments, an administrator and/or an analyst of the supercomputer may select the one or more available supercomputer nodes of the plurality of supercomputer nodes having the lowest network traffic based on the analysis of the information retrieved from the global networking view of the network data to execute the new application such that the bandwidth is maximized and the network latency of the supercomputer is minimized.

The foregoing method descriptions and the process flow diagrams are provided merely as illustrative examples and are not intended to require or imply that the steps of the various embodiments must be performed in the order presented. The steps in the foregoing embodiments may be performed in any order. Words such as "then," "next," etc. are not intended to limit the order of the steps; these words are simply used to guide the reader through the description of the methods. Although process flow diagrams may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, and the like. When a process corresponds to a function, the process termination may correspond to a return of the function to a calling function or a main function.
The various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of this disclosure or the claims. Embodiments implemented in computer software may be implemented in software, firmware, middleware, microcode, hardware description languages, or any combination thereof. A code segment or machine-executable instructions may represent a procedure, a function, a subprogram, a program, a routine, a subroutine, a module, a software package, a class, or any combination of instructions, data structures, or program statements. A code segment may be coupled to another code segment or a hardware circuit by passing and/or receiving information, data, arguments, parameters, or memory contents. Information, arguments, parameters, data, etc. may be passed, forwarded, or transmitted via any suitable means including memory sharing, message passing, token passing, network transmission, etc. The actual software code or specialized control hardware used to implement these systems and methods is not limiting of the claimed features or this disclosure. 
Thus, the operation and behavior of the systems and methods were described without reference to the specific software code, it being understood that software and control hardware can be designed to implement the systems and methods based on the description herein. When implemented in software, the functions may be stored as one or more instructions or code on a non-transitory computer-readable or processor-readable storage medium. The steps of a method or algorithm disclosed herein may be embodied in a processor-executable software module, which may reside on a computer-readable or processor-readable storage medium. Non-transitory computer-readable or processor-readable media include both computer storage media and tangible storage media that facilitate transfer of a computer program from one place to another. Non-transitory processor-readable storage media may be any available media that may be accessed by a computer. By way of example, and not limitation, such non-transitory processor-readable media may comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other tangible storage medium that may be used to store desired program code in the form of instructions or data structures and that may be accessed by a computer or processor. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media. Additionally, the operations of a method or algorithm may reside as one or any combination or set of codes and/or instructions on a non-transitory processor-readable medium and/or computer-readable medium, which may be incorporated into a computer program product.
The preceding description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the embodiments described herein and variations thereof. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the subject matter disclosed herein. Thus, the present disclosure is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the following claims and the principles and novel features disclosed herein. While various aspects and embodiments have been disclosed, other aspects and embodiments are contemplated. The various aspects and embodiments disclosed are for purposes of illustration and are not intended to be limiting, with the true scope and spirit being indicated by the following claims.
11863412

DESCRIPTION OF THE EMBODIMENTS Some embodiments of the disclosure accompanied with the drawings will now be described in detail below. Where reference numerals are used in the following description, the same reference numerals appearing in different drawings denote the same or similar elements. These embodiments form only part of the disclosure and do not disclose all implementable manners of the disclosure. More specifically, these embodiments are only examples of the method and the device within the scope of the claims of the disclosure. FIG. 1 is a functional block diagram of a network traffic monitoring device according to an embodiment of the disclosure. With reference to FIG. 1, a network traffic monitoring device 10 may include a processor 11, a storage circuit 12, and a network traffic capturing interface 13. The processor 11 is coupled to the storage circuit 12 and the network traffic capturing interface 13. The processor 11 is configured to handle all or some operations of the network traffic monitoring device 10. For example, the processor 11 may be a central processing unit (CPU), or any other programmable general-purpose or special-purpose micro control unit (MCU), microprocessor, digital signal processor (DSP), programmable controller, application specific integrated circuit (ASIC), graphics processing unit (GPU), image signal processor (ISP), image processing unit (IPU), arithmetic logic unit (ALU), complex programmable logic device (CPLD), field programmable gate array (FPGA), or other similar elements or a combination of the above elements. The storage circuit 12 is configured to store data. The storage circuit 12 may be, for example, any type of fixed or removable random access memory (RAM), read-only memory (ROM), flash memory, hard disk drive (HDD), solid state drive (SSD), or similar elements or a combination of the above elements.
The storage circuit 12 may also be configured to store programming codes or various applications executable by the processor 11. The network traffic capturing interface 13 may be configured to obtain network flow data. For example, the network flow data may include network traffic data. For example, the network traffic capturing interface 13 may include a network interface card realized in the hardware form and/or a network traffic capturing program (or network traffic monitoring program) realized in the software form. In addition, the network flow data may include a plurality of network packets. The storage circuit 12 may be configured to store a plurality of mapping models (also referred to as candidate mapping models) 101 to 103. The mapping models 101 to 103 may be used (e.g., queried) by the processor 11 to obtain a distribution status of at least one monitoring item in the plurality of network packets. For example, the monitoring item may include header information in the link layer, network layer, transport layer, and application layer, such as at least one, or a combination, of a source Internet Protocol (IP) address, a destination IP address, a TCP/UDP source port, a TCP/UDP destination port, and a protocol number of the plurality of network packets. In addition, the total number of the mapping models 101 to 103 may be more or fewer, and is not limited by the disclosure. In an embodiment, the processor 11 may obtain a three-dimensional mapping model. For example, the input of the three-dimensional mapping model may include variables U1 and U2, and the output of the three-dimensional mapping model may include a parameter R(U1, U2). The variables U1 and U2 conform to the uniform distribution. The variables U1 and U2 are both values greater than 0 and less than 1. The parameter R(U1, U2) may be a maximally skewed stable distribution value calculated from the variables U1 and U2. Moreover, the parameter R(U1, U2) may also be referred to as an R function.
For example, the parameter R(U1, U2) may be obtained according to formulae (1.1) to (1.3) below.

W1 = π·(U1 − 1/2)  (1.1)

W2 = −log U2  (1.2)

R(U1, U2) = tan(W1)·(π/2 − W1) + log[(W2·cos W1)/(π/2 − W1)]  (1.3)

In an embodiment, the parameter R(U1, U2) may also be obtained according to formulae (2.1) to (2.3) below.

W1 = π·U1  (2.1)

W2 = −log U2  (2.2)

R(keyt) = R(U1, U2) = [sin(α·W1)/(sin W1)^(1/α)]·[sin(Δ·W1)/W2]^(Δ/α)  (2.3)

In formula (2.3), a parameter R(keyt) may also be used to represent the R function, and Δ = 1 − α. In an embodiment, the processor 11 may establish the three-dimensional mapping model according to all possible results of the parameters R(U1, U2) computed in advance. After that, during the process of monitoring network traffic, the processor 11 may query the three-dimensional mapping model according to the currently obtained variables U1 and U2 to obtain the corresponding parameter R(U1, U2), to accordingly obtain the distribution status of the monitoring item. However, the data volume of the three-dimensional mapping model is massive. For example, when the decimal precision is 4 digits and the three-dimensional mapping model is stored in the form of the 64-bit double-precision floating-point data type, the three-dimensional mapping model occupies about 2.5 GB of memory space, which is inefficient in use. In an embodiment, the processor 11 may generate a two-dimensional mapping model according to the three-dimensional mapping model. For example, the processor 11 may employ the inverse probability integral transform to compress the three-dimensional mapping model into the two-dimensional mapping model. For example, the processor 11 may control the sampling of the three-dimensional mapping model and sort the sampling results through the inverse probability integral transform to accordingly generate the two-dimensional mapping model. Compared to the three-dimensional mapping model, the two-dimensional mapping model has a smaller data volume and occupies less memory space.
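Under one reading of formulae (1.1) to (1.3), the R function can be sketched in Python as follows; the function name is ours, and the formula follows the reconstruction of the garbled original, so it is a sketch rather than a definitive implementation:

```python
import math

def r_value(u1: float, u2: float) -> float:
    """Maximally skewed stable draw R(U1, U2) per one reading of (1.1)-(1.3).

    u1 and u2 are uniform on (0, 1); the name r_value is illustrative.
    """
    w1 = math.pi * (u1 - 0.5)   # (1.1)
    w2 = -math.log(u2)          # (1.2)
    half_pi = math.pi / 2
    # (1.3): tan(W1)*(pi/2 - W1) + log(W2*cos(W1) / (pi/2 - W1))
    return math.tan(w1) * (half_pi - w1) + math.log(w2 * math.cos(w1) / (half_pi - w1))
```

For U1 = 0.5 the first term vanishes and W2 = 1 when U2 = e^−1, so the draw reduces to −log(π/2), which gives a quick sanity check on the implementation.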
After that, the processor 11 may generate the mapping models 101 to 103 according to a plurality of sample periods of the two-dimensional mapping model. In an embodiment, the input of the two-dimensional mapping model may include a variable x, and the output of the two-dimensional mapping model may include a parameter R(x). The variable x is also referred to as a sampling point of the two-dimensional mapping model. Different variables x may form a plurality of sampling points on the two-dimensional mapping model. Each sampling point may be mapped to the corresponding parameter R(x) via the two-dimensional mapping model. The parameter R(x) is also referred to as a mapping value corresponding to the variable x. FIG. 2 is a schematic diagram of a two-dimensional mapping model originated from the three-dimensional mapping model according to an embodiment of the disclosure. With reference to FIG. 2, a three-dimensional plane 21 in a three-dimensional space may be used to represent or describe the three-dimensional mapping model. For example, the three axes in the three-dimensional space may respectively correspond to the variables U1, U2 and the parameter R(U1, U2). After the variables U1 and U2 are input to the three-dimensional mapping model, the parameter R(U1, U2) may be obtained according to the output of the three-dimensional mapping model. In an embodiment, the processor 11 may compress the three-dimensional plane 21 in the three-dimensional space into a two-dimensional curve 22 in the two-dimensional space. For example, according to the inverse probability integral transform, the processor 11 may use a predetermined number of sampling points to sample the three-dimensional plane 21 and sort the sampling results. The sorted sampling results may be used to simulate or approximate the two-dimensional curve 22. The two-dimensional curve 22 may be used to represent or describe the two-dimensional mapping model.
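The sample-and-sort compression can be sketched as follows, reusing the formula-(1.1)-to-(1.3) reading of the R function. The number of sampling points and the use of Python's random module are illustrative choices, not the embodiment's prescribed procedure:

```python
import math
import random

def r_value(u1: float, u2: float) -> float:
    # One reading of formulae (1.1)-(1.3), reproduced for self-containment.
    w1 = math.pi * (u1 - 0.5)
    w2 = -math.log(u2)
    half_pi = math.pi / 2
    return math.tan(w1) * (half_pi - w1) + math.log(w2 * math.cos(w1) / (half_pi - w1))

def build_two_dim_model(num_points: int) -> list:
    """Sample the three-dimensional surface at random (U1, U2) pairs and sort
    the results; the sorted sequence approximates the two-dimensional curve
    R(x), with the sample index playing the role of the variable x."""
    samples = [r_value(random.random(), random.random()) for _ in range(num_points)]
    samples.sort()
    return samples
```

Because the output is sorted, the resulting list is monotone in x, which is what makes the later piecewise lookup and interpolation well defined.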
For example, the two axes in the two-dimensional space may respectively correspond to the variable x and the parameter R(x). After the variable x is input to the two-dimensional mapping model, the parameter R(x) may be obtained according to the output of the two-dimensional mapping model. In an embodiment, the processor 11 may divide the two-dimensional curve 22 into sample periods 201 to 203. For example, the sample period 201 covers the sampling range located between the sampling points 0 and x(1) on the two-dimensional curve 22, the sample period 202 covers the sampling range located between the sampling points x(1) and x(2) on the two-dimensional curve 22, and the sample period 203 covers the sampling range located between the sampling points x(2) and x(3) on the two-dimensional curve 22. In an embodiment, the sample period 201 is also referred to as a span region, the sample period 202 is also referred to as a head region, and/or the sample period 203 is also referred to as a tail region. The processor 11 may generate the mapping models 101 to 103 of FIG. 1 according to the mapping information reflected by the different sections of the two-dimensional curve 22 in the sample periods 201 to 203. FIG. 3 is a schematic diagram of a two-dimensional curve corresponding to a plurality of candidate mapping models according to an embodiment of the disclosure. With reference to FIG. 2 and FIG. 3, two-dimensional curves 301 to 303 may be used to represent different parts of the two-dimensional curve 22 located in the sample periods 201 to 203. In an embodiment, the processor 11 may respectively sample the two-dimensional curves 301 to 303 to generate the mapping models 101 to 103 according to at least part of the sampling points (also referred to as candidate sampling points) in the sample periods 201 to 203.
The generated mapping models 101 to 103 may respectively be reflected in mapping relations between the plurality of candidate sampling points and a plurality of mapping values (also referred to as candidate mapping values) in the sample periods 201 to 203. In an embodiment, it is assumed that one of the mapping models 101 to 103 is a first mapping model, and another one of the mapping models 101 to 103 is a second mapping model. The first mapping model may be reflected in a mapping relation (also referred to as a first mapping relation) between a plurality of first candidate sampling points and a plurality of first candidate mapping values in a first sample period. The second mapping model may be reflected in a mapping relation (also referred to as a second mapping relation) between a plurality of second candidate sampling points and a plurality of second candidate mapping values in a second sample period. In an embodiment, the total number of the candidate sampling points in a sample period may be controlled (e.g., reduced) to be less than the total number of predetermined sampling points in the sample period to reduce the data volume corresponding to the generated mapping model. Taking FIG. 2 and FIG. 3 as examples, assuming that the predetermined value of x(1) is 2 to the 10th power (i.e., 1024), it means that the sample period 201 is predetermined to include 1024 sampling points. According to the shape or value distribution of the two-dimensional curve 301, the processor 11 may set the total number of the candidate sampling points in the sample period 201 to 256 (or another number less than 1024), and these candidate sampling points are located at critical positions in the two-dimensional curve 301. The processor 11 may sample the two-dimensional curve 301 to obtain 256 (or another number less than 1024) candidate mapping values according to the candidate sampling points.
The processor 11 may establish the mapping model 101 according to the mapping relation between the 256 candidate sampling points and the candidate mapping values. Similarly, assuming that the predetermined value of x(2) is 2 to the 15th power (i.e., 32768) and the predetermined value of x(3) is 2 to the 16th power (i.e., 65536), it means that the sample periods 202 and 203 are both predetermined to include more than 30,000 sampling points. According to the shape or value distribution of the two-dimensional curves 302 and 303, the processor 11 may respectively set the total numbers of the candidate sampling points in the sample periods 202 and 203 to 6 and 15, and these candidate sampling points are respectively located at critical positions in the two-dimensional curves 302 and 303. The processor 11 may respectively sample the two-dimensional curves 302 and 303 according to the candidate sampling points to establish the mapping models 102 and 103. By greatly reducing the total number of the sampling points, the data volume in the mapping models 101 to 103 may be correspondingly reduced. In an embodiment, the processor 11 may generate an index parameter according to packet information of a certain network packet (also referred to as a first network packet) among the plurality of network packets. The packet information may include header information in the network packet. In an embodiment, in response to the monitoring item being the source IP address of the plurality of network packets, the packet information of the first network packet may include information of the source IP address of the first network packet. In an embodiment, in response to the monitoring item being the destination IP address of the plurality of network packets, the packet information of the first network packet may include information of the destination IP address of the first network packet.
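The reduction of each sample period to a small set of candidate sampling points (256, 6, and 15 points in the example above) can be sketched as follows, assuming the two-dimensional curve is available as a dense array indexed by x. The even spacing of candidate points is a simplification; the embodiment places them at curve-specific critical positions:

```python
def build_piecewise_models(curve,
                           periods=((0, 1 << 10, 256),
                                    (1 << 10, 1 << 15, 6),
                                    (1 << 15, 1 << 16, 15))):
    """For each sample period (start, end, num_points), keep only num_points
    candidate sampling points as (x, R(x)) pairs; the period boundaries and
    point counts mirror the example in the text."""
    models = []
    for start, end, num_points in periods:
        xs = (start + i * (end - start) // num_points for i in range(num_points))
        models.append([(x, curve[x]) for x in xs])
    return models
```

Each returned model is a sorted list of (x, R(x)) pairs, which is the representation assumed by the interpolation step later in the description.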
In an embodiment, in response to the monitoring item being the source port of the plurality of network packets, the packet information of the first network packet may include information of the source port of the first network packet. In an embodiment, in response to the monitoring item being the destination port of the plurality of network packets, the packet information of the first network packet may include information of the destination port of the first network packet. In an embodiment, the processor 11 may input the packet information (e.g., the source IP address, the destination IP address, the source port, or the destination port) of the first network packet to a random number generator. The random number generator may be configured to generate random numbers. The processor 11 may obtain the index parameter according to the output of the random number generator. The index parameter may include the variable x. For example, the random number generator may perform a hash operation on the packet information of the first network packet, and generate the index parameter according to an operation result of the hash operation. Accordingly, the index parameter exhibits (approximates) the properties of a random number. In addition, in an embodiment, the processor 11 may also generate the index parameter that exhibits (approximates) the properties of a random number by other software/hardware or other algorithms. According to the index parameter, the processor 11 may select one of the mapping models 101 to 103 and determine the selected mapping model to be a mapping model to be used (also referred to as a target mapping model). In particular, the index parameter may be between two adjacent sampling points (also referred to as a first sampling point and a second sampling point) of the target mapping model.
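A minimal sketch of deriving a pseudo-random index parameter from packet information via a hash, as the paragraph describes. The choice of SHA-256, the byte encoding of the header field, and the 2^16 domain are our assumptions, not the embodiment's prescribed generator:

```python
import hashlib

def index_parameter(packet_info: bytes, domain: int = 1 << 16) -> int:
    """Hash packet information (e.g., a source IP address) into an index
    parameter x(key) in [0, domain); the hash output approximates a
    uniform random draw while staying deterministic per flow."""
    digest = hashlib.sha256(packet_info).digest()
    return int.from_bytes(digest[:8], "big") % domain
```

Because the hash is deterministic, repeated packets carrying the same monitored header value always map to the same index parameter, so they update the same region of the sketch.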
Then, the processor 11 may obtain a reference value (also referred to as an interpolation mapping value) according to the index parameter, the first sampling point, the second sampling point, and the target mapping model. In an embodiment, the processor 11 may obtain a mapping value (also referred to as a first mapping value) corresponding to the first sampling point and a mapping value (also referred to as a second mapping value) corresponding to the second sampling point according to the target mapping model. Then, the processor 11 may perform an interpolation operation to obtain the interpolation mapping value according to the index parameter, the first sampling point, the second sampling point, the first mapping value, and the second mapping value. FIG. 4 is a schematic diagram of an interpolation operation according to an embodiment of the disclosure. With reference to FIG. 2 to FIG. 4, it is assumed that the index parameter is x(key) (or x(keyt)), and x(key) is in the sample period 202. In particular, x(key) is between two adjacent sampling points (i.e., candidate sampling points) x(i) and x(j) in the sample period 202, and the sampling points x(i) and x(j) both belong to the candidate sampling points in the sample period 202. Therefore, the processor 11 may determine the mapping model 102 corresponding to the two-dimensional curve 302 to be the target mapping model. Then, the processor 11 may obtain a mapping value R(i) corresponding to the sampling point x(i) and a mapping value R(j) corresponding to the sampling point x(j) according to the mapping model 102. In an embodiment, the processor 11 may perform an interpolation operation to obtain the interpolation mapping value according to formula (3.1) below.

R(keyt) = R(i) + [(R(j) − R(i))/(x(j) − x(i))]·(x(keyt) − x(i))  (3.1)

In formula (3.1), the parameter R(keyt) represents the interpolation mapping value corresponding to an index parameter x(keyt). The parameter R(keyt) is between the mapping value R(i) and the mapping value R(j).
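Reading formula (3.1) as standard linear interpolation between the bracketing candidate points (x(i), R(i)) and (x(j), R(j)), the lookup-and-interpolate step can be sketched as follows, with a mapping model represented as a sorted list of (x, R(x)) pairs:

```python
import bisect

def interpolate(model, key):
    """Linear interpolation between the two candidate sampling points x(i)
    and x(j) of the target mapping model that bracket the index parameter."""
    xs = [x for x, _ in model]
    j = bisect.bisect_right(xs, key)  # first candidate point strictly above key
    i = j - 1
    x_i, r_i = model[i]
    x_j, r_j = model[j]
    return r_i + (r_j - r_i) / (x_j - x_i) * (key - x_i)
```

This is what lets the sketch use only a handful of stored candidate points per sample period: any index parameter falling between them is resolved at query time instead of being precomputed.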
By performing the interpolation operation, even if the index parameter does not belong to any one of the candidate sampling points, the interpolation mapping value corresponding to the index parameter may still be quickly obtained. After obtaining the interpolation mapping value, the processor 11 may obtain an evaluation value according to the interpolation mapping value. In particular, the evaluation value may reflect the distribution status of the monitoring item in the plurality of network packets. For example, the evaluation value may include an evaluation value of entropy related to the monitoring item in the plurality of network packets. For example, when the monitoring item is the source IP address of the plurality of network packets, the evaluation value may reflect the distribution status of the source IP address of the plurality of network packets, and so on. In an embodiment, the processor 11 may obtain the evaluation value according to formulae (4.1) to (4.3) below.

Ĥ(φ) = −log[(1/k)·Σ_{j=0 to k−1} exp(y_j)]  (4.1)

y_j = y_j + R_j(keyt)·dt  (4.2)

y_j = y_j/Y  (4.3)

In formulae (4.1) to (4.3), a parameter R_j(keyt) represents an interpolation mapping value calculated corresponding to a network packet received at a time point t, the parameter Ĥ(φ) may be used to represent the evaluation value of entropy related to the plurality of network packets, dt = 1 means that a network packet (i.e., the first network packet) is received at the time point t, and Y represents the total number of network packets received within the monitoring time ΔT. In an embodiment, formula (4.1) above may also be replaced by formulae (5.1) and (5.2) below, where Ŝ denotes the intermediate sum defined in formula (5.2).

Ĥ(φ) = −log(Ŝ) − (1/Δ)·log(Y^α)  (5.1)

Ŝ = (Δ/k)·Σ_{j=0 to k−1} y_j^(−α/Δ)  (5.2)

In the embodiments above, the entropy of the network packet is estimated using one random number generator with one set of mapping models 101 to 103. However, in an embodiment, the processor 11 may also be provided with multiple random number generators and/or multiple sets of mapping models 101 to 103.
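Under one reading of formulae (4.1) to (4.3), the per-packet counter update and the end-of-window entropy evaluation can be sketched as follows. The class name and the update/estimate split are our framing, not the embodiment's:

```python
import math

class EntropySketch:
    """k counters y_0..y_{k-1}; each packet adds its interpolation mapping
    value R_j(key_t) to one counter per (4.2), and estimate() applies the
    normalization (4.3) followed by the evaluation (4.1)."""

    def __init__(self, k: int):
        self.y = [0.0] * k
        self.packet_total = 0  # Y: packets seen within the monitoring time

    def update(self, j: int, r_value: float) -> None:
        self.y[j] += r_value       # (4.2) with dt = 1
        self.packet_total += 1

    def estimate(self) -> float:
        k = len(self.y)
        normalized = [v / self.packet_total for v in self.y]          # (4.3)
        return -math.log(sum(math.exp(v) for v in normalized) / k)    # (4.1)
```

As a sanity check, if every counter stays at zero the normalized values are all zero and the estimate is −log(1) = 0.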
In particular, the multiple random number generators may generate different index parameters according to the same seed (e.g., the packet information). The processor 11 may perform the interpolation operation described above to respectively obtain a plurality of interpolation mapping values according to the index parameters output by the multiple random number generators with the multiple sets of mapping models 101 to 103. For example, one set of mapping models 101 to 103 may reflect different sections of one two-dimensional curve, and another set of mapping models 101 to 103 may reflect different sections of another two-dimensional curve. Then, the processor 11 may estimate the entropy of the network packet according to the interpolation mapping values. In an embodiment, the processor 11 may obtain the evaluation value according to formulae (6.1) to (6.3) below.

Ĥ(φ) = −log[(1/(mp·kp))·Σ_{i=0 to mp−1} Σ_{j=0 to kp−1} exp(Y_ij)]  (6.1)

Y_ij = Y_ij + R_ij(keyt)·dt  (6.2)

Y_ij = Y_ij/pktcount  (6.3)

In formulae (6.1) to (6.3), mp represents the total number of provided random number generators, and kp represents the total number of provided sets of mapping models 101 to 103. For example, assuming mp = 4 and kp = 5, it means that the processor 11 has been provided with four random number generators and five sets of mapping models 101 to 103. In addition, R_ij(keyt) represents the interpolation mapping value calculated according to the index parameter generated by the i-th random number generator with the j-th set of mapping models 101 to 103, and pktcount represents the total number of network packets received within the monitoring time ΔT. In an embodiment, formula (6.1) above may also be replaced by formulae (7.1) and (7.2) below, where Ŝ denotes the intermediate sum defined in formula (7.2).

Ĥ(φ) = −log(Ŝ) − (1/Δ)·log(pktcount^α)  (7.1)

Ŝ = (Δ/(mp·kp))·Σ_{i=0 to mp−1} Σ_{j=0 to kp−1} (Y_ij)^(−α/Δ)  (7.2)

The formulae mentioned in the embodiments above are exemplary and are not intended to limit the disclosure.
In addition, the formulae mentioned in the embodiments above may be adjusted depending on practical needs, and are not limited by the disclosure. FIG. 5 is a flowchart of a packet information analysis method according to an embodiment of the disclosure. With reference to FIG. 5, the method of this embodiment is adapted for the network traffic monitoring device 10 as shown in FIG. 1. In step S501, network flow data is obtained, and the network flow data includes a plurality of network packets. In step S502, an index parameter is generated according to packet information of a first network packet among the plurality of network packets. In step S503, a target mapping model is determined from a plurality of candidate mapping models according to the index parameter, and the index parameter is between a first sampling point and a second sampling point of the target mapping model. In step S504, an interpolation mapping value is obtained according to the index parameter, the first sampling point, the second sampling point, and the target mapping model. In step S505, an evaluation value is obtained according to the interpolation mapping value, and the evaluation value reflects a distribution status of a monitoring item in the plurality of network packets. However, each step in FIG. 5 has been described in detail above, and will not be repeatedly described here. Each step in FIG. 5 may be implemented as a plurality of programming codes or circuits, which is not limited by the disclosure. In addition, the method of FIG. 5 may be used with the exemplary embodiments above, and may also be used alone, which is not limited by the disclosure. In summary of the foregoing, the packet information analysis method and the network traffic monitoring device provided by the embodiments of the disclosure can be applied to high-speed network traffic analysis and network security monitoring.
Moreover, the packet information analysis method and the network traffic monitoring device can quickly estimate the entropy of network traffic, consume limited memory space, and be easily realized in the hardware form. It will be apparent to those skilled in the art that various modifications and variations can be made to the disclosed embodiments without departing from the scope or spirit of the disclosure. In view of the foregoing, it is intended that the disclosure covers modifications and variations provided that they fall within the scope of the following claims and their equivalents.
11863413

Like reference numerals are used to designate like parts in the accompanying drawings. DETAILED DESCRIPTION The detailed description provided below in connection with the appended drawings is intended as a description of the present examples and is not intended to represent the only forms in which the present examples are constructed or utilized. The description sets forth the functions of the examples and the sequence of operations for constructing and operating the examples. However, the same or equivalent functions and sequences may be accomplished by different examples. An orchestration agent is computer-implemented functionality for interoperating with an orchestrator for deploying services in the cloud or other communications network. A non-exhaustive list of examples of orchestrators is: Kubernetes (trade mark), Docker Swarm (trade mark), Azure (trade mark) container instances, Openshift (trade mark) container platform. The term “observability framework” is used to refer to computer-implemented functionality for enabling an end user to view data about the state of services deployed in a communications network. The term “genericized metric” is used to refer to a numeric or Boolean measurement value. The value is in a form understood by an orchestration agent and an observability framework and has a key that can be addressed. In a non-limiting example, a genericized metric is “packet loss”. By using a numeric or Boolean value it is possible to efficiently compare a genericized metric with another genericized metric or with a numerical threshold (rather than having to encode an understanding of enumerated states, for instance). The automated deployment and lifecycle management of NSs and individual NFs is called automated orchestration. Deploying, upgrading or making other changes to the NFs is potentially hazardous to the operation of the NS they are a part of and/or to individual NFs within the NS.
It is often impossible to predict the full impact of any change until it is deployed and in live operation. The inventors recognize that it is therefore crucial that such upgrades are rolled out incrementally, often at a single NF, and that the impact on the NS and/or individual NFs within the NS is accurately determined before further roll out. One approach is to use dedicated feedback systems to assess the impact of upgrading and deploying NFs. However, the inventors have recognized that using dedicated feedback systems requires extra connections, additional code and physical and/or virtual resources, leading to inefficiencies. As discussed above, it is desirable to accurately appreciate the impact of a new deployment on the status and performance of a network service and/or individual NFs within the NS. One approach attempts to provide this through a dedicated feedback system with additional connections and resources, leading to inefficiency. In contrast to using a dedicated feedback system, the present technology takes an observability framework, which may be already available, and adapts it in order to improve efficiency whilst at the same time giving high quality feedback which is usable to control deployment of network services in an automated manner. An observability framework, which may have been originally designed only for use by human operators, is adapted for interoperation with an orchestration agent in an automated manner. In this way efficient, automated orchestration is achieved as now explained with reference to the drawings. In addition, the observability framework is still usable by a human operator to view data. FIG. 1 is a schematic diagram illustrating a system for orchestrating deployment of network services. A system 100 comprises a network service (NS) 102, other network services 104, an observability framework 106, an orchestration agent 116, and three network functions (NFs) 110, 112, 114.
A human operator 108 is able to access the observability framework in order to view data and is able to access the orchestration agent 116. However, it is not essential for the human operator 108 to be present, as in some cases the technology is fully automated without the involvement of a human operator. As described above, NSs may comprise one or more NFs. In the example of system 100 the NS 102 comprises at least three NFs 110, 112, 114. The NS 102 is a VoIP telephony or video telephony service or any other type of network service. The NFs 110, 112, 114 are virtual NFs, physical NFs or cloud NFs. The NF 110 is a router, load balancer or any other type of NF. As part of general observability requirements, NFs report log streams of events along with status and performance metrics. These allow an operator to observe the specific status of the individual NFs. Each type of NF produces this monitoring data with formatting and content specific to that type. As NSs often comprise many different types of NFs, the monitoring data from an NS will contain varying content and formatting. In the system 100 this monitoring data from the NFs of the NS is used to decide on actions in the system. For example, the actions may be to make decisions about further deployment, for example when deciding to further deploy other NSs 104. This could be at other sites, or for different operators at a same site. This is more efficient than existing systems because it makes use of the monitoring data that is already produced by the NFs. However, as discussed, the monitoring data is specific to NFs and not all of the monitoring data will be useful to determine the actions on further deployment. As part of the deployment the NFs 110, 112, 114 are sometimes upgraded and therefore lead to unpredictable effects on the behavior of the NS. The NFs 110, 112, 114 forward monitoring data, which is log streams and/or metrics, to the observability framework 106.
The log streams each comprise a stream of events (such as chronological events) that have been logged at the NF or NFs. A non-exhaustive list of examples of the metrics in the monitoring data is any one or more of: packet loss, error rates, peak or mean response times, requests per second, jitter, thread count. The observability framework allows an operator 108 to observe this monitoring data. The observability framework 106 aggregates the monitoring data into genericized metrics for the NS 102. The inventors have found that aggregating the monitoring data into genericized metrics for the NS allows the health of the NS as a whole to be determined, as well as that of the individual NFs. The genericized metrics comprise identified data from the monitoring data that identifies the status and/or performance of the NS 102 and therefore allows automated decisions to be made regarding further deployment. The identified data depends on the monitoring data received, as different NFs will provide different log streams and/or metrics. The identified data also depends on the type of NS, as different monitoring data will be more relevant to the decision to further deploy different types of NS. For example, in a cloud telephony NS the packet loss percentage may be a highly relevant metric. The aggregation of the monitoring data into genericized metrics by the observability framework 106 may occur as for the case where an observability framework provides data to a human operator 108 without integration with an orchestration agent. Details about the aggregation are described below. The genericized metrics are made available by the observability framework 106 to the orchestration agent 116, and optionally to the operator of the NS 102. The orchestration agent 116 receives the genericized metrics and determines whether at least one portion of the genericized metrics meets at least one threshold.
If it determines that the portion of the genericized metrics does or does not meet this threshold, an action is taken regarding the other network services 104. The action which is taken by the orchestration agent may be any one or more of: an action on the network function from which genericized metrics were obtained, an action on the network service from which genericized metrics were obtained, an action on another network function, an action on another network service. Using the above example of packet loss percentage, a threshold may be 2.5%. If the portion of the genericized metrics indicates the packet loss percentage is above that threshold, the orchestration agent 116 takes the action of halting the deployment at the other NSs 104 in some examples. Other actions that may be taken include continuing the deployment process, modifying the deployment process or any other deployment action. System 100 therefore provides a more efficient means for orchestrating deployment, as it allows deployment actions to be determined based on metrics and logs that are already present and collected in an orchestration system, but in an NF-specific form which is then genericized. System 100 may be implemented as a cloud-based or datacenter system. The NF does not need to be modified to provide the metrics and/or logs. An observability framework may be available which provides data, such as the metrics and logs, to a human operator 108. The observability framework may be adapted to provide that data to the orchestration agent 116. FIG. 2 is a flow diagram illustrating a method 200 for generating genericized metrics of a network service at an observability framework such as the observability framework of FIG. 1. Method 200 begins at operation 202, where the observability framework, such as that described in system 100, receives monitoring data from one or more NFs of an NS.
As discussed above, this monitoring data comprises metrics and/or log streams from the one or more NFs and is specific in content and format to the type of NF that generated the monitoring data. The monitoring data is received out of chronological order or with varying priorities in some cases. For example, an event log may be forwarded as soon as it is logged by the NF. This allows for rapid determination of new genericized metrics by the observability framework compared to other implementations without asynchronous receiving of monitoring data. At operation 204, the observability framework identifies, from the monitoring data of the NFs, status and/or performance data for the NS. The observability framework receives different monitoring data depending on the type of NF or NFs of which the NS is comprised. As discussed above, for different types of NS, different metrics and events in log streams will be indicative of status and/or performance. Only monitoring data that is present from the NFs and relevant to the NS is therefore identified. The identification is performed by searching the monitoring data for target fields of metrics and events known to be relevant to the NS in some cases. The identification may be performed based on configuration rules. The identification is based on types of stored thresholds at the orchestration agent in some examples. At operation 206 the identified data is then aggregated into genericized metrics. The aggregation combines the identified monitoring data and filters out other monitoring data that was not deemed relevant to the status and/or performance of the NF or NS in operation 204. The genericized metrics therefore comprise a combination of events from log streams and metrics from the NFs in some examples. Unlike the NF-specific logs and metrics, the genericized metrics have a generic format that can be read by the orchestration agent without modification.
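Operations 204 and 206 can be sketched as a filter over target fields followed by an aggregation into a generic format. The field names and the per-NS-type lookup table are illustrative assumptions; a real implementation could equally use configuration rules or stored thresholds to drive the identification:

```python
# Hypothetical table of target fields known to indicate the status and/or
# performance of each NS type (operation 204 searches for these fields).
RELEVANT_FIELDS = {
    "telephony": {"packet_loss_pct", "jitter_ms", "flow_dropped"},
}

def identify(ns_type, nf_monitoring):
    """Operation 204: keep only monitoring data relevant to this NS type."""
    targets = RELEVANT_FIELDS[ns_type]
    identified = {}
    for nf_data in nf_monitoring:            # one dict per NF
        for name, value in nf_data.items():
            if name in targets:
                identified[name] = value     # irrelevant data is filtered out
    return identified

def aggregate(ns_type, nf_monitoring):
    """Operation 206: combine identified data into genericized metrics."""
    return {"ns_type": ns_type, **identify(ns_type, nf_monitoring)}

generic = aggregate("telephony", [
    {"packet_loss_pct": 2.5, "cpu_temp_c": 61.0},   # router metrics
    {"flow_dropped": 1, "fan_rpm": 3200},           # fields derived from the log stream
])
# cpu_temp_c and fan_rpm are not relevant to the telephony NS and are dropped.
```

The resulting dictionary is one possible "generic format" that an orchestration agent could read without NF-specific parsing.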
The genericized metric may comprise at least one field corresponding to a type of the identified monitoring data, e.g. a packet loss percentage field. At operation 208 the genericized metric is made available to an orchestration agent for use in instructing an orchestrator of the network service. FIG. 3 is a schematic diagram 300 illustrating a simplified example of the generation of genericized metrics for an example network service. Diagram 300 comprises a router log stream 302 and router metrics 304 forwarded to an observability framework 106. Telephony service genericized metrics 306 are also forwarded from the observability framework 106. In the example of diagram 300, the NS is a telephony service and the NF is a router. It will be readily understood that the techniques in this disclosure can be applied to many other types of NS and NF and that there may be more than one NF and more than one type of NF. The type of NF and NS partially determines which monitoring data will be included in the genericized metrics from the observability framework 106 in some cases. As described above, the monitoring data from the NFs is often specific to the type of NF. In the example of diagram 300, the router NF forwards two types of monitoring data, the router log stream 302 and the router metrics 304. The router log stream 302 comprises two entries: an unauthorized login attempt event and a flow dropped event. The router metrics 304 comprise two metrics: jitter and packet loss. The events and metrics are presented in this way for ease of explanation and in reality are forwarded in a different format. At the observability framework 106 the monitoring data comprising the router log stream 302 and router metrics 304 is received. The observability framework 106 then identifies the monitoring data that indicates the status and/or performance of the network service, in this example a telephony service.
Of the two entries in the router log stream 302, the observability framework 106 identifies the events that indicate the status and/or performance. This may be none, one or more of the events in the router log stream 302. The same identification is performed on the router metrics 304. In the example of diagram 300, the observability framework 106 identifies that the flow dropped event and the packet loss metric of the router NF indicate the status and/or performance of the telephony NS. Once identified, the observability framework 106 aggregates the identified monitoring data into the telephony service genericized metrics 306. This includes both the identified event from the router log stream 302 and the identified metric from the router metrics 304. As part of an observability process for orchestration, the observability framework 106 also provides the monitoring data to an operator of the network service, optionally including the genericized metrics. The extra functionality for the observability framework 106 therefore provides genericized metrics 306 for a network service using NF-specific metrics and logs already being collected, without requiring a dedicated system to collect the NS metrics. The observability framework 106 is observing a cloud-based system and receives feedback from many NFs and NSs across multiple sites. The processing for the observability framework 106 is therefore distributed across multiple networked servers in some cases for scalability. FIG. 4 is a flow diagram illustrating a method for performing orchestration actions at an orchestration agent based on genericized metrics of an NS. The method may be carried out by one or more processors from instructions stored in memory. At operation 402 the orchestration agent receives the genericized metrics for the NS. As discussed above, the genericized metrics are generated from logs and metrics of the NFs in the NS that indicate the status and/or performance of the NS.
The genericized metrics may contain one or more fields corresponding to different types of monitoring data. At operation 404 the orchestration agent determines whether one or more portions of the genericized metrics have met at least one threshold. The orchestration agent determines whether a value of the portion is above or below the threshold. The orchestration agent applies the threshold to the portion of the genericized metric corresponding to one or more fields in the genericized metrics. In some examples, the orchestration agent at operation 404 uses logical combinations of one or more thresholds, or arithmetic combinations of thresholds. At operation 406 the orchestration agent applies an action on one or more other network services based on the result of the determination of operation 404. When the value at the portion meets or exceeds the threshold, it indicates that the deployment caused a status and/or performance degradation that is not acceptable in some cases. The NS deployment is therefore not safe to roll out to other NSs, and any factor that makes this NS different from the other NSs is not replicated to them. In such cases the orchestration agent takes action to halt the deployment process at the other NSs. In other cases, where the threshold is a minimum that is not met, this indicates that the deployment of the NS is not working correctly and is therefore not safe to roll out to the other NSs. Another possible action is to automatically modify the deployment based on the determination. Another possible action is for the deployment to delete the NS and then re-attempt creating it, or, when modifying the NS, to attempt to roll back to a previous good configuration. The operation at 406 occurs at the NF level in some cases, such as when gradually updating an NS one NF at a time.
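Operations 402 to 406 can be sketched as a threshold check followed by an action selection. The function and action names are illustrative assumptions; the disclosure also permits continuing, modifying, deleting and re-creating, or rolling back a deployment:

```python
# Hedged sketch of operations 402-406: receive genericized metrics,
# compare a portion against a stored threshold, and select a deployment
# action for the other network services.
def determine_action(generic_metrics, thresholds):
    """Return a deployment action based on threshold determinations."""
    for field_name, limit in thresholds.items():
        value = generic_metrics.get(field_name)
        if value is not None and value >= limit:
            # Degradation is not acceptable: do not replicate the
            # deployment to the other NSs.
            return "halt_deployment"
    return "continue_deployment"

# Example: 2.5% packet loss against a 2.0% threshold halts the rollout.
action = determine_action({"packet_loss_pct": 2.5}, {"packet_loss_pct": 2.0})
```

In practice the selected action could be applied per NS or, when updating one NF at a time, at the NF level.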
In summary, the actions which are possible at operation 406 are any one or more of: an action on the network function from which genericized metrics were obtained, an action on the network service from which genericized metrics were obtained, an action on another network function, an action on another network service. FIG. 5 is a schematic diagram illustrating an example of an orchestration agent 116 taking orchestration actions. Diagram 500 comprises telephony service genericized metrics 506, and orchestration agent 116, which includes the received genericized metrics 502 and table 504. Also shown in FIG. 5 is the entity or entities 508 to which the orchestration agent applies an action. These are any one or more of: a network function from which genericized metrics were obtained, a network service from which genericized metrics were obtained, another network function, another network service. In the example of diagram 500 the same telephony service genericized metrics 506 generated in diagram 300 are received by the orchestration agent 116. The orchestration agent is able to receive and store genericized metrics 502 from one or more network services. The network services are often across one or more different sites. It is the task of the orchestration agent 116 to safely deploy network services. To determine whether a network service deployment is operating safely, deployments are rolled out incrementally, for example at one NS at one site at a time; this is sometimes referred to as "canarying". When the NS deployment is deemed to be safe, it is rolled out to one or more other NSs 104 instantiated at the same site or at remote sites. The other NSs 104 may be a same NS as the NS, and the deployment may be an upgrade applied first to the NS. The orchestration agent uses the genericized metrics to automatically determine the safety of the NS site and take actions at the other NSs. In the example of diagram 500 the telephony service genericized metrics include the metric packet loss.
The orchestration agent has a stored packet loss threshold, shown in the table 504. The thresholds are entered manually by an operator in some cases. Alternatively, the thresholds are set automatically based on historical data or quality of service (QoS) parameters. The orchestration agent makes a determination of whether the value of the packet loss metric portion in the telephony service genericized metrics meets or exceeds the packet loss threshold value in table 504. In the example of diagram 500 the packet loss metric has a value of 2.5% compared to the threshold value of 2%. The determination made by the orchestration agent 116 is therefore that the NS has exceeded the threshold for packet loss. Having determined that the NS has exceeded the threshold, the orchestration agent 116 then performs an action based on the determination. In the example of diagram 500 the action is to halt the deployment to the one or more NSs 104. The choice of action may be based on the result of more than one threshold determination from the genericized metrics 502. The choice of action in response to the determination is performed using combinational logic. Alternatively, a trained machine learning model is used to select an appropriate output from the one or more threshold determinations. The action performed by the orchestration agent is any one or more of: an action on the network function from which genericized metrics were obtained, an action on the network service from which genericized metrics were obtained, an action on another network function, an action on another network service. The orchestration agent 116 is often orchestrating a cloud-based system and responsible for network services at many sites. The processing performed for the orchestration agent 116 is therefore distributed across multiple networked servers, as mentioned above.
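The combinational-logic choice of action from several threshold determinations can be sketched as follows. The threshold table values mirror the diagram 500 example (2.5% packet loss against a 2% limit); the jitter entry and all names are illustrative assumptions:

```python
# Sketch of evaluating a stored threshold table (like table 504) and
# choosing an action from the resulting determinations with
# combinational logic.
def threshold_results(metrics, table):
    """One Boolean determination per stored threshold."""
    return {name: metrics.get(name, 0.0) > limit for name, limit in table.items()}

def choose_action(results):
    """Combinational logic over the determinations: any breach halts."""
    if results.get("packet_loss_pct") or results.get("jitter_ms"):
        return "halt"        # deployment not safe to roll out to other NSs
    return "continue"        # deemed safe: roll out to the other sites

table = {"packet_loss_pct": 2.0, "jitter_ms": 30.0}   # illustrative limits
results = threshold_results({"packet_loss_pct": 2.5, "jitter_ms": 4.2}, table)
action = choose_action(results)   # packet loss breached, so "halt"
```

A trained machine learning model could replace `choose_action` to map the same determinations to an action, as the description notes.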
The orchestration agent improves the functioning of network service deployment by using genericized metrics from the observability framework in an automated manner. FIG. 6 illustrates various components of an exemplary computing-based device 600 which is implemented as any form of a computing and/or electronic device, and in which any of the above embodiments are implemented in some examples. In some cases the computing-based device 600 is used to implement an orchestration agent. In some cases the computing-based device 600 is used to implement an observability framework. In some cases the computing-based device 600 implements both an observability framework and an orchestration agent. Computing-based device 600 comprises one or more processors 602 which are microprocessors, controllers or any other suitable type of processors for processing computer-executable instructions to control the operation of the device to implement the examples described above. Platform software comprising an operating system 612 or any other suitable platform software is provided at the computing-based device to enable application software 614 to be executed on the device. The computer-executable instructions are provided using any computer-readable media that is accessible by computing-based device 600. Computer-readable media includes, for example, computer storage media such as memory 618 and communications media. Computer storage media, such as memory 618, includes volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or the like.
Computer storage media includes, but is not limited to, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that is used to store information for access by a computing device. In contrast, communication media embody computer-readable instructions, data structures, program modules, or the like in a modulated data signal, such as a carrier wave, or other transport mechanism. As defined herein, computer storage media does not include communication media. Therefore, a computer storage medium is not to be interpreted to be a propagating signal per se. Although the computer storage media (memory 618) is shown within the computing-based device 600, it will be appreciated that the storage is, in some examples, distributed or located remotely and accessed via a network or other communication link (e.g. using communication interface 604). The computing-based device 600 also comprises an input/output controller 606 arranged to output display information to a display device 608 which may be separate from or integral to the computing-based device 600. The display information may provide a graphical user interface. The input/output controller 606 is also arranged to receive and process input from one or more devices, such as a user input device 610 (e.g. a mouse, keyboard, camera, microphone or other sensor). In some examples the user input device 610 detects voice input, user gestures or other user actions and provides a natural user interface (NUI). This user input may be used to set threshold values in the orchestration agent.
In an embodiment the display device 608 also acts as the user input device 610 if it is a touch-sensitive display device. The input/output controller 606 outputs data to devices other than the display device in some examples, e.g. a locally connected printing device (not shown in FIG. 6). Alternatively or in addition to the other examples described herein, examples include any combination of the following clauses: Clause A. A method comprising: using an observability framework, receiving monitoring data corresponding to at least one network function of a network service; using the observability framework, identifying data about the network service in the monitoring data; using the observability framework, aggregating the identified monitoring data into genericized metrics for the network service; using the observability framework, making the genericized metrics available to an orchestration agent; and using the orchestration agent to trigger an operation being any one or more of: an action on the network function from which the monitoring data was received, an action on the network service, an action on another network function, an action on another network service; wherein the orchestration agent is configured to trigger the operation based on identifying at least one of the genericized metrics and an associated threshold. Clause B. The method according to clause A wherein at least one of the network functions of the network service has been deployed. Clause C. The method according to clause A or B wherein the monitoring data comprises metrics of the network function. Clause D. The method according to clause A or B wherein the monitoring data comprises log streams of the network function. Clause E. The method according to clause A or B wherein the content and format of the monitoring data is specific to the network function. Clause F. The method according to any preceding clause wherein the monitoring data is received from the at least one network function. Clause G.
The method according to clause A or B wherein the genericized metric comprises at least one field corresponding to a type of the identified monitoring data. Clause H. The method according to clause A or B wherein identifying the data about the network service is based at least partially on a type of the network service. Clause I. The method according to clause C wherein the metrics of the network function comprise at least one of packet loss, error rates, peak or mean response times, requests per second, and thread count. Clause J. The method according to clause D wherein the log streams comprise a stream of events generated at the network function. Clause K. A system comprising: an observability framework configured to: receive monitoring data corresponding to at least one network function of a network service; identify data about the network service in the monitoring data; aggregate the identified monitoring data into genericized metrics for the network service; and make the genericized metrics available to an orchestration agent; and an orchestration agent configured to trigger an operation being any one or more of: an action on the network function from which the monitoring data was received, an action on the network service, an action on another network function, an action on another network service; wherein the orchestration agent is configured to trigger the operation based on identifying at least one of the genericized metrics and an associated threshold. Clause L. The system of clause K wherein the observability framework is an observability framework for providing data to a human operator and is adapted for automatic interoperation with the orchestration agent. Clause M. The system of clause K or L wherein the observability framework and the orchestration agent both have a protocol of the genericized metrics. Clause N.
An orchestration agent for safely deploying network services comprising: at least one processor; and at least one memory storing instructions which, when run by the processor, cause the processor to: receive genericized metrics for a network service from an observability framework; determine whether one or more values of one or more portions of the genericized metrics have met at least one threshold; and perform an action on at least one network service or network function based at least in part on the determination. Clause O. The orchestration agent of clause N wherein the received genericized metrics comprise at least one field corresponding to a type of the identified monitoring data; and the at least one portion is at least partially based on data corresponding to the at least one field. Clause P. The orchestration agent according to clause N or O wherein the at least one other network service is instantiated in a remote site to the network service. Clause Q. The orchestration agent according to any of clauses N to P wherein the at least one other network service is a same network service instantiated in the remote site to the network service. Clause R. The orchestration agent according to any of clauses N to Q wherein the action is to continue a deployment process of the at least one other network service. Clause S. The orchestration agent according to any of clauses N to Q wherein the action is to halt a deployment process of the at least one other network service. Clause T. The orchestration agent according to any of clauses N to Q wherein the action is to modify a deployment process of the at least one other network service.
A method for deploying a network service implemented by a plurality of network functions in a communications network, the method comprising: receiving monitoring data corresponding to at least one network function of the network service; identifying, in the monitoring data, data pertaining to the network service; aggregating the data pertaining to the network service into genericized metrics for the network service, the genericized metrics comprising numeric or Boolean measurement values; sending the genericized metrics to an orchestration agent configured to interoperate with an orchestrator operable to deploy the network service in the communications network; and triggering, by the orchestration agent, an operation comprising one or more of: an action on the network function from which the monitoring data was received, an action on the network service, an action on another network function, or an action on another network service; wherein the operation is triggered based on at least one of the genericized metrics for the network service and an associated threshold. The term 'computer' or 'computing-based device' is used herein to refer to any device with processing capability such that it executes instructions. Those skilled in the art will realize that such processing capabilities are incorporated into many different devices and therefore the terms 'computer' and 'computing-based device' each include personal computers (PCs), servers, mobile telephones (including smart phones), tablet computers, set-top boxes, media players, games consoles, personal digital assistants, wearable computers, and many other devices. The methods described herein are performed, in some examples, by software in machine-readable form on a tangible storage medium, e.g.
in the form of a computer program comprising computer program code means adapted to perform all the operations of one or more of the methods described herein when the program is run on a computer, and where the computer program may be embodied on a computer-readable medium. The software is suitable for execution on a parallel processor or a serial processor such that the method operations may be carried out in any suitable order, or simultaneously. Those skilled in the art will realize that storage devices utilized to store program instructions are optionally distributed across a network. For example, a remote computer is able to store an example of the process described as software. A local or terminal computer is able to access the remote computer and download a part or all of the software to run the program. Alternatively, the local computer may download pieces of the software as needed, or execute some software instructions at the local terminal and some at the remote computer (or computer network). Those skilled in the art will also realize that, by utilizing conventional techniques known to those skilled in the art, all or a portion of the software instructions may be carried out by a dedicated circuit, such as a digital signal processor (DSP), programmable logic array, or the like. Any range or device value given herein may be extended or altered without losing the effect sought, as will be apparent to the skilled person. Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims. It will be understood that the benefits and advantages described above may relate to one embodiment or may relate to several embodiments.
The embodiments are not limited to those that solve any or all of the stated problems or those that have any or all of the stated benefits and advantages. It will further be understood that reference to ‘an’ item refers to one or more of those items. The operations of the methods described herein may be carried out in any suitable order, or simultaneously where appropriate. Additionally, individual blocks may be deleted from any of the methods without departing from the scope of the subject matter described herein. Aspects of any of the examples described above may be combined with aspects of any of the other examples described to form further examples without losing the effect sought. The term ‘comprising’ is used herein to mean including the method blocks or elements identified, but that such blocks or elements do not comprise an exclusive list and a method or apparatus may contain additional blocks or elements. It will be understood that the above description is given by way of example only and that various modifications may be made by those skilled in the art. The above specification, examples and data provide a complete description of the structure and use of exemplary embodiments. Although various embodiments have been described above with a certain degree of particularity, or with reference to one or more individual embodiments, those skilled in the art could make numerous alterations to the disclosed embodiments without departing from the scope of this specification. | 35,229 |
11863414 | DETAILED DESCRIPTION Some embodiments provide a computer program product comprising a non-volatile computer-readable medium and non-transitory program instructions embodied therein, the program instructions being configured to be executable by a central processing unit of a baseboard management controller to cause the processor to perform various operations. The operations comprise receiving a message from a system management computer, wherein the message instructs the baseboard management controller of a server to cause a host central processing unit on the server to run network diagnostics on a host network physically connected to the server. The operations further comprise instructing, in response to receiving the message, the host central processing unit to boot from a bootable image stored on a data storage device hosted by the baseboard management controller and run a network diagnostic utility included with the bootable image to monitor network traffic on the host network. A baseboard management controller (BMC) is a small computer that resides on the motherboard of a server and some other devices, such as higher-end switches, to provide remote monitoring and control of the server. Redfish is the current standard used to expose the BMC functions as defined by the Distributed Management Task Force (DMTF) and largely replaces the older Intelligent Platform Management Interface (IPMI) standard. The BMC is a specialized microcontroller that is typically embedded on the motherboard of a computer server and has its own firmware and memory. The BMC manages the interface between system-management software and platform hardware. The BMC monitors the server hardware by receiving input from various sensors built into the server, including such input as component temperatures, cooling fan speeds, power status, and the like. Furthermore, the BMC can send alerts and operating data to a system administrator over a network under various conditions.
The system administrator may also remotely communicate with the BMC to take some corrective actions, such as resetting or power cycling the server to get a hung operating system running again. Some BMCs may also have out-of-band embedded web-server interface functionality, enabling an administrator to monitor and take action via the BMC from a remote computer with a web-browser. Other out-of-band interfaces include an Intelligent Platform Management Interface (IPMI), Redfish interface, and Common Information Model (CIM) interface. In some embodiments, the operations may further comprise the baseboard management controller communicating with the system management computer over a management network using a host network interface controller on the server. For example, the baseboard management controller may communicate via a direct physical connection with the host network interface controller using the Network Controller Sideband Interface (NC-SI) protocol. In some embodiments, the operations may further comprise the baseboard management controller communicating with the system management computer over a management network using a dedicated management network interface controller. In some embodiments, the operations of the central processing unit of the baseboard management controller may further comprise receiving the bootable image from the system management computer and storing the bootable image on the data storage device hosted by the baseboard management controller. In one option, the baseboard management controller may receive and store the bootable image at some time prior to, or without regard to, a need to run network diagnostics on the host network. 
Specifically, the bootable image may be received and stored during initial setup of the server and/or other time period independent of the message instructing the baseboard management controller of the server to cause the host central processing unit on the server to run network diagnostics on the host network physically connected to the server. In another option, the baseboard management controller may receive the bootable image in association with a need to run network diagnostics on the host network. Specifically, the baseboard management controller may receive both the bootable image and the message during a single communication session. In some embodiments, the server may be deployed in a remote data center or edge location, and the network diagnostic utility may be run in support of unattended deployment of the server in the host network under the control of the system management computer. While embodiments may be used in any environment and/or deployment scenario, embodiments may facilitate remote network diagnostics and unattended server deployment. In some embodiments, the baseboard management controller may instruct the host central processing unit to boot from the bootable image and run the network diagnostic utility by communicating with the host central processing unit through a system bus within the server. This is made possible because the baseboard management controller is installed in the same server as the host central processing unit. The network diagnostic utility may include any type and number of diagnostic utilities and may analyze any type of network activity. Without limitation, the network diagnostic utility may analyze Address Resolution Protocol (ARP) network activity, Service Location Protocol (SLP) network activity, Dynamic Host Configuration Protocol (DHCP) network activity, Link Layer Discovery Protocol (LLDP) network activity, and/or Internet Protocol version 6 (IPv6) Neighbor Discovery solicitations.
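The kinds of traffic the network diagnostic utility monitors can be sketched as a simple frame classifier. The frame-summary dictionary format is an illustrative assumption (a real utility would capture frames via raw sockets or a pcap library); the protocol constants are the standard values (ARP EtherType 0x0806, LLDP EtherType 0x88CC, DHCP UDP ports 67/68, ICMPv6 Neighbor Solicitation type 135):

```python
# Hypothetical classifier for the traffic types named in the description.
ETHERTYPE_ARP = 0x0806
ETHERTYPE_LLDP = 0x88CC
DHCP_SERVER_PORT, DHCP_CLIENT_PORT = 67, 68
ICMPV6_NEIGHBOR_SOLICITATION = 135

def classify(frame):
    """Map a captured frame summary to the protocol it belongs to."""
    if frame.get("ethertype") == ETHERTYPE_ARP:
        return "ARP"
    if frame.get("ethertype") == ETHERTYPE_LLDP:
        return "LLDP"
    if frame.get("udp_dst") in (DHCP_SERVER_PORT, DHCP_CLIENT_PORT):
        return "DHCP"
    if frame.get("icmpv6_type") == ICMPV6_NEIGHBOR_SOLICITATION:
        return "IPv6-ND"
    return "other"

# Three captured frame summaries: an ARP request, a DHCP discover,
# and an IPv6 Neighbor Discovery solicitation.
kinds = [classify(f) for f in (
    {"ethertype": ETHERTYPE_ARP},
    {"udp_dst": 67},
    {"icmpv6_type": 135},
)]
```

Tallying such classifications over a capture window is one way the utility could summarize host-network activity for the baseboard management controller.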
In some embodiments, the operations of the baseboard management controller may further comprise receiving network information from the host central processing unit running the network diagnostic utility, wherein the network information is obtained by the host central processing unit as a result of running the network diagnostic utility to monitor traffic on the host network. The scope and content of the network information may vary according to the one or more types of network diagnostic utilities that are run by the host central processing unit using the bootable image. The operations may further comprise causing the network information received from the host central processing unit to be stored. For example, the network information may be stored on a remote data storage device and/or on the data storage device hosted by the baseboard management controller. Still further, the operations may further comprise forming a network map using the network information received from the host central processing unit. In one option, the network map may include a network report, identified subnets, identified virtual local area networks, and/or identified switch ports. Some embodiments provide a computer program product comprising a non-volatile computer readable medium and non-transitory program instructions embodied therein, the program instructions being configured to be executable by a central processing unit of a baseboard management controller to cause the central processing unit to perform various operations. The operations comprise receiving a message from a system management computer, wherein the message instructs the baseboard management controller of a server to run network diagnostics on a host network physically connected to the server. 
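As a rough illustration of forming such a network map, the records gathered by the diagnostic utilities could be aggregated into identified subnets, VLANs, and switch ports. The record fields and map layout below are assumptions chosen for illustration, not a format defined by this disclosure.

```python
# Hypothetical sketch: aggregate records gathered by the network
# diagnostic utilities into a network map of subnets, VLANs, and
# switch ports.
def form_network_map(records):
    network_map = {"subnets": set(), "vlans": set(), "switch_ports": set()}
    for record in records:
        if "subnet" in record:           # e.g., inferred from ARP/DHCP
            network_map["subnets"].add(record["subnet"])
        if "vlan" in record:             # e.g., learned from LLDP
            network_map["vlans"].add(record["vlan"])
        if "switch_port" in record:      # e.g., learned from LLDP
            network_map["switch_ports"].add(record["switch_port"])
    return network_map
```

The resulting map could then be serialized into the network report stored on the BMC-hosted storage or forwarded to a remote destination.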
The operations further comprise accessing a network diagnostic utility and running the network diagnostic utility to monitor and analyze traffic on the host network through a direct physical connection between the baseboard management controller and a host network interface controller on the server. It should be recognized that this embodiment is distinct from some previously described embodiments in that the baseboard management controller runs the network diagnostics on the host network rather than instructing the host central processing unit to run the network diagnostics. However, other than this distinction, embodiments that run the network diagnostics on the baseboard management controller may include any one or more operations, aspects or features of the embodiments that run the network diagnostics on the host central processing unit. Therefore, these operations, aspects or features may not be fully described again in the context of the network diagnostics being run by the baseboard management controller. Some embodiments provide a technological benefit by enabling an administrative user with hardware management credentials to access the baseboard management controller to utilize the network diagnostic utility without requiring credentials to log in to an operating system running on a host central processing unit of the server and/or without the operating system including the network diagnostic utility. It is a further technological benefit that some embodiments do not require the host computer to have a fully functional operating system. Although the host computer may in fact have a fully functional operating system, this is not required. In some embodiments, the host computer may eventually install an operating system, may be in the process of installing an operating system, or could already have an operating system installed, but embodiments can operate independent of whether or not the host is running its operating system. 
For example, a host CPU may be attempting to install the operating system or boot from a network resource (e.g., via iPXE, an open-source implementation of the Preboot eXecution Environment (PXE)) but may not have the utilities that an operating system would need to diagnose a network problem. Furthermore, even if the host CPU is running an operating system, embodiments may still enable a hardware administrator to utilize network analysis tools even without authority or domain knowledge to log in to the operating system. Further, an installed operating system may not have user-accessible network diagnostic utilities provisioned by default, yet those network diagnostic utilities may be provided according to some embodiments. Embodiments include methods to enable a server to perform automated discovery of network information without requiring that the server have a pre-installed operating system. For example, the network information may include subnet analysis of Address Resolution Protocol (ARP) traffic, observation of neighbor solicitations, and monitoring for SLP (Service Location Protocol), DHCP (Dynamic Host Configuration Protocol), or other relevant network activity. A system that is “physically deployed” is connected to electrical power so that electrical power is provided to the baseboard management controller (BMC) and network interface controller (NIC), and preferably also provided to the host central processing unit (CPU) and main memory, and the system is also physically connected to a network. For example, a physical connection to a network may include an Ethernet cable or other wired connection. A “failure to connect to the network” means that the primary network connection between the operating system (OS) run on the host CPU and a network is non-functional despite having a physical connection to the network. 
However, even though the server's host CPU may have a non-functional connection to the network, a baseboard management controller (BMC) on the same server may have a functional management network connection. Some embodiments store a bootable image in data storage that is hosted by the baseboard management controller (BMC) and visible to the host CPU and memory subsystem. The bootable image can be accessed by the host CPU and memory subsystem so that the host CPU may boot from the bootable image and perform network analysis and diagnostics. For example, the BMC that hosts the bootable image may cause the host CPU to access the bootable image and then execute the bootable image. When the host CPU executes the bootable image, the bootable image causes the host CPU to run standard utilities to snoop the traffic on the available network interface(s). The bootable image may be pre-installed on a data storage device hosted by the BMC or may be saved on the data storage device as needed. For example, the bootable image may be pushed from a computer running external management software to the BMC for storage in response to a need to perform the network diagnosis. However, the bootable image may be stored by the BMC to be run by the host CPU and memory subsystem, or to be run by the BMC itself. Some examples of the standard utilities used by or included within the bootable image include ping, arping, traceroute, ifconfig, wireshark, tcpdump, lldpd, and lldpad. In addition, the bootable image may use custom code that talks directly to a raw socket. “ping” is a computer network administration software utility used to test the reachability of a host on an Internet Protocol (IP) network. “arping” is a computer software tool for discovering and probing hosts on a computer network. “traceroute” and “tracert” are computer network diagnostic commands for displaying possible routes and measuring transit delays of packets across an Internet Protocol (IP) network. 
“ifconfig” is a system administration utility for Unix-like operating systems used to configure and display network interface configuration parameters. “wireshark” is a free and open-source packet analyzer. “tcpdump” is a data-network packet analyzer computer program that runs under a command line interface. “lldpd” is a daemon able to receive and send Link Layer Discovery Protocol (LLDP) frames. “lldpad” is a Link Layer Discovery Protocol (LLDP) agent daemon. A map of the local network and its operating parameters may be determined using the information gathered from the traffic. For example, the information gathered from the traffic on the network may include subnet analysis of ARP traffic, observation of neighbor solicitations, and monitoring for SLP, DHCP, or other relevant network activity. Some embodiments are implemented by servers having an internal communication connection from the host CPU to the BMC, which is typical of server implementations with a BMC present. This internal communication connection causes the host CPU to view the BMC as a network device, which facilitates a normalized communication interface for software to utilize. Using the internal network connection, the host CPU discovers the BMC-hosted storage as a device that looks like a USB flash drive that has been inserted into a USB port. The data storage device may be a component of the BMC subsystem, but the host CPU does not know, and does not need to know, the physical topology or implementation of the data storage device. In some embodiments, the BMC has a physical connection to the host network interface controller. Use of the NC-SI (Network Controller Sideband Interface) interface specification and a compliant connection enables the BMC to communicate with the network interface controller (NIC) in a server to provide the BMC with access to the host network. 
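As a concrete example of invoking one of the standard utilities listed above from a diagnostic environment, a passive tcpdump capture of ARP and DHCP traffic might be launched as sketched below. The tcpdump flags shown (`-i`, `-n`, `-c`, `-w`) are standard options, but the wrapper functions and filter choice are illustrative assumptions.

```python
import subprocess

def build_capture_argv(interface, count=100, outfile="/tmp/diag.pcap"):
    # Passive capture: observe ARP and DHCP traffic without emitting
    # any packets of our own.
    #   -i : capture interface     -n : no name resolution
    #   -c : stop after N packets  -w : write raw packets to a file
    return ["tcpdump", "-i", interface, "-n", "-c", str(count),
            "-w", outfile, "arp or port 67 or port 68"]

def run_capture(interface):
    # Actually running this requires root privileges and a real
    # network interface; shown only to complete the sketch.
    return subprocess.run(build_capture_argv(interface), check=True)
```

The resulting capture file could then be fed to an analyzer (e.g., tcpdump itself or wireshark) to extract the subnet and neighbor information discussed above.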
In such configurations, the BMC may be able to directly monitor the traffic on the host network for the purpose of building the network map without the extra step of booting a network diagnostic image on the host CPU. NC-SI defines a standard way for the BMC to share the physical Ethernet connection with the host CPU. Although network traffic flows through the same wire from the network to the network interface controller, an Ethernet controller chip may direct the network traffic to the proper endpoint within the server, such as the BMC or the host CPU. In this configuration, the diagnostic tools can be run on the BMC, since the BMC has a connection to the same Ethernet controller chip and Ethernet cable as the host CPU. Some embodiments may, in conjunction with either of the disclosed configurations (i.e., either of the BMC or the host CPU running the network diagnostic utilities), use “passive” network interface monitoring and/or “active” network interface monitoring. Passive monitoring techniques observe traffic on the network without broadcasting any network traffic of their own. Active monitoring techniques emit network traffic that probes the network, for example by broadcasting a service request and monitoring for a response. For example, the network interface monitoring and analysis may include subnet analysis of ARP (Address Resolution Protocol) traffic, observation of neighbor solicitations, and monitoring for SLP (Service Location Protocol), DHCP (Dynamic Host Configuration Protocol), or other relevant network activity. DHCP requests are an example of an active monitoring technique. Some active monitoring techniques may trigger a response from malware or network attack detection utilities, so active monitoring techniques may not be preferred in some environments. 
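One way to implement the passive monitoring described above is the custom code "that talks directly to a raw socket" mentioned earlier: read Ethernet frames and pick out the ARP sender and target addresses. The parser below follows the standard Ethernet II and ARP packet layouts; the function names and surrounding wrapper are illustrative.

```python
import socket
import struct

ETHERTYPE_ARP = 0x0806

def parse_arp_frame(frame):
    """Extract (sender_ip, target_ip) from a raw Ethernet ARP frame,
    or return None for non-ARP traffic. Offsets follow the standard
    Ethernet II header (14 bytes) and ARP payload layout."""
    if len(frame) < 42:
        return None
    (ethertype,) = struct.unpack("!H", frame[12:14])
    if ethertype != ETHERTYPE_ARP:
        return None
    sender_ip = socket.inet_ntoa(frame[28:32])  # ARP sender protocol address
    target_ip = socket.inet_ntoa(frame[38:42])  # ARP target protocol address
    return sender_ip, target_ip

# On Linux, the frames would come from a raw socket opened with root
# privileges, e.g.:
#   s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(0x0003))
#   result = parse_arp_frame(s.recv(65535))
```

Because the parser only reads frames, it is strictly passive and emits no traffic of its own, which matters in environments where active probing is discouraged.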
In some embodiments, the BMC may communicate over a “management network” that is distinct/separate from a “host network.” These two networks may be supported by the same physical network wires/switches or by different physical network wires/switches if the BMC has its own dedicated network interface controller. Further, even in NC-SI supported configurations where the BMC has the capability to communicate with a host network via the system NIC, it is also possible for the BMC to utilize a network connection that is dedicated to the BMC. In other words, the presence of NC-SI does not require that the BMC make use of the NC-SI connection. In one option, the BMC may monitor and diagnose the host network using the NC-SI link to directly monitor network traffic, while communicating with system management through a dedicated management network interface. Once a network map has been derived, the network map can be stored on the BMC-hosted storage or at a pre-configured remote destination and may be subsequently used to correct issues with a failing network connection process. Examples of a remote destination may include a central management server, one or more peer BMCs on systems sharing a common management sub-network, or a network debugging system. The network map information may include a network report and identified subnets, Virtual Local Area Networks (VLANs), and switch ports. The network map may be derived or determined using information from ARP, SLP, DHCP and other network activity. Embodiments enable the BMC to perform network analysis or provide a bootable image that enables the CPU to perform this network analysis and are not directed to the details of the analysis itself. In one option, network information may be obtained using Address Resolution Protocol (ARP) and Internet Protocol version 6 (IPv6) Neighbor Discovery. 
A host on the network may passively detect traffic from other hosts present on the network, then discern the subnets that are in use by well-configured systems without needing to “guess” the subnet or hosts that may respond. For example, if some ARP activity between 172.30.2.2 and 172.30.2.8 is detected, then this activity may suggest detecting at least 172.30.2.0/28. However, if the ARP activity later observes an address ending in .67 in the same range, then this may suggest extending the detection to at least a /25. Some heuristic may be used to decide whether this later address is in the same subnet or potentially in multiple subnets. For example, the heuristic may assume continuity of the subnet only when one party to an ARP transaction is in a confirmed detected range, or it may try to identify a free address near the target address and use it for an ARP query toward a detected network participant at increasingly large gaps in the subnet until the address is clearly no longer in the subnet. In another option, network information may be obtained using Service Location Protocol (SLP) and Simple Service Discovery Protocol (SSDP). Peer systems may be more confidently located via IPv6 link-local addresses, and the located peers may then be queried for various additional parameters. For example, an XClarity Controller (XCC), which is an expanded-capability replacement for a BMC that is offered by LENOVO, can be found on a subnet regardless of IPv4 configuration. In yet another option, network information may be obtained using Link Layer Discovery Protocol (LLDP). When enabled on a network, LLDP provides a specific network location (per its design point) in terms of switch name, switch port, and sometimes information such as a Virtual Local Area Network (VLAN) identifier. The network information may include a set of detected subnets. The set of detected subnets may be presented in a tabular form indicating networks that are in use. 
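The subnet-widening heuristic in this example can be approximated as computing the smallest aligned subnet that covers all addresses observed so far; this is a simplification of the heuristics described above, and the function name is illustrative.

```python
import ipaddress

def infer_subnet(observed):
    """Smallest aligned IPv4 subnet covering all observed addresses.

    A simplified version of the widening heuristic: the common prefix
    of the lowest and highest observed addresses bounds the detected
    subnet, and the network address is the low address masked to that
    prefix length.
    """
    ints = [int(ipaddress.IPv4Address(a)) for a in observed]
    lo, hi = min(ints), max(ints)
    prefix = 32 - (lo ^ hi).bit_length()   # drop the differing low bits
    return ipaddress.ip_network((lo, prefix), strict=False)
```

For the example above, observing 172.30.2.2 and 172.30.2.8 yields 172.30.2.0/28, and a later .67 address widens the detection to 172.30.2.0/25.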
Furthermore, the detected subnets may be presented in a way that is adaptive to the presence or absence of any one or more protocols on each subnet. For example, ARP is universally available, but is perhaps the least effective. On the other hand, a Dynamic Host Configuration Protocol (DHCP) offer is perhaps the simplest way to obtain the network information, but some networks may not allow a DHCP offer to be made to an unknown system or may have no DHCP server at all. Embodiments may be implemented as part of a services package, an as-a-Service type offering, or through a system management portal. For example, embodiments may be implemented in a management-as-a-service system, a network diagnostics-as-a-service system, or a system deployment-as-a-service system. In some embodiments, an administrative computer may issue an instruction to the BMC to cause the BMC to initiate diagnosis of network issues. An administrative computer may detect that a particular host has not established a host network connection and transmit the instruction to the BMC. The detection and the transmission of the instruction may be performed automatically by the administrative computer or with input from an administrative user. In some embodiments, the BMC may make its own determination that the host has failed to establish a host network connection and then initiate diagnosis of network issues. Whether the diagnostics are initiated by an instruction from an administrative computer to the BMC or initiated by the BMC as a result of its own determination, the BMC may either cause the host CPU to load and run the network diagnostic utilities or the BMC may load and run the network diagnostic utilities itself. In some embodiments, the BMC may provide the network information generated by the network diagnosis to an administrative computer so that an administrative user may further analyze the cause of the failed network connection and take steps to establish the network connection. 
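A tabular presentation of detected subnets that adapts to which protocols were observed on each subnet might look like the sketch below; the column set and layout are illustrative assumptions, not a report format defined by this disclosure.

```python
def render_subnet_table(detected):
    """Render detected subnets as rows, marking which protocols were
    observed on each subnet (illustrative layout, not a defined format)."""
    protocols = ("ARP", "DHCP", "LLDP")
    rows = ["subnet".ljust(20) + "  ".join(protocols)]
    for subnet in sorted(detected):
        cells = [("yes" if p in detected[subnet] else "-").ljust(len(p))
                 for p in protocols]
        rows.append(subnet.ljust(20) + "  ".join(cells))
    return "\n".join(rows)
```

A subnet observed only via ARP would show a single "yes" in its row, while a subnet also confirmed by a DHCP offer or LLDP advertisement would show additional marks.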
Alternatively, the BMC may provide the network information, or some subset of the network information, to a workload running on the host CPU. However, providing network information to the host workload may be effective only if the host workload has the capability of interpreting the network information and adjusting settings to fix a problem identified using the network information. The foregoing computer program products may further include program instructions for implementing or initiating any one or more aspects of the methods described herein. Accordingly, a separate description of the methods will not be duplicated in the context of a computer program product. Conversely, embodiments may include methods that include any one or more of the operations of the computer program products described herein and/or systems that perform any one or more of the operations of the computer program products described herein. FIG. 1 is a diagram of a system 10 in which some embodiments may be implemented. The system 10 includes a datacenter 20 including a plurality of servers 30, a computer 40 running a system management application 42, and an edge computer 12. The plurality of servers 30 within the datacenter 20 may communicate over a local network 22. A gateway 24 may connect the local network 22 to an external network 14, such as the Internet. Accordingly, the system management computer 40 may establish communication with the edge computer 12 and/or any of the plurality of servers 30. FIG. 2 is a diagram of a server 30, which may also be representative of the architecture and operation of the edge computer 12, according to some embodiments. The server 30 includes both a host central processing unit (CPU) 34 and a baseboard management controller (BMC) 50. The CPU 34 and the BMC 50 are connected by an internal network, such as a system bus. The BMC 50 hosts a data storage device 52. 
As illustrated, the data storage device 52 may store, among other things, a bootable image 54 that is used to perform network diagnostics, as well as network information and reports 56. The host CPU 34 is also connected to a network interface controller (NIC) 32 that enables communication with devices over a host network 18. In some embodiments, the BMC may also use the NIC 32 to communicate with devices, such as the system management computer 40, over the management network 16. Alternatively, the BMC 50 may have its own dedicated NIC 33 for communicating with devices over the management network 16. In reference to previously described embodiments, the BMC 50 of the server 30 may receive a message from the system management computer 40, wherein the message instructs the baseboard management controller 50 of the server 30 to cause a host central processing unit 34 on the server to run network diagnostics on the host network 18 physically connected to the server. The baseboard management controller 50 may then instruct, in response to receiving the message, the host central processing unit 34 to boot from the bootable image 54 stored on the data storage device 52 hosted by the baseboard management controller and run a network diagnostic utility included with the bootable image 54 to monitor network traffic on the host network 18. FIG. 3 is a diagram of a baseboard management controller (BMC) 50 according to some embodiments. The BMC 50 is similar to a small computer or system on a chip (SoC), including a central processing unit (CPU) 60 (which is a separate entity from the host central processing unit 34 of FIG. 2 and the processor unit 104 of FIG. 4), memory 61 (such as random-access memory (RAM) on a double data rate (DDR) bus), firmware 62 on a flash memory (such as an embedded multi-media card (eMMC) flash memory or a serial peripheral interface (SPI) flash memory), and a root of trust (RoT) chip 64. The BMC 50 further includes a wide variety of input/output ports. 
For example, the input/output (I/O) ports may include I/O ports 65 to the hardware components of the server, such as a Platform Environment Control Interface (PECI) port and/or an Advanced Platform Management Link (APML) port; I/O ports 66 to the hardware components of the server and/or a network interface controller (NIC), such as a Peripheral Component Interconnect Express (PCIe) port; I/O ports 67 to the NIC, such as a network controller sideband interface (NC-SI) port; and I/O ports 68 to a network that is accessible to an external user, such as an Ethernet port. The BMC 50 may use any one or more of these I/O ports to interact with hardware devices installed on the server for purposes of monitoring and control. FIG. 4 is a diagram of a computer server 100 that may be representative of any of the servers 30, the system management computer 40, and/or the edge computer 12 shown in FIG. 1. The server 100 includes a processor unit 104 that is coupled to a system bus 106. The processor unit 104 may utilize one or more processors, each of which has one or more processor cores. An optional graphics adapter 108, which may drive/support an optional display 120, is also coupled to the system bus 106. The graphics adapter 108 may, for example, include a graphics processing unit (GPU). The system bus 106 may be coupled via a bus bridge 112 to an input/output (I/O) bus 114. An I/O interface 116 is coupled to the I/O bus 114, where the I/O interface 116 affords a connection with various optional I/O devices, such as a camera 110, a keyboard 118 (such as a touch screen virtual keyboard), and a USB mouse 124 via USB port(s) 126 (or other type of pointing device, such as a trackpad). As depicted, the computer 100 is able to communicate with other network devices over networks 14, 22 using a network adapter or network interface controller 32. A hard drive interface 132 is also coupled to the system bus 106. The hard drive interface 132 interfaces with a hard drive 134. 
In a preferred embodiment, the hard drive 134 may communicate with system memory 136, which is also coupled to the system bus 106. The system memory may be volatile or non-volatile and may include additional higher levels of volatile memory (not shown), including, but not limited to, cache memory, registers and buffers. Data that populates the system memory 136 may include the operating system (OS) 140 and application programs 144. The hardware elements depicted in the server 100 are not intended to be exhaustive, but rather are representative. The operating system 140 includes a shell 141 for providing transparent user access to resources such as application programs 144. Generally, the shell 141 is a program that provides an interpreter and an interface between the user and the operating system. More specifically, the shell 141 may execute commands that are entered into a command line user interface or from a file. Thus, the shell 141, also called a command processor, is generally the highest level of the operating system software hierarchy and serves as a command interpreter. The shell may provide a system prompt, interpret commands entered by keyboard, mouse, or other user input media, and send the interpreted command(s) to the appropriate lower levels of the operating system (e.g., a kernel 142) for processing. Note that while the shell 141 may be a text-based, line-oriented user interface, the present invention may support other user interface modes, such as graphical, voice, gestural, etc. As depicted, the operating system 140 also includes the kernel 142, which includes lower levels of functionality for the operating system 140, including providing essential services required by other parts of the operating system 140 and application programs 144. Such essential services may include memory management, process and task management, disk management, and mouse and keyboard management. In addition, the computer server 100 may include application programs 144 stored in the system memory 136. 
The server 100 may further include a baseboard management controller (BMC) 50. The BMC is considered to be an out-of-band controller and may monitor and control various components of the server 100. However, the BMC may also communicate with various devices via the network interface 32 and network(s) 14, 22. The BMC 50 is also shown hosting dynamic random-access memory (DRAM) 61 and flash memory 63. FIG. 5 is a diagram of a server or edge computer 70 according to some embodiments. The server 70 includes many of the same components as described in reference to FIG. 2, which components are labeled with the same reference numbers used in reference to FIG. 2. In contrast to FIG. 2, the server 70 includes a network controller sideband interface (NC-SI) connection between the BMC 50 and the host NIC 32. The NC-SI connection enables the BMC 50 to communicate with the host NIC 32 in the server 70 to provide the BMC 50 with access to the host network 18. In such configurations, the BMC 50 may be able to directly monitor the traffic on the host network 18 for the purpose of building the network map without the extra step of booting a network diagnostic image on the host CPU 34. In some embodiments, the BMC may use either the host NIC 32 or an optional dedicated NIC 33 to communicate over a “management network” that is distinct/separate from a “host network.” These two networks may be supported by the same physical network wires/switches or by different physical network wires/switches if the BMC has its own dedicated network interface controller. Further, even in the NC-SI supported configuration of FIG. 5 where the BMC 50 has the capability to communicate with the host network 18 via the host NIC 32, it is also possible for the BMC 50 to utilize a network connection that is dedicated to the BMC. In other words, the presence of NC-SI does not require that the BMC make use of the NC-SI connection. 
In one option, the BMC 50 may monitor and diagnose the host network 18 using the NC-SI link to directly monitor network traffic, while communicating with a system management computer 40 through a dedicated management network interface controller 33. The BMC 50 of the server 70 may receive a message from the system management computer 40, wherein the message instructs the BMC 50 of the server 70 to run network diagnostics 54 on the host network 18 physically connected to the server 70. Accordingly, the BMC 50 may access at least one network diagnostic utility 54 and run the at least one network diagnostic utility to monitor and analyze traffic on the host network 18 communicating through a direct physical connection (i.e., the NC-SI connection) between the baseboard management controller 50 and the host network interface controller 32 on the server 70. The same network information 56 may be gathered by the BMC 50 running the network diagnostic utilities 54 for the server 70 as described in reference to FIG. 5 as may be gathered by the host CPU 34 running the network diagnostic utilities 54 for the server 30 as described in reference to FIG. 2. FIG. 6 is a flowchart of operations 150 according to some embodiments. Operation 152 includes receiving a message from a system management computer, wherein the message instructs the baseboard management controller of a server to cause a host central processing unit on the server to run network diagnostics on a host network physically connected to the server. Operation 154 includes instructing, in response to receiving the message, the host central processing unit to boot from a bootable image stored on a data storage device hosted by the baseboard management controller and run a network diagnostic utility included with the bootable image to monitor network traffic on the host network. FIG. 7 is a flowchart of operations 160 according to some embodiments. 
Operation 162 includes receiving a message from a system management computer, wherein the message instructs the baseboard management controller of a server to run network diagnostics on a host network physically connected to the server. Operation 164 includes accessing a network diagnostic utility and operation 166 includes running the network diagnostic utility to monitor and analyze traffic on the host network through a direct physical connection between the baseboard management controller and a host network interface controller on the server. As will be appreciated by one skilled in the art, embodiments may take the form of a system, method or computer program product. Accordingly, embodiments may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” Furthermore, embodiments may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon. Any combination of one or more computer readable storage medium(s) may be utilized. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a random-access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. 
In the context of this document, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. Furthermore, any program instruction or code that is embodied on such computer readable storage media (including forms referred to as volatile memory) that is not a transitory signal is, for the avoidance of doubt, considered “non-transitory”. Program code embodied on a computer readable storage medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing. Computer program code for carrying out various operations may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). Embodiments may be described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. 
These computer program instructions may be provided to a processor of a general-purpose computer, special purpose computer, and/or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer program instructions may also be stored on computer readable storage media that are not a transitory signal, such that the program instructions can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, and such that the program instructions stored on the computer readable storage media produce an article of manufacture. The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures.
For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit the scope of the claims. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, components and/or groups, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. The terms "preferably," "preferred," "prefer," "optionally," "may," and similar terms are used to indicate that an item, condition or step being referred to is an optional (not required) feature of the embodiment. The corresponding structures, materials, acts, and equivalents of all means or steps plus function elements in the claims below are intended to include any structure, material, or act for performing the function in combination with other claimed elements as specifically claimed. The embodiments have been presented for purposes of illustration and description, but this description is not intended to be exhaustive or limited to the embodiments in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art after reading this disclosure.
The disclosed embodiments were chosen and described as non-limiting examples to enable others of ordinary skill in the art to understand these embodiments and other embodiments involving modifications suited to a particular implementation.
11863415
DETAILED DESCRIPTION OF THE DISCLOSURE The present disclosure relates to systems and methods for determining application endpoint and application behavior for monitoring user experience. Also, the present disclosure relates to various techniques for using tracing with tunnels and cloud-based systems for determining measures of network performance. This disclosure provides an approach to reduce the number of probes to avoid firewall issues. Also, this disclosure describes an approach for adaptively finding the protocol that works best for the internal network and the destination. The various techniques are used to detect network hops, packet loss, and latency from a client to a destination as well as discover how the client connects to the Internet and if any proxies or firewalls are present in the path. For determining a connection to the Internet, the present disclosure includes a technique to detect tunnels. For determining proxies or firewalls, the present disclosure utilizes an Application Programming Interface (API) to detect an egress router's IP port on a client's network. Once the client has visibility of the path (i.e., tunnels, proxies, firewalls, etc.), the client can communicate, such as out of band, to request other devices to trace different legs. Note, in various descriptions, the term traceroute or trace can also include PING, such as the My Traceroute (MTR). Also, a traceroute is protocol dependent, and the present disclosure can instead refer to a "cloudpath" that is protocol independent. § 1.0 Example Cloud-Based System Architecture FIG. 1A is a network diagram of a cloud-based system 100 offering security as a service. Specifically, the cloud-based system 100 can offer a Secure Internet and Web Gateway as a service to various users 102, as well as other cloud services.
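The adaptive protocol selection described above can be sketched as follows. This is a minimal illustration, not the disclosed implementation: it assumes a small number of probes has already been sent per candidate protocol, and picks the protocol with the best response rate (breaking ties on latency), which also keeps the probe count low to avoid firewall issues. The `ProbeResult` fields and function name are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class ProbeResult:
    protocol: str         # e.g., "icmp", "tcp", "udp"
    responses: int        # probe responses received
    probes_sent: int      # probes transmitted (kept small to avoid firewall flags)
    avg_latency_ms: float

def pick_trace_protocol(results: list[ProbeResult]) -> str:
    """Prefer the protocol with the highest response rate; break ties
    with the lowest average latency."""
    def score(r: ProbeResult):
        rate = r.responses / r.probes_sent if r.probes_sent else 0.0
        return (rate, -r.avg_latency_ms)
    return max(results, key=score).protocol

# Hypothetical probe outcomes for one client/destination pair.
results = [
    ProbeResult("icmp", 1, 3, 12.0),   # mostly filtered by a firewall
    ProbeResult("tcp", 3, 3, 18.5),    # all probes answered
    ProbeResult("udp", 3, 3, 25.1),
]
print(pick_trace_protocol(results))  # tcp
```

In a real trace, the chosen protocol would then be used for the full hop-by-hop cloudpath measurement toward the destination.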
In this manner, the cloud-based system 100 is located between the users 102 and the Internet as well as any cloud services 106 (or applications) accessed by the users 102. As such, the cloud-based system 100 provides inline monitoring inspecting traffic between the users 102, the Internet 104, and the cloud services 106, including Secure Sockets Layer (SSL) traffic. The cloud-based system 100 can offer access control, threat prevention, data protection, etc. The access control can include a cloud-based firewall, cloud-based intrusion detection, Uniform Resource Locator (URL) filtering, bandwidth control, Domain Name System (DNS) filtering, etc. The threat prevention can include cloud-based intrusion prevention, protection against advanced threats (malware, spam, Cross-Site Scripting (XSS), phishing, etc.), cloud-based sandbox, antivirus, DNS security, etc. The data protection can include Data Loss Prevention (DLP), cloud application security such as via a Cloud Access Security Broker (CASB), file type control, etc. The cloud-based firewall can provide Deep Packet Inspection (DPI) and access controls across various ports and protocols as well as being application and user aware. The URL filtering can block, allow, or limit website access based on policy for a user, group of users, or entire organization, including specific destinations or categories of URLs (e.g., gambling, social media, etc.). The bandwidth control can enforce bandwidth policies and prioritize critical applications such as relative to recreational traffic. DNS filtering can control and block DNS requests against known and malicious destinations. The cloud-based intrusion prevention and advanced threat protection can deliver full threat protection against malicious content such as browser exploits, scripts, identified botnets and malware callbacks, etc. The cloud-based sandbox can block zero-day exploits (just identified) by analyzing unknown files for malicious behavior.
Advantageously, the cloud-based system 100 is multi-tenant and can service a large volume of the users 102. As such, newly discovered threats can be promulgated throughout the cloud-based system 100 for all tenants practically instantaneously. The antivirus protection can include antivirus, antispyware, antimalware, etc. protection for the users 102, using signatures sourced and constantly updated. The DNS security can identify and route command-and-control connections to threat detection engines for full content inspection. The DLP can use standard and/or custom dictionaries to continuously monitor the users 102, including compressed and/or SSL-encrypted traffic. Again, being in a cloud implementation, the cloud-based system 100 can scale this monitoring with near-zero latency on the users 102. The cloud application security can include CASB functionality to discover and control user access to known and unknown cloud services 106. The file type controls enable true file type control by the user, location, destination, etc. to determine which files are allowed or not. For illustration purposes, the users 102 of the cloud-based system 100 can include a mobile device 110, a headquarters (HQ) 112 which can include or connect to a data center (DC) 114, Internet of Things (IoT) devices 116, a branch office/remote location 118, etc., and each includes one or more user devices (an example user device 300 is illustrated in FIG. 5). The devices 110, 116, and the locations 112, 114, 118 are shown for illustrative purposes, and those skilled in the art will recognize there are various access scenarios and other users 102 for the cloud-based system 100, all of which are contemplated herein. The users 102 can be associated with a tenant, which may include an enterprise, a corporation, an organization, etc. That is, a tenant is a group of users who share a common access with specific privileges to the cloud-based system 100, a cloud service, etc.
In an embodiment, the headquarters 112 can include an enterprise's network with resources in the data center 114. The mobile device 110 can be a so-called road warrior, i.e., users that are off-site, on-the-road, etc. Those skilled in the art will recognize a user 102 has to use a corresponding user device 300 for accessing the cloud-based system 100 and the like, and the description herein may use the user 102 and/or the user device 300 interchangeably. Further, the cloud-based system 100 can be multi-tenant, with each tenant having its own users 102 and configuration, policy, rules, etc. One advantage of the multi-tenancy and a large volume of users is the zero-day/zero-hour protection in that a new vulnerability can be detected and then instantly remediated across the entire cloud-based system 100. The same applies to policy, rule, configuration, etc. changes—they are instantly remediated across the entire cloud-based system 100. As well, new features in the cloud-based system 100 can also be rolled out simultaneously across the user base, as opposed to selective and time-consuming upgrades on every device at the locations 112, 114, 118, and the devices 110, 116. Logically, the cloud-based system 100 can be viewed as an overlay network between users (at the locations 112, 114, 118, and the devices 110, 116) and the Internet 104 and the cloud services 106. Previously, the IT deployment model included enterprise resources and applications stored within the data center 114 (i.e., physical devices) behind a firewall (perimeter), accessible by employees, partners, contractors, etc. on-site or remote via Virtual Private Networks (VPNs), etc. The cloud-based system 100 is replacing the conventional deployment model. The cloud-based system 100 can be used to implement these services in the cloud without requiring the physical devices and management thereof by enterprise IT administrators.
As an ever-present overlay network, the cloud-based system 100 can provide the same functions as the physical devices and/or appliances regardless of geography or location of the users 102, as well as independent of platform, operating system, network access technique, network access provider, etc. There are various techniques to forward traffic between the users 102 at the locations 112, 114, 118, and via the devices 110, 116, and the cloud-based system 100. Typically, the locations 112, 114, 118 can use tunneling where all traffic is forwarded through the cloud-based system 100. For example, various tunneling protocols are contemplated, such as Generic Routing Encapsulation (GRE), Layer Two Tunneling Protocol (L2TP), Internet Protocol (IP) Security (IPsec), customized tunneling protocols, etc. The devices 110, 116, when not at one of the locations 112, 114, 118, can use a local application that forwards traffic, a proxy such as via a Proxy Auto-Config (PAC) file, and the like. One example of such a local application is the application 350, described in detail herein as a connector application. A key aspect of the cloud-based system 100 is that all traffic between the users 102 and the Internet 104 or the cloud services 106 is via the cloud-based system 100. As such, the cloud-based system 100 has visibility to enable various functions, all of which are performed off the user device in the cloud. The cloud-based system 100 can also include a management system 120 for tenant access to provide global policy and configuration as well as real-time analytics. This enables IT administrators to have a unified view of user activity, threat intelligence, application usage, etc. For example, IT administrators can drill-down to a per-user level to understand events and correlate threats, to identify compromised devices, to have application visibility, and the like.
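The forwarding options above (tunnels at fixed locations, a connector application or PAC-configured proxy elsewhere) can be summarized in a small dispatch sketch. The function name and the decision inputs are illustrative assumptions, not part of the disclosure:

```python
def forwarding_method(at_known_location: bool, has_connector_app: bool) -> str:
    """Pick how a client's traffic reaches the cloud-based system."""
    if at_known_location:
        # Fixed sites (HQ, branch, data center) tunnel all traffic,
        # e.g., over GRE, L2TP, or IPsec.
        return "tunnel"
    if has_connector_app:
        # Off-site devices can run a local connector application
        # that forwards traffic.
        return "connector-app"
    # Otherwise, fall back to a proxy configured via a PAC file.
    return "pac-proxy"

print(forwarding_method(True, False))   # tunnel
print(forwarding_method(False, True))   # connector-app
print(forwarding_method(False, False))  # pac-proxy
```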
The cloud-based system 100 can further include connectivity to an Identity Provider (IDP) 122 for authentication of the users 102 and to a Security Information and Event Management (SIEM) system 124 for event logging. The system 124 can provide alert and activity logs on a per-user 102 basis. § 1.1 Zero Trust FIG. 1B is a logical diagram of the cloud-based system 100 operating as a zero-trust platform. Zero trust is a framework for securing organizations in the cloud and mobile world that asserts that no user or application should be trusted by default. Following a key zero trust principle, least-privileged access, trust is established based on context (e.g., user identity and location, the security posture of the endpoint, the app or service being requested) with policy checks at each step, via the cloud-based system 100. Zero trust is a cybersecurity strategy wherein security policy is applied based on context established through least-privileged access controls and strict user authentication—not assumed trust. A well-tuned zero trust architecture leads to simpler network infrastructure, a better user experience, and improved cyberthreat defense. Establishing a zero trust architecture requires visibility and control over the environment's users and traffic, including that which is encrypted; monitoring and verification of traffic between parts of the environment; and strong multifactor authentication (MFA) methods beyond passwords, such as biometrics or one-time codes. This is performed via the cloud-based system 100. Critically, in a zero trust architecture, a resource's network location is not the biggest factor in its security posture anymore. Instead of rigid network segmentation, your data, workflows, services, and such are protected by software-defined microsegmentation, enabling you to keep them secure anywhere, whether in your data center or in distributed hybrid and multicloud environments.
The core concept of zero trust is simple: assume everything is hostile by default. It is a major departure from the network security model built on the centralized data center and secure network perimeter. These network architectures rely on approved IP addresses, ports, and protocols to establish access controls and validate what's trusted inside the network, generally including anybody connecting via remote access VPN. In contrast, a zero trust approach treats all traffic, even if it is already inside the perimeter, as hostile. For example, workloads are blocked from communicating until they are validated by a set of attributes, such as a fingerprint or identity. Identity-based validation policies result in stronger security that travels with the workload wherever it communicates—in a public cloud, a hybrid environment, a container, or an on-premises network architecture. Because protection is environment-agnostic, zero trust secures applications and services even if they communicate across network environments, requiring no architectural changes or policy updates. Zero trust securely connects users, devices, and applications using business policies over any network, enabling safe digital transformation. Zero trust is about more than user identity, segmentation, and secure access. It is a strategy upon which to build a cybersecurity ecosystem. At its core are three tenets: Terminate every connection: Technologies like firewalls use a “passthrough” approach, inspecting files as they are delivered. If a malicious file is detected, alerts are often too late. An effective zero trust solution terminates every connection to allow an inline proxy architecture to inspect all traffic, including encrypted traffic, in real time—before it reaches its destination—to prevent ransomware, malware, and more. 
Protect data using granular context-based policies: Zero trust policies verify access requests and rights based on context, including user identity, device, location, type of content, and the application being requested. Policies are adaptive, so user access privileges are continually reassessed as context changes. Reduce risk by eliminating the attack surface: With a zero trust approach, users connect directly to the apps and resources they need, never to networks (see ZTNA). Direct user-to-app and app-to-app connections eliminate the risk of lateral movement and prevent compromised devices from infecting other resources. Plus, users and apps are invisible to the internet, so they cannot be discovered or attacked. FIG. 1C is a logical diagram illustrating zero trust policies with the cloud-based system 100 and a comparison with the conventional firewall-based approach. Zero trust with the cloud-based system 100 allows per-session policy decisions and enforcement regardless of the user 102 location. Unlike the conventional firewall-based approach, this eliminates attack surfaces (there are no inbound connections); prevents lateral movement (the user is not on the network); prevents compromise (allowing encrypted inspection); and prevents data loss (with inline inspection). § 1.2 Example Implementation of the Cloud-Based System FIG. 2 is a network diagram of an example implementation of the cloud-based system 100. In an embodiment, the cloud-based system 100 includes a plurality of enforcement nodes (EN) 150, labeled as enforcement nodes 150-1, 150-2, 150-N, interconnected to one another and interconnected to a central authority (CA) 152. The nodes 150 and the central authority 152, while described as nodes, can include one or more servers, including physical servers, virtual machines (VM) executed on physical hardware, etc. An example of a server is illustrated in FIG. 4.
The cloud-based system 100 further includes a log router 154 that connects to a storage cluster 156 for supporting log maintenance from the enforcement nodes 150. The central authority 152 provides centralized policy, real-time threat updates, etc. and coordinates the distribution of this data between the enforcement nodes 150. The enforcement nodes 150 provide an onramp to the users 102 and are configured to execute policy, based on the central authority 152, for each user 102. The enforcement nodes 150 can be geographically distributed, and the policy for each user 102 follows that user 102 as he or she connects to the nearest (or other criteria) enforcement node 150. Of note, the cloud-based system is an external system, meaning it is separate from tenants' private networks (enterprise networks) as well as from networks associated with the devices 110, 116, and locations 112, 118. The enforcement nodes 150 are full-featured secure internet gateways that provide integrated internet security. They inspect all web traffic bi-directionally for malware and enforce security, compliance, and firewall policies, as described herein, as well as various additional functionality. In an embodiment, each enforcement node 150 has two main modules for inspecting traffic and applying policies: a web module and a firewall module. The enforcement nodes 150 are deployed around the world and can handle hundreds of thousands of concurrent users with millions of concurrent sessions. Because of this, regardless of where the users 102 are, they can access the Internet 104 from any device, and the enforcement nodes 150 protect the traffic and apply corporate policies. The enforcement nodes 150 can implement various inspection engines therein, and optionally, send sandboxing to another system. The enforcement nodes 150 include significant fault tolerance capabilities, such as deployment in active-active mode to ensure availability and redundancy as well as continuous monitoring.
In an embodiment, customer traffic is not passed to any other component within the cloud-based system 100, and the enforcement nodes 150 can be configured never to store any data to disk. Packet data is held in memory for inspection and then, based on policy, is either forwarded or dropped. Log data generated for every transaction is compressed, tokenized, and exported over secure Transport Layer Security (TLS) connections to the log routers 154 that direct the logs to the storage cluster 156, hosted in the appropriate geographical region, for each organization. In an embodiment, all data destined for or received from the Internet is processed through one of the enforcement nodes 150. In another embodiment, specific data specified by each tenant, e.g., only email, only executable files, etc., is processed through one of the enforcement nodes 150. Each of the enforcement nodes 150 may generate a decision vector D=[d1, d2, . . . , dn] for a content item of one or more parts C=[c1, c2, . . . , cm]. Each decision vector may identify a threat classification, e.g., clean, spyware, malware, undesirable content, innocuous, spam email, unknown, etc. For example, the output of each element of the decision vector D may be based on the output of one or more data inspection engines. In an embodiment, the threat classification may be reduced to a subset of categories, e.g., violating, non-violating, neutral, unknown. Based on the subset classification, the enforcement node 150 may allow the distribution of the content item, preclude distribution of the content item, allow distribution of the content item after a cleaning process, or perform threat detection on the content item. In an embodiment, the actions taken by one of the enforcement nodes 150 may be determined by the threat classification of the content item and by a security policy of the tenant to which the content item is being sent or from which the content item is being requested.
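The decision-vector reduction described above can be sketched as follows. The reduction table, the "any violating engine output makes the item violating" aggregation, and the action mapping are illustrative assumptions chosen to match the categories named in the text, not the disclosed policy:

```python
# Maps each engine threat classification onto the subset categories
# named in the text: violating, non-violating, neutral, unknown.
REDUCTION = {
    "malware": "violating", "spyware": "violating",
    "spam email": "violating", "undesirable content": "violating",
    "clean": "non-violating", "innocuous": "neutral",
    "unknown": "unknown",
}

def classify(decision_vector: list[str]) -> str:
    """Reduce a decision vector D to a single subset category."""
    subsets = [REDUCTION.get(d, "unknown") for d in decision_vector]
    if "violating" in subsets:       # one violating engine output suffices
        return "violating"
    if "unknown" in subsets:
        return "unknown"
    return "non-violating" if "non-violating" in subsets else "neutral"

def action(subset: str) -> str:
    """Pick an action from the subset classification (illustrative policy)."""
    return {
        "violating": "preclude",         # block distribution
        "unknown": "threat-detection",   # analyze further, e.g., sandbox
    }.get(subset, "allow")

d = ["clean", "malware", "unknown"]
print(classify(d), action(classify(d)))  # violating preclude
```

In practice the action would also consult the tenant's security policy, which this sketch omits.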
A content item is violating if, for any part C=[c1, c2, . . . , cm] of the content item, at any of the enforcement nodes 150, any one of the data inspection engines generates an output that results in a classification of "violating." The central authority 152 hosts all customer (tenant) policy and configuration settings. It monitors the cloud and provides a central location for software and database updates and threat intelligence. Given the multi-tenant architecture, the central authority 152 is redundant and backed up in multiple different data centers. The enforcement nodes 150 establish persistent connections to the central authority 152 to download all policy configurations. When a new user connects to an enforcement node 150, a policy request is sent to the central authority 152 through this connection. The central authority 152 then calculates the policies that apply to that user 102 and sends the policy to the enforcement node 150 as a highly compressed bitmap. The policy can be tenant-specific and can include access privileges for users, websites and/or content that is disallowed, restricted domains, DLP dictionaries, etc. Once downloaded, a tenant's policy is cached until a policy change is made in the management system 120. When this happens, all of the cached policies are purged, and the enforcement nodes 150 request the new policy when the user 102 next makes a request. In an embodiment, the enforcement nodes 150 exchange "heartbeats" periodically, so all enforcement nodes 150 are informed when there is a policy change. Any enforcement node 150 can then pull the change in policy when it sees a new request. The cloud-based system 100 can be a private cloud, a public cloud, a combination of a private cloud and a public cloud (hybrid cloud), or the like.
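The cache-then-purge policy distribution just described can be sketched as a small class: an enforcement node caches each tenant's downloaded policy and purges the whole cache when a heartbeat signals a policy change, so the next request triggers a fresh download. Class, method, and field names are illustrative assumptions:

```python
class EnforcementNodeCache:
    """Minimal sketch of per-tenant policy caching at an enforcement node."""

    def __init__(self, central_authority):
        self._fetch = central_authority  # callable: tenant -> policy
        self._cache = {}
        self.fetches = 0                 # counts downloads, for illustration

    def policy_for(self, tenant: str):
        # Download once, then serve from cache until purged.
        if tenant not in self._cache:
            self._cache[tenant] = self._fetch(tenant)
            self.fetches += 1
        return self._cache[tenant]

    def on_heartbeat_policy_change(self):
        # A heartbeat reports a policy change: purge all cached policies
        # so the next request pulls the new policy.
        self._cache.clear()

node = EnforcementNodeCache(lambda t: {"tenant": t, "version": 1})
node.policy_for("acme"); node.policy_for("acme")
print(node.fetches)                # 1 (second lookup served from cache)
node.on_heartbeat_policy_change()
node.policy_for("acme")
print(node.fetches)                # 2 (purge forced a re-download)
```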
Cloud computing systems and methods abstract away physical servers, storage, networking, etc., and instead offer these as on-demand and elastic resources. The National Institute of Standards and Technology (NIST) provides a concise and specific definition which states cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. Cloud computing differs from the classic client-server model by providing applications from a server that are executed and managed by a client's web browser or the like, with no installed client version of an application required. Centralization gives cloud service providers complete control over the versions of the browser-based and other applications provided to clients, which removes the need for version upgrades or license management on individual client computing devices. The phrase "Software as a Service" (SaaS) is sometimes used to describe application programs offered through cloud computing. A common shorthand for a provided cloud computing service (or even an aggregation of all existing cloud services) is "the cloud." The cloud-based system 100 is illustrated herein as an example embodiment of a cloud-based system, and other implementations are also contemplated. As described herein, the terms cloud services and cloud applications may be used interchangeably. The cloud service 106 is any service made available to users on-demand via the Internet, as opposed to being provided from a company's on-premises servers. A cloud application, or cloud app, is a software program where cloud-based and local components work together.
The cloud-based system 100 can be utilized to provide example cloud services, including Zscaler Internet Access (ZIA), Zscaler Private Access (ZPA), and Zscaler Digital Experience (ZDX), all from Zscaler, Inc. (the assignee and applicant of the present application). Also, there can be multiple different cloud-based systems 100, including ones with different architectures and multiple cloud services. The ZIA service can provide the access control, threat prevention, and data protection described above with reference to the cloud-based system 100. ZPA can include access control, microservice segmentation, etc. The ZDX service can provide monitoring of user experience, e.g., Quality of Experience (QoE), Quality of Service (QoS), etc., in a manner that can gain insights based on continuous, inline monitoring. For example, the ZIA service can provide a user with Internet Access, and the ZPA service can provide a user with access to enterprise resources instead of traditional Virtual Private Networks (VPNs), namely ZPA provides Zero Trust Network Access (ZTNA). Those of ordinary skill in the art will recognize various other types of cloud services 106 are also contemplated. Also, other types of cloud architectures are also contemplated, with the cloud-based system 100 presented for illustration purposes. § 2.0 User Device Application for Traffic Forwarding and Monitoring FIG. 3 is a network diagram of the cloud-based system 100 illustrating an application 350 on user devices 300 with users 102 configured to operate through the cloud-based system 100. Different types of user devices 300 are proliferating, including Bring Your Own Device (BYOD) as well as IT-managed devices. The conventional approach for a user device 300 to operate with the cloud-based system 100 as well as for accessing enterprise resources includes complex policies, VPNs, poor user experience, etc.
The application 350 can automatically forward user traffic through the cloud-based system 100 as well as ensuring that security and access policies are enforced, regardless of device, location, operating system, or application. The application 350 automatically determines if a user 102 is looking to access the open Internet 104, a SaaS app, or an internal app running in public, private, or the datacenter and routes mobile traffic through the cloud-based system 100. The application 350 can support various cloud services, including ZIA, ZPA, ZDX, etc., allowing best-in-class security with zero trust access to internal apps. As described herein, the application 350 can also be referred to as a connector application. The application 350 is configured to auto-route traffic for a seamless user experience. This can be protocol as well as application-specific, and the application 350 can route traffic with a nearest or best-fit enforcement node 150. Further, the application 350 can detect trusted networks, allowed applications, etc. and support secure network access. The application 350 can also support the enrollment of the user device 300 prior to accessing applications. The application 350 can uniquely detect the users 102 based on fingerprinting the user device 300, using criteria like device model, platform, operating system, etc. The application 350 can support Mobile Device Management (MDM) functions, allowing IT personnel to deploy and manage the user devices 300 seamlessly. This can also include the automatic installation of client and SSL certificates during enrollment. Finally, the application 350 provides visibility into device and app usage of the user 102 of the user device 300. The application 350 supports a secure, lightweight tunnel between the user device 300 and the cloud-based system 100. For example, the lightweight tunnel can be HTTP-based. With the application 350, there is no requirement for PAC files, an IPsec VPN, authentication cookies, or user 102 setup.
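The nearest-or-best-fit node selection mentioned above can be sketched as a one-liner over measured latencies. The node names and latency figures are made up for illustration; a real connector could also weigh load, geography, or other criteria:

```python
def best_enforcement_node(latencies_ms: dict[str, float]) -> str:
    """Route traffic through the enforcement node with the lowest
    measured latency (a simple proxy for 'nearest or best fit')."""
    return min(latencies_ms, key=latencies_ms.get)

# Hypothetical probe measurements from the connector application.
measured = {"node-150-1": 42.0, "node-150-2": 17.5, "node-150-N": 88.3}
print(best_enforcement_node(measured))  # node-150-2
```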
§ 3.0 Example Server Architecture FIG. 4 is a block diagram of a server 200, which may be used in the cloud-based system 100, in other systems, or standalone. For example, the enforcement nodes 150 and the central authority 152 may be formed as one or more of the servers 200. The server 200 may be a digital computer that, in terms of hardware architecture, generally includes a processor 202, input/output (I/O) interfaces 204, a network interface 206, a data store 208, and memory 210. It should be appreciated by those of ordinary skill in the art that FIG. 4 depicts the server 200 in an oversimplified manner, and a practical embodiment may include additional components and suitably configured processing logic to support known or conventional operating features that are not described in detail herein. The components (202, 204, 206, 208, and 210) are communicatively coupled via a local interface 212. The local interface 212 may be, for example, but not limited to, one or more buses or other wired or wireless connections, as is known in the art. The local interface 212 may have additional elements, which are omitted for simplicity, such as controllers, buffers (caches), drivers, repeaters, and receivers, among many others, to enable communications. Further, the local interface 212 may include address, control, and/or data connections to enable appropriate communications among the aforementioned components. The processor 202 is a hardware device for executing software instructions. The processor 202 may be any custom made or commercially available processor, a Central Processing Unit (CPU), an auxiliary processor among several processors associated with the server 200, a semiconductor-based microprocessor (in the form of a microchip or chipset), or generally any device for executing software instructions.
When the server200is in operation, the processor202is configured to execute software stored within the memory210, to communicate data to and from the memory210, and to generally control operations of the server200pursuant to the software instructions. The I/O interfaces204may be used to receive user input from and/or for providing system output to one or more devices or components. The network interface206may be used to enable the server200to communicate on a network, such as the Internet104. The network interface206may include, for example, an Ethernet card or adapter or a Wireless Local Area Network (WLAN) card or adapter. The network interface206may include address, control, and/or data connections to enable appropriate communications on the network. A data store208may be used to store data. The data store208may include any of volatile memory elements (e.g., random access memory (RAM, such as DRAM, SRAM, SDRAM, and the like)), nonvolatile memory elements (e.g., ROM, hard drive, tape, CDROM, and the like), and combinations thereof. Moreover, the data store208may incorporate electronic, magnetic, optical, and/or other types of storage media. In one example, the data store208may be located internal to the server200, such as, for example, an internal hard drive connected to the local interface212in the server200. Additionally, in another embodiment, the data store208may be located external to the server200such as, for example, an external hard drive connected to the I/O interfaces204(e.g., SCSI or USB connection). In a further embodiment, the data store208may be connected to the server200through a network, such as, for example, a network-attached file server. The memory210may include any of volatile memory elements (e.g., random access memory (RAM, such as DRAM, SRAM, SDRAM, etc.)), nonvolatile memory elements (e.g., ROM, hard drive, tape, CDROM, etc.), and combinations thereof. 
Moreover, the memory210may incorporate electronic, magnetic, optical, and/or other types of storage media. Note that the memory210may have a distributed architecture, where various components are situated remotely from one another but can be accessed by the processor202. The software in memory210may include one or more software programs, each of which includes an ordered listing of executable instructions for implementing logical functions. The software in the memory210includes a suitable Operating System (O/S)214and one or more programs216. The operating system214essentially controls the execution of other computer programs, such as the one or more programs216, and provides scheduling, input-output control, file and data management, memory management, and communication control and related services. The one or more programs216may be configured to implement the various processes, algorithms, methods, techniques, etc. described herein. § 4.0 Example User Device Architecture FIG.5is a block diagram of a user device300, which may be used with the cloud-based system100or the like. Specifically, the user device300can form a device used by one of the users102, and this may include common devices such as laptops, smartphones, tablets, netbooks, personal digital assistants, MP3 players, cell phones, e-book readers, IoT devices, servers, desktops, printers, televisions, streaming media devices, and the like. The user device300can be a digital device that, in terms of hardware architecture, generally includes a processor302, I/O interfaces304, a network interface306, a data store308, and memory310. It should be appreciated by those of ordinary skill in the art thatFIG.5depicts the user device300in an oversimplified manner, and a practical embodiment may include additional components and suitably configured processing logic to support known or conventional operating features that are not described in detail herein. 
The components (302,304,306,308, and310) are communicatively coupled via a local interface312. The local interface312can be, for example, but not limited to, one or more buses or other wired or wireless connections, as is known in the art. The local interface312can have additional elements, which are omitted for simplicity, such as controllers, buffers (caches), drivers, repeaters, and receivers, among many others, to enable communications. Further, the local interface312may include address, control, and/or data connections to enable appropriate communications among the aforementioned components. The processor302is a hardware device for executing software instructions. The processor302can be any custom made or commercially available processor, a CPU, an auxiliary processor among several processors associated with the user device300, a semiconductor-based microprocessor (in the form of a microchip or chipset), or generally any device for executing software instructions. When the user device300is in operation, the processor302is configured to execute software stored within the memory310, to communicate data to and from the memory310, and to generally control operations of the user device300pursuant to the software instructions. In an embodiment, the processor302may include a mobile-optimized processor, such as one optimized for power consumption and mobile applications. The I/O interfaces304can be used to receive user input from and/or for providing system output. User input can be provided via, for example, a keypad, a touch screen, a scroll ball, a scroll bar, buttons, a barcode scanner, and the like. System output can be provided via a display device such as a Liquid Crystal Display (LCD), touch screen, and the like. The network interface306enables wireless communication to an external access device or network. 
Any number of suitable wireless data communication protocols, techniques, or methodologies can be supported by the network interface306, including any protocols for wireless communication. The data store308may be used to store data. The data store308may include any of volatile memory elements (e.g., random access memory (RAM, such as DRAM, SRAM, SDRAM, and the like)), nonvolatile memory elements (e.g., ROM, hard drive, tape, CDROM, and the like), and combinations thereof. Moreover, the data store308may incorporate electronic, magnetic, optical, and/or other types of storage media. The memory310may include any of volatile memory elements (e.g., random access memory (RAM, such as DRAM, SRAM, SDRAM, etc.)), nonvolatile memory elements (e.g., ROM, hard drive, etc.), and combinations thereof. Moreover, the memory310may incorporate electronic, magnetic, optical, and/or other types of storage media. Note that the memory310may have a distributed architecture, where various components are situated remotely from one another but can be accessed by the processor302. The software in memory310can include one or more software programs, each of which includes an ordered listing of executable instructions for implementing logical functions. In the example ofFIG.5, the software in the memory310includes a suitable operating system314and programs316. The operating system314essentially controls the execution of other computer programs and provides scheduling, input-output control, file and data management, memory management, and communication control and related services. The programs316may include various applications, add-ons, etc. configured to provide end user functionality with the user device300. For example, example programs316may include, but are not limited to, a web browser, social networking applications, streaming media applications, games, mapping and location applications, electronic mail applications, financial applications, and the like. 
In a typical example, the end-user uses one or more of the programs316along with a network such as the cloud-based system100. § 5.0 Zero Trust Network Access Using the Cloud-Based System FIG.6is a network diagram of a Zero Trust Network Access (ZTNA) application utilizing the cloud-based system100. For ZTNA, the cloud-based system100can dynamically create a connection through a secure tunnel between an endpoint (e.g., users102A,102B) that is remote and an on-premises connector400that is either located in cloud file shares and applications402and/or in an enterprise network410that includes enterprise file shares and applications404. The connection between the cloud-based system100and on-premises connector400is dynamic, on-demand, and orchestrated by the cloud-based system100. A key feature is its security at the edge—there is no need to punch any holes in the existing on-premises firewall. The connector400inside the enterprise (on-premises) "dials out" and connects to the cloud-based system100as if it too were an endpoint. This on-demand dial-out capability and tunneling authenticated traffic back to the enterprise is a key differentiator for ZTNA. Also, this functionality can be implemented in part by the application350on the user device300. Also, the applications402,404can include B2B applications. Note, the difference between the applications402,404is that the applications402are hosted in the cloud, whereas the applications404are hosted on the enterprise network410. The B2B service described herein contemplates use with either or both of the applications402,404. The paradigm of virtual private access systems and methods is to give users network access to get to an application and/or file share, not to the entire network. If a user is not authorized to get the application, the user should not even be able to see that it exists, much less access it. 
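The "dial-out" pattern above can be illustrated with a short, self-contained sketch: the on-premises connector initiates an outbound connection to a broker (standing in for the cloud-based system) and registers itself, so no inbound firewall ports need to be opened. The broker behavior and message format here are illustrative assumptions, not the actual protocol.

```python
# Minimal sketch of the dial-out pattern: the connector connects outbound,
# as if it too were an endpoint, and registers the app it fronts.
import socket
import threading

def run_broker(ready, result, ports):
    """Stand-in for the cloud broker: accept the connector's outbound dial."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))          # ephemeral port; no fixed inbound port at the enterprise
    ports.append(srv.getsockname()[1])
    srv.listen(1)
    ready.set()
    conn, _ = srv.accept()
    result.append(conn.recv(1024).decode())  # connector's registration message
    conn.sendall(b"REGISTERED")
    conn.close()
    srv.close()

def dial_out(port, app_name):
    """Connector 'dials out' to the broker and registers an internal app."""
    sock = socket.create_connection(("127.0.0.1", port))
    sock.sendall(f"REGISTER app={app_name}".encode())
    reply = sock.recv(1024).decode()
    sock.close()
    return reply

ready, result, ports = threading.Event(), [], []
broker = threading.Thread(target=run_broker, args=(ready, result, ports))
broker.start()
ready.wait()
reply = dial_out(ports[0], "enterprise-file-share")
broker.join()
```

The essential property is directionality: the enterprise side only ever makes outbound connections, which is why no holes need to be punched in the on-premises firewall.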
The virtual private access systems and methods provide an approach to deliver secure access by decoupling applications402,404from the network, instead providing access with a connector400in front of the applications402,404, an application on the user device300, a central authority152to push policy, and the cloud-based system100to stitch the applications402,404and the software connectors400together, on a per-user, per-application basis. With the virtual private access, users can only see the specific applications402,404allowed by the central authority152. Everything else is "invisible" or "dark" to them. Because the virtual private access separates the application from the network, the physical location of the application402,404becomes irrelevant—if applications402,404are located in more than one place, the user is automatically directed to the instance that will give them the best performance. The virtual private access also dramatically reduces configuration complexity, such as policies/firewalls in the data centers. Enterprises can, for example, move applications to Amazon Web Services or Microsoft Azure, and take advantage of the elasticity of the cloud, making private, internal applications behave just like market-leading enterprise applications. Advantageously, there is no hardware to buy or deploy because the virtual private access is a service offering to end-users and enterprises. § 6.0 Digital Experience Monitoring FIG.7is a network diagram of the cloud-based system100in an application of digital experience monitoring. Here, the cloud-based system100, providing security as a service as well as ZTNA, can also be used to provide real-time, continuous digital experience monitoring, as opposed to conventional approaches (synthetic probes). A key aspect of the architecture of the cloud-based system100is the inline monitoring. This means data is accessible in real-time for individual users from end-to-end. 
As described herein, digital experience monitoring can include monitoring, analyzing, and improving the digital user experience. The cloud-based system100connects users102at the locations110,112,118to the applications402,404, the Internet104, the cloud services106, etc. The inline, end-to-end visibility of all users enables digital experience monitoring. The cloud-based system100can monitor, diagnose, generate alerts, and perform remedial actions with respect to network endpoints, network components, network links, etc. The network endpoints can include servers, virtual machines, containers, storage systems, or anything with an IP address, including the Internet of Things (IoT), cloud, and wireless endpoints. With these components, these network endpoints can be monitored directly in combination with a network perspective. Thus, the cloud-based system100provides a unique architecture that can enable digital experience monitoring, network application monitoring, infrastructure component interactions, etc. Of note, these various monitoring aspects require no additional components—the cloud-based system100leverages the existing infrastructure to provide this service. Again, digital experience monitoring includes the capture of data about how end-to-end application availability, latency, and quality appear to the end user from a network perspective. This is limited to the network traffic visibility and not within components, such as what application performance monitoring can accomplish. Networked application monitoring provides the speed and overall quality of networked application delivery to the user in support of key business activities. Infrastructure component interactions include a focus on infrastructure components as they interact via the network, as well as the network delivery of services or applications. This includes the ability to provide network path analytics. 
The cloud-based system100can enable real-time performance and behaviors for troubleshooting in the current state of the environment, historical performance and behaviors to understand what occurred or what is trending over time, predictive behaviors by leveraging analytics technologies to distill and create actionable items from the large dataset collected across the various data sources, and the like. The cloud-based system100includes the ability to directly ingest any of the following data sources: network device-generated health data; network device-generated traffic data, including flow-based data sources inclusive of NetFlow and IPFIX; raw network packet analysis to identify application types and performance characteristics; HTTP request metrics; etc. The cloud-based system100can operate at 10 gigabits (10G) Ethernet and higher at full line rate and support rates of 100,000 or more flows per second. The applications402,404can include enterprise applications, Office 365, Salesforce, Skype, Google apps, internal applications, etc. These are critical business applications where user experience is important. The objective here is to collect various data points so that user experience can be quantified for a particular user, at a particular time, for purposes of analyzing the experience as well as improving the experience. In an embodiment, the monitored data can be from different categories, including application-related, network-related, device-related (also can be referred to as endpoint-related), protocol-related, etc. Data can be collected at the application350or the cloud edge to quantify user experience for specific applications, i.e., the application-related and device-related data. The cloud-based system100can further collect the network-related and the protocol-related data (e.g., Domain Name System (DNS) response time). 
Application-related data: Page Load Time, Page Response Time, Document Object Model (DOM) Load Time, Total Downloaded bytes, App availability (%), Redirect count (#), Throughput (bps), Total size (bytes), Page error count (#), and Page element count by category (#). Network-related data: HTTP Request metrics, Server response time, Ping packet loss (%), Ping round trip, Packet loss (%), Latency, Bandwidth, Jitter, Trace Route, DNS lookup trace, GRE/IPSec tunnel monitoring, and MTU and bandwidth measurements. Device-related data (endpoint-related data): System details, Central Processing Unit (CPU), Memory (RAM), Network (interfaces), Network (config), Disk, Processes, and Applications. Metrics could be combined. For example, device health can be based on a combination of CPU, memory, etc. Network health could be a combination of Wi-Fi/LAN connection health, latency, etc. Application health could be a combination of response time, page loads, etc. The cloud-based system100can generate service health as a combination of CPU, memory, and the load time of the service while processing a user's request. The network health could be based on the number of network path(s), latency, packet loss, etc. The lightweight connector400can also generate similar metrics for the applications402,404. In an embodiment, the metrics can be collected while a user is accessing specific applications for which user experience monitoring is desired. In another embodiment, the metrics can be enriched by triggering synthetic measurements in the context of an inline transaction by the application350or cloud edge. The metrics can be tagged with metadata (user, time, app, etc.) and sent to a logging and analytics service for aggregation, analysis, and reporting. Further, network administrators can get UEX reports from the cloud-based system100. 
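The idea of combining raw metrics into composite health scores can be sketched as follows. The weights, thresholds, and 0-100 scale are assumptions chosen purely for illustration; the actual scoring model is not specified here.

```python
# Illustrative composite health scoring: device health from CPU/memory,
# network health from latency/loss, and a blended user-experience score.
# All weights and the 0-100 scale are assumptions for this sketch.

def device_health(cpu_pct: float, mem_pct: float) -> float:
    """Score 0-100; high resource usage lowers device health."""
    return max(0.0, 100.0 - 0.5 * cpu_pct - 0.5 * mem_pct)

def network_health(latency_ms: float, loss_pct: float) -> float:
    """Score 0-100; penalize latency (capped) and packet loss."""
    return max(0.0, 100.0 - min(latency_ms, 100.0) * 0.5 - loss_pct * 5.0)

def user_experience(cpu_pct, mem_pct, latency_ms, loss_pct, page_load_s):
    """Blend device, network, and application signals into one UEX score."""
    app = max(0.0, 100.0 - page_load_s * 10.0)  # slower page loads lower app health
    return round((device_health(cpu_pct, mem_pct)
                  + network_health(latency_ms, loss_pct)
                  + app) / 3, 1)
```

A real system would tag each score with metadata (user, time, app) before sending it to the logging and analytics service, as the text describes.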
Due to the inline nature and the fact the cloud-based system100is an overlay (in-between users and services/applications), the cloud-based system100enables the ability to capture user experience metric data continuously and to log such data historically. As such, a network administrator can have a long-term detailed view of the network and associated user experience. § 7.0 Cloud Tunnel FIG.8is a network diagram of the cloud-based system100with various cloud tunnels500, labeled as cloud tunnels500A,500B,500C, for forwarding traffic.FIGS.9and10are flow diagrams of a cloud tunnel500illustrating a control channel (FIG.9) and a data channel (FIG.10), with the tunnel illustrated between a client510and a server520. The cloud tunnel500is a lightweight tunnel that is configured to forward traffic between the client510and the server520. The present disclosure focuses on the specific mechanisms used in the cloud tunnel500between two points, namely the client510and the server520. Those skilled in the art will recognize the cloud tunnel500can be used with the cloud-based system100as an example use case, and other uses are contemplated. That is, the client510and the server520are just endpoint devices that support the exchange of data traffic and control traffic for the tunnel500. For description, the server520can be referred to as a local node and the client510as a remote node, where the tunnel operates between the local and remote nodes. In an embodiment, the cloud-based system100can use the cloud tunnel500to forward traffic to the enforcement nodes150, such as from a user device300with the application350, from a branch office/remote location118, etc.FIG.8illustrates three example use cases for the cloud tunnel500with the cloud-based system100, and other uses are also contemplated. In a first use case, a cloud tunnel500A is formed between a user device300, such as with the application350, and an enforcement node150-1. 
For example, when a user102associated with the user device300connects to a network, the application350can establish the cloud tunnel500A to the closest or best enforcement node150-1, and forward the traffic through the cloud tunnel500A so that the enforcement node150-1can apply the appropriate security and access policies. Here, the cloud tunnel500A supports a single user102, associated with the user device300. In a second use case, a cloud tunnel500B is formed between a Virtual Network Function (VNF)502or some other device at a remote location118A and an enforcement node150-2. Here, the VNF502is used to forward traffic from any user102at the remote location118A to the enforcement node150-2. In a third use case, a cloud tunnel500C is formed between an on-premises enforcement node, referred to as an Edge Connector (EC)150A, and an enforcement node150-N. The edge connector150A can be located at a branch office118A or the like. In some embodiments, the edge connector150A can be an enforcement node150in the cloud-based system100but located on-premises with a tenant. Here, in the second and third use cases, the cloud tunnels500B,500C support multiple users102. There can be two versions of the cloud tunnel500, referred to as tunnel1and tunnel2. The tunnel1can only support Web protocols as an HTTP connect tunnel operating on TCP streams. That is, the tunnel1can send all proxy-aware traffic or port 80/443 traffic to the enforcement node150, depending on the forwarding profile configuration. This can be performed via CONNECT requests, similar to a traditional proxy. The tunnel2can support multiple ports and protocols, extending beyond only web protocols. As described herein, the cloud tunnels500are the tunnel2. In all of the use cases, the cloud tunnel500enables each user device300to redirect traffic destined to all ports and protocols to a corresponding enforcement node150. 
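The tunnel1/tunnel2 distinction above amounts to a small forwarding decision. The sketch below captures it under stated assumptions: the return values and the shape of the forwarding-profile flags are illustrative, not the actual configuration schema.

```python
# Hedged sketch of the tunnel-version decision: tunnel 1 only carries
# proxy-aware / port 80 and 443 traffic via HTTP CONNECT over TCP streams,
# while tunnel 2 forwards all ports and protocols. Names are illustrative.

def select_tunnel(dst_port: int, proxy_aware: bool, tunnel2_enabled: bool) -> str:
    if tunnel2_enabled:
        return "tunnel2"                    # all ports and protocols
    if proxy_aware or dst_port in (80, 443):
        return "tunnel1"                    # HTTP CONNECT over TCP, like a traditional proxy
    return "direct"                         # outside tunnel 1's scope
```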
Note, the cloud-based system100can include load balancing functionality to spread the cloud tunnels500from a single source IP address. The cloud tunnel500supports device logging for all traffic, firewall, etc., such as in the storage cluster156. The cloud tunnel500utilizes encryption, such as via TLS or DTLS, to tunnel packets between the two points, namely the client510and the server520. As described herein, the client510can be the user device300, the VNF502, and/or the edge connector150A, and the server520can be the enforcement node150. Again, other devices are contemplated with the cloud tunnel500. The cloud tunnel500can use a Network Address Translation (NAT) device that does not require a different egress IP for each device's300separate sessions. Again, the cloud tunnel500has a tunneling architecture that uses DTLS or TLS to send packets to the cloud-based system100. Because of this, the cloud tunnel500is capable of sending traffic from all ports and protocols. Thus, the cloud tunnel500provides complete protection for a single user102, via the application350, as well as for multiple users at remote locations118, including multiple security functions such as cloud firewall, cloud IPS, etc. The cloud tunnel500includes user-level granularity of the traffic, enabling different users102on the same cloud tunnel500for the enforcement nodes150to provide user-based granular policy and visibility. In addition to user-level granularity, the cloud tunnel500can provide application-level granularity, such as by mapping mobile applications (e.g., Facebook, Gmail, etc.) to traffic, allowing for app-based granular policies. FIGS.9and10illustrate the two communication channels, namely a control channel530and a data channel540, between the client510and the server520. Together, these two communication channels530,540form the cloud tunnel500. 
In an embodiment, the control channel530can be an encrypted TLS connection or SSL connection, and the control channel530is used for device and/or user authentication and other control messages. In an embodiment, the data channel540can be an encrypted DTLS or TLS connection, i.e., the data channel can be one or more DTLS or TLS connections for the transmit and receive of user IP packets. There can be multiple data channels540associated with the same control channel530. The data channel540can be authenticated using a Session Identifier (ID) from the control channel530. Of note, the control channel530always uses TLS because some locations (e.g., the remote location118A, the branch office118B, other enterprises, hotspots, etc.) can block UDP port443, preventing DTLS, whereas TLS is widely used and not typically blocked. The data channel540preferably uses DTLS, if it is available, i.e., not blocked on the client510. If it is blocked, the data channel540can use TLS instead. For example, DTLS is the primary protocol for the data channel540with TLS used as a fallback over TCP port443if DTLS is unavailable, namely if UDP port443is blocked at the client510. InFIG.9, the control channel530is illustrated with exchanges between the client510and the server520. Again, the control channel530includes TLS encryption, which is established through a setup or handshake between the client510and the server520(step550-1). An example of a handshake is illustrated inFIG.11. The client510can send its version of the tunnel500to the server520(step550-2) to which the server520can acknowledge (step550-3). For example, the version of the tunnel can include a simple version number or other indication, as well as an indication of whether the client510supports DTLS for the data channel540. Again, the control channel530is fixed with TLS or SSL, but the data channel540can be either DTLS or TLS. 
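The DTLS-first, TLS-fallback rule for the data channel reduces to a simple decision. The sketch below states it directly; the boolean inputs stand in for real reachability probes and capability exchange, which are assumptions of this example.

```python
# Sketch of the data-channel transport choice: prefer DTLS over UDP port 443,
# fall back to TLS over TCP port 443 when UDP 443 is blocked or DTLS is
# unsupported. The control channel is always TLS regardless.

def choose_data_channel(udp_443_blocked: bool, client_supports_dtls: bool) -> str:
    if client_supports_dtls and not udp_443_blocked:
        return "DTLS"   # primary: datagram transport for user IP packets
    return "TLS"        # fallback over TCP port 443

def choose_control_channel() -> str:
    return "TLS"        # fixed, since UDP 443 may be blocked at some locations
```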
The client510can perform device authentication (step550-4), and the server520can acknowledge the device authentication (step550-5). The client510can perform user authentication (step550-6), and the server520can acknowledge the user authentication (step550-7). Note, the device authentication includes authenticating the user device300, such as via the application350, the VNF502, the edge connector150A, etc. The user authentication includes authenticating the users102associated with the user devices300. Note, in an embodiment, the client510is the sole device300, and here the user authentication can be for the user102associated with the client510, and the device authentication can be for the user device300with the application350. In another embodiment, the client510can have multiple user devices300and corresponding users102associated with it. Here, the device authentication can be for the VNF502, the edge connector150A, etc., and the user authentication can be for each user device300and corresponding user102, and the client510and the server520can have a unique identifier for each user device300, for user-level identification. The device authentication acknowledgment can include a session identifier (ID) that is used to bind the control channel530with one or more data channels540. The user authentication can be based on a user identifier (ID) that is unique to each user102. The client510can periodically provide keep alive packets (step550-8), and the server520can respond with keep alive acknowledgment packets (step550-9). The client510and the server520can use the keep alive packets or messages to maintain the control channel530. Also, the client510and the server520can exchange other relevant data over the control channel530, such as metadata, which identifies an application for a user102, location information for a user device300, etc. InFIG.10, similar toFIG.9, the data channel540is illustrated with exchanges between the client510and the server520. 
Again, the data channel540includes TLS or DTLS encryption, which is established through a setup or handshake between the client510and the server520(step560-1). An example of a handshake is illustrated inFIG.11. Note, the determination of whether to use TLS or DTLS is based on the session ID, which is part of the device authentication acknowledgment, and which is provided over the data channel540(steps560-2,560-3). Here, the client510has told the server520its capabilities, and the session ID reflects what the server520has chosen, namely TLS or DTLS, based on the client's510capabilities. In an embodiment, the server520chooses DTLS if the client510supports it, i.e., if UDP port443is not blocked, otherwise the server520chooses TLS. Accordingly, the control channel530is established before the data channel540. The data channel540can be authenticated based on the session ID from the control channel530. The data channel540includes the exchange of data packets between the client510and the server520(step560-4). The data packets include an identifier such as the session ID and a user ID for the associated user102. Additionally, the data channel540can include keep alive packets between the client510and the server520(steps560-5,560-6). The cloud tunnel500can support load balancing functionality between the client510and the server520. The server520can be in a cluster, i.e., multiple servers200. For example, the server520can be an enforcement node150cluster in the cloud-based system100. Because there can be multiple data channels540for a single control channel530, it is possible to have the multiple data channels540, in a single cloud tunnel500, connected to different physical servers200in a cluster. Thus, the cloud-based system100can include load balancing functionality to spread the cloud tunnels500from a single source IP address, i.e., the client510. 
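The binding of data channels to a control channel via the session ID issued during device authentication can be sketched as follows. The in-memory registry, class name, and error handling are assumptions made for this illustration, not the actual implementation.

```python
# Illustrative sketch: device authentication on the control channel yields a
# session ID; each data channel authenticates with that ID, and the server's
# transport choice (DTLS vs. TLS) reflects the client's advertised capability.
import secrets

class ControlChannel:
    def __init__(self):
        self.sessions = {}  # session ID -> chosen data-channel transport

    def authenticate_device(self, supports_dtls: bool) -> str:
        """Device-auth acknowledgment carries a session ID; the server picks
        DTLS if the client supports it (i.e., UDP 443 is not blocked)."""
        sid = secrets.token_hex(8)
        self.sessions[sid] = "DTLS" if supports_dtls else "TLS"
        return sid

    def open_data_channel(self, sid: str) -> str:
        """A data channel authenticates using the session ID from the control
        channel; multiple data channels may share one control channel."""
        if sid not in self.sessions:
            raise PermissionError("unknown session ID")
        return self.sessions[sid]
```

Because several data channels can present the same session ID, they may land on different physical servers in a cluster, which is why the text notes the load-balancing consideration.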
Also, the use of DTLS for the data channels540allows the user devices300to switch networks without potentially impacting the traffic going through the tunnel500. For example, a large file download could continue uninterrupted when a user device300moves from Wi-Fi to mobile, etc. Here, the application350can add some proprietary data to the DTLS client-hello servername extension. That proprietary data helps a load balancer balance the new DTLS connection to the same server200in a cluster where the connection prior to network change was being processed. So, a newly established DTLS connection with different IP address (due to network change) can be used to tunnel packets of the large file download that was started before the network change. Also, some mobile carriers use different IP addresses for TCP/TLS (control channel) and UDP/DTLS (data channel) flows. The data in DTLS client-hello helps the load balancer balance the control and data connection to the same server200in the cluster. § 8.0 Traceroute Traceroute can be based on Internet Control Message Protocol (ICMP), TCP, User Datagram Protocol (UDP), etc. For example, a traceroute based on ICMP provides all hops on the network. TCP and UDP are also supported by most clients, if ICMP is blocked. The response from the traceroute provides a holistic view of the network with packet loss details and latency details.FIG.11is a network diagram of a traceroute between a user102and a destination640with no tunnel in between. Here, the user102(via a user device300) connects to an access point600, which connects to the destination640via routers602A—602D and a switch604. The traceroute includes transmitting a request packet from the user102to the destination640(with an address of a.b.c.d) via the access point600, the routers602, and the switch604. 
Each of these intermediate devices600,602,604processes the request packet, and the enforcement node150sends a response packet back to the user102, which is also processed by the intermediate devices600,602,604. Accordingly, all hops in the network are visible. FIG.12is a network diagram of a trace between a user102and the destination640with an opaque tunnel610between a tunnel client510and a tunnel server520. The opaque tunnel610can be the tunnel500as well as a GRE, IPsec, VPN, etc. The opaque tunnel610is referred to as opaque because there is no visibility into the tunnel. The traceroute inFIG.12, based on ICMP, TCP, UDP, etc., provides visibility of the hops before and after the opaque tunnel610, but does not provide visibility in the opaque tunnel610. There are no details about packet loss or latency during tunneled transmission. Also, the opaque tunnel610can be referred to as an overlay tunnel. Traceroute includes a series of packets that are exchanged from a probe initiator along a path. Each trace packet includes an increasing TTL value. When a node along the path receives a trace packet where the TTL expires, it sends a response. Based on all of the responses, it is possible for the probe initiator (e.g., the client) to determine the network hops, the latency at each hop, packet loss, and other details. Again, the traceroute can be an MTR, which also includes PING functionality. Again, MTR is used to traceroute the destination to show the latency, packet loss, and hop information between an initiator and destination. It helps to understand the network status and diagnose network issues. In an embodiment, MTR is implemented on the user device300, such as through the application350, and on the tunnel server520and/or the enforcement node150. As is described herein, there is a requirement to implement probes at two points in the service path—at the client and at the tunnel server520and/or the enforcement node150. 
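The TTL mechanics described above can be modeled without raw sockets. The sketch below simulates a path like the one inFIG.11(access point, routers, switch, destination): each probe carries an increasing TTL, the hop where the TTL expires responds, and the initiator assembles the hop list. Real implementations send ICMP/UDP/TCP probes and typically require raw-socket privileges; the in-memory path here is an illustrative assumption.

```python
# Simplified, self-contained model of traceroute: probes with increasing TTL,
# a reply from the hop where the TTL expires, and hop-list assembly.

PATH = ["access-point", "router-A", "router-B", "switch", "destination"]

def send_probe(ttl: int):
    """Return (responder, reached_destination) for a probe with the given TTL.
    The hop at index ttl-1 is where the TTL expires and a response is sent."""
    hop = min(ttl, len(PATH)) - 1
    return PATH[hop], PATH[hop] == PATH[-1]

def traceroute(max_ttl: int = 30):
    """Walk TTL values until the destination answers, collecting responders."""
    hops = []
    for ttl in range(1, max_ttl + 1):
        responder, done = send_probe(ttl)
        hops.append(responder)
        if done:
            break
    return hops
```

An MTR-style tool repeats this walk continuously and aggregates per-hop latency and loss statistics on top of the same mechanism.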
The MTR implementation can support ICMP, UDP, and/or TCP. For ICMP, two sockets are used to send and receive probes, and the ICMP sequence number in reply messages is used to match ICMP request messages. For UDP, one UDP socket is created to send UDP probes, and one ICMP socket is created to receive ICMP error messages. For TCP, one raw socket is created to send TCP probes, and one ICMP socket is created to receive ICMP error messages, and the TCP socket is also used to receive SYN-ACK/RST from the destination. The foregoing functionality can be performed by the application350on the user device300and a tracing service on the enforcement node150. SYN=Synchronize, ACK=Acknowledgment, and RST=Reset. § 8.1 Detecting Opaque Tunnel FIG.13is a flowchart of a process650for detecting a tunnel500,610between a user device300and a destination. The process650is described with reference to the network inFIG.12with actions at the user device300, the intermediate devices600,602,604, and the tunnel server520. Also, note that while the enforcement node150and the tunnel server520are illustrated as separate devices, it is also possible that these are combined in the same device. Also, actions at the user device300(client) can be performed via the application350executed thereon. The tunnel server520can be a proxy or transparent proxy. The process650includes the client sending a trace packet for the destination (e.g., the node150with an address of a.b.c.d) with a Signature-A (step651). Note, the client (e.g., the user device300) does not know if there is a tunnel or not between the destination and itself. The purpose of the Signature-A is for any tunnel server520to detect this trace packet and provide tunnel details, i.e., to allow the client to detect the tunnel. The Signature-A can be any encrypted data for security. The process650further includes the tunnel server detecting the Signature-A as a valid signature and intercepting the trace packet (step652).
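Steps 651-652 hinge on the tunnel server recognizing an encrypted signature in the probe payload. A minimal sketch of that check, using an HMAC as the "Signature-A" — the key, the field names, and the tunnel info values are illustrative, not the actual wire format:

```python
import hmac
import hashlib

SHARED_KEY = b"rotating-shared-key"  # illustrative; rotated regularly in practice
SIG_LEN = 32                         # length of a SHA-256 HMAC digest

def sign_probe(payload: bytes, key: bytes = SHARED_KEY) -> bytes:
    # Client side: append Signature-A so any tunnel server on the path
    # can recognize this as a trusted trace probe.
    return payload + hmac.new(key, payload, hashlib.sha256).digest()

def inspect_probe(packet: bytes, key: bytes = SHARED_KEY) -> dict:
    # Tunnel-server side: intercept only when the signature verifies;
    # otherwise the packet continues toward the destination untouched.
    payload, sig = packet[:-SIG_LEN], packet[-SIG_LEN:]
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    if hmac.compare_digest(sig, expected):
        return {"intercepted": True,
                "tunnel_info": {"type": "DTLS", "server_ip": "203.0.113.7"}}
    return {"intercepted": False}
```

An unsigned probe (or one signed with a stale key) fails verification and is simply forwarded, which is why ordinary traffic is unaffected by the interception logic.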
InFIG.12, even though the tunnel server520is not the destination, it intercepts the trace packet because of the presence of the Signature-A and responds. Namely, the tunnel server responds to the trace packet with tunnel info (step653). The client receives the trace response from the tunnel server (instead of the destination) and is informed about the tunnel, and can take appropriate action (step654). The tunnel info can include IP address, tunnel type, protocol, etc. As described herein, appropriate action includes determining a trace via different legs to account for the tunnel. Also, as described herein, a leg is a segment of the network between the client and the destination. Without a tunnel, there is a single leg between the client and the destination. With a tunnel, there is a plurality of legs with at least one leg being the tunnel itself. If there is a transparent proxy present with an overlay tunnel to it from the client, the client sends traceroute probes with a signature to detect the presence of the proxy. When the packets traverse through the proxy, the proxy scans for the signature in the payload, which can be encrypted using a shared key that can be rotated constantly. If the signature matches, the proxy identifies this as a probe generated by a trusted client and identifies itself as a proxy by responding to the probe with an encrypted signature. On receiving the probe response, the client would be able to identify the proxy in the path and request it to find the hops through the overlay tunnel. The request to the proxy can be performed out of band. § 8.2 Collecting Network Details Including a Tunnel FIG.14is a flowchart of a process660for collecting network details in a trace where there is an opaque tunnel. The process660is described with reference to the network inFIG.12with actions at the user device300, the intermediate devices600,602,604, and the tunnel server520. Further, the process660can be used with the process650.
Also, while described with reference to the enforcement node150as the destination, the process660contemplates operation with any type of computing device. Also, note that while the enforcement node150and the tunnel server520are illustrated as separate devices, it is also possible that these are combined in the same device. Also, actions at the user device300(client) can be performed via the application350executed thereon. Once an opaque tunnel is detected, the process660is used to collect details of the service path between the client and the destination. The process660includes, responsive to detection of a tunnel, dividing the network from the client to the destination into a plurality of legs (step661). A trace is performed separately on all of the plurality of legs (step662), and the results of the trace on all of the plurality of legs are aggregated to provide a holistic view of the network (step663). The objective in segmenting the network into different legs is to provide visibility with the tunnel. Specifically, a trace is performed in the tunnel, such as via the tunnel server which is performing a so-called "reverse trace." Here, the tunnel server is sending trace packets through the tunnel without tunnel encapsulation so that details of the trace can be obtained in the opaque tunnel. These details are combined with traces from the other legs to provide full visibility. For the example ofFIG.12, once the client (user device300) knows about the tunnel, the network can be divided into three segments: Leg-1: From the user device300to an egress router630, Leg-2: From the tunnel client510to the tunnel server520(i.e., the opaque tunnel610), and Leg-3: From the tunnel server520to the destination (node150). For the Leg-1, the trace can be performed as normal. For the Leg-2, the trace is performed between the egress router630and the tunnel server520. This is the reverse trace where the tunnel610is traced by the tunnel server.
In an embodiment, the client, knowing there is an opaque tunnel based on the signature used in the process650, requests the tunnel server trace the tunnel. That is, the client requests that the tunnel server trace toward the tunnel client, i.e., a reverse trace. The tunnel server performs the reverse trace, collects the results and forwards them to the client. For the Leg-3, either the client can send a trace packet without the signature to trace the Leg-3 or the client can request the tunnel server perform a trace to the destination on its behalf. If the trace packet is sent from the client without the signature, the results will include details from Legs 1 and 2, which can be subtracted out since the results from Legs 1 and 2 are also separately obtained. Finally, the client can process all of the results from the three legs to present a holistic view of the network. Note, Leg-2 and Leg-3 go hand in hand: either both are present or neither is. If neither is present, then the client has only one leg to the destination. The foregoing assumes the tunnel client510is on the public Internet and reachable from the tunnel server520, i.e., the outside world can connect to the tunnel client510. However, most tunnel clients510are on an internal network behind a firewall, making it a problem for the tunnel server520to reverse trace to the tunnel client510. Thus, there are additional steps in this scenario. Consider the issue of the tunnel client510being behind a firewall; there is a need to modify the network segments as follows: Leg-1: From the user device300to an egress router630, Leg-2: From the egress router630to the tunnel server520, and Leg-3: From the tunnel server520to the destination. As described herein, the egress router630is typically a router at an edge of a customer's network with a public IP address. The following describes the trace in each of these legs.
For the Leg-3, the client can send the trace packet without the signature or request the tunnel server520to perform this leg on its behalf, i.e., the same as described above. For the Leg-2, the steps are as described above, except that the target is the egress router630. The tunnel server520is performing a reverse trace based on accepting a request from the client, but the reverse trace is from the tunnel server520to the egress router630. The tunnel server520provides the results to the client as before. For the Leg-1, the client sends a trace packet to the egress router630. Finally, as before, the client aggregates all three legs to present a holistic view of the network. For the Leg-1, there are two possibilities for what can happen to the trace packet from the client to the egress router. For a case-1, the tunnel client510can route the trace packet into the opaque tunnel610. For a case-2, the tunnel client510does not route the trace packet into the opaque tunnel610, i.e., bypasses it. For the case-2, this yields the trace data to the egress router630. However, for the case-1, this provides the wrong network path, namely from the client to the tunnel client510to the tunnel server520to the Internet to the egress router630. That is, the trace packet echoes from the tunnel server520providing the wrong network path. There is a need for the client to detect this wrong network path. To detect the wrong path for the Leg-1, the client can be configured to insert another signature, Signature-B, in the trace packet for the egress router630. The objective is for the trace packet to reach the egress router630for a response. The purpose of this Signature-B is for the tunnel server520to detect it and provide a flag in the response. If the client gets a response to this trace packet with the flag therein, the client knows the trace went on the wrong network path, i.e., through the tunnel610to the tunnel server520.
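The Signature-B exchange just described might look like the following sketch; the dict fields stand in for whatever encoding the trace packets actually use, and the message strings are illustrative.

```python
def tunnel_server_handle(probe: dict) -> dict:
    # Tunnel server: a probe carrying Signature-B should never have entered
    # the tunnel; flag it in the response so the client can tell.
    return {"tunnel_flag": probe.get("signature") == "B"}

def check_leg1_response(response: dict) -> str:
    # Client: a flagged response means the Leg-1 probe toward the egress
    # router was wrongly pulled into the opaque tunnel.
    if response.get("tunnel_flag"):
        return "misconfigured: tunnel client must bypass egress-router traffic"
    return "leg-1 path valid"
```

In the valid case the probe reaches the egress router, whose response carries no flag, so the client keeps the Leg-1 results; a flagged response instead signals the case-1 misrouting described above.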
When this is detected, IT must reconfigure the tunnel client510to bypass the tunnel610for packets destined to the egress router630. Of note, the use of the terms Signature-A and Signature-B is solely meant to differentiate these as different signatures for different purposes. As described herein, the present disclosure includes various traces of different legs of a service path, such as using MTR, and having the client (or another device) aggregate the results. Of note, while the illustrated example embodiments describe the traces in order, those skilled in the art will appreciate any order is contemplated. For example, in some embodiments, the traces of Leg 1 are performed first, then Leg 2, etc. In other embodiments, the traces of Leg 2 are performed first, etc. Finally, the traces may be performed concurrently or at about the same time. In an embodiment, the tunnel client510can be a tunnel originating from the application350and the egress router630can represent the public facing side of the network from where location tunnels (GRE/IPSEC) will originate. Most cases will have the user device300on a private IP talking to the outside world via a router or a Wi-Fi Access Point (AP) that is connected to an egress router630that has a public IP. The case of a tunnel client510having a public IP is rare and could happen when there is a device on a cellular network. From the point of view of the enforcement node150, it always traces the Leg 2 path from itself to the public IP the client comes out with. It does not care if it is an egress router or a tunnel-client end point that is on the public IP. § 8.3 Example Operation FIG.15is a flow diagram illustrating actions between the client (user device300), the tunnel client510, the egress router630, the tunnel server520, and the destination640in an example operation of the processes650,660. Note, the processes650,660can be orchestrated by the user device300(client) via the application350.
The client sends a trace packet to the destination with the Signature-A as described in the process650. If the response comes back with no tunnel info, then the full and accurate service path has been traced and the trace is complete. If there is tunnel info, the client knows there is the tunnel610and moves to the process660. In order to collect a full network path, first the client needs to detect if there is a tunnel on the path. Again, this is achieved by the client inserting a signature in a packet. The packet is intercepted by the tunnel server520and it will respond with tunnel information like type, IP, etc. Once the client notices the tunnel on the path, it will run the multi-segment approach in the process660to detect the full service path. Next, the client fetches the egress IP using the restful API. The client assumes three network segments—Leg-1: Client to Egress, Leg-2: Egress to Tunnel Server, and Leg-3: Tunnel Server to Destination. The client performs the trace of the Leg-3 either directly or by requesting the tunnel server to perform it and collect information. The client performs the trace of Leg-2 by requesting the tunnel server perform the reverse trace. The client also sends a trace packet to the egress router630with the Signature-B. If there is no tunnel flag in the response, the client has the full and accurate Leg-1 information. If there is the tunnel flag in the response, a misconfiguration is presented to the user. Finally, the client aggregates all three legs and consumes the data. The tunnel server520can host a tracing service that will accept tracing requests from clients such as via a restful API call, an HTTP Post call, etc. This service will perform standard network tracing, collect the data and respond to clients. The resultant data can be displayed and used in different ways.
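The final aggregation step — combining the three legs into a holistic view, after subtracting the separately measured Leg-1/Leg-2 hops from any full-path trace sent without the signature — could be sketched as below. The field names and the rule for combining per-leg packet loss are illustrative assumptions, not the patent's actual data model.

```python
from math import prod

def strip_known_prefix(full_trace, known_hops):
    # A trace sent without the signature traverses Legs 1 and 2 as well;
    # remove those separately measured hops to isolate the remaining leg.
    if full_trace[:len(known_hops)] != known_hops:
        raise ValueError("measured legs do not prefix the full trace")
    return full_trace[len(known_hops):]

def aggregate_legs(legs):
    """Combine per-leg results into one holistic view of the service path.

    Each leg is a dict with "hops" (ordered IPs), "latency_ms", and
    "loss_pct"; end-to-end loss multiplies per-leg delivery rates.
    """
    return {
        "hops": [h for leg in legs for h in leg["hops"]],
        "latency_ms": sum(leg["latency_ms"] for leg in legs),
        "loss_pct": round(
            100 * (1 - prod(1 - leg["loss_pct"] / 100 for leg in legs)), 2),
    }
```

Latencies add along the path, while loss does not: two legs each dropping half their packets deliver only a quarter end to end, which is why delivery rates are multiplied rather than loss percentages summed.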
§ 9.0 Detection of Network Hops and Latency Through an Opaque Tunnel and Detection of Misconfiguration of Tunnels FIG.16is a flowchart of a process670for detection of network hops and latency through an opaque tunnel and detection of misconfiguration of tunnels. The process670is described with reference to the user device300, i.e., the client. The process670can be implemented as a method that includes steps, via the user device300configured to execute the steps, and via a non-transitory computer-readable medium that includes instructions that cause one or more processors to implement the steps. The process670includes requesting a trace to a destination with a signature inserted into a trace packet (step671); receiving a response to the trace packet (step672); when the response does not include tunnel info, providing details in the response to a service where the details include parameters associated with a service path between the client and the destination (step673); and when the response includes tunnel info, segmenting the service path into a plurality of legs, causing a trace for each of the plurality of legs, and aggregating details for each of the plurality of legs based on the causing (step674). When the response includes tunnel info, a tunnel server is configured to intercept the trace packet responsive to detection of the signature, and wherein the tunnel server responds to the trace packet with the response with the tunnel info. The aggregating details includes aggregating network hops, packet drops, and latency for each of the plurality of legs. The plurality of legs can include three legs. In an embodiment, a first leg is between the client and a tunnel client, a second leg is between the tunnel client and a tunnel server, and a third leg is between the tunnel server and the destination.
In another embodiment, a first leg is between the client and an egress router, a second leg is between the egress router and a tunnel server, and a third leg is between the tunnel server and the destination. The causing the trace for the plurality of legs can further include including a second signature in a second trace packet to an egress router, and the process670can further include receiving a response from the second trace packet; when the response does not include a flag, utilizing details from the response for a leg between the client and the egress router; and when the response includes the flag, determining a misconfiguration where the second trace packet was sent over a tunnel. At least one of the plurality of legs can include a reverse trace from a tunnel server. The tunnel info can include a type of tunnel including any of Generic Routing Encapsulation (GRE) and Internet Protocol (IP) Security (IPsec). The process670helps detect the network hops, packet drops, and their latencies through tunnels like the GRE/IPsec or any other overlay tunnel. A typical network analyzer will not be able to find the hops, packet drops and their latency through individual routers that constitute the overlay tunnel as the probe traffic is encapsulated through the tunnel and the whole tunnel looks like a single hop. The process670enables a trace of the hops through the tunnel thus giving an insight into the hops inside the tunnel. The tracing of the path is done by initiating the probes from the other side of the tunnel without encapsulating the packet, i.e., from the destination640towards the client, which is referred to as "Reverse Traceroute" as described herein. This also helps detect if the overlay tunnels are correctly configured so that traffic bound to the internal network is not pulled into the tunnel.
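The top-level branching of the process670 (steps 673-674) reduces to a small decision function: one signed probe, then either report the single-leg details directly or fall back to the multi-leg trace. In this sketch, `trace_fn` and `segment_fn` are hypothetical stand-ins for the real probing and multi-leg machinery.

```python
def run_process_670(trace_fn, segment_fn, destination):
    """Client-side flow sketch: send a signed trace; if a tunnel server
    answered with tunnel info, segment the path and trace each leg;
    otherwise the single-leg details are already complete."""
    response = trace_fn(destination)
    if "tunnel_info" in response:
        return segment_fn(destination, response["tunnel_info"])
    return {"legs": 1, "details": response}
```

Only the presence of tunnel info in the response drives the branch, which mirrors how the client otherwise has no visibility into whether an opaque tunnel sits on the path.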
§ 10.0 Detection of Latency, Packet Drops, and Network Hops Through a TCP Tunnel Using ICMP and UDP Probes In another embodiment, the tunnel can include a TCP connection, i.e., a TCP-based tunnel or an exclusive TCP overlay tunnel. The present disclosure can trace this path to detect statistics such as hops, packet drops, and latency through the exclusive TCP overlay tunnel using ICMP and UDP traffic. This approach leverages the approach in the process670to find the hops through the tunnel using a protocol other than TCP for which the tunnel was built. This approach uses the routing in the opposite direction as the enforcement of the TCP check made at the end of the tunnel that the client owns. The destination640sends probes from its side of the tunnel without using any tunnel encapsulation towards the client's egress router's IP. Advantageously, this approach avoids using TCP-PINGs (use of TCP SYNs) from the client side towards the destination to avoid cases where firewall rules would flag issues thinking of it as an attack. FIG.17is a flowchart of a process680for detection of latency, packet drops, and network hops through a TCP tunnel using ICMP and UDP probes. The process680is described with reference to the destination640. The process680can be implemented as a method that includes steps, via the server200configured to execute the steps, and via a non-transitory computer-readable medium that includes instructions that cause one or more processors to implement the steps. 
The process680includes receiving a request from a client to perform a reverse trace (step681); requesting a trace to an endpoint that is one of an egress router and a tunnel client, wherein there is a tunnel between i) the destination and ii) the one of the egress router and the tunnel client (step682); receiving a response to the trace (step683); and sending details associated with the response to the client so that the client aggregates these details with details from one or more additional legs to provide an overall view of a service path between the client and the destination (step684). The process680can further include receiving a trace packet from the client with a signature included therein, wherein the signature is indicative of a request for tunnel info; and, responsive to detection of the signature, sending the tunnel info to the client in a response. The process680can further include receiving a trace packet from the client with a signature included therein, wherein the signature is indicative of a misconfiguration of a tunnel; and, responsive to detection of the signature, sending a flag to the client in a response indicative of the misconfiguration. The destination can be one of a tunnel server and a node in a cloud-based system. The tunnel can utilize Transmission Control Protocol (TCP) and the trace to the endpoint utilizes a packet without tunnel encapsulation. The packet can utilize one of Internet Control Message Protocol (ICMP) and User Datagram Protocol (UDP). The request can be via a RESTful (Representational State Transfer) Application Programming Interface (API) call from the client. § 11.0 Detection of Latency, Packet Drops, and Network Hops Through a Tunnel by Tracing Hops Therein As described above, the tunnel610is an opaque overlay making it difficult to trace. The aforementioned approaches contemplate a reverse trace via unencapsulated packets. 
In an embodiment, the tunnel itself may be configured to perform the trace, such as via the cloud tunnel500. There are two techniques the tunnel500can use to perform the trace inside the tunnel. In a first approach, the tunnel500can be configured to identify probe traffic based on a predefined signature and inherits the IP TTL value of the probe packet. Note, as described herein, probe or probe traffic means trace packets. As the packet makes its way through the tunnel the packet's TTL would expire triggering an ICMP “Time Exceeded” error. This error is propagated by the tunnel to the probe initiator (such as the client) spoofing the IP address of the router that generated the error. In a second approach, the tunnel500itself can initiate trace probes towards the other end of the tunnel500by increasing the TTL in the packets by one at a time. By tracing the path to the other end of the tunnel500, the exact number of hops, packet drops, and latency inside the tunnel500is determined. This information can be provided to any of the clients/applications via an API so that they know the measure of these stats that can be combined with the other trace stats to get a complete picture of the path the packet traverses. This measurement can be initiated from both sides of the tunnel500to gauge any changes in routing due to asymmetric routing. FIG.18is a flowchart of a process690for detection of latency, packet drops, and network hops through a tunnel by tracing hops therein. The process690is described with reference to a node associated with the tunnel500, i.e., either the tunnel client510, the tunnel server520, or the egress router630. The process690can be implemented as a method that includes steps, via a processing device configured to execute the steps, and via a non-transitory computer-readable medium that includes instructions that cause one or more processors to implement the steps. 
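Because the second approach traces from both ends of the tunnel, the two hop lists can be compared to gauge routing changes: a symmetric path should read the same in reverse. A toy comparison (router names invented) of the kind of check the measurement enables:

```python
def routing_asymmetric(forward_hops, reverse_hops):
    # A symmetric tunnel path traverses the same routers in opposite order;
    # any difference indicates asymmetric routing inside the tunnel.
    return forward_hops != list(reversed(reverse_hops))
```

In practice each hop entry would also carry latency and loss figures from the TTL-limited probes, but the asymmetry question itself is just this ordered comparison.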
The process690includes receiving a request for a trace of the tunnel from a client (step691); causing the trace inside the tunnel (step692); obtaining results of the trace inside the tunnel (step693); and sending the results of the trace inside the tunnel to the client so that the client aggregates these details with details from one or more additional legs to provide an overall view of a service path between the client and a destination (step694). The trace inside the tunnel can include identifying a packet with a predefined signature, analyzing a Time-to-Live (TTL) value in the packet, and sending a response to a probe initiator based on the TTL value. The response can include an Internet Protocol (IP) address that was spoofed based on a router where the TTL value expired. The trace inside the tunnel can include sending trace packets to another end of the tunnel each having increasing Time-to-Live (TTL) values. The trace packets can be sent from both ends of the tunnel to determine any changes in routing between directions. The tunnel can include a data channel and a control channel each having different encryption. The encryption can be any of Transport Layer Security (TLS), Secure Sockets Layer (SSL), and Datagram Transport Layer Security (DTLS). § 12.0 Metric Computation for Trace Probes Using Cached Data to Prevent a Surge on Destination Servers FIG.19is a network diagram illustrating a user102connected to an enforcement node150in a digital experience monitoring application. In a practical embodiment, the cloud-based system100with the nodes150as proxies can be used to perform digital experience monitoring as described herein. In such a system, there can be a lot of probes. To prevent a surge of traffic to the destination640, the present disclosure includes a cache approach where trace results are cached on the proxy for a finite configurable time.
For that time interval, all subsequent probe requests are served out of the cache rather than sending a new set of probes per request. While one request is pending on a destination640, any probe that arrives for the same destination can be held in a queue and responded from the cache when the response for the first probe arrives and is cached. Specifically, if a lot of user devices300with the applications350are independently probing the destination640there is a risk of throttling of the probes at the destination640and the hops as well as blacklisting IP addresses of the tunnel server520or nodes150used to probe the destination640. The enforcement node150is configured to probe the destination640, i.e., the leg 3, on behalf of requesting clients. The enforcement node150is also configured to probe the tunnel500,610as described herein, i.e., leg 2, in a reverse trace. The present disclosure contemplates the enforcement node150caching results from these two legs and serving subsequent requests from the cache for a predetermined amount of time. Each cache entry can include all hop IP addresses from the enforcement node150to the destination640and from the enforcement node150to the egress router630, packet loss, and latency for each probe sent. Note, some clients can share both legs 2 and 3 whereas some clients may have a different leg 2 or 3. Those skilled in the art will recognize either or both can be served out of cache as required. FIG.20is a flow diagram illustrating actions between the client (user device300), the application350, the egress router630, the enforcement node150, and the destination640in an example operation of the processes650,660, along with caching of trace results at the enforcement node150. In this example, the application350is the tunnel client510whereas the enforcement node150is the tunnel server520. The flow includes client configuration via the application350including the cloud tunnel500. 
The application350can send an ICMP traceroute to the destination640IP address with the Signature-A in the ICMP payload. The enforcement node150is configured to terminate the ICMP traceroute and send an ICMP response by faking the destination IP as the source along with tunnel info in the ICMP payload. Once the application350is aware of the tunnel, the application350can send a trace API request, create an SSL connection with the enforcement node150and send a POST request to the tunnel service at the enforcement node150with details in a JavaScript Object Notation (JSON) body. The application350can send a restful MTR request to the enforcement node150which includes the destination address and port in case of TCP/UDP MTR. It should also include the MTR type: TCP, UDP or ICMP. The various signatures can be via a Type-Length-Value (TLV) in the ICMP request and reply. The enforcement node150is configured to perform the reverse trace of Leg 2 and the trace of Leg 3. The enforcement node150maintains the results of these two Legs 2, 3 in a cache for a predetermined amount of time, e.g., one minute or some other configurable value. If the results are not in the cache, the enforcement node150performs the trace, e.g., using MTR. The enforcement node150can combine the results which include latency, packet loss, and hop information and send this via a trace POST API response to the application350. The application350performs an ICMP traceroute to the enforcement node150outside of the tunnel500. The application350can determine or compute the Leg 1 results based on subtracting the Leg 2 results from the results of this ICMP traceroute to the enforcement node150outside of the tunnel500. Of course, other types of traces can be used. FIG.21is a flowchart of a process700for metric computation for trace probes using cached data to prevent a surge on destination servers. The process700is described with reference to one of the enforcement nodes150associated with the cloud-based system100.
The process700can be implemented as a method that includes steps, via the enforcement node150configured to execute the steps, and via a non-transitory computer-readable medium that includes instructions that cause one or more processors to implement the steps. The process700includes receiving a request, from a client, for one or more of a first trace of a tunnel and a second trace to a destination (step701); checking a cache at the node for results from previous traces of the first trace and the second trace (step702); responsive to the results not being in the cache, performing one or more of the first trace and the second trace (step703); and providing the results to the client so that the client aggregates the results with details from one or more additional legs to provide an overall view of a service path between the client and the destination (step704). The process700can further include, subsequent to the performing, storing corresponding results in the cache. The process700can further include, subsequent to a predetermined time period, removing the results from the cache. The process700can further include receiving a trace packet from the client outside of the tunnel; and providing a response to the trace packet, wherein the client utilizes details in the response in addition to the first trace and the second trace to determine details of the service path. The process700can further include receiving a trace packet to the destination from the client with a signature therein; and terminating the trace packet and responding thereto with the destination's address and with details about the tunnel. The client can connect to the destination through at least three legs. The providing can include at least one of the first trace and the second trace from the cache and the other from the performing. 
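A minimal sketch of the node-side cache in the process700; the default window and the injected clock are illustrative choices to keep the sketch self-contained and testable, and a production version would also hold concurrent requests for the same destination in a queue while the first probe is in flight.

```python
import time

class TraceCache:
    """Cache trace results per destination for a finite window so a surge of
    client probes is answered from cache instead of re-probing the destination."""

    def __init__(self, ttl_seconds=60.0, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock
        self.entries = {}  # destination -> (stored_at, results)

    def get_or_probe(self, destination, probe_fn):
        entry = self.entries.get(destination)
        if entry and self.clock() - entry[0] < self.ttl:
            return entry[1]                 # fresh entry: no new probes sent
        results = probe_fn(destination)     # miss or expired: perform the trace
        self.entries[destination] = (self.clock(), results)
        return results
```

Repeated requests inside the window cost nothing at the destination, which is what prevents throttling of probes and blacklisting of the node's IP address when many clients probe the same destination independently.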
§ 13.0 TCP Traceroute Using RST and SYN-ACK to Determine Destination Reachability Referring back toFIG.11, for a description of a TCP traceroute from the client (user device300) to the destination (node150), the client creates a series of packets with increasing TTL values. TTL values are decremented for each hop. When each packet is received at the routers602A-602D with the TTL value of 0, the packet is discarded, and a response is sent back to the client (“TTL Time Exceeded”). The response includes information regarding its location and indicating data transfer times. Finally, the client knows that the destination has been reached (and stops sending packets) when it receives a different message from a hop, saying that the port intended is unreachable (“Destination/Port unreachable”). In order to use TCP for tracing the path to the destination, one cannot use standard TCP stream sockets as internally TCP always retransmits packets, and, as a result, one cannot estimate the packet loss and latency sitting at the application layer. To avoid this, traceroute (aka TR) applications use raw sockets where TCP packets are framed in the application and directly injected into the network bypassing the TCP stack. Current TCP traceroute applications/tools cannot determine if the destination has been reached as they have no ability to read the response sent by the destination. In an embodiment, the present disclosure includes determining the reachability of the destination by peeking into the response packets for a SYN-ACK or an RST sent by the destination. A reception of the SYN-ACK or RST from the destination will indicate the availability of the destination. This ability to peek into the TCP stack for a response is unique and gives the ability to use TCP as a technique to determine reachability. ICMP and UDP TR implementations detect the destination reachability by looking at “ICMP ECHO” response and “UDP port unreachable” errors, respectively. 
This is relatively straightforward as the responses from the intermediate hops and the destination are at the ICMP layer which the applications can snoop and process. TCP poses a unique challenge in that the final destination responds with either an RST or a SYN-ACK when the TCP SYN hits the destination stack. These responses generated by the destination are not ICMP responses but instead are standard TCP responses that the local TCP stacks on the originator of the request consume. So while the request packet was injected by a raw socket, the TCP RST or the SYN-ACK would land on the TCP stack, and as there is no corresponding TCP socket, the response from the destination is silently dropped, believing it is a stray. As a result of this, TCP traceroute applications will not be able to detect the responses from the destination, rendering the utility of little use as the path is always incomplete with no destination ever discovered. To address the lack of reachability detection of the destination, the present disclosure includes a modification to the TCP stack to recognize TCP traceroute traffic and divert the RST/SYN-ACK response to appropriate “raw sockets” so that the TR application can determine the reachability to the destination. This way, the TCP TR can draw the complete path with all the intermediate hops and the final destination, giving the administrator a full picture of the path taken by a packet from the source to the destination. Also, the raw RST packet can be sent to the destination as well after the SYN-ACK is received by a TR application so that the connection can be closed in time rather than waiting for a timeout. As described herein, a TR or traceroute application is software executed on a processing device such as the server200or the user device300for implementing a traceroute, such as using TCP traceroute. Also, the TCP checksum, sequence, and ACK in the RST packet are handled by the TR application itself.
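The dispatch decision described above (recognize a SYN-ACK or RST belonging to a traceroute and divert it to the raw socket instead of dropping it as a stray) can be illustrated with a small classifier. This is a sketch; the packet dictionary shape and the function names are assumptions for illustration, not the actual stack modification.

```python
def classify_trace_response(packet):
    """Classify a response seen while running a TCP traceroute.
    Returns one of: 'intermediate_hop', 'destination_reached', 'unrelated'."""
    if packet.get("proto") == "icmp" and packet.get("icmp_type") == "ttl_time_exceeded":
        return "intermediate_hop"          # a router on the path decremented TTL to 0
    if packet.get("proto") == "tcp":
        flags = set(packet.get("flags", ()))
        # A SYN-ACK or RST from the probed address means the SYN reached the
        # destination's TCP stack: the destination is reachable.
        if {"SYN", "ACK"} <= flags or "RST" in flags:
            return "destination_reached"
    return "unrelated"

def divert_to_raw_socket(packet, raw_socket_queue):
    """Model of the TCP-stack modification: instead of silently dropping a
    stray SYN-ACK/RST, hand it to the traceroute application's raw socket."""
    verdict = classify_trace_response(packet)
    if verdict == "destination_reached":
        raw_socket_queue.append(packet)
    return verdict
```

In the real stack the diversion happens in kernel space on a per-packet basis; the queue here only stands in for the raw socket the TR application reads.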
The source port in the SYN packet is allocated by the TCP stack from the port pool based on destination IP and port to avoid collision with real user traffic. FIG.22is a flowchart of a process710for TCP traceroute using RST and SYN-ACK to determine destination reachability. The process710is described with reference to one of the user device300with the application350and the enforcement nodes150associated with the cloud-based system100. The process710can be implemented as a method that includes steps, via a processing device configured to execute the steps, and via a non-transitory computer-readable medium that includes instructions that cause one or more processors to implement the steps. The process710can be implemented via a trace application implementing a TCP stack in the processing device. The process710includes sending a plurality of TCP packets via a raw socket to perform a trace to a destination (step711); receiving responses to the plurality of TCP packets (step712); detecting the responses in the TCP stack and diverting the responses to the raw socket (step713); and aggregating the responses by the trace application to determine details of a service path from the processing device to the destination (step714). The plurality of TCP packets can include TCP Synchronize (SYN) messages, and the responses include TCP SYN-Acknowledgement (ACK) or Reset (RST) messages. The process710can further include receiving a TCP SYN-ACK message from the destination; and sending a TCP RST packet to the destination. A TCP checksum, sequence, and ACK in the TCP RST packet can be implemented by the trace application. The raw socket can be used in lieu of a TCP socket. A port for the raw socket can be allocated by the TCP stack from a pool of ports based on the destination.
§ 14.0 Adaptive Probing to Discover a Protocol for Network Tracing Traceroute implementations conventionally use just one protocol to trace the path from the source to the destination along with the hops, latency, and packet loss stats. In an embodiment, the present disclosure includes a combination of ICMP, UDP, and TCP to get more accurate measurements of hops, packet loss, and latency from source to destination. As each network entity tends to respond to a particular protocol more favorably, the present disclosure uses the protocol that would have the highest probability of getting a response. Results from using different protocols are aggregated and displayed as one. A problem with traceroute is that it relies on hosts responding with ICMP errors for TTL expiry which is unreliable due to routers either disabling this or rate limiting. Note, routers that run BGP respond to TCP port179while blocking ICMP. The following utilizes the example ofFIG.19with the three legs, namely Leg 1, Leg 2, and Leg 3. In an embodiment, a single protocol—ICMP/UDP/TCP—is used to probe all three legs. Using ICMP/UDP for Leg 3 is not advisable as the probes are primarily to check the availability of a destination640that is a Web app which is running on TCP ports80/443. For example, a particular Web app can be 100% available but show a path to the destination that is broken, with the reason being that ICMP and UDP probes are blocked by the destination640. The present disclosure includes a dynamic probe that tries a combination of protocol types to get an estimate of packet loss and the latency to the egress/destination. Determining the intermediate hops and their latency/packet loss is a matter of luck irrespective of the protocol used as the TTL expiry is a Layer 3 property handled by routers.
For practical purposes, the choice of protocol is significant inside a customer network due to Access Control List (ACL)/Firewall (FW) rules while less significant on the internet although some routers prioritize TCP traffic over the rest. The choice of protocol is the most significant when the end host receives it as the response to the probe is completely dependent on the rules configured on that host, and these vary widely from host to host. Most destinations640will only respond to TCP ports80/443. The egress routers630will respond to ICMP-ECHO at times and could either respond with a SYN-ACK or RST when a TCP probe is sent to port179/80/443. There are only two entities that are guaranteed to respond and metrics to these can be trusted, and the rest are best effort. The two entities include the destination640responding to a TCP SYN on port443(assuming Web apps), and the node150responding to a PING or TCP SYN. In an embodiment, the destination640is a SaaS endpoint running Web applications. With a TCP SYN to port443on the destination640, the destination640is bound to respond with a true measure of reachability, latency, and packet loss. Assume that this will be the IP of the load balancer fronting a server farm for the destination640, but that is as far as the service path can be reached. It is also possible to close the connection to the server with an RST/FIN to free up any resource on the destination640. Packet loss and latency to the destination640are determined by the response to the TCP SYN. One optimization to find the latency and packet loss could be to harvest the data for the domain from the web probes. But it is still necessary to send the TCP traceroute probes to determine the number of hops to the destination640. FIG.23is a network diagram with an excerpt of the network diagram ofFIG.19illustrating Legs 2 and 3 for adaptive probing.
In an embodiment, the egress router630is probed from two sides—from the application350and from the enforcement node150. The approach is to first find a protocol the egress router630will respond to by sending a set of probes directly to the egress router630by setting a large TTL and then employing the regular MTR logic to trace the hops in between. This way it is known that there is a point at which the probes will get a response. To give an example, start with ICMP-ECHO to the egress router630IP with TTL=64; if there is no response, then switch to TCP-SYN probes to ports179(Border Gateway Protocol (BGP)), 80,443. Either an RST or a SYN-ACK will give the latency and the packet loss. § 14.1 Detecting Packet Loss Between the Application and the Egress Router There are two parameters to check here—packet loss and latency. In an embodiment, once the egress router630IP address is determined, ICMP/UDP probes are sent towards the egress IP with the hope that it responds. The issue with this is that if the egress router630is configured to drop ICMP/UDP probes then it will show as unreachable. With respect to packet loss detection, as the handling of the ICMP responses to TTL expiry is done in software and rate limited, the lack of an ICMP error response is not a measure of the packet loss at that hop. Also, the egress routers on the customer network might have ICMP turned off or rate limited. But if the packets are being forwarded by the egress router630then that is a good measure of its ability to handle load; also, routers are rated based on their ability to forward packets, which is mostly done in hardware. The following describes techniques to gauge packet loss when the egress router630is configured to drop or rate limit packets. In a first step, the approach tries to reach the egress router630by using ICMP followed by UDP and TCP and checks for packet loss.
This does not need to be a configured number of probes, e.g., it can be three probes to see if the egress router630responds. Based on the response to a protocol, this is stored for future reference. For example, send three ICMP probes and wait for a response. If they all fail, then send three UDP probes, and if they all fail, then send three TCP probes. In a second step, if the result of the first step is not 0% packet loss or an acceptable %, the second step includes trying to reach beyond the egress router630to get a response. The intent is to exercise the packet forwarding path of the egress router630versus the software handling of the packets. If the packets could be forwarded successfully, then it is implied that there is no loss. A safe reference point can be the enforcement node150as the IP address. There are two possibilities—approach 1—use the tunnel500,610, or approach 2—outside the tunnel500,610. In a third step, when the results of the first step and the second step are not acceptable, pick a last router in the customer's network with a private IP that is responding. The egress router630is the first public IP address that is encountered. For the last router, looking at the routing of packets, it is the egress router630with one leg in the private network and the other in the public that will move the packet out of the customer premise. There could be an independent Network Address Translation (NAT) device before the egress router630for NAT'ing the IP but even reaching that could be a fair approximation of the loss. The above steps are performed by the application350and it can maintain a cache with the approach and the results that may be refreshed periodically, when a network change occurs, and/or when the results are not good.
As TCP-SYN seems to be the best bet given the rate limiting logic for ICMP on most devices, it is possible to trip a firewall that might see too many SYNs going out; caching seems the best way to avoid raising a false alarm on the firewalls and requiring changes on the firewall to let the probes out. § 14.2 Detecting Packet Loss Between the Enforcement Node and the Egress Router Note that a majority of the IT administrators disable their egress routers630from responding to any form of traffic destined to their IP on the Internet-facing side. Based on experimentation, with 7000 egress router IP addresses, only 39% responded. In a first approach, the packet loss can be measured outside of the tunnel500,610. Here, the application350can send a configured number of probes (e.g., ICMP, TCP) to the enforcement node150, e.g., 11 TCP-SYN probes with TTL=64. That is, in this first approach, the assumption is packet loss between the enforcement node150and the egress router630is the same as the packet loss between the user device300and the enforcement node150. If the packet loss is zero or acceptable, this is a safe assumption. In a second approach, the enforcement node150can try to direct a trace to the egress router630. This second approach can be performed if the packet loss from the first approach is not acceptable. In an embodiment, this can include sending a set number of ICMP probes destined to the egress router IP. If the response is obtained, then ICMP works and other probes can be sent to the egress router630to measure latency and packet loss. If the ICMP probes fail, then TCP SYN probes can be sent to port179/80/443hoping to get a SYN-ACK or RST. Otherwise, UDP probes are sent to the trace ports. Any result can be one or a combination of the first approach and the second approach. § 14.3 Detecting Latency From Application and Node to the Egress Router If the egress router630responds, then the latency is known.
The problem is when the egress router630does not respond and there is still a need to estimate the latency. When switching between the ICMP, the TCP, and the UDP probes to judge the latency to the egress, if the egress router630does not respond, the following is performed to infer the latency. With reference toFIG.23, it is possible to determine the latency from the application350to the node150as the node150's IP responds to pings and TCP SYN. The latency from the application350to the egress router630is called ‘A’ and the latency from the enforcement node150to the egress router630is called ‘B.’ If either A or B can be measured, the other one can be derived and, as long as it is a positive value, it can be used as a fair estimate. That is C≅A+B, C being the latency from the client to the enforcement node150. In the worst case, if the egress router630was not reachable from either side, then take ‘A’ as the time it takes for the application350to reach the farthest router (private IP) on the Intranet. If needed, it is possible to take the time the first public IP took to respond and the time it took to reach the farthest router on the Intranet and average their times. The reverse trace can be avoided when there is no opaque tunnel present. Here, the application350can trace the path from itself to the enforcement node150using ICMP or TCP pings. Due to the absence of the opaque tunnel, the traceroute probes from the application350will be able to trace its path to the enforcement node150. For the purpose of calculating the latency when the application350is not able to reach the egress router630, it is possible to have the enforcement node150PING/TCP-PING the egress router630to get the latency. The enforcement node150does not have to do the trace but just needs to get the Round Trip Time (RTT) to the egress router630so that it is possible to compute A=C−B.
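The inference above (C ≅ A + B, so a missing leg can be derived from the other two) can be written out as a small helper. This is a sketch; the function name is an assumption, and the rejection of non-positive results follows the "positive value" caveat in the text.

```python
def infer_leg_latency(a=None, b=None, c=None):
    """Given C ≅ A + B (A: client to egress, B: node to egress,
    C: client to node), derive whichever single leg is missing.
    Raises ValueError if fewer than two legs are known or if a derived
    value is negative (then it is not a fair estimate)."""
    known = sum(v is not None for v in (a, b, c))
    if known < 2:
        raise ValueError("need at least two of A, B, C")
    if c is None:
        c = a + b          # C derived from both measured legs
    elif a is None:
        a = c - b          # A = C - B, the case computed by the node150 RTT
    elif b is None:
        b = c - a
    if min(a, b, c) < 0:
        raise ValueError("negative derived latency; estimate unusable")
    return a, b, c
```

For example, with C measured at 50 ms and B at 30 ms, A is inferred as 20 ms.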
§ 14.4 Comparing ICMP and TCP PING Data It was evaluated whether ICMP and TCP probes take different paths on the Internet. It was determined that TCP and ICMP packets are routed along the same path on the Internet when we consider the network as an Autonomous System (AS). This was based on a 122 k set of hops and it was found that PING and TCP probes took the same path and never deviated even once when looking at it from an ASN angle. § 14.5 Adaptive Probe Process FIG.24is a flowchart of an adaptive probe process720for trace probes. The process720is described with reference to one of the user device300with the application350and the enforcement nodes150associated with the cloud-based system100. The process720can be implemented as a method that includes steps, via a processing device configured to execute the steps, and via a non-transitory computer-readable medium that includes instructions that cause one or more processors to implement the steps. The process720includes, for one or more legs of the plurality of legs, sending a number of probes using one of a plurality of protocols (step721); responsive to receiving a response from the number of probes, determining the one of the plurality of protocols is successful and storing this protocol for the one or more legs (step722); and, responsive to failure to receive the response, sending a number of probes using another one of the plurality of protocols and continuing until a successful protocol is determined or all of the plurality of protocols fail (step723). The plurality of protocols can include Internet Control Message Protocol (ICMP), Transmission Control Protocol (TCP), and User Datagram Protocol (UDP). The plurality of legs can include a first leg, a second leg, and a third leg. The third leg can be to a destination that includes a Web application, and wherein a protocol for the third leg includes Transmission Control Protocol (TCP).
At least one of the first leg, the second leg, and the third leg can include a different protocol used thereon. Packet loss and/or latency between the first leg and the second leg can be determined based on a single trace therebetween. The process720can further include aggregating results for all of the plurality of legs, wherein at least two of the plurality of legs used a different protocol from one another. § 15.0 Accurate Differential Trace Latency Calculation Between Hops Again, trace is a diagnostic command to find the routes (paths) and measure the latency to each hop. In a trace, each node-to-node connection is called a hop and the latency is the round trip from the user's machine to the destination. The conventional traceroute has limitations that it might not be complete, and the results are not accurate for the final hop as the final hop does not provide the processing delay. The traceroute results might not be complete as the final destination might not respond to the probe. The conventional traceroute does not provide the latency between the hops. Routers typically have a very fast forward path as this is done in the hardware, but some routers take significant time to respond to TTL expired messages as they do this through software. In an embodiment, trace enhancements are provided that provide accurate calculations when the traffic goes through the enforcement node150as well as provide the latency between hops. When a customer uses the cloud-based system100, the traffic from the user device300is sent through the enforcement nodes150. The trace is used to provide the latency from the user device300to the egress router630as well as to the enforcement node150. If a site is bypassed in the cloud-based system100, the trace measures the latency from the user device300to the site. The edge connector150A can be configured to combine this trace information with the information from the enforcement node150and provide the measurements to the user.
The enforcement node150provides the trace measures from enforcement node150to the destination640. Both the enforcement nodes150and the edge connector150A can support ICMP, TCP, and UDP protocols for traceroute. When traffic is going through the enforcement node150, the edge connector150A can perform the trace using the enforcement node150's IP address. The enforcement node150is configured to always respond to the trace probe from the edge connector150A. This solves the incompleteness problem of the conventional traceroute, where some destinations might not respond to the probe. If the destination640is bypassed in the cloud-based system100, the edge connector150A does tracing to the destination640for a best-effort latency measurement, as the final destination does not provide the processing delay. If the final destination did not respond, it provides the information for all other hops. When the enforcement node150receives this probe, it responds back providing the packet processing delay in the data payload. This provides accurate absolute latency to the enforcement node150. § 15.1 Latency Between Hops The edge connector150A sends a configured number of packets to hops starting with TTL 1 to the maximum configured TTL to the enforcement node150. The hops, which are configured to respond, send the response and the edge connector150A measures the round-trip latency for the packet to these hops. The edge connector150A uses the results from all the routers602as well as the enforcement node150to calculate the latency difference between hops. The edge connector150A uses the average latency for a hop to compute adjusted averages, and the difference is computed between adjusted averages.
§ 15.2 Average Latency FIG.25is a network diagram of a network for illustrating an average latency calculation. This section describes how the average latency is calculated. In this example, there is the user device300connected to the destination640via four intermediate routers602-1to602-4.FIG.26is a diagram of the network ofFIG.25illustrating an operation. When a router/destination does not respond to an ICMP/UDP/TCP traceroute probe, the value is recorded as −1. The average (AVG) is the sum of all positive values divided by the positive value count. If the hop is not responding, its average latency is set to 0. The following describes how the average phase is adjusted. The average latency for each hop is copied to the adjusted average. The end is the last hop and the start is the first hop. Step S1: Set index=end where end is the last value. Step S2: Set current to end −1. Step S3: If current==start −1, Go to step S9. Step S4: If the hop at the current is not responding, set current=current −1. Go to Step S3. Step S5: If the average latency of the current is more than the adjusted average of the index, then set the adjusted average of the current to the adjusted average of the index. If the average latency for the current is lesser than or equal to the adjusted average of the index, then do not change. Step S6: Set index=current. Step S7: Current=current −1. Step S8: Go to step S3. Step S9: Exit. FIGS.27-30illustrate an example operation of the average latency adjustment. § 15.3 Differential Average Latency If there is only one hop, the edge connector150A can set the differential average to its average. The following describes a differential phase computation. Step S11: Set index=first responding hop. Step S12: Set current=index+1. Step S13: If current==end+1, Go to step S19. Step S14: If the hop at “current” is a non-responding hop, set current=current+1. Go to step S13.
Step S15: Compute differential average for the hop at current=adjusted average of hop at current−adjusted average of the hop at index. Step S16: index=current. Step S17: current=current+1. Step S18: Go to step S13. Step S19: Exit. FIGS.31-34illustrate an example operation of the differential average latency adjustment. This shows that the average round-trip latency is 14 ms from the user device300to router602-1. The average latency between the routers602-1,602-2is <1 ms. The average latency between the routers602-2,602-3is 1 ms. The average latency between the routers602-3,602-4is 2 ms. § 15.4 Process for Accurate Differential Traceroute Latency Calculation Between Hops FIG.35is a flowchart of a process750for an accurate differential traceroute latency calculation between hops. The process750is described with reference to one of the user device300with the application350and the enforcement nodes150associated with the cloud-based system100. The process750can be implemented as a method that includes steps, via a processing device configured to execute the steps, and via a non-transitory computer-readable medium that includes instructions that cause one or more processors to implement the steps. The process750includes performing a plurality of traces between two nodes in a service path (step751); obtaining latency measurements for each of the plurality of traces for each of one or more hops between the two nodes (step752); and determining average latency between each of the one or more hops based on the latency measurements, adjusted average latency for each hop, and differential average latency for each hop (step753). The nodes can include two nodes in a cloud-based system. A first node is an enforcement node150and a second node is an edge connector150A. The plurality of traces utilize either Internet Control Message Protocol (ICMP), Transmission Control Protocol (TCP), User Datagram Protocol (UDP), or a combination thereof.
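The average, adjusted-average (Steps S1-S9), and differential-average (Steps S11-S19) phases can be sketched as follows. The sample encoding (−1 for no response, average 0 for a non-responding hop) follows the text; the function names and the list-based representation are assumptions for illustration.

```python
def averages(samples_per_hop):
    """samples_per_hop: per-hop list of RTT samples, -1 meaning no response.
    Returns (avg, responding); avg[i] is 0 for a non-responding hop."""
    avg, responding = [], []
    for samples in samples_per_hop:
        positives = [s for s in samples if s >= 0]
        responding.append(bool(positives))
        avg.append(sum(positives) / len(positives) if positives else 0.0)
    return avg, responding

def adjusted_averages(avg, responding):
    """Steps S1-S9: walking from the last hop toward the first, clamp each
    responding hop's average down to the adjusted average of the hop after
    it, so latency never decreases toward the destination."""
    adjusted = list(avg)
    index = len(avg) - 1                     # S1: index = end
    current = index - 1                      # S2
    while current >= 0:                      # S3: stop at start - 1
        if not responding[current]:          # S4: skip non-responding hops
            current -= 1
            continue
        if avg[current] > adjusted[index]:   # S5: clamp to the later hop
            adjusted[current] = adjusted[index]
        index = current                      # S6
        current -= 1                         # S7 (S8 loops back to S3)
    return adjusted

def differential_averages(adjusted, responding):
    """Steps S11-S19: per-hop latency as the difference between consecutive
    responding hops' adjusted averages; the first responding hop keeps its
    adjusted average as its differential."""
    diff = [0.0] * len(adjusted)
    index = next(i for i, r in enumerate(responding) if r)   # S11
    diff[index] = adjusted[index]
    current = index + 1                      # S12
    while current < len(adjusted):           # S13
        if not responding[current]:          # S14
            current += 1
            continue
        diff[current] = adjusted[current] - adjusted[index]  # S15
        index = current                      # S16
        current += 1                         # S17 (S18 loops back to S13)
    return diff
```

With per-hop averages of 14, 14, 15, and 17 ms, the differentials come out as 14, 0, 1, and 2 ms, matching the worked example in the text.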
A destination of the plurality of traces can be a node in a cloud-based system. § 16.0 Adaptive Tracing, aka “CloudPath” The present disclosure includes an approach, using the cloud-based system100and the user device300, for adaptively finding the protocol that works best for the internal network and the destination640. This approach can be implemented in a software module that detects the best protocol (e.g., TCP, UDP, ICMP, etc.) by checking which protocol could reach the destination and which protocol provides the best result in terms of least average latency, least average loss, and number of hops found. The module can be implemented in the user device300, communicating to the cloud-based system100. In this approach, egress means the exit of the network and the destination means the final target for the trace. The application350is able to identify the client egress through the REST API call that the client connector makes to one of the enforcement nodes150. Trace policy is provided from the cloud-based system100. The policy specifies a starting hop, ending hop, protocols to be used for egress and destination, number of packets to send, delay between the packets, UDP and TCP ports for egress and destination, destination domain or IP, intervals to be used by the application350, and the default protocol to be used for egress and destination in case of failure. The policy also specifies the detection technique (least latency, least loss, or the number of hops found) that can be used to find the best protocol for the target. § 16.1 Automatic Operation The adaptive protocol module runs without manual intervention when there is an egress change or a gateway IP change on the user device300or at the configured interval if there is no change in the egress and gateway. The module runs before the actual trace to find the best protocol to the destination, through traces performed in the different protocols for the purpose of finding the best results.
The module then finds the protocol to use and then performs the actual trace using the protocol. The adaptive protocol module can be part of the application350on the user device as well as in one of the nodes150. That is, the techniques described herein can be performed at the user device300and at the node150. § 16.2 Adaptive Protocol Detection for the Internal Network The module can detect the egress through a call to one of the nodes150in the cloud-based system100which can provide the egress IP. The adaptive trace module finds the best protocol to use for the trace to the egress by sending probes using the TCP, UDP, and ICMP protocols. The detection is triggered on an egress or a gateway change or at the end of the configured interval if there is no change in egress or gateway. The module checks which protocol can reach the egress IP by doing a trace to the egress IP. The module detects the best protocol by checking which protocol could reach the egress, evaluating least latency, least loss, and/or the number of hops found. For example, this protocol detection step can include sending trace probes using different protocols to the egress IP, e.g., the TCP, UDP, and ICMP protocols. The results are evaluated; each will be either a failure or a success with results for latency, loss, and number of hops. In an embodiment, if multiple protocols are successful, the module selects the one with the least latency and/or least loss and/or based on the number of hops found. The selected protocol is noted for this egress IP (internal network). The adaptive module caches this information for the configured interval. At the end of this interval, it can again detect the best protocol to be used on the internal network for the trace.
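The probe-and-fallback logic of § 16.2 can be sketched as below. The `send_probes` callable is an assumption so the selection logic can be shown without raw network access; selection here uses least loss with least latency as the tie-break, one of the criteria combinations the policy may specify.

```python
def detect_best_protocol(send_probes, protocols=("icmp", "udp", "tcp"),
                         probes_per_protocol=3):
    """Try each protocol in turn against a target. `send_probes(proto, n)`
    returns a list of per-probe RTTs with None for a lost probe (an
    assumption for testability). A protocol 'works' if any probe gets a
    response; among the working ones, pick least loss, then least latency.
    Returns None if every protocol fails (caller falls back to the
    policy's default protocol)."""
    candidates = []
    for proto in protocols:
        results = send_probes(proto, probes_per_protocol)
        replies = [r for r in results if r is not None]
        if not replies:
            continue                          # all probes failed; try the next protocol
        loss = 1.0 - len(replies) / len(results)
        latency = sum(replies) / len(replies)
        candidates.append((loss, latency, proto))
    if not candidates:
        return None
    return min(candidates)[2]                 # tuple order: loss first, then latency
```

The selected protocol would then be cached per egress IP for the configured interval, as described above.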
§ 16.3 Adaptive Protocol Detection for the Destination In a similar manner to protocol detection for the internal network, the module can find the best protocol to use for the trace to the destination640by performing traces one by one using the configured protocols. The module checks which protocol can reach the destination IP. The module detects the best protocol by checking which protocol could reach the destination—with the least latency and/or least loss and/or based on the number of hops found. If the destination640could not be reached using TCP, UDP, or ICMP, then the module gives the default protocol, which comes in the policy, as the protocol to be used for the destination. The Adaptive Trace, aka “CloudPath,” is called to detect the best protocol to reach the destination. The protocol result from the Adaptive Trace module is used for doing a trace to the destination. § 16.3 Adaptive Protocol Detection for the Cloud Nodes The module also detects if the request will go through the cloud-based system100, and passes the protocol type as adaptive, and the node150finds the best protocol to be used for reverse trace to the egress as well as the best protocol to be used for forward trace to the destination. § 16.4 Results For the direct case where the trace is not through the cloud-based system100, the application350determines the destination640is not reached through the cloud-based system100. The trace module combines the results for the direct case from 1) the trace to the egress using the protocol suggested by the adaptive module, and 2) the trace to the destination using the protocol suggested by the adaptive module. It creates the host-to-egress hops using trace results from the internal network and the egress-to-destination hops using the results from tracing to the destination. The results are sent to the cloud-based system100, and the user102or administrator can view these results on a dashboard.
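Stitching the per-leg results into one end-to-end view, with the selected protocol reported per leg, can be sketched as follows (the result shape and the leg names are illustrative assumptions):

```python
def combine_leg_results(legs):
    """Combine ordered per-leg trace results into one overall view.
    `legs` is an ordered list of (leg_name, protocol, hops) tuples; each leg
    carries the protocol the adaptive module selected for it, since
    different legs may have used different protocols."""
    combined = {"hops": [], "protocols": {}}
    for name, protocol, hops in legs:
        combined["protocols"][name] = protocol   # reported per leg to the back end
        combined["hops"].extend(hops)            # concatenated in path order
    return combined
```

In the direct case only two legs are combined (host to egress, egress to destination); in the cloud case the three legs described above are combined before being sent for display on the dashboard.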
In the case where the trace is through the cloud-based system100, the application350finds that the domain goes via the node150. It combines the results from 1) the results up to the egress using the protocol suggested by the adaptive module, 2) the results from the node150to the egress using the protocol suggested by the adaptive module running on the node150, and 3) the results from the node150to the destination using the protocol suggested by the adaptive module running on the node150. The combined results are sent to the cloud-based system100, and the user102or administrator can view these results on a dashboard. § 16.5 Adaptive Trace Process FIG.36is a flowchart of a process800for an adaptive trace determination between two points in a network, such as a user device300, an egress from an internal network, a cloud node150, and a destination640. The process800is described with reference to a software module that is executed by one of the user device300with the application350and the enforcement nodes150associated with the cloud-based system100. The process800can be implemented as a method that includes steps, via a processing device configured to execute the steps, and via a non-transitory computer-readable medium that includes instructions that cause one or more processors to implement the steps. The process800includes obtaining policy information related to a trace (step801); performing a plurality of traces, from a start point to an end point in a network, using the different protocols based on the policy information (step802); evaluating which of the plurality of traces reach the end point, and evaluating any of average latency of the plurality of traces, average loss of the plurality of traces, and a number of hops found, for each of the plurality of traces that reach the end point (step803); and selecting a protocol of the different protocols to use for the trace based on the evaluating (step804).
The different protocols can include Internet Control Message Protocol (ICMP), Transmission Control Protocol (TCP), and User Datagram Protocol (UDP). The selecting can be based on any of a least average latency of the plurality of traces and a least average loss of the plurality of traces. The policy information can include starting hop, ending hop, protocols to be used for egress and destination, number of packets to send, delay between the packets, User Datagram Protocol (UDP) and Transmission Control Protocol (TCP) ports for egress and destination, destination domain or address, intervals to be used, and a default protocol to be used for egress and destination in case of failure. The process 800 can be performed by a user device as the start point, and the end point includes an egress Internet Protocol (IP) address for an internal network. The process 800 can be performed responsive to any of a change in the egress IP address and a defined interval. For a given user device, a given egress IP, and a destination, the selected protocol can be different in the internal network from the selected protocol to the destination. The process 800 can further include making a call to a node in a cloud-based system to determine the egress IP. The process 800 can be performed by a cloud node in a cloud-based system as the start point, and the end point includes a destination. The process 800 can further include performing the trace with the selected protocol; and combining a plurality of results including the trace to obtain data from a user device to a destination through an internal network. § 16.6 Examples of the Adaptive Trace Process A key aspect of the adaptive trace process is to look at TCP, UDP, and ICMP and determine which of these protocols for tracing has the least packet loss, and select that protocol for the adaptive trace. Of course, this can extend to any other trace path protocols, e.g., routing protocols, and is not limited to TCP/ICMP/UDP. 
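The policy items listed above can be pictured as a simple configuration structure. The field names and nesting below are assumptions for illustration, not the product's actual schema; the TCP BGP port (179) for egress and UDP port 33434 are the defaults mentioned elsewhere in this disclosure.

```python
# Hypothetical shape of the trace policy described for process 800.
trace_policy = {
    "starting_hop": 1,
    "ending_hop": 64,
    "egress": {
        "protocols": ["tcp", "udp", "icmp"],
        "tcp_port": 179,            # default TCP (BGP) port for egress
        "udp_port": 33434,          # conventional traceroute UDP port
        "default_protocol": "icmp", # fallback on failure
    },
    "destination": {
        "protocols": ["tcp", "udp", "icmp"],
        "tcp_port": 443,
        "udp_port": 33434,
        "default_protocol": "tcp",
    },
    "num_packets": 11,              # probes per run
    "inter_packet_delay_ms": 1000,  # delay between the packets
    "interval_s": 300,              # how often to re-run the trace
    "destination_host": "example.com",
}
print(sorted(trace_policy["egress"]["protocols"]))
```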
In an example embodiment, the adaptive trace can send some number of trace packets using the different protocols, e.g., TCP, UDP, and ICMP. Then we find from these which fit the configured criteria (say, Least Loss). For example, with TCP we could reach the destination, with UDP we could not reach the destination, and with ICMP we could reach the destination. Then we compare TCP and ICMP to see which one gave the least loss (assuming the selection is to use Least Loss) and select that protocol. For egress, our default for TCP is the TCP BGP port (179). When we send results to the back end, we specify which protocol we used for each leg. We can make the decision based on Least Loss, Least Latency, or number of hops. § 17.0 Adaptive Tracing with a Reduced Number of Probes to Avoid Firewall Issues Probes can originate from the user device 300 at customer premises, and it is possible this activity can be deemed suspicious based on the volume of such probes. For example, a customer premises can include small office and home office (SOHO) routers. If the probes from the application 350 are deemed suspicious, this can cause blacklisting of the user devices 300 originating the probes, such as in firewalls. These firewalls view the probes as a distributed denial-of-service (DDoS) attack and block traffic from that IP for a few minutes or seconds. Moreover, it can cause port table exhaustion on the firewalls/NAT if they do not have effective port management. For example, an implementation of the TCP probes can send up to hundreds of SYNs on the order of tens of seconds for a single monitor, triggering the firewalls to block these devices. Let's assume that the maximum number of hops that will be queried is 64. A first step includes detection of how far a destination is for a whole trace run. For example, a trace service can have a default of 11 samples with a max of 20, i.e., 11 samples, one second apart, for running probes. The objective is to first determine the number of hops. 
Of note, the number of hops to the destination may vary during a run (e.g., ±1), but we can stay with the determined number during the first run. § 17.1 Determination of the Number of Hops to a Destination The determination of the number of hops involves sending probes with different TTL values, from different ports, to see when you do and do not get a response. It is possible to send one or more SYN probes for various values of TTL to determine the number of hops. For example, in an embodiment, send 3 SYN packets for each TTL mentioned in the table below. The 2nd and 3rd SYN packets can be retransmits to conserve the ports, as it is just to gauge if the destination is reachable at that TTL. Probe to the destination with TTL 64; if the destination did not respond to three tries, then it means that it is unreachable for this run and we will indicate as such. To be more accurate, we can do a second retry in 2 seconds if needed before declaring the destination unreachable. If it responds, move to the steps below. Assume the destination is at hop 23 and we need to determine that using a binary search, as illustrated in the table below. For each TTL, use a different source port:

TTL of the SYN   Did Destination Respond   Action
64               Yes                       Step down to lower TTL
32               Yes                       Step down to lower TTL
16               No*                       Step up to higher TTL
24               Yes                       Step down to lower TTL
20               No*                       Step up to higher TTL
22               No*                       Step up to higher TTL
23               Yes                       Reached. This is how far the dest is.

*This means that the hop sent a TTL expired ICMP msg or timed out. For the timed out case, one can assume that 3 packet drops indicate a router not sending a TTL expiry.

In the example above, the total number of ports used so far is 7. And a RST can be sent to clear any port tables or open connections after this phase. § 17.2 Gathering Hops Between the Source and Destination The second step is to gather the hops between the source and destination. Now that we have determined that the destination is at Hop #23, the next step is to detect the 22 hops in between. 
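The § 17.1 binary search above can be sketched as pure logic, with a `responds(ttl)` callback standing in for the three-SYN probe at a given TTL (function and variable names are illustrative). Run against a simulated destination 23 hops away, it reproduces the probe sequence of the table.

```python
# Minimal sketch of the hop-count binary search; responds(ttl) returns
# True when the destination answers a SYN sent with that TTL, i.e., the
# destination is within `ttl` hops.
def hops_to_destination(responds, max_ttl=64):
    """Return (hop_count, probed_ttls); hop_count is None if the
    destination is unreachable even at max_ttl for this run."""
    probes = [max_ttl]
    if not responds(max_ttl):
        return None, probes            # declare unreachable for this run
    lo, hi = 1, max_ttl
    while lo < hi:
        mid = (lo + hi) // 2
        probes.append(mid)
        if responds(mid):
            hi = mid                   # destination is at mid hops or nearer
        else:
            lo = mid + 1               # TTL expired in transit: step up
    return lo, probes

# Destination at hop 23, as in the worked example.
dest_hop = 23
hop, probes = hops_to_destination(lambda ttl: ttl >= dest_hop)
print(hop, probes)  # 23 [64, 32, 16, 24, 20, 22, 23] -- 7 ports used
```

Each probed TTL uses a fresh source port in the real module, so the length of `probes` (7 here) is the port count noted in the text.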
We have at times seen that between each run, even one second apart, the destination can vary by ±1 hop. Next are the steps to trace to the destination for a number of runs (e.g., 11 for an example default config) with a minimum number of ports, e.g., just 11 ports. For each run, perform these steps:
1) Allocate an ephemeral TCP port (used for only a short period of time for the duration of a communication session). As an optimization, if the determination of the actual distance (hop 23) was done without any retransmissions, we can just use that connection itself and save a port.
2) Send a TCP SYN with TTL 64 and track the time for the SYN-ACK. Once you get the SYN-ACK, complete the three-way handshake by sending the final ACK. This will keep the firewall in a good state because the TCP state machine is now in the established state. Note, in the rare case that the SYN-ACK was not received, it is better not to retransmit the SYN, but to open a new port and try again.
3) Based on the first step, the distance to the destination has been determined, e.g., 23 hops away.
4) Send a TCP packet with data of a few bytes (if SSL, send a few bytes of "client hello"; if non-SSL, send a GET /) as payload with increasing TTL of 1 until a TTL of 23 is hit, i.e., the determined number of hops. The TCP connection established above is used to avoid opening more ports. Because the devices in between have the port table for this 5-tuple, they will forward the traffic. The traffic will look like legitimate retransmissions as it is compliant.
5) For hops 1 to the destination minus 1 (e.g., 22), we will get TTL expired or the router might choose not to respond. These metrics are recorded.
6) For the last hop, we will get a response that we can ignore, as we have the RTT from the first step when we received the SYN-ACK.
7) Close the connection if the server did not. 
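The per-run bookkeeping above can be sketched without sockets: a `rtt_at(ttl)` stand-in models the TTL-limited payload probe (TTL expired RTT, or None for a silent router), and the final hop reuses the SYN-ACK RTT measured during hop detection. All names are assumptions for illustration.

```python
# Sketch of one run of the second step: hops 1..dest-1 come from
# TTL-expired responses on the established connection; the last hop's
# RTT is the SYN-ACK RTT from the handshake (step 6 in the text).
def run_trace(dest_hops, syn_ack_rtt_ms, rtt_at):
    """Collect per-hop RTT samples for one run; None = no response."""
    hops = {}
    for ttl in range(1, dest_hops):      # hops 1 .. destination - 1
        hops[ttl] = rtt_at(ttl)          # TTL-expired RTT or None
    hops[dest_hops] = syn_ack_rtt_ms     # last hop: reuse SYN-ACK RTT
    return hops

# 23-hop path; hop 7 silently drops TTL-expired messages.
rtts = run_trace(23, 41.5, lambda ttl: None if ttl == 7 else 2.0 * ttl)
print(len(rtts), rtts[23], rtts[7])  # 23 41.5 None
```

Only one ephemeral port is consumed per such run, which is the point of the scheme: the in-between devices already hold the 5-tuple in their port tables and simply forward what looks like retransmissions.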
§ 17.3 Optimizations to Consider The following is an aggressive approach where we try to complete the N runs with just one port after we determine the distance to the destination.
1) Send a TCP SYN with TTL 64 and track the time for the SYN-ACK. Once you get the SYN-ACK, do not send the final ACK.
2) Retransmit the SYN with an incremental TTL until the destination minus 1 (e.g., 22). One should be getting TTL expiry for most hops. Record the metrics.
3) If by any chance the route got shorter and you see a SYN-ACK sooner, this is not an issue; just reset the hops to the destination as the new value for all runs by discarding the hops over this new value, even from older runs.
4) Go back to Step 1) for the next run.
The destination will see these as SYN retransmits. If there is SYN cookie logic on the destination, it will be a stateless response; else, it will think of these as coming from a lossy network. § 17.4 ICMP and UDP Traceroute The above description was with TCP probes, but those skilled in the art will recognize a similar approach can be used with ICMP. Additionally, the process can keep the ICMP sequence number and ID the same and send the same probe with an incrementing TTL after detecting the distance to the destination. Also, a similar approach can be used with UDP, but UDP has some challenges due to the lack of a connection handshake. Sending a UDP probe to the conventional traceroute port 33434 will generate an ICMP port unreachable. Once we get the ICMP response, we can send UDP probes using the same source port to the destination by increasing the TTL. This avoids consuming the port table on the device. § 17.5 Hybrid Approach If firewalls find increasing the TTL value of TCP packets undesirable in the second step, another approach for the second step of the TCP trace can be used: once you get the SYN-ACK for the SYN with a TTL of 64, the connection can be closed. 
Assuming that either ICMP or UDP traffic is permitted to the destination, it is possible to craft ICMP/UDP packets to the destination with increasing TTL of 1 up to the destination minus 1 (e.g., 22). This will help us get the intermediate hops, as the routers will generate the TTL expired messages. From our point of view, it does not matter for which protocol (UDP/TCP/ICMP) the TTL expiry messages came. There could be cases where routers treat ICMP/UDP traffic differently than TCP, in which case the path we show might not match the path the traffic takes when we have the TCP traffic. The hybrid approach comes in handy when we have a case where the end application is not running and the stack returns an RST. The RST is a fine indicator of the latency and packet loss to the destination. Once that is determined, we can use ICMP/UDP to find the details of the intermediate hops. FIG. 37 is a flowchart of a process 850 for adaptive tracing with a reduced number of probes in a network. The process 850 is described with reference to a software module that is executed by the user device 300 with the application 350. The process 850 can be implemented as a method that includes steps, via a processing device configured to execute the steps, and via a non-transitory computer-readable medium that includes instructions that cause one or more processors to implement the steps. The process 850 includes determining a number of hops from a source that is the user device to a destination, including determining metrics from the source to the destination (step 851); performing a trace to all intermediate nodes between the source and the destination, including determining metrics from the source to each of the intermediate nodes (step 852); and combining and presenting the metrics from the source to the destination and from the source to each of the intermediate nodes (step 853). 
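The combining of step 853 over repeated runs can be sketched as follows; per-hop samples from each run (None for an unanswered probe) are folded into the average-latency and loss metrics named in the claims. The dictionary shapes are assumptions for illustration.

```python
# Sketch of step 853: combine per-hop RTT samples from N runs into
# average latency and loss per hop. A sample of None models a probe
# that got no TTL-expired response.
def combine_runs(runs):
    per_hop = {}
    for run in runs:
        for hop, rtt in run.items():
            per_hop.setdefault(hop, []).append(rtt)
    summary = {}
    for hop, samples in per_hop.items():
        answered = [s for s in samples if s is not None]
        summary[hop] = {
            "avg_latency_ms": sum(answered) / len(answered) if answered else None,
            "loss": 1.0 - len(answered) / len(samples),
        }
    return summary

# Two runs; hop 2 answered only once (50% loss at that hop).
runs = [{1: 2.0, 2: None}, {1: 4.0, 2: 9.0}]
s = combine_runs(runs)
print(s[1]["avg_latency_ms"], s[2]["loss"])  # 3.0 0.5
```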
The process 850 can further include repeating the trace to all intermediate nodes a plurality of times for the determining of metrics from the source to each of the intermediate nodes. The metrics can include any of average latency and average loss. The number of hops is determined utilizing a binary search that changes a Time to Live (TTL) of packets to the destination. The process 850 can further include, subsequent to determining the number of hops, clearing any port tables or open connections. The process 850 can further include, subsequent to determining the number of hops without any retransmission, utilizing an associated connection for the trace to all intermediate nodes. The process 850 can further include, subsequent to determining the number of hops and prior to the trace to all intermediate nodes, creating an ephemeral port for the trace to all intermediate nodes. The process 850 can further include, subsequent to the trace to all intermediate nodes, closing an associated connection for the trace to all intermediate nodes. The number of hops can be determined utilizing Transmission Control Protocol (TCP). The trace to all intermediate nodes can utilize one of Internet Control Message Protocol (ICMP), Transmission Control Protocol (TCP), and User Datagram Protocol (UDP). § 18.0 Determining Endpoint and Application Behavior for Monitoring User Experience One of the challenges in digital experience monitoring when using a thick client application (e.g., a real-time collaboration application such as Zoom, Microsoft Teams, Slack, WebEx, and the like) is that there is no visibility into the endpoint the thick client application is talking to, nor into the internals of the thick client application. As described herein, a thick client application is any client application that communicates to a network endpoint where the network endpoint is not visible to the application 350. 
This poses a challenge to identify the network endpoint to monitor and the precise reasons for any degradation of the application. Also, as described herein, the terms endpoint, application endpoint, and network endpoint are equivalent and refer to the destination 640. Also, of note, while described herein with reference to a "thick" client application, the various techniques described herein are contemplated with any application on a user device, including browser-based applications, mobile applications, and the like. These can be covered by the term "client" applications. In various embodiments, the present disclosure includes techniques to identify network endpoints for probing thereto, i.e., the destination 640. In an embodiment, one technique for identifying the endpoints is mining application logs on the user device 300. In another embodiment, another technique for identifying the endpoints is analyzing network flows of the thick client application at the user device 300. Identification of the appropriate endpoint, i.e., the destination 640, is critical for probing and monitoring any degradation issues. One of the ways one can identify the endpoints is to mine the application's logs, e.g., database and registry entries, on the user device 300, for this information and initiate probes to them, when needed, periodically, etc. The thick client application can include application logs, which are a rich set of data written into local files, databases, the registry, etc.; by parsing these, we can get a clear insight into the application behavior and the network endpoints it is communicating with. While externally monitoring an application, such as via the cloud-based system 100, can give clues to the IPs it communicates with, it is difficult to understand what is carried inside the connections. The application's log gives us an insight into the type of traffic that each connection carries, e.g., Voice, Video, Control info, Chat, etc. 
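The log-mining technique can be pictured with a small parser. To be clear, the log format below is entirely invented for illustration; real collaboration applications each have their own log, database, and registry layouts. The point is only the mechanism: parse local logs for the endpoint each traffic type uses, so probes can later target the right destination.

```python
# Illustrative log mining: map traffic type -> endpoints seen in an
# (invented) application log format.
import re

LOG_LINE = re.compile(
    r"channel=(?P<kind>audio|video|control)\s+"
    r"remote=(?P<ip>[\d.]+):(?P<port>\d+)"
)

def endpoints_from_logs(lines):
    """Return {traffic_kind: {(ip, port), ...}} parsed from log lines."""
    found = {}
    for line in lines:
        m = LOG_LINE.search(line)
        if m:
            found.setdefault(m["kind"], set()).add((m["ip"], int(m["port"])))
    return found

log = [
    "12:00:01 channel=audio remote=203.0.113.9:3478 jitter=2ms",
    "12:00:02 channel=control remote=198.51.100.7:443",
    "12:00:03 channel=audio remote=203.0.113.9:3478 jitter=3ms",
]
eps = endpoints_from_logs(log)
print(eps["audio"])  # {('203.0.113.9', 3478)}
```

The same pass over the logs can also pull out the application's own health signals (jitter, drops, failure reasons) to trigger probing only when degradation is suspected.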
Once these are identified, we can selectively probe the endpoints of interest, the destination 640. Additional information can also be determined by monitoring the application logs; for example, changes in network characteristics (network drops, change in network, jitter, RTT, etc.) as seen by the thick client application, and other application statistics such as thread counts, internal failure reasons, resource limits, etc., can also be inferred. These data points help us understand why user experience could have deteriorated when using an application. Once an issue is detected by watching the logs, we can immediately, or afterward, initiate probes to identify issues in the network path and the performance of the endpoint itself. Similarly, based on any of the other data points, the user can be alerted and/or remedial action can be taken. In another embodiment, it is possible to determine the endpoint by looking at network flows of an application. That is, in the absence of application logs, another way of getting to know the network endpoints is to look for network connections that an application initiates. There are a few attributes by which we can identify a network flow, locally at the user device 300:
1. Destination port to which the connection is made.
2. Transport protocol used.
3. SSL handshake parameters like Server Name Indication (SNI), ciphers, and TLS version.
4. Traffic patterns on the connections; for example, control connections will primarily be over TCP, voice connections will have a Constant Bit Rate (CBR), and Voice/Video/Screenshare are primarily over UDP.
Once we study these flows, we can with very high accuracy identify the traffic flows of interest for many apps and initiate probes to them to identify issues in the network path and the performance of the endpoint itself. Similarly, based on any of the other data points, the user can be alerted and/or remedial action can be taken by us. 
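The flow-attribute heuristic above can be sketched as a simple classifier. The thresholds, port values, and labels are assumptions for illustration (the disclosure does not give concrete numbers); only the rules themselves (control over TCP, near-constant bit rate over UDP suggesting voice) come from the list above.

```python
# Hypothetical flow classifier using the local attributes from the text:
# transport protocol, destination port, and bit-rate pattern.
def classify_flow(transport, dst_port, bitrates_kbps):
    """Guess a flow's traffic type from simple local attributes."""
    if transport == "tcp" and dst_port == 443:
        return "control"                  # control channels ride TCP/TLS
    if transport == "udp" and bitrates_kbps:
        mean = sum(bitrates_kbps) / len(bitrates_kbps)
        spread = max(bitrates_kbps) - min(bitrates_kbps)
        if mean and spread / mean < 0.2:  # near-constant bit rate (CBR)
            return "voice"
        return "video_or_screenshare"     # bursty UDP media
    return "unknown"

print(classify_flow("udp", 8801, [62, 64, 63, 64]))  # voice (CBR-like)
print(classify_flow("tcp", 443, []))                 # control
```

In practice the SNI and cipher attributes from the SSL handshake would further narrow which application a flow belongs to before probes are initiated.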
§ 18.1 Process for Determining Endpoint and Application Behavior for Monitoring User Experience FIG.38is a flowchart of a process900for determining endpoint and application behavior for monitoring user experience. The process900is described with reference to a software module that is executed by the user device300with the application350; although other embodiments are contemplated. The process900can be implemented as a method that includes steps, via a processing device configured to execute the steps, and via a non-transitory computer-readable medium that includes instructions that cause one or more processors to implement the steps. The process900includes determining a thick client application is being executed (step901); determining an endpoint associated with the thick client application, based on any of monitoring application logs associated with the thick client application and network flows associated with the thick client application (step902); and causing one or more probes to the determined endpoint and deriving metrics based on the one or more probes for determining performance of the thick client application (step903). The process900can further include monitoring application logs associated with the thick client application and detecting degradation of the thick client application based thereon. The degradation can be based on any of network drops, jitter, Round Trip Time (RTT), thread counts, resource limits, and failure reasons. The causing can be performed subsequent to the detected degradation. The process900can further include one or more of providing a notification to a user of the degradation and causing a remedial action based on the degradation. The application logs can include any of files, database entries, and registry entries on a user device executing the thick client application. The thick client application can be a real-time collaboration application. 
The process 900 can further include identifying a type of traffic associated with the thick client application based on the application logs. The process 900 can further include monitoring network flows associated with the thick client application to determine the endpoint, wherein the monitoring includes any of monitoring a destination port, a transport protocol, tunnel handshakes, and traffic patterns. The monitoring can include detecting control connections over Transmission Control Protocol (TCP), voice connections having a Constant Bit Rate (CBR), and Voice/Video/Screenshare over User Datagram Protocol (UDP). § 19.0 Conclusion It will be appreciated that some embodiments described herein may include one or more generic or specialized processors ("one or more processors") such as microprocessors; Central Processing Units (CPUs); Digital Signal Processors (DSPs); customized processors such as Network Processors (NPs) or Network Processing Units (NPUs), Graphics Processing Units (GPUs), or the like; Field Programmable Gate Arrays (FPGAs); and the like, along with unique stored program instructions (including both software and firmware) for control thereof to implement, in conjunction with certain non-processor circuits, some, most, or all of the functions of the methods and/or systems described herein. Alternatively, some or all functions may be implemented by a state machine that has no stored program instructions, or in one or more Application-Specific Integrated Circuits (ASICs), in which each function or some combinations of certain of the functions are implemented as custom logic or circuitry. Of course, a combination of the aforementioned approaches may be used. For some of the embodiments described herein, a corresponding device in hardware and optionally with software, firmware, and a combination thereof can be referred to as "circuitry configured or adapted to," "logic configured or adapted to," etc. 
perform a set of operations, steps, methods, processes, algorithms, functions, techniques, etc. on digital and/or analog signals as described herein for the various embodiments. Moreover, some embodiments may include a non-transitory computer-readable storage medium having computer-readable code stored thereon for programming a computer, server, appliance, device, processor, circuit, etc. each of which may include a processor to perform functions as described and claimed herein. Examples of such computer-readable storage mediums include, but are not limited to, a hard disk, an optical storage device, a magnetic storage device, a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), Flash memory, and the like. When stored in the non-transitory computer-readable medium, software can include instructions executable by a processor or device (e.g., any type of programmable circuitry or logic) that, in response to such execution, cause a processor or the device to perform a set of operations, steps, methods, processes, algorithms, functions, techniques, etc. as described herein for the various embodiments. The foregoing sections include headers for various embodiments and those skilled in the art will appreciate these various embodiments may be used in combination with one another as well as individually. Although the present disclosure has been illustrated and described herein with reference to preferred embodiments and specific examples thereof, it will be readily apparent to those of ordinary skill in the art that other embodiments and examples may perform similar functions and/or achieve like results. All such equivalent embodiments and examples are within the spirit and scope of the present disclosure, are contemplated thereby, and are intended to be covered by the following claims.
11863416
DESCRIPTION OF EMBODIMENTS Hereinafter, an embodiment of the present invention will be described in detail with reference to drawings. Note that the present invention is not limited by the embodiment. In the description of the drawings, the same portions are denoted by the same reference signs. Embodiment A detection device according to the present embodiment makes it possible to enhance accuracy in DDoS attack detection, by imparting information on degree centrality related to a network structure. [Detection Device] First, the detection device according to the embodiment will be described. FIG. 1 is a diagram describing an example of a configuration of the detection device according to the embodiment. The detection device 10 according to the present embodiment includes a communication unit 11, a storage unit 12, and a control unit 13. The communication unit 11 is a communication interface that transmits various information to and receives various information from another device connected through a network or the like. The communication unit 11 is implemented by a NIC (Network Interface Card) or the like, and performs communication between another device and the control unit 13 (which will be described later) over a telecommunication circuit such as a LAN (Local Area Network) or the Internet. For example, the communication unit 11 is connected to an external device through a network or the like, and receives input of a packet to be analyzed. The storage unit 12 is implemented by a semiconductor memory device such as a RAM (Random Access Memory) or a flash memory, or a storage device such as a hard disk or an optical disk, and stores a processing program that causes the detection device 10 to operate, data used during execution of the processing program, and the like. The storage unit 12 includes statistical data 121 and a graph table 122. 
The statistical data121is data on statistical values of a 5-tuple statistical flow acquired by a statistic unit131(which will be described later). The graph table122is a table for graph that is used in calculation of a network graph by a degree centrality calculation unit132(which will be described later).FIGS.2and3show examples of a data configuration of the graph table122. In a graph table1221(first table) shown inFIG.2, node granularity is an IP address. In a graph table1222(second table) shown inFIG.3, node granularity is a combination of an IP address, a port number, and protocol information. In the graph table1221, an IP address of a node is associated with each item for an indegree and each item for an outdegree. In the graph table1222, a combination of an IP address, a port number, and protocol information of a node is associated with an indegree and an outdegree of the node. The items for an indegree include: presence/absence of an edge (the number of edges) communicated by the corresponding node; the number of 5-tuple flows received by the corresponding node; the number of packets received by the corresponding node; and the number of bytes in received data. The items for an outdegree include: presence/absence of an edge (the number of edges) communicated by the corresponding node; the number of 5-tuple flows sent by the corresponding node; the number of packets sent by the corresponding node; and the number of bytes in sent data. Each column of the graph tables1221,1222is updated by the degree centrality calculation unit132, and is initialized by a degree centrality impartation unit133. The control unit13includes an internal memory for storing a program defining various processing procedures and required data, and executes various processing based on the program and the data. For example, the control unit13is an electronic circuit such as a CPU (Central Processing Unit) or an MPU (Micro Processing Unit). 
The control unit13includes the statistic unit131, the degree centrality calculation unit132(calculation unit), the degree centrality impartation unit133(impartation unit), and a detection unit134. The statistic unit131acquires statistical value data on 5-tuple statistical flows, with respect to received packets buffered for a certain time period. The statistic unit131stores the statistical value data on the 5-tuple statistical flows in the certain time period in the storage unit12. The degree centrality calculation unit132calculates a degree centrality of a node, based on the statistical value data on the 5-tuple flows in the certain time period acquired by the statistic unit131.FIGS.4and5are diagrams describing processing by the degree centrality calculation unit132and the degree centrality impartation unit133. The degree centrality of a node is a feature including an indegree, which is a total sum of weights of edges flowing into the node, and an outdegree, which is a total sum of weights of edges flowing out of the node (see (1) inFIG.4). Note that the granularity of the node is an IP address, or a combination of an IP address, a port number, and protocol information. An arrow between nodes inFIG.4indicates a direction of an edge directed from a source node toward a destination node. Types of edge weights include presence/absence of a communication (the number of edges communicated), the number of statistical data samples, the number of packets, and the number of bytes. As shown at (a) inFIG.5, the indegree represents the following, with respect to a certain node:(1) the number of source nodes with which the node communicates;(2) the number of packets received by the node;(3) the number of bytes received by the node; and(4) the number of flows received by the node. Note that the node is an IP address, or a combination of an IP address and a port number. 
The indegree represents a degree of input of flows at a certain node from other nodes, and serves as an indicator that the node may be a victim attacked by attackers when the indegree has a high value. As shown at (b) in FIG. 5, the outdegree represents the following, with respect to a certain node: (1) the number of communication-destination nodes with which the node communicates; (2) the number of packets sent out by the node; (3) the number of bytes sent by the node; and (4) the number of flows sent out by the node. The outdegree represents a degree of output of flows at a certain node to other nodes, and serves as an indicator that the node may be an attacker or an infected node when the outdegree has a high value. The degree centrality calculation unit 132 selects the graph table 1221 or the graph table 1222, depending on the granularity of the node under analysis, performs graph calculation for the statistical value data D0 on 5-tuple flows in a certain time period by using the selected table, and thereby calculates a degree centrality for the statistical value data on the 5-tuple flows in the certain time period (see (A) in FIG. 4). Processing by the degree centrality calculation unit 132 will be described specifically with reference to FIG. 6. FIG. 6 is a diagram describing the processing by the degree centrality calculation unit 132. As shown in FIG. 6, the degree centrality calculation unit 132 extracts information on a node under analysis from the statistical value data D0 on the 5-tuple flows in the certain time period (see (1) in FIG. 6). The degree centrality calculation unit 132 then selects either the graph table 1221 or the graph table 1222, depending on the granularity of the node to be analyzed. When the granularity is an IP address, the degree centrality calculation unit 132 selects the graph table 1221 in which node granularity is an IP address, and accesses, in the graph table 1221, a row in which the extracted node is stated (see (2-1) in FIG. 6). 
When the granularity is a combination of an IP address, a port number, and protocol information, the degree centrality calculation unit 132 selects the graph table 1222 in which node granularity is a combination of an IP address, a port number, and protocol information, and accesses, in the graph table 1222, a row in which the extracted node is stated (see (2-2) in FIG. 6). Subsequently, when the extracted node corresponds to a destination IP address (dst_ip), the degree centrality calculation unit 132 acquires, from the statistical value data D0 on the 5-tuple flows, presence/absence of an edge (the number of edges) communicated by the node, the number of 5-tuple flows received by the node, the number of packets received by the node, and the number of bytes in the received data. The degree centrality calculation unit 132 updates each of the fields of the presence/absence of an edge (the number of edges), the number of 5-tuple flows, the number of packets, and the number of bytes under the indegree in the accessed row in the graph table 1221, 1222, with the respective values acquired (see (3-1), (4-1) in FIG. 6). When the extracted node is a source IP address (src_ip), the degree centrality calculation unit 132 acquires, from the statistical value data D0 on the 5-tuple flows, presence/absence of an edge (the number of edges) communicated by the node, the number of 5-tuple flows sent by the node, the number of packets sent by the node, and the number of bytes in the sent data. The degree centrality calculation unit 132 then updates each of the fields of the presence/absence of an edge (the number of edges), the number of 5-tuple flows, the number of packets, and the number of bytes under the outdegree in the accessed row in the graph table 1221, 1222, with the respective values acquired (see (3-2), (4-2) in FIG. 6). 
FIG.7is a diagram describing an example of communication between nodes.FIGS.8and9are diagrams describing examples of a result of the graph calculation by the degree centrality calculation unit132. In the example inFIG.7, a flow with a number “α” of packets and a number “β” of bytes is transmitted from a node with an IP address “x” to a node with an IP address “y” via a port “a”. A flow with a number “γ” of packets and a number “δ” of bytes is transmitted from the node with the IP address “x” to a node with an IP address “z” via a port “b”. When node granularity is an IP address, with respect to the node “x”, since the node is the source of the packets, the degree centrality calculation unit132updates the number of edges to “2”, the 5-tuple flows to “2”, the number of packets to “α+γ”, and the number of bytes to “β+δ” under the outdegree in the row of the node “x” in the graph table1221, as shown inFIG.8. With respect to the node “y”, since the node is a destination of the packets, the degree centrality calculation unit132updates the number of edges to “1”, the 5-tuple flows to “1”, the number of packets to “α”, and the number of bytes to “β” under the indegree in the row of the node “y” in the graph table1221. When node granularity is a combination of an IP address, a port number, and protocol information, as shown inFIG.9, the degree centrality calculation unit132updates the number of edges to “1”, the 5-tuple flows to “1”, the number of packets to “γ”, and the number of bytes to “δ” under the outdegree in the row of the node “(x, b, udp)” in the graph table1222. With respect to the node “(y, d, udp)”, since the node is a destination of the packets, the degree centrality calculation unit132updates the number of edges to “1”, the 5-tuple flows to “1”, the number of packets to “α”, and the number of bytes to “β” under the indegree in the row of the node “(y, d, udp)” in the graph table1222. 
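The table updates walked through above can be sketched in a few lines. This is a minimal illustration rather than the patent's implementation: flow records are assumed to be dicts with src_ip/dst_ip/port/proto/packets/bytes fields, and the FIG. 7 quantities α, β, γ, δ are given concrete example values (α=4, β=400, γ=2, δ=200).

```python
from collections import defaultdict

def build_graph_table(flows, granularity="ip"):
    """Aggregate one indegree/outdegree row per node from 5-tuple flow
    statistics, tracking edges (distinct peers), flows, packets, and bytes
    per direction, as in the graph tables 1221/1222."""
    def node(ip, port, proto):
        return ip if granularity == "ip" else (ip, port, proto)

    table = defaultdict(lambda: {
        d: {"peers": set(), "flows": 0, "packets": 0, "bytes": 0}
        for d in ("in", "out")})
    for f in flows:
        src = node(f["src_ip"], f["src_port"], f["proto"])
        dst = node(f["dst_ip"], f["dst_port"], f["proto"])
        for n, direction, peer in ((src, "out", dst), (dst, "in", src)):
            row = table[n][direction]
            row["peers"].add(peer)
            row["flows"] += 1
            row["packets"] += f["packets"]
            row["bytes"] += f["bytes"]
    for rows in table.values():  # the number of edges = distinct peers
        for d in ("in", "out"):
            rows[d]["edges"] = len(rows[d].pop("peers"))
    return dict(table)

# FIG. 7 example: x -> y via port a (α packets, β bytes),
# x -> z via port b (γ packets, δ bytes).
flows = [
    {"src_ip": "x", "src_port": "a", "dst_ip": "y", "dst_port": "a",
     "proto": "udp", "packets": 4, "bytes": 400},
    {"src_ip": "x", "src_port": "b", "dst_ip": "z", "dst_port": "b",
     "proto": "udp", "packets": 2, "bytes": 200},
]
table = build_graph_table(flows, granularity="ip")
# node x outdegree: 2 edges, 2 flows, α+γ packets, β+δ bytes (FIG. 8)
```

Running the same flows with `granularity` set to the combination form would instead yield the per-(address, port, protocol) rows of FIG. 9.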
As described above, by using the graph table1221,1222, the degree centrality calculation unit132calculates, from the statistical value data D0 on the 5-tuple flows in the certain time period, a network graph that represents “from which address to which address a communication is performed, how many packets flow, and how much statistical value data is communicated”, or “from which (address, port, protocol) combination to which (address, port, protocol) combination a communication is performed, how many packets flow, and how much statistical value data is communicated”. Next, referring back toFIGS.1and4, the degree centrality impartation unit133will be described. The degree centrality impartation unit133imparts the degree centrality calculated by the degree centrality calculation unit132, as a feature, to the statistical value data D0 on the 5-tuple flows in the certain time period (see (B) inFIG.4). The degree centrality impartation unit133outputs, to the detection unit134, impartation data D1 to which the degree centrality related to 5-tuple values is imparted. Processing by the degree centrality impartation unit133will be described specifically with reference toFIG.10.FIG.10is a diagram describing the processing by the degree centrality impartation unit133. As shown inFIG.10, the degree centrality impartation unit133extracts information on a node under impartation of a feature, from the statistical value data D0 on the 5-tuple flows in the certain time period (see (1) inFIG.10). The degree centrality impartation unit133then selects either the graph table1221or the graph table1222, depending on the granularity of the node under impartation. When the granularity is an IP address, the degree centrality impartation unit133selects the graph table1221, and accesses, in the graph table1221, a row in which the extracted node is stated (see (2-1) inFIG.10). 
When the granularity is a combination of an IP address, a port number, and protocol information, the degree centrality impartation unit133selects the graph table1222, and accesses, in the graph table1222, a row in which the extracted node is stated (see (2-2) inFIG.10). Subsequently, when the extracted node is a destination IP address (dst_ip), the degree centrality impartation unit133retrieves each field under the indegree in the accessed row (see (3-1), (4-1) inFIG.10). When the extracted node is a source IP address (src_ip), the degree centrality impartation unit133retrieves each field under the outdegree in the accessed row (see (3-2), (4-2) inFIG.10). The degree centrality impartation unit133then imparts the retrieved degree centrality, as a feature, to the statistical value data D0 on the 5-tuple flows (see (5) inFIG.10). Next, referring back toFIG.1, the detection unit134will be described. The detection unit134detects, based on the impartation data D1, which is the statistical value data D0 on the 5-tuple flows in the certain time period to which the degree centrality related to 5-tuple values is imparted, whether or not the flows in the certain time period have abnormality. For example, by using machine learning or the like, the detection device10pre-learns statistical value data on flows in a certain time period and a characteristic of a degree centrality imparted to the statistical value data on the 5-tuple flows in the certain time period at a normal time. The detection unit134compares, with a result of the learning, the statistical value data on the 5-tuple flows in the certain time period under analysis and the degree centrality imparted to the statistical value data on the flows in the certain time period, and thereby detects whether or not the 5-tuple flows under analysis have abnormality. 
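The impartation step above amounts to a lookup-and-join: for each flow record, the source node's outdegree fields and the destination node's indegree fields are copied in as extra features. A minimal sketch with assumed field names and a hand-built graph table:

```python
def impart_degree_features(flow, graph_table, granularity="ip"):
    """Return a copy of one flow's statistical record with the source
    node's outdegree fields and the destination node's indegree fields
    imparted as extra features (field names are assumptions)."""
    def node(ip, port):
        return ip if granularity == "ip" else (ip, port, flow["proto"])
    feats = dict(flow)
    lookups = (("out", node(flow["src_ip"], flow["src_port"])),
               ("in", node(flow["dst_ip"], flow["dst_port"])))
    for direction, key in lookups:
        for name, value in graph_table.get(key, {}).get(direction, {}).items():
            feats["{}_{}".format(direction, name)] = value
    return feats

# Hand-built graph table rows for the FIG. 7 nodes (IP granularity).
graph_table = {
    "x": {"out": {"edges": 2, "flows": 2, "packets": 6, "bytes": 600}, "in": {}},
    "y": {"in": {"edges": 1, "flows": 1, "packets": 4, "bytes": 400}, "out": {}},
}
flow = {"src_ip": "x", "src_port": "a", "dst_ip": "y", "dst_port": "a",
        "proto": "udp", "packets": 4, "bytes": 400}
enriched = impart_degree_features(flow, graph_table)
```

The enriched record corresponds to the impartation data D1 passed on to the detection unit134.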
For example, baselines for statistical value data on 5-tuple flows in a certain time period and a corresponding degree centrality at a normal time are set in the detection device10, by learning, in machine learning or the like, statistical value data on 5-tuple flows in a certain time period and a characteristic of a degree centrality imparted to the statistical value data on the 5-tuple flows in the certain time period at a normal time. When the statistical value data on the 5-tuple flows under analysis and the imparted degree centrality deviate from the baselines by predetermined values or more, the detection unit134determines abnormality and detects an attack. [Procedure of Detection Processing by Detection Device] Next, a processing procedure of detection processing by the detection device10will be described.FIG.11is a flowchart showing the processing procedure of the detection processing according to the embodiment. As shown inFIG.11, in the detection device10, the statistic unit131acquires statistical value data on 5-tuple statistical flows, with respect to received packets buffered for a certain time period (step S1). The degree centrality calculation unit132calculates a degree centrality of a node, based on the statistical value data on the 5-tuple flows in the certain time period acquired by the statistic unit131(step S2). The degree centrality impartation unit133imparts the degree centrality calculated by the degree centrality calculation unit132, as a feature, to the statistical value data on the 5-tuple flows in the certain time period (step S3). Based on the impartation data with the imparted degree centrality related to 5-tuple values, the detection unit134detects whether or not the flows in the certain time period have abnormality (step S4), and transmits a result of the detection to a countermeasure device. 
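As a stand-in for the learned model, the baseline comparison described above can be illustrated with simple per-feature thresholds; the feature names, baseline values, and predetermined deviation values here are assumptions for illustration:

```python
def detect_abnormality(enriched, baseline, deviation):
    """Compare an enriched flow record with the normal-time baselines and
    report every feature deviating by the predetermined value or more."""
    deviating = [name for name, base in baseline.items()
                 if abs(enriched.get(name, 0) - base) >= deviation[name]]
    return len(deviating) > 0, deviating

baseline  = {"packets": 100, "out_edges": 3}   # learned at a normal time
deviation = {"packets": 50, "out_edges": 5}    # predetermined values
is_attack, which = detect_abnormality(
    {"packets": 400, "out_edges": 4}, baseline, deviation)
```

A result of the detection like this would then be transmitted to a countermeasure device, as in step S4 of FIG. 11.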
Effects of Embodiment As described above, the detection device10according to the embodiment calculates a degree centrality of a node including an indegree, which is a total sum of weights of edges flowing into the node, and an outdegree, which is a total sum of weights of edges flowing out of the node, based on statistical value data on 5-tuple flows in a certain time period, and imparts the degree centrality, as a feature, to the statistical value data on the 5-tuple flows. Specifically, with respect to a certain node, the detection device10calculates, as a degree centrality, an outdegree that represents the number of communication-destination nodes with which the node communicates, the number of packets sent out by the node, the number of bytes sent by the node, and the number of flows sent out by the node, and an indegree that represents the number of source nodes that communicate with the node, the number of packets received by the node, the number of bytes received by the node, and the number of flows received by the node. In other words, the detection device10calculates a degree centrality related to a network structure, as a feature. Accordingly, by imparting the degree centrality related to the network structure, as a feature, to the statistical value data on the 5-tuple flows, the detection device10can achieve attack detection based on a network-structure perspective, whereby accuracy in DDoS attack detection can be enhanced. Node granularity is an IP address or a combination of an IP address, a port number, and protocol information, and the detection device10stores in advance the graph table1221, in which an IP address of a node is associated with an indegree and an outdegree of the node, and the graph table1222, in which a combination of an IP address, a port number, and protocol information of a node is associated with an indegree and an outdegree of the node. 
Accordingly, the detection device10can calculate a degree centrality appropriately by selecting the graph table1221or the graph table1222, depending on the granularity of a node under analysis, and calculating the degree centrality by using the selected table. Example 1 The functions of the detection device10according to the present embodiment may be deployed in a distributed manner among a plurality of devices in a communication system.FIG.12is a diagram describing an example 1 of the embodiment. As shown inFIG.12, the statistic unit131, the degree centrality calculation unit132, and the degree centrality impartation unit133(not shown) may be provided to a router10A, the router10A may perform acquisition of statistical data on 5-tuple statistical flows and calculation of a degree centrality, and a server10B may perform abnormality detection based on the impartation data D1. Example 2 FIG.13is a diagram describing an example 2 of the embodiment. As shown inFIG.13, the statistic unit131may be provided to a router10C, and the degree centrality calculation unit132, the degree centrality impartation unit133(not shown), and the detection unit134(not shown) may be provided to a server10D. In such a case, the router10C performs acquisition of statistical data on 5-tuple statistical flows, and the server10D performs calculation of a degree centrality and abnormality detection. Example 3 FIG.14is a diagram describing an example 3 of the embodiment. As shown inFIG.14, a router20may send a header sample to a server10E, and the server10E may perform acquisition of statistical data on 5-tuple statistical flows and calculation of a degree centrality. A different server10F from the server10E performs abnormality detection based on the impartation data D1. [System Configuration and the Like] Each component of each device depicted in the drawings is of a functional concept, and does not necessarily need to be physically configured as depicted in the drawings. 
In other words, a specific distributed or integrated form of each device is not limited to those depicted in the drawings, and an entirety or part of each device can be functionally or physically configured in a distributed or integrated manner in arbitrary units, depending on various loads, usage situations, and the like. Moreover, all, or any one or some, of the functions for the processing performed in each device can be implemented by a CPU and a program that is analyzed and executed by the CPU, or can be implemented by wired logic-based hardware. Of the processing steps described in the present embodiment, all, or one or some, of processing steps described as being automatically performed may be manually performed, or all, or one or some, of processing steps described as being manually performed may be automatically performed by using a well-known method. In addition, the information described in the above description and the drawings, including processing procedures, control procedures, specific names, and various data and parameters, can be arbitrarily changed unless otherwise specified. [Program] FIG.16shows an example of a computer by which a program is executed and the detection device10is thereby implemented. The computer1000includes, for example, a memory1010and a CPU1020. Moreover, the computer1000includes a hard disk drive interface1030, a disk drive interface1040, a serial port interface1050, a video adapter1060, and a network interface1070. Such components are connected to each other by a bus1080. The memory1010includes a ROM (Read Only Memory)1011and a RAM1012. For example, the ROM1011stores a boot program such as BIOS (Basic Input Output System). The hard disk drive interface1030is connected to a hard disk drive1090. The disk drive interface1040is connected to a disk drive1100. For example, a removable storage medium such as a magnetic disk or an optical disk is inserted into the disk drive1100. 
The serial port interface1050is connected to, for example, a mouse1110and a keyboard1120. The video adapter1060is connected to, for example, a display1130. The hard disk drive1090stores, for example, an OS (Operating System)1091, an application program1092, a program module1093, and program data1094. In other words, the program defining each processing step in the detection device10is implemented as the program module1093in which computer-executable codes are described. The program module1093is stored, for example, in the hard disk drive1090. For example, the program module1093for execution of processing similar to the functional components of the detection device10is stored in the hard disk drive1090. Note that the hard disk drive1090may be substituted by an SSD (Solid State Drive). Setting data used in the processing in the embodiment described above is stored as the program data1094, for example, in the memory1010or the hard disk drive1090. The CPU1020reads and executes on the RAM1012, the program module1093or the program data1094stored in the memory1010or the hard disk drive1090when necessary. Note that the program module1093and the program data1094are not limited to being stored in the hard disk drive1090, but may be stored, for example, in a removable storage medium and may be read by the CPU1020via the disk drive1100or the like. Alternatively, the program module1093and the program data1094may be stored in another computer connected through a network (LAN, WAN (Wide Area Network), or the like). Then, the program module1093and the program data1094may be read from the other computer by the CPU1020via the network interface1070. Although embodiments to which the invention made by the present inventor is applied have been described hereinabove, the present invention is not limited by the description and the drawings based on the present embodiment that are part of the disclosure of the present invention. 
In other words, all other embodiments, examples, operational techniques, and the like devised based on the present embodiment by persons skilled in the art are incorporated in the scope of the present invention.

REFERENCE SIGNS LIST

10 Detection device
11 Communication unit
12 Storage unit
13 Control unit
131 Statistic unit
132 Degree centrality calculation unit
133 Degree centrality impartation unit
134 Detection unit
DETAILED DESCRIPTION Generally described, the present disclosure is directed to DNS query processing that provides for the use of various routing modes responsive to receipt of a DNS query corresponding to a requested resource. More specifically, aspects of the disclosure will be described with regard to processing resource requests associated with content providers having a flat-rate pricing for use of a CDN service provider's computing devices. In some embodiments, in response to a DNS query corresponding to a requested resource, a CDN service provider may select a less optimal POP to service the requested resource based on one or more criteria and thereby select a “sloppy routing” scheme. For example, the one or more criteria may correspond to aspects of a flat-rate pricing model offered to content providers by the CDN service provider to provide content on their behalf. Continuing with this example, in such an approach, if a content provider has exceeded a threshold network usage, which may, for example, be based at least in part on pricing information for the CDN service provider to provide content on behalf of the content provider, DNS query processing at a DNS server of the CDN service provider can include routing a response to the DNS query, in a “sloppy” routing manner and/or fashion, to a suboptimal POP by determining an alternative resource identifier or cache IP address at another POP (e.g., located at another edge server). Further, aspects of the disclosure will be described with regard to DNS query processing that can determine a suboptimal, or sloppy, routing approach to avoid costs associated with data links of cache servers providing the requested resources. Accordingly, the one or more criteria used by the CDN service provider may also correspond to aspects of the CDN service provider cost information. 
More specifically, the data links, provisioned by a CDN service provider, can correspond to a financial cost for the content delivery bandwidth available on the data links of the cache servers. This financial cost can be determined in relation to a threshold content delivery bandwidth. For example, if a current content delivery bandwidth exceeds the threshold content delivery bandwidth, the CDN service provider incurs greater costs. In various embodiments, responses to DNS queries, such as an alternative resource identifier or a cache IP address, can be sloppy routed to another POP location with associated data links of cache servers operating below the threshold content delivery bandwidth. In one embodiment, the one or more criteria for selecting a “sloppy routing” scheme includes a latency associated with providing requested resources for the content provider. In various other embodiments, other criteria that may affect the selection of sloppy routing for the response to a DNS query can include: optimizing content accessibility via hashing algorithms, security concerns, favoring certain content providers (e.g., customers), or favoring certain content uses. Still further, aspects of the disclosure will be described with regard to DNS query processing for sloppy routing schemes using one or more criteria. The one or more criteria may include both the threshold network usage and the threshold content delivery bandwidth. For example, in such multi-criterion approach, the latency associated with routing a response of a DNS query to a suboptimal POP is considered in combination with the marginal cost to service a content request at a data link operating above the content delivery bandwidth threshold. 
Accordingly, a content provider that has exceeded a threshold network usage can be routed in a sloppy manner during a peak time, when other available data links of cache servers located at another POP are available at a lower cost or no cost because the data links of those cache servers are operating under the threshold content delivery bandwidth. Thus, a DNS server can use the one or more criteria to determine whether to use a suboptimal POP instead of an optimal or original POP. Further aspects of the disclosure will be described with regard to determining an appropriate routing mode for providing a requested resource. In various embodiments, a spectrum of routing modes are available to the CDN service provider for use in responding to DNS queries. The routing modes can include: a default routing mode, a sloppy routing mode, a regional anycast routing mode, an anycast routing mode, and a “follow-the-moon” routing mode. (The “follow-the-moon” routing mode is described in U.S. patent application Ser. No. 14/229,568, titled “Scaling Computing Instances” and filed on Mar. 28, 2014, the entirety of which is incorporated herein by reference). The CDN service provider may determine a routing mode for providing a requested resource from a plurality of available routing modes based on one or more criteria. For example, the one or more criteria may include the network usage associated with the content provider, the content delivery bandwidth associated with the CDN service provider, a susceptibility factor associated with the content provider, or a latency associated with a POP in providing the requested resource. Additionally or alternatively, the one or more criteria may include one or more susceptibility factors and/or one or more latencies. 
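The multi-criterion decision described above — answer from the optimal POP while the content provider is under its network usage threshold, otherwise divert to a suboptimal POP whose data links are still under their content delivery bandwidth threshold — can be sketched as follows. The field names and the fallback behavior are illustrative assumptions, not the patent's prescribed logic:

```python
def choose_pop(pops, provider_usage, usage_threshold):
    """Pick the POP used to answer a DNS query. Under the content
    provider's network usage threshold, the lowest-latency (optimal)
    POP wins; over it, "sloppy" routing diverts to the least-latent
    POP whose data links are operating below their content delivery
    bandwidth threshold, falling back to the optimal POP otherwise."""
    by_latency = sorted(pops, key=lambda p: p["latency_ms"])
    optimal = by_latency[0]
    if provider_usage <= usage_threshold:
        return optimal
    for pop in by_latency:
        if pop["bandwidth"] < pop["bandwidth_threshold"]:
            return pop  # cheaper: this data link is under its threshold
    return optimal

pops = [
    {"name": "POP-116", "latency_ms": 10, "bandwidth": 95, "bandwidth_threshold": 90},
    {"name": "POP-122", "latency_ms": 25, "bandwidth": 40, "bandwidth_threshold": 90},
]
optimal = choose_pop(pops, provider_usage=10, usage_threshold=100)
sloppy = choose_pop(pops, provider_usage=150, usage_threshold=100)
```

In the sample data, a provider under its usage threshold is answered from the low-latency POP, while one over the threshold at a peak time is sloppy-routed to the under-threshold data link.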
For example, in one embodiment, the CDN service provider may determine that an anycast routing mode is appropriate if a susceptibility factor indicates that the anycast routing mode is a more appropriate routing mode when providing the requested resource. In another embodiment, the CDN service provider may determine that a default routing mode is appropriate if the one or more criteria indicate that a latency for providing the resource request is to be minimized. After selection of the routing mode, the CDN service provider may provide a response to the DNS query in accordance with the determined routing mode. In another example, a regional anycast routing mode can be determined as the appropriate routing mode when the one or more criteria indicate that a susceptibility factor (e.g., a security concern) associated with the plurality of available routing modes is higher for a default routing mode, than the regional anycast routing mode. Continuing with this example, the response to the DNS query may be routed in accordance with the regional anycast routing mode so that several cache servers could be used to service the request, thereby enhancing security. In contrast, in the default routing mode, a cache server can be a single cache server (e.g., an optimal cache server with minimal latency for providing the resource request). Thus, the CDN service provider can provide responses to DNS queries in accordance with one of a plurality of available routing modes based on a variety of criteria. Although various aspects of the disclosure will be described with regard to illustrative examples and embodiments, one skilled in the art will appreciate that the disclosed embodiments and examples should not be construed as limiting. FIG.1is a block diagram illustrative of content delivery environment100for the management and processing of content requests. 
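One way to picture the mode selection over the spectrum of routing modes is a small rule table over criteria flags. The flag names and the precedence below are purely illustrative assumptions; the patent does not prescribe this exact decision order:

```python
ROUTING_MODES = ("default", "sloppy", "regional_anycast", "anycast",
                 "follow_the_moon")

def select_routing_mode(criteria):
    """Choose among the available routing modes from criteria flags
    (a sketch; flag names and precedence are assumptions)."""
    if criteria.get("minimize_latency"):
        return "default"          # single optimal cache server
    if criteria.get("susceptibility_favors_anycast"):
        return "anycast"
    if criteria.get("security_concern"):
        return "regional_anycast" # several cache servers share the load
    if criteria.get("usage_exceeded") or criteria.get("bandwidth_exceeded"):
        return "sloppy"
    return "default"
```

For instance, a security concern alone yields the regional anycast mode, matching the example above where spreading requests over several cache servers enhances security.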
As illustrated inFIG.1, the content delivery environment100includes a number of client computing devices102(generally referred to as clients) for requesting content from a content provider and/or a CDN service provider. In an illustrative embodiment, the client computing devices102can correspond to a wide variety of computing devices including personal computing devices, laptop computing devices, hand-held computing devices, terminal computing devices, mobile devices, wireless devices, various electronic devices and appliances and the like. In an illustrative embodiment, the client computing devices102include necessary hardware and software components for establishing communications over a communication network108, such as a wide area network or local area network. For example, the client computing devices102may be equipped with networking equipment and browser software applications that facilitate communications via the Internet or an intranet. The client computing devices102may also include necessary hardware and software components for requesting content from network entities in the form of an originally requested resource that may include identifiers to two or more embedded resources that need to be requested. Further, the client computing devices102may include or be associated with necessary hardware and software components, such as browser software applications, plugins, scripts, etc., for fulfilling the original resource request and each embedded resource request. In other embodiments, the client computing devices102may be otherwise associated with an external proxy application or device, as well as any other additional software applications or software services, used in conjunction with requests for content. Although not illustrated inFIG.1, each client computing device102utilizes some type of local DNS resolver component, such as a DNS Name server, that generates the DNS queries attributed to the client computing device. 
In one embodiment, the local DNS resolver component may be provided by an enterprise network to which the client computing device102belongs. In another embodiment, the local DNS resolver component may be provided by an Internet Service Provider (ISP) that provides the communication network connection to the client computing device102. The content delivery environment100can also include a content provider104in communication with the client computing devices102via the communication network108. The content provider104illustrated inFIG.1corresponds to a logical association of one or more computing devices associated with a content provider. Specifically, the content provider104can include a web server component110corresponding to one or more server computing devices for obtaining and processing requests for content (such as Web pages) from the client computing devices102. The content provider104can further include an origin server component112and associated storage component114corresponding to one or more computing devices for obtaining and processing requests for network resources from the CDN service provider. One skilled in the relevant art will appreciate that the content provider104can be associated with various additional computing resources, such additional computing devices for administration of content and resources, DNS name servers, and the like. For example, although not illustrated inFIG.1, the content provider104can be associated with one or more DNS name server components that would be authoritative to resolve client computing device DNS queries corresponding to a domain of the content provider. Although the content delivery environment100is illustrated in a client-server configuration, one skilled in the relevant art will appreciate that the content delivery environment100may be implemented in a peer-to-peer configuration as well. 
With continued reference toFIG.1, the content delivery environment100can further include a CDN service provider106in communication with the client computing devices102and the content providers104via the communication network108. The CDN service provider106illustrated inFIG.1corresponds to a logical association of one or more computing devices associated with a CDN service provider. Specifically, the CDN service provider106can include a number of point of presence (“POP”) locations116and122that correspond to nodes on the communication network108. Each POP116and122includes a DNS server component118and124made up of a number of DNS server computing devices for resolving DNS queries from the client computers102. Each POP116and122also includes a resource cache component120and126made up of a number of cache server computing devices for storing resources from content providers and transmitting various requested resources to various client computers. The DNS server components118and124and the resource cache components120and126may further include additional software and/or hardware components that facilitate communications including, but not limited, load balancing or load sharing software/hardware components. In an illustrative embodiment, the DNS server components118and124and resource cache component120and126are considered to be logically grouped, regardless of whether the components, or portions of the components, are physically separate. Additionally, although the POPs116and122are illustrated inFIG.1as logically associated with the CDN service provider106, the POPs can be geographically distributed throughout the communication network108to serve the various demographics of client computing devices102. Additionally, one skilled in the relevant art will appreciate that the CDN service provider106can be associated with various additional computing resources, such additional computing devices for administration of content and resources, and the like. 
The CDN service provider106can further include a routing mode and POP selection service128, pricing data store130, and back-end processing service132. Illustratively, the routing mode and POP selection service128can implement various computational, statistical, or machine learning methods to route the response (e.g., an answer) to a DNS query received at the CDN service provider106(e.g., received at DNS server component118). For example, the routing mode and POP selection service128can determine an appropriate routing mode for the alternative resource identifier (e.g., a CNAME) associated with the second DNS server124to the second POP122of the CDN service provider106or for the IP address of a cache component in the resource cache126of the second POP122. The routing mode and POP selection service128may include different modules or components, which may facilitate or implement various methods and processes described herein. Further, these modules or components may include additional components, systems, and subsystems for facilitating the methods and processes. Pricing data store130can include pricing information that indicates a price at which the CDN provider106provides content on behalf of the content provider104. Pricing data store130can, additionally or alternatively, include cost information indicating a financial cost of content delivery bandwidth for the CDN service provider106(e.g., the costs to operate provisioned data links at cache servers). For example, in some embodiments, the pricing information can include a flat-rate price for monthly service for the content provider104by CDN provider106. Illustratively, back-end processing service132can include a number of hardware and software components. More specifically, the back-end processing service132may include hardware, software, configuration data, data structures, computer-readable code, or any type of information that can be loaded into memory and processed by back-end processing service132. 
Aspects of the back-end processing service132will be described in further detail below with respect toFIG.4that illustrates the processing and storing service provided by back-end processing service132. In various embodiments, reference to the routing mode and POP selection service128and back-end processing service132within the present disclosure may include multiple computing devices working in conjunction to facilitate the selecting of alternative resource identifiers or cached IP addresses at alternative POP locations to service content requests. For example, in various embodiments, the routing mode and POP selection service128may be distributed through a network or implemented by one or more virtual machine instances. Additionally or alternatively, it can be appreciated by one skilled in the art that the routing mode and POP selection service128and back-end processing service132may correspond to a combination thereof and/or include any other services, be centralized in one computing device, and/or be distributed across several computing devices. With reference now toFIGS.2-6, the interaction between various components of the content delivery environment100ofFIG.1will be illustrated. For purposes of the example, however, the illustration has been simplified such that many of the components utilized to facilitate communications are not shown. One skilled in the relevant art will appreciate that such components can be utilized and that additional interactions would accordingly occur without departing from the spirit and scope of the present disclosure. With reference toFIG.2, an illustrative interaction for registration of a content provider104with the CDN service provider106will be described. As illustrated inFIG.2, the CDN content registration process begins with registration of the content provider104with the CDN service provider106. 
In an illustrative embodiment, the content provider104utilizes a registration application program interface (“API”) to register with the CDN service provider106such that the CDN service provider106can provide content on behalf of the content provider104. The registration API includes the identification of the origin server112of the content provider104that will provide requested resources to the CDN service provider106. One skilled in the relevant art will appreciate that upon identification of appropriate origin servers112, the content provider104can begin to direct requests for content from client computing devices102to the CDN service provider106. Specifically, in accordance with DNS routing principles, a client computing device request corresponding to a resource identifier would eventually be directed toward a POP116and122associated with the CDN service provider106. In the event that the resource cache component120and126of a selected POP does not have a copy of a resource requested by a client computing device102, the resource cache component will request the resource from the origin server112previously registered by the content provider104. With continued reference toFIG.2, upon receiving the registration API, the CDN service provider106obtains and processes the registration information. In an illustrative embodiment, the CDN service provider106can then generate additional information that will be used by the client computing devices102as part of the content requests. The additional information can include, without limitation, client identifiers, such as client identification codes, content provider identifiers, such as content provider identification codes, executable code for processing resource identifiers, such as script-based instructions, and the like. 
One skilled in the relevant art will appreciate that various types of additional information may be generated by the CDN service provider106and that the additional information may be embodied in any one of a variety of formats. The CDN service provider106returns an identification of applicable domains for the CDN service provider (unless it has been previously provided) and any additional information to the content provider104. In turn, the content provider104can then process the stored content with content provider specific information. In one example, as illustrated inFIG.2, the content provider104translates resource identifiers originally directed toward a domain of the origin server112to a domain corresponding to the CDN service provider. The translated URLs are embedded into requested content in a manner such that DNS queries for the translated URLs will resolve to a DNS server corresponding to the CDN service provider106and not a DNS server corresponding to the content provider104. Although the translation process is illustrated inFIG.2, in some embodiments, the translation process may be omitted in a manner described in greater detail below. Generally, the identification of the resources originally directed to the content provider104will be in the form of a resource identifier that can be processed by the client computing device102, such as through a browser software application. In an illustrative embodiment, the resource identifiers can be in the form of a uniform resource locator (“URL”). Because the resource identifiers are included in the requested content directed to the content provider104, the resource identifiers can be referred to generally as “content provider URLs.” For purposes of an illustrative example, a content provider URL can identify a domain of the content provider104(e.g., contentprovider.com), a name of the resource to be requested (e.g., “resource.xxx”) and a path where the resource will be found (e.g., “path”). 
In this illustrative example, the content provider URL has the form of:http://www.contentprovider.com/path/resource.xxx During an illustrative translation process, the content provider URL is modified such that requests for the resources associated with the translated URLs resolve to a POP associated with the CDN service provider106. In one embodiment, the translated URL identifies the domain of the CDN service provider106(e.g., “cdnprovider.com”), the same name of the resource to be requested (e.g., “resource.xxx”) and the same path where the resource will be found (e.g., “path”). Additionally, the translated URL can include additional processing information (e.g., “additional information”). The translated URL would have the form of:http://additional_information.cdnprovider.com/path/resource.xxx In another embodiment, the information associated with the CDN service provider106is included in the modified URL, such as through prepending or other techniques, such that the translated URL can maintain all of the information associated with the original URL. In this embodiment, the translated URL would have the form of:http://additional_information.cdnprovider.com/www.contentprovider.com/path/resource.xxx With reference now toFIG.3, after completion of the registration and translation processes illustrated inFIG.2, a client computing device102subsequently generates a content request that is received and processed by the content provider104, such as through the Web server110. In accordance with an illustrative embodiment, the request for content can be in accordance with common network protocols, such as the hypertext transfer protocol (“HTTP”). Upon receipt of the content request, the content provider104identifies the appropriate responsive content. 
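As a non-authoritative sketch of the translation process above (the function name and argument names are illustrative, not taken from the specification), the prepending variant that preserves all of the original URL's information can be expressed as:

```python
def translate_url(original_url: str, cdn_domain: str, additional_info: str) -> str:
    """Translate a content provider URL so that DNS queries for it resolve to
    the CDN service provider's domain, prepending the CDN-specific additional
    information while keeping all information from the original URL."""
    prefix = "http://"
    # Illustrative sketch: handles only the http form shown in the example above.
    assert original_url.startswith(prefix)
    remainder = original_url[len(prefix):]  # e.g., www.contentprovider.com/path/resource.xxx
    return f"{prefix}{additional_info}.{cdn_domain}/{remainder}"
```

With the example values above, `translate_url("http://www.contentprovider.com/path/resource.xxx", "cdnprovider.com", "additional_information")` produces the prepended translated URL form.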
In an illustrative embodiment, the requested content can correspond to a Web page that is displayed on the client computing device102via the processing of information, such as hypertext markup language (“HTML”), extensible markup language (“XML”), and the like. The requested content can also include a number of embedded resource identifiers, described above, that correspond to resource objects that should be obtained by the client computing device102as part of the processing of the requested content. The embedded resource identifiers can be generally referred to as original resource identifiers or original URLs. Upon receipt of the requested content, the client computing device102, such as through a browser software application, begins processing any of the markup code included in the content and attempts to acquire the resources identified by the embedded resource identifiers. Accordingly, the first step in acquiring the content corresponds to the issuance, by the client computing device102(through its local DNS resolver), of a DNS query for the original URL resource identifier that results in the identification of a DNS server authoritative to the “.” and the “com” portions of the translated URL. After resolving the “.” and “com” portions of the embedded URL, the client computing device102then issues a DNS query for the resource URL that results in the identification of a DNS server authoritative to the “.cdnprovider” portion of the embedded URL. The issuance of DNS queries corresponding to the “.” and the “com” portions of a URL is well known and has not been illustrated. With reference now toFIG.4, in an illustrative embodiment, after completion of the registration and translation processes illustrated inFIG.2, the successful resolution of the “cdnprovider” portion of the original URL identifies a network address, such as an IP address, of a DNS server component118associated with the CDN service provider106. 
In one embodiment, the IP address is a specific network address unique to a DNS server component118of POP116. In another embodiment, the IP address can be shared by one or more POPs116,122. In this embodiment, a DNS query to the shared IP address utilizes a one-to-many network routing schema, such as anycast, such that a specific POP, such as POP116, will receive the request as a function of network topology. For example, in an anycast implementation, a DNS query issued by a client computing device102to a shared IP address will arrive at a DNS server component logically having the shortest network topology distance, often referred to as network hops, from the client computing device. The network topology distance does not necessarily correspond to geographic distance. However, in some embodiments, the network topology distance can be inferred to be the shortest network distance between a client computing device102and a POP. With continued reference toFIG.4, in either of the above-identified embodiments (or any other embodiment), a specific DNS server in the DNS component118of a POP116receives the DNS query corresponding to the original URL from the client computing device102. Once one of the DNS servers in the DNS component118receives the request, the specific DNS server attempts to resolve the request. In an illustrative embodiment, a specific DNS server can resolve the DNS query by selecting an IP address of a resource cache component that will process the request for the requested resource. Alternatively, in another embodiment, as will be described further below in reference toFIGS.5A and5B, an alternative resource identifier (e.g., a CNAME) associated with another DNS server component of the CDN service provider106may be selected and returned to the client computing device102. 
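The anycast behavior described above — the DNS query arriving at the POP with the fewest network hops from the client — can be sketched as a simple selection over hop counts (the hop-count inputs and names are assumptions for illustration, not part of the disclosure):

```python
def select_pop_by_hops(hop_counts: dict) -> str:
    """Return the POP whose network topology distance (in network hops)
    from the client computing device is smallest; note that this distance
    does not necessarily correspond to geographic distance."""
    return min(hop_counts, key=hop_counts.get)
```

For example, a client four hops from POP116 and seven hops from POP122 would have its query arrive at POP116.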
In either case, as will also be further described below, the CDN service provider106can implement various methods and systems to select a routing mode for an IP address of a cache server component of the CDN service provider106or an alternative resource identifier associated with another DNS server component of the CDN service provider106, such as via the routing mode and POP selection service128as generally discussed above. Returning to the embodiment ofFIG.4specifically, the DNS server component118processes the DNS query in part by selecting an IP address of a resource cache component of the CDN service provider106based at least in part on one or more criteria. The one or more criteria can include aspects of a flat-rate pricing model offered by the CDN service provider106and aspects of CDN service provider cost information. Accordingly, a threshold network usage and a threshold content delivery bandwidth can be used as criteria during DNS query processing to select an IP address of the resource cache component. To evaluate these thresholds, the data links of the resource cache components120and126are associated with a throughput capability that can be measured. For example, the throughput capability of one data link at a single cache server of resource cache component120(or collectively of several data links at resource cache components120and126) can include a measurement of the available bandwidth, used bandwidth (e.g., network usage), or other metrics that may evaluate the performance of networked components such as data links associated with resource cache components. In some instances, these measurements of bandwidth can be evaluated as a percentile (e.g., a 95th percentile metric) with reference to other data links at a specific resource cache component or several resource cache components, aggregated for use as a statistical metric. 
With these metrics, the DNS server component118then processes the DNS query and determines whether a sloppy routing scheme may be used if a certain metric falls above or below a certain threshold (e.g., a threshold network usage of content provider104). Accordingly, one or more criteria corresponding to metrics of the data links at the resource cache components120and126may be used to determine whether a sloppy routing scheme should be used. In one embodiment of sloppy routing using the CDN service provider cost information as the one or more criteria, the routing mode and POP selection service128of the CDN service provider106can determine to route the response to a DNS query (e.g., a content request) to a different DNS server associated with an alternative resource identifier or a cache IP address that, in this example, is not the optimal service location for that DNS query. In various embodiments, the CDN service provider106can determine a suboptimal routing approach to avoid costs associated with data links of the resource cache components120and126servicing the content requests. In this approach, the CDN service provider106can provision the data links (e.g., hardware) necessary at the resource cache components120and126based on the available content delivery bandwidths of the resource cache components120and126or the DNS query processing of DNS server components118and124for servicing the requested resources. These data links, not illustratively depicted inFIG.3, operating via network108, cost the CDN service provider106, in various embodiments, on a per-data-link basis. The costs of these data links may vary in various scenarios. In some embodiments, the cost of the data links can be determined based on the DNS query processing of the DNS server components118and124or the available content delivery bandwidth at resource cache components120and126. 
The DNS server components118and124and resource cache components120and126can be located on edge servers at the POPs116and122located within network108. More specifically, the cost of these data links at resource cache components120and126can correspond to a threshold content delivery bandwidth for a particular data link (or all of the data links). In an illustrative example, the CDN service provider106may incur an additional cost when a data link at the DNS server component118is used at a 95th percentile capacity of request handling at one or more cache servers of resource cache component120for a certain period of time corresponding to a time bucket. Time buckets may be used to determine the cost of operating the cache server above the threshold content delivery bandwidth. For example, a five-minute time bucket of operating above the threshold content delivery bandwidth can correspond to a certain cost; a thirty-minute time bucket, an even higher cost. If, on the other hand, the request handling at one or more cache servers of resource cache component120operates below that percentile, the CDN service provider106may incur no cost or a lesser cost. Thus the CDN service provider106can route various content requests to different data links of alternative cache servers (e.g., cache servers at resource cache component126) based on the request handling capacity at cache servers of resource cache component120operating below or above that cost percentile. In another illustrative example, if the data link at resource cache component120is operating at the 98th percentile, the CDN service provider106can determine that another resource cache component126, operating only at a 50th percentile, may include alternative cache servers to handle the content requests because even this rerouted content request only raises the request handling at the cache servers of resource cache component126to the 55th percentile. 
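One way to read the time-bucket costing above is as a per-bucket charge for each full interval a data link spends over the threshold; the function name, units, and rate below are hypothetical, chosen only to make the example above concrete:

```python
def time_bucket_cost(minutes_above_threshold: int, bucket_minutes: int, rate_per_bucket: float) -> float:
    """Charge one bucket for every full `bucket_minutes` interval that a data
    link operates above the threshold content delivery bandwidth; longer time
    above the threshold therefore accumulates a higher cost, and time below
    the threshold incurs none."""
    full_buckets = minutes_above_threshold // bucket_minutes
    return full_buckets * rate_per_bucket
```

Under this sketch, a thirty-minute period above the threshold costs six five-minute buckets, while four minutes above it costs nothing, mirroring the five-minute versus thirty-minute bucket comparison above.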
With this rerouting approach, eventually, the overloaded 98th-percentile data link at resource cache component120may fall below the threshold, for example, now operating at the 90th percentile. With this approach, the CDN service provider106has now lowered its cost by operating below the threshold content delivery bandwidth at both data links of resource cache components120and126. This can be referred to as a sloppy routing mode. In this continued example, the CDN service provider106can determine an alternative DNS server associated with an alternative resource identifier (as depicted inFIGS.5A and5B) or a cache IP address (as depicted inFIG.4), both at an alternative POP, where the responses to the DNS queries can be routed. In some embodiments, the threshold content delivery bandwidth (e.g., the cost percentile at which the CDN service provider106incurs costs) can be stored in the pricing data store130. The corresponding time buckets at which the CDN service provider106incurs costs operating above the threshold content delivery bandwidth can also be stored in the pricing data store130. The threshold content delivery bandwidth of data links at cache servers can correspond to peak request handling for resource cache components (e.g., cache servers). In various embodiments, the CDN service provider106can use the back-end processing service132to process stored historical data to determine whether a particular POP exceeds the threshold content delivery bandwidth during regular daily intervals, seasonally, or on a monthly basis. Back-end processing service132may use various methods associated with DNS query processing to track the DNS queries at DNS servers or the content delivery bandwidth of data links associated with the resource cache components, located at various POPs (e.g., on edge servers) connected via the network108. 
In another illustrative example of sloppy routing, the CDN service provider106can determine to route the response of a DNS query (e.g., a content request) to a cache IP address that is not the optimal location for that content request because the content provider104has exceeded a threshold network usage. The CDN service provider106can determine a pricing structure for the content provider104; for example, the content provider104can be charged a flat-rate price for network usage. This may allow the content provider104to determine that the CDN service provider106is cost efficient because a flat-rate price is charged for network bandwidth. However, with this predictability, the CDN service provider106can determine that some content providers exceed the threshold network usage predicted for a determined flat-rate price. When the content provider104has exceeded the threshold network usage associated with the flat-rate price, the CDN service provider106can use a sloppy routing approach to reroute content requests to a suboptimal location. In some embodiments, this approach provides a balanced or reasonable latency: not the optimal latency of a default routing approach that determines the optimal cache IP address to service a DNS query, but also not the worst latency that a content provider104might find using an anycast routing approach. Thus the sloppy routing mode, in these embodiments, balances the latency for the content provider104(e.g., a customer) with the flat-rate price that the content provider104has paid for a corresponding network usage at that flat-rate price. 
In one embodiment, the content provider104can be on a monthly network usage plan: The content provider104is routed to the optimal location in the default routing approach; but, once the customer has exceeded the threshold network usage for the month, the routing mode and POP selection service128determines that the sloppy routing mode can be used to route a response to an alternative DNS server (which may be identified with an alternative resource identifier as depicted inFIG.5A) or an IP address of a resource cache component at another POP location (e.g., the resource cache126at the second POP122as depicted inFIG.3). In another embodiment, once the content provider104has exceeded the threshold network usage, the content provider104may be rerouted automatically for a specific time of day to an alternative DNS server (e.g., the DNS server124) or an IP address of a cache component at another POP location. For example, multiple content requests for movies at 8 p.m. on Friday night can be rerouted to a suboptimal location, if that specific content provider104has already exceeded their threshold network usage for that month. In various embodiments, the back-end processing service132can retrieve from pricing data store130various prices for the content provider104and costs of the CDN service provider106. With this data, the back-end processing service132can process the tracked behavior of the network usage for the content provider104with various monitoring mechanisms and/or approaches used by the CDN service provider106. For example, the CDN service provider106may keep records of the network usage of various content providers104on a daily basis. In various other approaches, the CDN service provider106can store the pricing structure for various content providers104in the pricing data store130. 
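The monthly-plan behavior described above reduces to a small decision: default (optimal) routing until the content provider exceeds its threshold network usage for the month, then the sloppy routing mode. The sketch below uses assumed names and gigabyte units, which the specification does not prescribe:

```python
def choose_routing_mode(monthly_usage_gb: float, threshold_gb: float) -> str:
    """Default routing to the optimal location until the content provider
    exceeds its threshold network usage for the month; afterwards, the
    sloppy routing mode routes responses to an alternative DNS server or
    a cache IP address at another POP."""
    return "default" if monthly_usage_gb <= threshold_gb else "sloppy"
```

A provider at 80 GB of a 100 GB threshold stays on default routing; at 120 GB it is sloppy routed for the remainder of the month.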
For example, the pricing structure can include a graduated pricing structure for a new customer (e.g., another content provider104) or a discounted pricing structure for a loyal customer (e.g., another content provider104). Using both the pricing data store130and the back-end processing service132, the CDN service provider106can determine the network usage of various content providers104. In another illustrative example, the DNS server component118may use a combination of the one or more criteria to determine a sloppy routing scheme; for example, using both the threshold content delivery bandwidth and the threshold network usage for the content provider104. Continuing in this illustrative example, the CDN service provider106can determine that a flat-rate price corresponds to a marginal increase in cost for a particular data link because that data link corresponds to an optimal routing approach using DNS. Thus if a content provider104has exceeded their monthly network usage, the CDN service provider106can use sloppy routing to reroute the DNS query to another POP (e.g., either via an alternative resource identifier or via a resource cache component operated by that POP) that is operating under the cost percentile for that data link, which corresponds to a threshold content delivery bandwidth. In this combined approach, the CDN service provider106balances the cost of provisioning subsequent data links against the latency introduced to the content provider104for the use of servicing DNS queries. In an additional illustrative example of the sloppy routing mode not depicted inFIG.4, the DNS server component118may keep a list of available IP addresses in the resource cache component120corresponding to data links that the CDN service provider106is operating. 
If a request for content of the content provider104is received when content provider104has exceeded their threshold network usage for a month, DNS server component118can use the list of available IP addresses with their corresponding content delivery bandwidth percentiles at that particular time. For example, the data link at the first IP address may be operating at the 98th percentile because, in this example, it is the optimal IP address; the data link at the second IP address may be operating at the 94th percentile; and the data link at the third IP address may be operating at the 57th percentile. Because, in this example, content provider104has exceeded their threshold network usage, the routing mode and POP selection service128and back-end processing service132can determine that the third IP address can be used to service a DNS query, thereby minimizing the cost to operate the data links of CDN service provider106. This avoids using the second IP address when a marginal increase in DNS queries could push the data link at the second IP address to operate over the threshold content delivery bandwidth (e.g., the 95th percentile). Additionally, some traffic on the data link at the second IP address can be routed to the third IP address if CDN service provider106determines that the incremental latency experienced is minimal or if another content provider104operating their content requests on the data link at the second IP address suddenly exceeds their threshold network usage for the month. Further, in various sloppy routing schemes using a combination of the one or more criteria, the CDN service provider106can determine various permutations of the sloppy routing mode based on information regarding the pricing structure of a content provider104stored in the pricing data store130. 
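The worked example above (data links at the 98th, 94th, and 57th percentiles against a 95th-percentile threshold) can be sketched as follows; the headroom margin that guards the second IP address against a marginal increase in DNS queries is an assumed parameter, not a value from the specification:

```python
def select_cache_ip(candidates, threshold_pct, headroom_pct=5):
    """candidates: list of (ip, current_percentile) tuples, ordered from the
    optimal IP address to the least optimal. Skip any data link that is at,
    above, or within `headroom_pct` of the threshold content delivery
    bandwidth and return the first candidate with enough headroom; fall back
    to the least-loaded link if none qualifies."""
    for ip, pct in candidates:
        if pct + headroom_pct < threshold_pct:
            return ip
    return min(candidates, key=lambda c: c[1])[0]
```

With the example's values, the first link (98th) is over the threshold, the second (94th) could be pushed over it by a marginal increase, and the third (57th) is selected.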
In the same continued example from above, the latency incurred by a flat-rate price on the data link at the third IP address may be a greater cost in terms of latency for the content provider104when compared to the marginal cost incurred by the CDN service provider106when servicing that DNS query on the data link at the second IP address, which would have resulted in exceeding the threshold content delivery bandwidth for the CDN service provider106. Thus the routing mode and POP selection service128can include a determination that balances the latency for the content provider104operating above a threshold network usage with the cost incurred by operating a data link above the content delivery bandwidth threshold. With this approach in view, various embodiments are possible balancing the latency incurred for the content provider104against a particular cost incurred by CDN service provider106. Thus CDN service provider106can use a pricing structure stored in pricing data store130for content provider104that is based primarily on the latency that content provider104is willing to incur. Further still, this latency criterion can be related to the use case of the content request for that content provider104. For example, CDN service provider106can have a pricing structure for content provider104that charges more for HD video than for public information or text-based file formats. In some instances, as one of skill in the art can appreciate, HD video may incur greater latency than files with text-based formats or emails. In various embodiments, as one of skill in the art can appreciate, data can be collected by CDN service provider106regarding the content requests of the content provider104according to time of day, or month, or based on certain content requests. This data can be processed in the back-end processing service132. All of these factors can be used as the one or more criteria to determine a sloppy routing approach for the response to a particular DNS query. 
Various factors can be processed by back-end processing service132as will now be described herein. For example, in various embodiments, latency may not be the criterion to be optimized for sloppy routing; instead, content accessibility for the content provider104may be the criterion. In this sloppy routing approach using content accessibility as one of the one or more criteria, the CDN service provider106can additionally use hashing algorithms to determine a POP location with the lowest latency to service a particular DNS query. With hashing algorithms, the CDN service provider106can use the routing mode and POP selection service128to divide a certain number of POPs into stripes to be used in computing likelihoods of content availability. Then, a hashing algorithm can use the name of the content provider104with the DNS query to determine the likelihood that content is stored for that content provider104at a particular POP that would service the resource request with the lowest latency. In this approach, content requests are more likely to be serviced with less latency by using the POP having a higher likelihood of content stored at its resource cache component than others. In some embodiments, if the content provider104includes several requests for the same content, feedback can be used to indicate that any of the POPs may have an equal likelihood of having the content stored and thus offer nearly equivalent low latencies. Additional criteria can be used by routing mode and POP selection service128during DNS query processing to determine which POP locations or IP addresses to sloppy route for the response of a particular DNS query. Another of the one or more criteria for sloppy routing includes determining POP locations that can enhance security. 
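A minimal sketch of the hashing approach above, assuming the stripe is chosen by hashing the content provider's name together with the requested resource (the choice of SHA-256 and the key format are assumptions for illustration):

```python
import hashlib

def likely_pop(provider_name: str, resource: str, pops: list) -> str:
    """Hash the content provider's name and the requested resource into one
    of the POP stripes; the returned POP is the one most likely to already
    hold the content in its resource cache, so the request is more likely
    to be serviced with less latency."""
    key = f"{provider_name}/{resource}".encode()
    digest = int(hashlib.sha256(key).hexdigest(), 16)
    return pops[digest % len(pops)]
```

Because the mapping is deterministic, repeated requests for the same content from the same provider land on the same stripe, concentrating cache hits there.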
In this approach, routing mode and POP selection service128can determine that the second POP122is less secure, so that the list of available IP addresses at the resource cache component120no longer includes cache IP addresses associated with the second POP122or moves the cache IP addresses associated with the second POP122to the bottom of the list for servicing DNS queries. More specifically, routing mode and POP selection service128can use information that CDN service provider106receives from a look-up table of IP addresses that have security concerns. With this look-up table of IP addresses, routing mode and POP selection service128can compare that list with the available list of cache IP addresses for sloppy routing at DNS server component118to determine whether a particular IP address should be avoided for a content provider104that has increased security concerns (e.g., an increased susceptibility factor). In some embodiments, this may be additionally addressed by changing the routing mode. For example, the routing mode at routing mode and POP selection service128can be changed to a regional anycast routing mode or anycast routing mode for enhanced security. In some embodiments, CDN service provider106can financially charge the content provider104more to provide this enhanced security, especially if the content provider104requests secure connections for content requests (e.g., because content provider104is servicing DNS queries that include secure or financial transactions). Another of the one or more criteria for sloppy routing includes using a favored or biased approach for a particular content provider104: the CDN service provider106can determine that a certain content provider104is favored because it has been a customer of CDN service provider106for a long period of time (or pays CDN service provider106more than other content providers). 
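The look-up-table comparison above — dropping cache IP addresses with security concerns, or moving them to the bottom of the list — might be sketched as follows (names and the `drop_flagged` switch are illustrative assumptions):

```python
def order_ips_for_security(available_ips, flagged_ips, drop_flagged=False):
    """Compare the available cache IP list against a look-up table of IP
    addresses with security concerns; flagged addresses are either removed
    entirely (for providers with an increased susceptibility factor) or
    moved to the bottom of the list for servicing DNS queries."""
    safe = [ip for ip in available_ips if ip not in flagged_ips]
    if drop_flagged:
        return safe
    risky = [ip for ip in available_ips if ip in flagged_ips]
    return safe + risky
```

Relative order within the safe and flagged groups is preserved, so the original latency-based ordering still applies among the unflagged addresses.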
In one embodiment then, even though this favored or loyal customer has exceeded their threshold network usage, the routing mode and POP selection service128can determine that content provider104is not sloppy routed; instead, content provider104is provided the optimal IP address at DNS server118. This favored approach may also be used for new customers. For example, a new customer that has only exceeded their threshold network usage on day 20 of the month could still be provided the optimal IP address if the CDN service provider106determines that the marginal cost of servicing new customers with a lower latency, even though it incurs a greater cost for the data link, is less than the likelihood that the new customer may drop coverage or choose another CDN service provider. With this description, as one of skill in the art can appreciate, various approaches are possible to favor a certain content provider104over another based on preferences determined and processed by back-end processing service132. In some embodiments, this may include using historical data to analyze content provider104behavior based on: latency, price, security concerns, content accessibility, whether a content provider104is primarily downloading bulk data, whether a particular POP is a peer of the CDN service provider106network, or any other relevant factors that affect servicing a particular DNS query. Further, a combination of factors may be used to determine the alternative resource identifier associated with another POP location or cache IP address to be used when routing a DNS query. For example, CDN service provider106may determine that latency and susceptibility factors should be the only factors to be used when selecting a cache IP address from the list of available addresses at DNS server118. 
With further reference toFIG.4, upon selection of a specific cache server computing device (or a resource cache component120,126), the DNS server component118provides an IP address of the cache server computing device, resource cache component, or load balancer/load share device associated with a resource cache component. As will be described further below in reference toFIG.6, the client computing device102can then utilize Internet communication protocols to request the resource from a specific cache server computing device identified by the IP address. The cache server computing device then processes the request, as will also be described in greater detail below, to provide the resource to the client computing device102. With reference now toFIG.5A, in another embodiment, after completion of the registration and translation processes illustrated inFIG.2, a specific DNS server in the DNS server component118of the POP116receives the DNS query corresponding to the original URL from the client computing device102. Once one of the DNS servers in the DNS server component118receives the request, the specific DNS server attempts to resolve the request. In one illustrative embodiment, as described above and shown in reference toFIG.4, a specific DNS server resolves the DNS query by identifying an IP address of a cache server component that will process the request for the requested resource. As described above and as will be described further below in reference toFIG.6, a selected resource cache component can process the request by either providing the requested resource if it is available or attempting to obtain the requested resource from another source, such as a peer cache server computing device or the origin server112of the content provider104. 
Returning toFIG.5A, as an alternative to selecting a resource cache component upon receipt of a DNS query as described in reference toFIG.4, the CDN service provider106can process the DNS query to select another POP for further processing a subsequent DNS query associated with the originally requested resource. The selection of another POP can also be based, at least in part, on the same one or more criteria detailed above with respect to the selection of a resource cache component inFIG.4. In this embodiment, the CDN service provider106can maintain sets of various alternative resource identifiers. The alternative resource identifiers can be provided by the CDN service provider106to the client computing device102such that a subsequent DNS query on the alternative resource identifier will be processed by a different DNS server component within the CDN service provider's network. In an illustrative embodiment, the alternative resource identifiers are in the form of one or more canonical name (“CNAME”) records. In one embodiment, each CNAME record identifies a domain of the CDN service provider106(e.g., “cdnprovider.com” or “cdnprovider-1.com”). As will be explained in greater detail below, the domain in the CNAME does not need to be the same domain found in original URL or in a previous CNAME record. Additionally, each CNAME record includes additional information, such as request routing information (e.g., “request routing information”). An illustrative CNAME record can have the form of:

http://request_routing_information.cdnprovider.com/path/resource.xxx CNAME request_routing_information.cdnprovider.com

In an illustrative embodiment, the CNAME records are generated and provided by the DNS servers to direct the DNS query to a more appropriate DNS server of the CDN service provider106. As used in accordance with the present disclosure, appropriateness can be defined in any manner by the CDN service provider106for a variety of purposes.
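The CNAME form above can be sketched as a pair of helpers: one that composes an alternative resource identifier embedding request routing information under a CDN service provider domain, and one that recovers that routing information from a subsequent DNS query. The helper names are assumptions for illustration.

```python
# Hypothetical sketch of the alternative resource identifier (CNAME) form
# shown above. Function names and the single-label routing-info layout are
# illustrative assumptions, not part of the patent disclosure.

def build_alternative_identifier(routing_info, provider_domain):
    """Compose a CNAME-style alternative resource identifier that embeds
    request routing information under a CDN service provider domain."""
    return f"{routing_info}.{provider_domain}"

def parse_routing_info(cname, provider_domains):
    """Extract the request routing information label if the CNAME falls
    under one of the CDN service provider's domains; otherwise None.
    Note the provider domain need not match the original URL's domain."""
    label, sep, domain = cname.partition(".")
    if sep and domain in provider_domains:
        return label
    return None
```

A subsequent DNS query on the composed name would then be processed by whichever DNS server component the embedded routing information directs it to.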
In an illustrative embodiment, as will be described in greater detail below in reference toFIG.7, in addition to the one or more criteria noted above, the CDN service provider106can utilize domain information associated with the content provider104, at least in part, to identify the more appropriate DNS server of the CDN service provider106. In particular, the CDN service provider106can use the domain information in the DNS query to identify the content provider104, and in turn, identify a current and threshold network usage for the identified content provider104. As noted above, the threshold network usage for a content provider can be determined based, at least in part, on pricing information for the CDN service provider to provide content on behalf of the content provider104. Specifically, as one example, a content provider may pay a flat fee for unlimited network usage of the CDN service provider's network. However, the CDN service provider106may manage its resources by determining a threshold network usage for the content provider based on its flat fee at or below which the CDN service provider106will process requests at an optimal POP or route requests to an optimal POP. Alternatively, if the current network usage for servicing domains corresponding to the content provider104is above the threshold network usage, the CDN service provider106can select a less optimal POP to process the request. In another embodiment, building on the foregoing example, the CDN service provider106can utilize client location information associated with the client computing device102or its local DNS resolver, at least in part, to identify the more appropriate DNS server of the CDN service provider106. In particular, the CDN service provider106can utilize an IP address associated with a client computing device DNS query to identify a best sub-optimal POP to process the request. 
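The flat-fee threshold decision described above can be sketched as a simple comparison: while the content provider's current network usage is at or below its threshold, requests are routed to the optimal POP; once it is exceeded, a less optimal POP is selected. The function and parameter names are illustrative assumptions.

```python
# Hypothetical sketch of the threshold-network-usage decision described
# above. Names and units (GB) are illustrative assumptions.

def select_pop(current_usage_gb, threshold_usage_gb, optimal_pop, suboptimal_pop):
    """Route to the optimal POP while the content provider is at or below
    its threshold network usage (derived from its flat-fee pricing);
    otherwise fall back to a less optimal POP to manage CDN resources."""
    if current_usage_gb <= threshold_usage_gb:
        return optimal_pop
    return suboptimal_pop
```

In practice the threshold would be derived from the pricing information held by the CDN service provider, e.g., the usage level at which a flat fee stops covering the marginal delivery cost.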
Based on the client location information, the CDN service provider106can then select a POP116,122from a set of sub-optimal POPs that are identified as being available to service resource requests under the circumstances. In one example, if more than one POP is identified in the set of sub-optimal POPs, the CDN service provider106can utilize a distribution allocation for selecting a specific POP associated with the client location information. In another example, once a POP is selected, the CDN service provider106can further use health information to determine whether the selected POP is available to service requests before providing the client computing device with a CNAME corresponding to the selected POP. This health information may in one embodiment correspond to a threshold content delivery bandwidth available at the POP as also described above. One skilled in the art will appreciate that the above functionality is illustrative in nature and accordingly should not be construed as limiting. As described above, in addition to the consideration of client location information (either of the end-client or its associated local DNS resolver component), the CDN service provider106can utilize the additional information (e.g., the “additional information”) included in the translated URL to select a more appropriate DNS server. In one aspect, the CDN service provider106can utilize the additional information to select from a set of DNS servers identified as satisfying criteria associated with the client location information or from a set of DNS servers identified as satisfying any other criterion or combination of criteria, such as those described in other example embodiments herein. In another aspect, the CDN service provider106can utilize the additional information to validate the DNS server selected in accordance with the client location information or to select an alternative to the DNS server previously selected in accordance with the client location information.
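The two examples above — a distribution allocation over the sub-optimal POP set, followed by a health check before a CNAME is returned — can be sketched as follows. The weighted-random allocation, the health predicate, and all names are illustrative assumptions.

```python
# Hypothetical sketch: pick a POP from the sub-optimal set via a weighted
# distribution allocation, then confirm the selection against health
# information (e.g., threshold content delivery bandwidth) before use.
import random

def pick_suboptimal_pop(candidates, weights, healthy, rng=None):
    """Return a healthy POP chosen per the distribution allocation, or None
    if no candidate passes the health check. `healthy` is a predicate
    standing in for the health information described above."""
    rng = rng or random.Random(0)
    pool, w = list(candidates), list(weights)
    while pool:
        choice = rng.choices(pool, weights=w, k=1)[0]
        if healthy(choice):
            return choice
        i = pool.index(choice)   # drop the unhealthy POP and retry
        pool.pop(i)
        w.pop(i)
    return None
```

Only after this check would the client computing device be provided a CNAME corresponding to the selected POP.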
In one example, the CDN service provider106can attempt to direct a DNS query to DNS servers according to additional geographic criteria. The additional geographic criteria can correspond to geographic-based regional service plans contracted between the CDN service provider106and the content provider104in which various CDN service provider106POPs are grouped into geographic regions. Accordingly, a client computing device102DNS query received in a region not corresponding to the content provider's regional plan may be better processed by a DNS server in a region corresponding to the content provider's regional plan. In another example, the CDN service provider106can attempt to direct a DNS query to DNS servers according to service level criteria. The service level criteria can correspond to service or performance metrics contracted between the CDN service provider106and the content provider104. Examples of performance metrics can include latencies of data transmission between the CDN service provider POPs and the client computing devices102, total data provided on behalf of the content provider104by the CDN service provider POPs, error rates for data transmissions, and the like. In still a further example, the CDN service provider106can attempt to direct a DNS query to DNS servers according to network performance criteria. The network performance criteria can correspond to measurements of network performance for transmitting data from the CDN service provider POPs to the client computing device102. Examples of network performance metrics can include network data transfer latencies (measured by the client computing device or the CDN service provider106), network data error rates, and the like. In accordance with an illustrative embodiment, the DNS server maintains a data store that defines CNAME records for various original URLs.
If a DNS query corresponding to a particular original URL matches an entry in the data store, the DNS server component118returns a CNAME record as defined in the data store. In an illustrative embodiment, the data store can include multiple CNAME records corresponding to a particular original URL. The multiple CNAME records would define a set of potential candidates that can be returned to the client computing device102. In such an embodiment, the DNS server component118, either directly or via a network-based service, can implement additional logic in selecting an appropriate CNAME from a set of possible CNAMEs. In an illustrative embodiment, each DNS server component118,124maintains the same data stores that define CNAME records, which can be managed centrally by the CDN service provider106. Alternatively, each DNS server component118and124can have POP specific data stores that define CNAME records, which can be managed centrally by the CDN service provider106or locally at the POP116,122. Returning toFIG.5A, one skilled in the relevant art will appreciate that DNS server component118may select (or otherwise obtain) a CNAME record that is intended to resolve to a more appropriate DNS server of the CDN service provider106based on one or more criteria, as described above. Then, the CDN service provider106returns the CNAME record to the client computing device102. With reference now toFIG.5B, upon receipt of the CNAME from the DNS server component118, the client computing device102generates a subsequent DNS query corresponding to the CNAME. As previously discussed with regard toFIG.5A, the DNS query process could first start with DNS queries for the “.” and “com” portions, followed by a query for the “cdnprovider” portion of the CNAME. To the extent, however, that the results of previous DNS queries can be cached (and remain valid), the client computing device102can utilize the cached information and does not need to repeat the entire process.
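The data-store lookup described above — match the original URL, then apply additional selection logic when multiple candidate CNAMEs are defined — can be sketched as below. The store contents, URL, and region labels are hypothetical.

```python
# Hypothetical sketch of a data store defining CNAME records for original
# URLs, with pluggable selection logic for multi-candidate entries. All
# keys and values here are illustrative assumptions.

CNAME_STORE = {
    "http://cdnprovider.com/path/resource.xxx": [
        "region-east.cdnprovider.com",
        "region-west.cdnprovider.com",
    ],
}

def resolve_cname(original_url, choose=min):
    """Return a CNAME record for the original URL if one is defined in the
    data store; when multiple candidates exist, apply additional selection
    logic (`choose`, standing in for the criteria described above)."""
    candidates = CNAME_STORE.get(original_url)
    if not candidates:
        return None          # no matching entry; resolve by other means
    if len(candidates) == 1:
        return candidates[0]
    return choose(candidates)
```

In a central-management deployment every DNS server component would share this store; in a POP-specific deployment each POP would hold its own version.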
However, at some point, depending on whether the CNAME provided by DNS server component118(FIG.5A) and the previous URL or CNAME share common CDN service provider domains, the current CNAME DNS query will be processed by a different POP provided by the CDN service provider106. As illustrated inFIG.5B, the DNS server component124of POP122receives the current CNAME based on the different information in the current CNAME previously provided by the DNS server component118. As previously described, the DNS server component124can then determine whether to resolve the DNS query on the CNAME with an IP address of a cache component that will process the content request or whether to provide another alternative resource identifier selected in the manners described above. For purposes of illustration, assume that the DNS server component124processes the content request by returning an IP address of a resource cache component. In an illustrative embodiment, the DNS server component124can utilize a variety of information in selecting a resource cache component. In one example, the DNS server component124can default to a selection of a resource cache component of the same POP. In another example, the DNS server components can select a resource cache component based on various load balancing or load sharing algorithms. Still further, the DNS server components can utilize network performance metrics or measurements to assign specific resource cache components. The IP address selected by a DNS server component may correspond to a specific caching server in the resource cache. Alternatively, the IP address can correspond to a hardware/software selection component (such as a load balancer). With reference now toFIG.6, continuing with an illustrative embodiment corresponding toFIGS.5A and5B, assume that the DNS server component124shown inFIG.5Bhas selected the default resource cache component126of the POP122. 
Upon receipt of the IP address for the resource cache component126, the client computing device102transmits, as shown inFIG.6, a request for the requested content to the resource cache component126. The resource cache component126processes the request in a manner described above and the requested content is transmitted to the client computing device102. Alternatively, in another embodiment corresponding toFIG.4, assume that the DNS server component118has selected a specific resource cache component of another POP, such as POP122based on the one or more criteria as described above. Upon receipt of the IP address for the resource cache component126of the POP122, the client computing device102transmits, as shown inFIG.6, a request for the requested content to the resource cache component126. The resource cache component126processes the request in a manner described above and the requested content is transmitted to the client computing device102. A selected resource cache component (either selected directly by a POP receiving a DNS query as shown inFIG.4or as a default upon selection of an alternative POP via an alternative resource identifier as shown inFIGS.5A and5B) can process the request by either providing the requested resource if it is available or obtaining the requested resource from another source, such as a peer cache server computing device or the origin server112of the content provider104. With reference now toFIG.7, one embodiment of a POP selection routine702implemented by the CDN provider106will be described. One skilled in the relevant art will appreciate that actions/steps outlined for routine702may be implemented by one or many computing devices/components that are associated with the CDN service provider106. Accordingly, routine702has been logically associated as being generally performed by the CDN service provider106, and thus the following illustrative embodiments should not be construed as limiting. 
At block704, a DNS server component118at a first POP116of the CDN service provider106receives a DNS query corresponding to a resource identifier from a client computing device102. As previously discussed, the resource identifier can be a URL that has been embedded in content requested by the client computing device102and previously provided by the content provider104. Alternatively, the resource identifier can also correspond to a CNAME provided by a content provider DNS server in response to a DNS query previously received from the client computing device102. While not illustrated, the receiving DNS server also obtains, in some embodiments, an IP address associated with the DNS query from the requesting client computing device102(“query IP address”). The query IP address can correspond to an IP address of the client computing device or any local DNS resolver component associated with the client computing device. Next, at decision block706, the CDN service provider106determines whether it has exceeded a threshold content delivery bandwidth at the first POP. As discussed above, the threshold content delivery bandwidth is determined based, at least in part, on CDN service provider cost information, which corresponds to a financial cost to the CDN service provider106for content delivery bandwidth. In particular, in one embodiment, assuming that the first POP, or more specifically the DNS server component at the first POP, receiving the DNS query is the optimal POP or DNS server component for processing the DNS query, this determination at block706corresponds to a determination of whether the resource cache component at the POP receiving the DNS query (which can correspond to either a single cache server or a bank of cache servers at the POP) is operating above a threshold content delivery bandwidth. Continuing with this embodiment, the resource cache component at the first POP can also be referred to as the default or optimal resource cache component. 
In a further illustrative embodiment, the threshold content delivery bandwidth is lower than a maximum available content delivery bandwidth for the first POP or resource cache component. If the first POP or resource cache component has not exceeded its threshold content delivery bandwidth, the CDN service provider106responsively provides the client computing device102with an IP address of the default or optimal resource cache component at the first POP at block708. Thereafter, at block716, routine702ends. Alternatively, if at decision block706, the first POP or resource cache component has exceeded its threshold content delivery bandwidth (which may be indicative, for example, of the CDN service provider106incurring additional financial costs to provide the requested content from the first POP or its default resource cache component), the CDN service provider106determines whether a content provider corresponding to a domain associated with the DNS query has exceeded a threshold network usage at block710. If the content provider has not exceeded its threshold network usage, the CDN service provider106responsively provides the client computing device102with an IP address of the default or optimal resource cache component of the first POP at block708. Thereafter, at block716, routine702ends. Alternatively, if at decision block710, the content provider has exceeded its threshold network usage (which may be indicative, for example, of the CDN service provider incurring the burden of additional financial costs above a pricing structure, such as a flat fee structure, offered to the content provider), processing continues at block712. As described above, in an illustrative embodiment, the threshold network usage is determined based, at least in part, on pricing information for the CDN provider to provide content on behalf of the content provider.
While the routine702illustrates making both determinations at blocks706and710, in another embodiment, the determination at block706may be optional, while in a yet further alternative embodiment, the determination at block710may be optional. Continuing at block712, if either or both of the determinations at blocks706and710result in a “YES” determination, the CDN service provider106selects an alternative resource identifier associated with an alternative POP of the CDN service provider106or an alternative cache IP address associated with an alternative POP. In particular, in one illustrative embodiment, where an alternative resource identifier is selected, the CDN service provider106more specifically selects an alternative resource identifier which would resolve to a particular alternative DNS server at the alternative POP. In another illustrative embodiment, where an alternative cache IP address is selected, the CDN service provider106may select an alternative cache IP address for a particular cache server of a resource cache component at the alternative POP or generally for a group of cache servers at the alternative POP. In this way, the CDN service provider106directs further processing of the request to an alternative POP of the CDN service provider. Next, at block714, the selected alternative resource identifier or alternative cache IP address is transmitted to the client in response to the obtained DNS query for further processing. Thereafter, at block716, routine702ends. In various embodiments, routine702may be performed by the CDN service provider106generally, or by DNS server components118,124or individual DNS servers associated with the DNS server components118,124. The CDN service provider106, DNS server components118,124, or individual DNS servers associated with the DNS server component118,124may themselves include or otherwise access a service to implement the routine702, such as the routing mode and POP selection service128ofFIG.1. 
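The decision flow of routine702(blocks704-716) can be condensed into a short sketch: the default cache IP is returned unless both the POP's content delivery bandwidth threshold and the content provider's network usage threshold are exceeded, in which case an alternative resource identifier or alternative cache IP address is returned. The function and parameter names are illustrative assumptions.

```python
# Hypothetical sketch of routine 702 from FIG. 7. Block numbers in the
# comments map to the figure; all identifiers are illustrative assumptions.

def route_dns_query(pop_bandwidth, pop_threshold,
                    provider_usage, provider_threshold,
                    default_cache_ip, alternative_identifier):
    """Return the response to a DNS query per routine 702: the default
    (optimal) cache IP, or an alternative resource identifier / cache IP
    when both thresholds are exceeded."""
    if pop_bandwidth <= pop_threshold:        # block 706: threshold not exceeded
        return default_cache_ip               # block 708
    if provider_usage <= provider_threshold:  # block 710: threshold not exceeded
        return default_cache_ip               # block 708
    return alternative_identifier             # blocks 712-714: sloppy route
```

As the text notes, either determination may be optional in other embodiments, which would amount to dropping one of the two `if` checks.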
In other embodiments, computer executable instructions, when executed by a physical computing device, may cause the computing device to perform routine702. In some embodiments of the routine702, elements may occur in sequences other than as described above. In addition, as noted above, some elements of the routine may be optional, such as the determinations at either block706or710. One skilled in the art will appreciate that additional variations are possible and within the scope of the present disclosure. With reference now toFIG.8, one embodiment of a routing mode selection routine802will be described. One skilled in the relevant art will appreciate that actions/steps outlined for routine802may be implemented by one or many computing devices/components that are associated with the CDN service provider106. Accordingly, routine802has been logically associated as being generally performed by the CDN service provider106, and thus the following illustrative embodiments should not be construed as limiting. At block804, a DNS server component118at a first POP116of the CDN service provider106receives a DNS query corresponding to a resource identifier from a client computing device102. As previously discussed, the resource identifier can be a URL that has been embedded in content requested by the client computing device102and previously provided by the content provider104. Alternatively, the resource identifier can also correspond to a CNAME provided by a content provider DNS server in response to a DNS query previously received from the client computing device102. While not illustrated, the receiving DNS server also obtains, in some embodiments, an IP address associated with the DNS query from the requesting client computing device102(“query IP address”). The query IP address can correspond to an IP address of the client computing device or any local DNS resolver component associated with the client computing device.
Next, at block806, the CDN service provider106responsively determines a routing mode for the response to the DNS query obtained at block804. In some embodiments, this determination can be made during DNS query processing as described above with reference toFIGS.4-5B. In various embodiments, a spectrum of routing modes can exist that the CDN service provider106may determine during DNS query processing (e.g., at the DNS server when the DNS query is obtained). A plurality of available routing modes can include: a default routing mode, a sloppy routing mode, a regional anycast routing mode, and an anycast routing mode. The one or more criteria used in DNS query processing can be used to determine the routing mode for providing the requested resource. Accordingly, the response to the DNS query can be transmitted and/or provided to the client102in accordance with the determined routing mode. With continuing reference to block806, the CDN service provider can determine an appropriate routing mode for providing the requested resource. As discussed previously with reference toFIG.4and in accordance with the present disclosure, appropriateness can be defined in any manner by the CDN service provider106for a variety of purposes. Illustratively, the one or more criteria used to determine the routing mode can include a susceptibility factor (also referred to as security factor) and the latency criteria discussed above with reference toFIG.4. In various embodiments, the determination of an appropriate routing mode from a spectrum of routing modes can be based at least in part on a tradeoff between a susceptibility factor of a routing mode and the latency for providing the requested resource with the routing mode. For example, in one embodiment, the default routing mode can be determined as the appropriate routing mode. 
The determination of the optimal cache server may be a default based on the DNS query already being routed to the optimal POP and received at the DNS server component118. The DNS server component118, then, simply provides an IP address of an associated cache server at that POP. Alternatively, the DNS server component, upon receiving the DNS query, may be associated with a resource cache component, and then select the one of the cache servers of resource cache component120that may be the logically closest. Accordingly, in various embodiments, the default routing mode can also be referred to as a latency-based routing mode (e.g., a routing mode that provides an optimal cache server, minimizing latency when providing the requested resource). Still further, in another embodiment, this latency-based routing mode can be referred to as a minimal latency-based routing mode that minimizes the latency when providing the requested resources on behalf of the content provider104. As one of skill in the art can appreciate, these examples illustrate how the default routing mode can offer an optimal cache server at the POP location receiving the DNS query or route the response to a DNS query using a cache IP address associated with an optimal cache server. While this optimal cache server minimizes latency, this default routing mode may, however, provide less security because a single specific cache server is typically the optimal cache server and thus may have a higher susceptibility factor given those security concerns. For example, in one embodiment, because the optimal cache server is associated with a pre-specified IP address, the specific IP address could be leaked or easily discernible to outside parties, which raises the security concerns associated with that optimal cache server.
In contrast, the anycast routing mode uses a selected anycast IP address (e.g., a cache IP address or destination IP address) to process resource requests. An anycast IP address (e.g., a global IP address that may be randomly assigned to available cache servers) can be associated with or shared by several cache servers that can provide the requested resource. Because several cache servers can service the requested resource, the susceptibility factor of the anycast routing mode is lower than the default routing mode. For example, by providing a DNS server component at the CDN service provider with an option to determine an appropriate routing mode in which to respond to a DNS query, the DNS server component may select the anycast routing mode to provide enhanced security, as compared to the default routing mode. Such a determination offers enhanced security because an original cache server that would service the resource request in a default routing mode can be quickly changed to a secondary cache server (e.g., another cache server) associated with a shared anycast IP address (e.g., a randomly assigned global IP address), if a security concern is discovered to be associated with the POP or DNS server component receiving the DNS query, and hence the original default cache server corresponding to that POP. But, at the same time, such a secondary cache server can be more geographically distant (e.g., traveling through intercontinental optical cables) from the client102, and thus incur a higher latency, especially when compared with the default routing mode that uses the optimal cache server. In one embodiment, the anycast routing mode, as discussed herein, may correspond to traditional anycast routing as known to those of skill in the art. In another embodiment of determining the appropriate routing mode at block806, a content provider can be assigned a susceptibility factor that relates to the security concerns of each available routing mode.
For example, a content provider104(e.g., a customer of the CDN service provider106) that has its content typically served by the CDN service provider106in a geographical location (e.g., region with less security) can have an increased susceptibility factor in a default routing mode. Instead, the anycast routing mode can be determined as the appropriate routing mode to offer enhanced security as an anycast IP address is associated with several cache servers via a randomly assigned global IP address. Thus, in contrast to a specific optimal cache server associated with a pre-specified IP address that may be leaked, there are many available cache servers in the anycast routing mode for providing responsive content which are not individually designated and hence specifically targeted. Accordingly, the susceptibility factors may bias the determination of the appropriate routing mode in favor of the anycast routing mode because the anycast routing mode can provide enhanced security. In contrast, a default cache IP address stored at a DNS server may be more easily discernible as it is individually pre-designated. In another example, a regional anycast routing mode can be determined as the appropriate routing mode, at block806. In some embodiments, the CDN service provider106may consider the security factor as with the anycast routing mode, but additionally consider the latency factor. This can be undertaken when the one or more criteria indicate that both a susceptibility factor and a latency factor are to be associated with the plurality of available routing modes. Continuing in this example, the regional anycast routing mode can be used to route the response to the DNS query so that several cache servers are available with a regional IP address (e.g., a regional IP address can be randomly assigned and associated with several cache servers in a region) used to service the request, thereby enhancing security.
This determination can be made dynamically at the DNS server component118or it can have been considered by a central computing component of the CDN service provider106, which, in turn, provides a list of available cache servers from which the DNS server component118can select. Thus, the one or more criteria can in part dictate, or de facto determine, a routing mode for providing the requested resource. In another example, a particular DNS resolver may service a diverse set of client computing devices102, such as clients that are located in multiple different geographic regions. Such a resolver is hereinafter referred to as a diverse DNS resolver. In this example, since the clients are geographically diverse, some clients' resource requests may experience more latency than others being serviced by the same DNS resolver. With this information, the CDN service provider106may determine that a regional anycast routing mode may be the appropriate routing mode for providing the requested resource at block806. The regional anycast routing mode corresponds to a modified version of an anycast routing mode which utilizes a one-to-many network routing schema, but in this instance the one-to-many network routing schema is limited by region, such as a geographic region. In particular, a regional one-to-many network routing schema provides that a specific POP, DNS server component118, or resource cache component in a particular region will receive the request as a function of network topology in that region. For example, in a regional anycast implementation, a request issued by a client computing device102to a shared IP address will arrive at a POP, DNS server component118, or resource cache component logically having the shortest network topology distance, often referred to as network hops, from the client computing device. The network topology distance does not necessarily correspond to geographic distance.
However, in some embodiments, the network topology distance can be inferred to be the shortest network distance between a client computing device102and a POP, DNS server component, or resource cache component. As a further specific example, the regional anycast routing mode can involve the selection of a cache IP address from a grouping or list of cache IP addresses or anycast node locations that are located within a certain geographical and/or regional location of the nodes (e.g., U.S. East Coast, U.S. West Coast, Canada, or Southeast Asia). In other embodiments, the CDN service provider106can select a cache IP address from a list of IP addresses that are associated with a location of nodes in a region specified by the CDN service provider106. Accordingly, the CDN service provider106can specify certain nodes located in one geographical area (e.g., U.S. West Coast). In some embodiments, such a list may not include an IP address that is deemed unsecure (e.g., an IP address corresponding to a cache server that, due to security concerns, cannot provide requested resources). For example, in some embodiments, financial content such as credit card information may need to be routed with a routing mode offering higher security. In other embodiments, an unsecure IP address may be an anycast IP address that has been leaked, thereby making security a concern for that particular IP address. In yet another example of determining the appropriate routing mode at block806, the CDN service provider106may select a sloppy routing mode. 
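The regional selection described above — a grouping of cache IP addresses by region, filtered to exclude any address deemed unsecure (e.g., a leaked anycast IP) — can be sketched as below. The region names, addresses, and unsecure set are hypothetical.

```python
# Hypothetical sketch of regional anycast candidate selection. The region
# grouping, addresses, and unsecure set are illustrative assumptions.

REGIONAL_NODES = {
    "us-east": ["203.0.113.10", "203.0.113.11"],
    "us-west": ["203.0.113.20"],
}

# e.g., an anycast IP address that has been leaked, making security a
# concern for that particular address.
UNSECURE = {"203.0.113.11"}

def regional_anycast_candidates(region):
    """Return the cache IP addresses for a region specified by the CDN
    service provider, omitting any address deemed unsecure."""
    return [ip for ip in REGIONAL_NODES.get(region, []) if ip not in UNSECURE]
```

A request needing higher security (e.g., carrying financial content) would then be answered only from the filtered regional list.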
As further described above, a sloppy routing mode can be used to service content requests from a suboptimal POP if, for example, the original provider of the specifically requested content (e.g., the original content provider for that content) has exceeded a threshold network usage or if the CDN service provider106has exceeded a threshold content delivery bandwidth at data links of cache servers servicing requests for content originally provided by the content provider104. Accordingly, in various embodiments, the determined routing mode can be the sloppy routing mode. In one embodiment, as described above, the response to a DNS query utilizing this sloppy routing approach can be either: an alternative resource identifier (e.g., a CNAME) associated with an alternative DNS component at an alternative POP of the CDN service provider or an IP address of a resource cache component (e.g., a cache server) at the alternative POP (e.g., second POP122). In this approach, the response to the DNS query may be routed to one of several cache servers that may be available at the alternative POP (or even several alternative POPs). In addition, in this embodiment, because the response to the DNS query may be routed to one of several cache servers at the alternative POP, the sloppy routing mode can enhance security because several cache servers are available, rather than one cache server (e.g., the optimal cache server that minimizes latency). In contrast to a default routing mode that may only route the response to a DNS query to one cache server (e.g., a default and/or optimal cache server that can minimize the latency in providing the requested resource) at the POP that received the DNS query, the sloppy routing mode can provide enhanced security by routing the response to the DNS query to an alternative cache server at an alternative POP.
Further, the sloppy routing mode selection allows the CDN service provider106to allocate or direct the response of the DNS query within the network of the CDN service provider106, for example, to an alternative POP (e.g., second POP122), which may minimize latency in servicing the resource request when compared to an anycast routing mode. Thus, the CDN service provider106can minimize latency by analyzing the request handling capacity of alternative POPs available to provide the requested resource. Accordingly, a sloppy routing mode selection can take into account both minimizing the latency when providing the requested resource and a susceptibility factor by providing enhanced security when providing the requested resource. In various embodiments, information stored in pricing data store130can also be used as the one or more criteria to determine an appropriate routing mode. For example, one pricing structure may dictate that a flat-rate price is available for the default routing mode, a flat-rate price is available for the sloppy routing mode, and another flat-rate price is available for the regional anycast routing mode. Using this data from pricing data store130and the network usage, the back-end processing service132can determine whether a content provider104has exceeded their threshold network usage for a particular routing mode at a particular pricing structure. As one of skill in the art can appreciate, various routing modes are available when the one or more criteria are used in combination with a pricing structure (e.g., a price at which the CDN service provider106provides content on behalf of the content provider104). For example, a content provider104can pay more for the CDN service provider106to determine whether a more secure routing mode is available for certain resource requests (e.g., resource requests with financial information). 
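The combination of usage thresholds, susceptibility, and pricing-dependent criteria described in this passage could be folded into a single mode-selection step roughly like the following. The field names, threshold values, and mode labels are illustrative assumptions, not the patented logic.

```python
def determine_routing_mode(provider):
    """Pick a routing mode from per-content-provider criteria (illustrative rules)."""
    # Provider exceeded the network usage allowed by its pricing tier:
    # shed the request sideways to an alternative POP (sloppy routing).
    if provider["network_usage"] > provider["usage_threshold"]:
        return "sloppy"
    # High susceptibility factor or sensitive (e.g., financial) content:
    # trade a little latency for security via regional anycast.
    if provider["susceptibility"] > 0.7 or provider["financial_content"]:
        return "regional_anycast"
    # Otherwise route to the default/optimal cache server at the first POP.
    return "default"

mode = determine_routing_mode({
    "network_usage": 120, "usage_threshold": 100,
    "susceptibility": 0.2, "financial_content": False,
})
# Over the usage threshold, so the sloppy routing mode is selected.
```

A real back-end processing service would derive `usage_threshold` from the pricing data store rather than taking it as a fixed input, but the branching structure is the same.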
In this example, the back-end processing service132can determine that a regional anycast routing mode is available with less latency than an anycast routing mode, but also with more security than a default routing mode because the susceptibility factor of the DNS cache server servicing a particular content provider104is high. In addition to the example criteria noted above, the one or more criteria can also include utilizing information obtained from the DNS query, at least in part, to identify the more appropriate routing mode. This information may include a domain associated with the content provider104. This information may also include client subnet information associated with content provider104. This information can be used to determine the routing mode. Next, at block808, in response to the obtained DNS query, the selected alternative resource identifier or selected cache IP address is transmitted to the client in accordance with the determined routing mode. For example, if the determined routing mode is the regional anycast routing mode, the selected IP cache address (e.g., selected from a list of IP addresses associated with a location of nodes in a region specified by the CDN service provider106) can be provided and/or transmitted to the client102in accordance with the regional anycast routing mode. Thus, an IP address can be selected that is associated with a location of nodes on the US West Coast for example. Thereafter, at block810, routine802ends. In various embodiments, routine802may be performed by the CDN service provider106generally, or by DNS server components118,124or individual DNS servers associated with the DNS server components118,124. The CDN service provider106, DNS server components118,124, or individual DNS servers associated with the DNS server component118,124may themselves include or otherwise access a service to implement the routine802, such as the routing mode and POP selection service128ofFIG.1. 
In other embodiments, a physical computing device with computer executable instructions may cause the computing device to perform routine802. In some embodiments of the routine802, elements may occur in sequences other than as described above. One skilled in the art will appreciate that additional variations are possible and within the scope of the present disclosure. With reference now toFIG.9, an alternative embodiment of a POP selection routine902will be described. One skilled in the relevant art will appreciate that actions/steps outlined for routine902may be implemented by one or many computing devices/components that are associated with the CDN service provider106. Accordingly, routine902has been logically associated as being generally performed by the CDN service provider106, and thus the following illustrative embodiments should not be construed as limiting. At block904, a DNS server component118at a first POP116of the CDN service provider106receives a DNS query corresponding to a resource identifier from a client computing device102. As previously discussed, the resource identifier can be a URL that has been embedded in content requested by the client computing device102and previously provided by the content provider104. Alternatively, the resource identifier can also correspond to a CNAME provided by a content provider DNS server in response to a DNS query previously received from the client computing device102. While not illustrated, the receiving DNS server also obtains, in some embodiments, an IP address associated with the DNS query from the requesting client computing device102(“query IP address”). The query IP address can correspond to an IP address of the client computing device or any local DNS resolver component associated with the client computing device. Next, at decision block906, the CDN service provider106determines whether a content provider104corresponding to a domain associated with the DNS query is available. 
For example, the content provider104may be available if the content provider104has available network usage (e.g., network usage not exceeding a threshold network usage) or if no security concerns exist with providing content of the content provider. However, the content provider104may not be available if the content provider104has exceeded a threshold network usage or if security concerns exist regarding the provision of content originally provided by the content provider104. For example, a cache server at the resource cache component120that is providing requested resources in accordance with a default routing mode for the content provider104may be unavailable due to security concerns associated with providing content of the content provider104. In other embodiments, a content provider104may be physically located in a region or location more susceptible to security concerns and thus can have an increased susceptibility factor associated with the default routing mode. Accordingly, the optimal cache server, physically located in that same region or location, that is providing requested resources in accordance with a default routing mode for a particular content provider104may be unavailable due to security concerns associated with providing content of that particular content provider104. In one embodiment, the CDN service provider106can determine the susceptibility factor for the content provider104associated with each available routing mode of a plurality of routing modes. In the depicted alternative embodiment of the POP selection routine902, the available routing modes are: the default routing mode, the sloppy routing mode, and the anycast routing mode. 
If the content provider104is available (i.e., the CDN service provider determines that content originally provided by the content provider104is available to be provided based on one or more criteria), the CDN service provider106, responsive to the DNS query, provides and transmits to the client computing device102an IP address of the default or optimal resource cache component of the first POP at block908 in accordance with the default routing mode. In this embodiment, the resource cache component at the first POP can also be referred to as the default or optimal resource cache component. Thereafter, at block918, routine902ends. Alternatively, if at decision block906, the content provider is not available, processing continues at decision block910. At decision block910, the CDN service provider106determines whether an alternative POP is available. This decision can include determining an appropriate routing mode from the remaining available routing modes: the sloppy routing mode and the anycast routing mode. For example, as described above with reference toFIG.8, an alternative POP may be available if a list of IP addresses at the DNS server118includes alternative cache IP addresses associated with alternative POPs. If an alternative POP is not available (e.g., because the list of IP addresses does not include alternative POP locations or cache IP addresses associated with alternative POPs), at block912, the CDN service provider106responsively provides and transmits to the client computing device102an IP address in accordance with the anycast routing mode. In another embodiment not depicted inFIG.9, the CDN service provider106responsively provides and transmits to the client computing device102an IP address in accordance with the regional anycast routing mode. In this embodiment, the IP address can be selected in accordance with the determined routing mode as described above with reference toFIG.8. Thereafter, at block918, routine902ends.
While the routine902illustrates making both determinations at blocks906and910, in another embodiment, the determination at block906may be optional; while in a yet further alternative embodiment, the determination at block910may be optional. For example, in various embodiments, routine902can proceed from decision block906, if the content provider is not available, to block912, where the CDN service provider106responsively provides and transmits to the client computing device102an IP address in accordance with the anycast routing mode. Or, in another optional embodiment not depicted inFIG.9, the CDN service provider106can responsively provide and transmit to the client computing device102an IP address in accordance with the regional anycast routing mode. Continuing at block914, if the content provider is not available at decision block906and an alternative POP is available at block910, the CDN service provider106selects an alternative resource identifier associated with an alternative POP of the CDN service provider106or an alternative cache IP address associated with an alternative POP. In particular, in one illustrative embodiment, where an alternative resource identifier is selected, the CDN service provider106more specifically selects an alternative resource identifier which would resolve to a particular alternative DNS server at the alternative POP. In another illustrative embodiment, where an alternative cache IP address is selected, the CDN service provider106may select an alternative cache IP address for a particular cache server of a resource cache component at the alternative POP or generally for a group of cache servers at the alternative POP. In this way, the CDN service provider106directs further processing of the request to an alternative POP of the CDN service provider. 
Next, at block916, in response to selecting either an alternative resource identifier or an alternative cache IP address at block914, the selected alternative resource identifier or alternative cache IP address is transmitted to the client in response to the obtained DNS query for further processing in accordance with the sloppy routing mode. Thereafter, at block918, routine902ends. In various embodiments, routine902may be performed by the CDN service provider106generally, or by DNS server components118,124or individual DNS servers associated with the DNS server components118,124. The CDN service provider106, DNS server components118,124, or individual DNS servers associated with the DNS server component118,124may themselves include or otherwise access a service to implement the routine902, such as the routing mode and POP selection service128ofFIG.1. In other embodiments, a physical computing device with computer executable instructions may cause the computing device to perform routine902. In some embodiments of the routine902, elements may occur in sequences other than as described above. In addition, as noted above, some elements of the routine may be optional, such as the determinations at either block906or910. One skilled in the art will appreciate that additional variations are possible and within the scope of the present disclosure. Depending on the embodiment, certain acts, events, or functions of any of the methods described herein can be performed in a different sequence, can be added, merged, or left out altogether (e.g., not all described acts or events are necessary for the practice of the algorithm). Moreover, in certain embodiments, acts or events can be performed concurrently, e.g., through multi-threaded processing, interrupt processing, or multiple processors or processor cores or on other parallel architectures, rather than sequentially. 
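The decision flow of routine902 (blocks906 through918) can be summarized as a short branch sequence. The function signature, inputs, and return values below are illustrative assumptions for this sketch, not the patented implementation.

```python
def pop_selection_routine(provider_available, alternative_pops,
                          default_ip, anycast_ip):
    """Sketch of the routine902 decision flow.

    provider_available -- result of the block906 availability check
    alternative_pops   -- cache IPs at alternative POPs (block910 check)
    """
    if provider_available:                  # block906
        return ("default", default_ip)      # block908: optimal cache at first POP
    if not alternative_pops:                # block910
        return ("anycast", anycast_ip)      # block912: fall back to anycast
    # blocks914/916: direct the client to an alternative POP (sloppy mode)
    return ("sloppy", alternative_pops[0])

# Content provider unavailable, but an alternative POP exists.
mode, ip = pop_selection_routine(False, ["198.51.100.7"],
                                 "203.0.113.1", "192.0.2.1")
```

As the text notes, either determination may be optional in some embodiments; dropping the `alternative_pops` branch reproduces the variant that proceeds straight from block906 to the anycast (or regional anycast) response.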
The various illustrative logical blocks, modules, and method elements described in connection with the embodiments disclosed herein can be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. The described functionality can be implemented in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the disclosure. The various illustrative logical blocks and modules described in connection with the embodiments disclosed herein can be implemented or performed by a machine, such as a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor can be a microprocessor, but in the alternative, the processor can be a controller, microcontroller, or state machine, combinations of the same, or the like. A processor can also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. The elements of a method, process, or algorithm described in connection with the embodiments disclosed herein can be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. 
A software module can reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of computer-readable storage medium known in the art. A storage medium can be coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium can be integral to the processor. The processor and the storage medium can reside in an ASIC. The ASIC can reside in a user terminal. In the alternative, the processor and the storage medium can reside as discrete components in a user terminal. Conditional language used herein, such as, among others, “can,” “might,” “may,” “e.g.” and the like, unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments include, while other embodiments do not include, certain features, elements and/or states. Thus, such conditional language is not generally intended to imply that features, elements and/or states are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without author input or prompting, whether these features, elements and/or states are included or are to be performed in any particular embodiment. The terms “comprising,” “including,” “having,” “involving” and the like are synonymous and are used inclusively, in an open-ended fashion, and do not exclude additional elements, features, acts, operations and so forth. Also, the term “or” is used in its inclusive sense (and not in its exclusive sense) so that when used, for example, to connect a list of elements, the term “or” means one, some, or all of the elements in the list. 
Disjunctive language such as the phrase “at least one of X, Y, or Z,” unless specifically stated otherwise, is otherwise understood with the context as used in general to present that an item, term, etc., may be either X, Y, or Z, or any combination thereof (e.g., X, Y and/or Z). Thus, such disjunctive language is not generally intended to, and should not, imply that certain embodiments require at least one of X, at least one of Y or at least one of Z to each be present. Unless otherwise explicitly stated, articles such as “a” or “an” should generally be interpreted to include one or more described items. Accordingly, phrases such as “a device configured to” are intended to include one or more recited devices. Such one or more recited devices can also be collectively configured to carry out the stated recitations. For example, “a processor configured to carry out recitations A, B, and C” can include a first processor configured to carry out recitation A working in conjunction with a second processor configured to carry out recitations B and C. While the above detailed description has shown, described, and pointed out novel features as applied to various embodiments, it will be understood that various omissions, substitutions, and changes in the form and details of the devices or algorithms illustrated can be made without departing from the spirit of the disclosure. As will be recognized, certain embodiments described herein can be embodied within a form that does not provide all of the features and benefits set forth herein, as some features can be used or practiced separately from others. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.
11863418

DESCRIPTION OF EMBODIMENTS

As described above, there is a method of detecting a network anomaly using a threshold value indicating a predicted fluctuation tolerance. For the prediction of the threshold value, for example, specific traffic volumes of a predetermined period of a past date and time are used. The average of specific traffic volumes is used as the reference of the fluctuation tolerance, and a predetermined margin is taken from this average to acquire the upper limit threshold value and the lower limit threshold value. However, the actual traffic volume may trend to be different from the average of the specific traffic volumes used as the reference of the fluctuation tolerance, that is, the predicted traffic volume. For example, in the case of network routes used for business, it is conceivable that the trend of the actual traffic volume may differ from that of the predicted traffic volume according to the season of the business or the change in work styles. From the viewpoint of business season, when the business volume is higher than expected in the busy season of the business, the traffic volume becomes higher than the prediction. Furthermore, when the business volume is lower than expected in the slack season of the business, the traffic volume becomes lower than expected. Furthermore, from the viewpoint of the change in work styles, various cases are expected. For example, in a case where teleworking is started in the middle of the week, it is expected that the traffic volume increases or decreases after the start day according to the business mode. In companies with a high amount of business via the network, it is expected that the traffic volume trends to increase, and in companies with a small amount of business via the network, it is expected that the traffic volume trends to decrease.
Similarly, in a case where teleworking is performed until the middle of the week and working style is switched to working in the office from the middle of the week, the traffic volume trend may fluctuate. It is expected that other than teleworking, changes in work styles such as waiting at home and reassignment of personnel may cause various fluctuations in the traffic volume trend. As described, the traffic volume trend may fluctuate according to the change in business season or the work style, that is, the actual traffic volume may stay higher or lower than the predicted traffic volume. When the traffic volume trend fluctuates as described above, the predicted traffic volume and the actual traffic volume deviate from each other. When such a deviation occurs, an increase or decrease in the traffic volume that would not have been detected as an anomaly without the deviation may be detected as an anomaly. However, such an increase or decrease in the traffic volume is a traffic fluctuation that is not originally regarded as an anomaly, and thus an unnecessary anomaly is detected. If any unnecessary anomaly detection is performed, there arises a problem that it becomes difficult to identify an anomaly that is originally desired to be detected. In network monitoring, it is desirable to reduce unnecessary anomaly detections as much as possible. However, since such trend fluctuations occur due to sudden circumstances, it is difficult to predict fluctuations in traffic volume trend in advance. In view of the above, it is desirable to reduce unnecessary anomaly detections and detect appropriate anomalies so as to deal with fluctuations that are difficult to predict. Hereinafter, an example of an embodiment according to the disclosure will be described in detail with reference to the drawings. Before describing the embodiment of the present disclosure in detail, the technique as a premise and the outline of the method of the present embodiment will be described.
Note that in the present embodiment, a case of detecting whether there is an anomaly in the traffic in the network will be described as an example, but the application target of the embodiment is not limited to the traffic. For example, it is possible to apply the present disclosure to the CPU usage, hard disk capacity, and the like regarding the server performance of the network, so that it can be detected whether there is an anomaly in the server performance. Furthermore, the present disclosure can be applied not only to network traffic but also to the flow rate of infrastructure such as electric power, water supply, or gas. First, the technique as a premise will be described. As a technique for detecting a traffic anomaly in a network, there is a technique described in Patent Document 2 or the like developed by the inventor of the present disclosure or the like (hereinafter referred to as a reference technique). In the reference technique, the fluctuation tolerance that defines the allowable range of traffic fluctuation, that is, the prediction values that are the references of the normal range are calculated as a time-series waveform from the traffic model based on the past actually measured values of the traffic volume. Whether there is an anomaly is determined by determining whether the traffic is within the normal range. The actually measured values indicate the traffic volumes actually measured. Then, the calculated prediction values are compared with the actually measured values to detect an anomaly. In the present embodiment, the prediction values and the method of detecting an anomaly of the reference technique are used as a base. Thus, a specific method of calculating the prediction values and a specific method of detecting an anomaly will be described. A traffic model is used to calculate the prediction values. 
As the traffic model, a traffic model using an autoregressive integrated moving average (ARIMA), a traffic model using a regression line, or the like can be used. As the learning data of the traffic model, the traffic volume actually measured for the route of a certain network for each unit time is used. For example, in a case where the unit time is 10 minutes, the sampling time is set every 10 minutes. When the sampling time is 10:00, the measurement result of the traffic volume (Mbps) measured at 10:00 is used as the learning data at 10:00. The average traffic volume during a unit time may be used as learning data. In this case, when the sampling time is 10:00, the average of the traffic volumes sampled at any intervals (for example, every 1 minute, every 2 minutes, etc.) between 9:50 and 10:00 may be used as learning data at 10:00. The traffic model is generated by using the transition of the traffic volume in a certain learning period as learning data. For example, such an autoregressive traffic model can be expressed as x(t)=β1x(t−1)+β2x(t−2)+ . . . +βnx(t−n). n (n=1, 2, 3, . . . ) represents the number of weeks in the learning period, and t represents the start point of time of a prediction period of a week unit. If n=4, the learning period is four weeks prior to the start point of time of the prediction period. βn is a coefficient that is the degree of influence on the traffic model for each week, and any value is used. β1x(t−1) represents the contribution of the week from one week before the start point of time to the start point of time. β2x(t−2) represents the contribution of the week from two weeks before the start point of time to one week before the start point of time.
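Under the weekly autoregressive form above, the prediction for a given sampling slot is a weighted sum of the same slot in each of the previous n weeks. The coefficients and traffic volumes below are illustrative numbers chosen for this sketch, not values from the embodiment.

```python
def predict_slot(history, betas):
    """x(t) = β1·x(t−1) + ... + βn·x(t−n), where history[k] holds the
    traffic volume (Mbps) of the same time slot k+1 weeks earlier."""
    return sum(b * x for b, x in zip(betas, history))

# Four-week learning period (n = 4): same 10-minute slot, weeks t−1 .. t−4.
history = [120.0, 110.0, 130.0, 100.0]   # Mbps, most recent week first
betas = [0.4, 0.3, 0.2, 0.1]             # illustrative per-week influence
pred = predict_slot(history, betas)      # 48 + 33 + 26 + 10 = 117.0 Mbps
```

Repeating this for every unit-time slot in the prediction period yields the time-series waveform of prediction values that the reference technique compares against the actually measured values.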
When acquiring prediction values, for example, traffic volumes sampled at predetermined times within the prediction period are acquired from the transition predicted by the traffic model generated from the learning data during the learning period, and the traffic volumes are set as the prediction values. For example, if the learning period is four weeks and the prediction period is three weeks, a traffic model is generated using the learning data for the four weeks prior to the start point of time of the prediction period as a reference, and from the generated traffic model, the prediction values within three weeks from the start point of time are acquired. By thus generating a traffic model from the learning data of week units and acquiring the prediction values, the prediction values reflecting the trend of each day of the week can be acquired. Note that the learning period and the prediction period are examples, and they can be defined as any periods. Furthermore, 10 minutes as a unit is an example, and any unit time such as 1 minute, 5 minutes, 15 minutes, 20 minutes, etc. can be set. The transition of the traffic volume during the above-described learning period is an example of the past actually measured values of the present disclosure. Here, the case where the predicted traffic volume and the actual traffic volume deviate from each other described in the above-described problem corresponds to the case where the prediction values and the actually measured values deviate from each other in the reference technique.FIG.1is an application example of the reference technique, and is a diagram illustrating an example of a graph comparing the calculated prediction values and the actually measured values. InFIG.1, the vertical axis represents the traffic volume and the horizontal axis represents time.
The vertical axis represents the traffic volume (Mbps) in exponential notation; for example, 4.0E+10 indicates 4×10^10 Mbps. On the horizontal axis, the date and time is plotted every three days from 15:00 on 2019-5-21 (May 21, 2019). In the example illustrated inFIG.1, the prediction values and the actually measured values deviate from each other in period A and period B. Period A is a busy season of business and the week of the month-end closing to increase the traffic volume, and the actually measured values and the prediction values deviate from each other such that the actually measured values stay higher than the prediction values. Period B is a week of the next month after the busy season of business and the month-end closing are finished to decrease the traffic volume, and the prediction values and the actually measured values deviate from each other such that the prediction values stay higher than the actually measured values. As described above, different types of deviation occur from period A to period B, which are consecutive periods. Here, with reference to the method of the reference technique, a method of acquiring prediction values with little deviation by making a model that has learned fluctuations in month units or year units can be considered. However, in this case, learning data for several months to several years is required and a long learning period is required, which is disadvantageous. Next, the method of anomaly detection will be described. An anomaly is detected based on a threshold value set for prediction values as a reference. For example, the prediction values are set by the above-described calculation method, and an upper limit threshold value and a lower limit threshold value are set as threshold values. An anomaly is detected depending on whether the actually measured values are within the normal range defined by the upper limit threshold value and the lower limit threshold value.
The upper limit threshold value and the lower limit threshold value are acquired, for example, by using the standard deviation σ acquired from the actually measured values within a past certain period, and the prediction value +3σ is set as the upper limit threshold value and the prediction value −3σ is set as the lower limit threshold value. Since ±3σ is just an example, appropriate upper limit threshold value and lower limit threshold value such as ±2σ and ±4σ may be set. As the standard deviation σ, for example, the standard deviation of the actually measured values within the past five weeks is used.FIG.2is a schematic diagram schematically illustrating the relationship between the graph of prediction values and the upper and lower limit threshold values. For convenience of description, the graph inFIG.2is a graph that schematically illustrates a waveform graph indicating the traffic volume for each time (the same applies to the following figures). InFIG.2, the graph of the prediction values is drawn as if the prediction values are constant in the peak time zone, but actually, is a waveform graph in which the values are plotted for each unit time as illustrated inFIG.1.FIG.2illustrates the upper limit threshold value and the lower limit threshold value with respect to the prediction values of the time in the peak time zone. The normal range defined by the upper limit threshold value and the lower limit threshold value described here is the range of the traffic volume within which it is determined that there is no anomaly at that time. According to the normal range defined by the upper limit threshold value and the lower limit threshold value, if the actually measured values are within the normal range, it is determined that there is no anomaly, and if any of the actually measured values is not within the normal range, it is determined that there is an anomaly.
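The ±3σ normal range and the containment check can be sketched directly. The five sample values below stand in for the five weeks of past actually measured values and are illustrative only.

```python
import statistics

def normal_range(prediction, past_values, k=3.0):
    """Upper/lower threshold = prediction ± k·σ, with σ taken from
    actually measured values of a past period (e.g., five weeks)."""
    sigma = statistics.stdev(past_values)
    return prediction - k * sigma, prediction + k * sigma

def is_anomaly(measured, prediction, past_values, k=3.0):
    """Anomaly if the actually measured value falls outside the normal range."""
    lower, upper = normal_range(prediction, past_values, k)
    return not (lower <= measured <= upper)

past = [100.0, 102.0, 98.0, 101.0, 99.0]   # σ ≈ 1.58, so ±3σ ≈ ±4.74
inside = is_anomaly(103.0, 100.0, past)    # False: within the normal range
outside = is_anomaly(110.0, 100.0, past)   # True: detected as an anomaly
```

Passing `k=2.0` or `k=4.0` reproduces the ±2σ and ±4σ variants mentioned above.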
In this way, anomaly detection is performed by determination using the normal range between the upper limit threshold value and the lower limit threshold value. Note that the concept of the normal range is the same for time zones before and after the peak time zone. The technique as a premise has been described. Next, an outline of the method of the present embodiment will be described. Assuming that the prediction values and the actually measured values deviate from each other due to the issue of the technique described above as a premise, it is desired to correct the prediction values to make them closer to the actually measured values by a method that does not require a long learning period. If such correction enables adjustment to reduce the deviation, it is considered that appropriate anomaly detection is possible. Therefore, in the method of the present embodiment, corrected prediction values acquired by correcting the prediction values are introduced. Regarding the corrected prediction values, the corrected prediction values of the current day are acquired based on the prediction values and the actually measured values of the previous day (the specific calculation method of the corrected prediction values will be described below). That is, it can be said that the corrected prediction values are prediction values corrected by reflecting the trend of the actual traffic volume on the previous day. FIG.3is a diagram illustrating an example of a case where the corrected prediction values of the current day are acquired based on the prediction values and the actually measured values of the previous day. As illustrated inFIG.3, the corrected prediction values of the current day (xth day inFIG.3) are acquired based on the prediction values and the actually measured values of the previous day ((x−1)-th day inFIG.3).
In the example ofFIG.3, since the actually measured values of the previous day were higher than the prediction values, the corrected prediction values are acquired to be higher than the prediction values. Furthermore, the upper limit threshold value and the lower limit threshold value described above with reference toFIG.2are acquired not only for the prediction values but also for the corrected prediction values on the previous day and the current day. (A) indicates the upper limit threshold value of the prediction values, (B) indicates the lower limit threshold value of the prediction values, (C) indicates the upper limit threshold value of the corrected prediction values, and (D) indicates the lower limit threshold value of the corrected prediction values. Here, the difference between the normal range of the conventional method and the normal range of the method of the present embodiment will be described. On the previous day, the corrected prediction values have not been acquired yet and only the prediction values of the conventional method are used, and thus the normal range is from the upper limit threshold value (A) of the prediction values to the lower limit threshold value (B) of the prediction values. On the other hand, once the corrected prediction values are acquired, in the method of the present embodiment, a range from the higher of the upper limit threshold values for the prediction values and the corrected prediction values to the lower of the lower limit threshold values is handled as the normal range of the current day. In the case ofFIG.3, the range from the upper limit threshold value (C) of the corrected prediction values to the lower limit threshold value (B) of the prediction values is handled as the normal range. That is, the normal range in the present embodiment is defined based on the difference between the prediction values and the corrected prediction values.
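A minimal sketch of how the widened normal range might be formed (the function name is hypothetical, introduced only for illustration): the upper limit is the higher of the two upper limit threshold values, and the lower limit is the lower of the two lower limit threshold values.

```python
def combined_normal_range(pred_band, corrected_band):
    # pred_band / corrected_band: (lower limit, upper limit) threshold
    # values for the prediction values and the corrected prediction values
    lower = min(pred_band[0], corrected_band[0])
    upper = max(pred_band[1], corrected_band[1])
    return lower, upper
```

In the FIG.3 situation, where the corrected band sits above the prediction band, this yields the range from the corrected upper limit threshold value (C) down to the prediction lower limit threshold value (B).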
Furthermore, it can be said that the normal range of the present embodiment is wider than the conventional normal range based only on the prediction values. The reason why the normal range is widened in this way is that the traffic trend may fluctuate according to the business season or the change of work style as described in the above-described problem. An example of defining the normal range dealing with the fluctuation of the trend will be described.FIG.4is a diagram illustrating an example of a case where the traffic volume pattern changes from an increasing trend to a decreasing trend in the middle of the week. In the example illustrated inFIG.4, it is assumed that the prediction values for 5 days are set in advance based on the past actual data and the like as indicated by the broken line. However, it is assumed that the traffic volume is higher than the prediction values and trends to increase from the 1st day to the 3rd day of the week due to, for example, a sudden event such as the fiscal year end. When the anomaly determination using the corrected prediction values is performed, it is possible to reduce inadvertent determination of the increase in the traffic volume due to such a sudden event as an anomaly. However, when, for example, the sudden increase in traffic at the fiscal year end subsides due to the change of month from the 4th day and the traffic volume returns to the normal level, it is determined as abnormal by the anomaly determination using only the corrected prediction values. However, in the method of the present embodiment, the anomaly determination is performed by using the determination of whether the traffic volume is between the corrected prediction values and the prediction values. Thus, incorrect anomaly determination in such a case may be reduced.
That is, even when a sudden event subsides and the traffic volume returns to the normal level, the anomaly is determined also using the prediction values of the normal state. Thus, it is possible to deal with the sudden occurrence of an event and subsiding of the event. InFIG.4, the actually measured values and the prediction values largely deviate from each other in the increasing trend from the 1st day to the 3rd day, and the corrected prediction values are acquired such that the deviation is compensated. In the case ofFIG.4, the pattern of increasing trend from the 1st day to the 3rd day is grasped, and the corrected prediction values are acquired based on the prediction values and the actually measured values of the previous day. The corrected prediction values from the 2nd day to the 4th day are acquired to be higher than the prediction values. On the other hand, since the pattern changes to a decreasing trend on the 4th day, the corrected prediction values on the 5th day are acquired to be lower than the prediction values. Furthermore, an upper limit threshold value and a lower limit threshold value are acquired for the prediction values, and also acquired for the corrected prediction values. The normal range from the 2nd day to the 4th day is defined by the upper limit threshold value of the corrected prediction values and the lower limit threshold value of the prediction values. The normal range on the 5th day is defined by the upper limit threshold value of the prediction values and the lower limit threshold value of the corrected prediction values. FIG.5is a diagram illustrating an example of a case where the traffic volume pattern changes from a decreasing trend to an increasing trend in the middle of the week. In the example illustrated inFIG.5, the pattern is the opposite of that inFIG.4, and it is assumed that the traffic volume trends to decrease from the 1st day to the 3rd day of the week. 
On the other hand, the pattern changes to an increasing trend due to the occurrence of a sudden event on the 4th day. In both cases ofFIG.4andFIG.5, it is difficult to predict the fluctuation of the trend due to the occurrence or subsiding of the sudden event in advance. Therefore, the normal range is defined in consideration of both the prediction values and the corrected prediction values so that trend fluctuations due to a sudden event can be dealt with. As described above, the normal range is defined based on the difference between the prediction values and the corrected prediction values. Since the corrected prediction values are acquired as values that compensate for the deviation between the actually measured values and the prediction values, it can be said that the normal range is proportional to the magnitude of the deviation between the actually measured values and the prediction values. Furthermore, a large deviation indicates that the traffic trend fluctuates significantly. Therefore, in such a highly uncertain situation where the trend fluctuates greatly, the network traffic is monitored with a wider normal range. Therefore, it becomes possible to flexibly deal with traffic fluctuations that are difficult to predict, and to perform appropriate anomaly detection without detecting unnecessary anomalies. In the present embodiment, a case where a network used for business is used as a target route regarding the target of anomaly detection, and an anomaly in the traffic of the network is detected will be described as an example. In the present embodiment, the unit period for which the prediction value is acquired is set to every day (every 24 hours), and an anomaly is detected. Here, the terms related to the unit period for anomaly detection in the present embodiment will be summarized. Hereinafter, “target day”, “target period”, “reference day”, and “reference period” will be described as terms related to the unit period.
The “target day” is a day on which an anomaly is detected. In the present embodiment, the corrected prediction value is acquired for each target day for which the prediction value is set. The target day is, for example, a business day when business is performed in a case where anomaly detection is performed in a business network. When Monday to Friday, which are weekdays of the week, are business days, each of the five days from Monday to Friday is set as the target day. If there is a holiday between Monday and Friday, the days excluding the holiday are set as the target days. For example, if Tuesday is a holiday, Monday, Wednesday, Thursday, and Friday are set as the target days. Note that if the network operates all the time regardless of the day of the week, all days of the week may be set as the target days. The “target period” is a time zone during which an anomaly is detected out of time zones of the target day. For the target period, for example, a time zone from the start of the business to the end of the business may be set. If the business hours are 10:00-12:00 and 13:00-17:00, each time zone is set as the target period. In this way, any time zone in which the network is used for business is set as the target period to perform anomaly detection. Note that if the network operates all the time, all time zones of the target days may be set as the target periods. The target periods of the target days are an example of the first target period of the disclosed technique. The “reference day” is a day on which the actually measured values and prediction values are acquired in order to acquire the corrected prediction values. Regarding the reference day, the previous target day may be set as the reference day, and for example, the latest target day may be set as the reference day. In a case where target days are consecutive in a week, if the target day is expressed as xth day, (x−1)-th day is set as the reference day.
Note that a plurality of reference days may be set, and for example, two days ((x−1)-th day and (x−2)-th day) that are the latest target days may be set as reference days. Furthermore, when a plurality of reference days is used, weighting or the like may be performed such that the influence of the later reference day becomes larger. The “reference period” is one or more time zones, out of time zones of the reference day, for acquiring actually measured values and prediction values used for acquiring the corrected prediction values. The reference period is set in this way because it is considered that the deviation between the prediction values and the actually measured values is likely to occur in time zones when the network is intensively used. As the reference period, for example, the peak time zone of business may be set. If the peak time zones are 10:00-11:30 and 14:30-15:30, these time zones are set. By using the actually measured values and the prediction values of such a reference period, corrected prediction values that reduce the deviation from the actually measured values can be acquired. Note that the reference period and the target period may be the same. The reference period is an example of the second target period of the disclosed technique. Note that in a case where the unit period is not every day but 12 hours, which is shorter than 24 hours, the target period and reference period may be defined as every 12 hours. Similarly, in a case where the unit period is 36 hours, which is longer than 24 hours, the target period and reference period may be defined as every 36 hours. Furthermore, in a case where the unit period is 2 days (48 hours), the target period and reference period may be defined as every 2 days. The same applies to other time intervals. Furthermore, the target period and the reference period may be defined for each target route.
Hereinafter, the configuration and operation of the embodiment of the present disclosure will be described in detail. As illustrated inFIG.6, a traffic management device100according to the present embodiment includes a transmission/reception unit110, a traffic information storage112, a prediction unit114, a prediction information storage116, a derivation unit118, a corrected information storage120, a calculation unit122, and a detection unit124. The transmission/reception unit110transmits a request for traffic information to each route of the network, and receives traffic information from each route. The traffic management device100sets each route, from which traffic information is received, as a target route for anomaly detection. The traffic information is information about a route that includes the traffic volume of each route. When the transmission/reception unit110receives the traffic information, the transmission/reception unit110stores the traffic information in the traffic information storage112. In the traffic information storage112, the traffic information of each target route is stored. In the present embodiment, the traffic volume for each unit time out of the traffic information is treated as an actually measured value. The prediction unit114acquires and stores the prediction values of the traffic for each route. The prediction values may be acquired by using a traffic model based on the past actually measured values and using the method of the above-described reference technique. The prediction values may be acquired in advance before the target day. As described in the above-described example, for example, the actually measured values for the past four weeks from the start point of time of the prediction period are used as learning data to acquire prediction values within three weeks from the start point of time. The traffic model is learned using the traffic information stored in the traffic information storage112.
In the prediction information storage116, the traffic model and prediction values of each target route are stored. The derivation unit118multiplies the prediction values by the correction coefficient αxfor each target route to acquire the corrected prediction values for the target period, and stores the corrected prediction values in the corrected information storage120. The correction coefficient αxis calculated using the average value of the ratio of the actually measured values to the prediction values (actually measured values/prediction values) for each unit time of the reference period based on the actually measured values of the reference period and the prediction values of the reference period. The ratio is a value indicating the magnitude of the deviation between the actually measured values and the prediction values. Hereinafter, the way of acquiring the correction coefficient αxand the corrected prediction values will be described in detail. FIG.7is a diagram illustrating an example of a case where the corrected prediction values are acquired based on the correction coefficient αx. In the example ofFIG.7, the target day is expressed as xth day and the reference day is expressed as (x−1)-th day. The correction coefficient αxis calculated from the actually measured values and the prediction values of the reference period of the reference day ((x−1)-th day). Then, values acquired by multiplying the prediction values of the target day (xth day) by the correction coefficient αxare calculated as the corrected prediction values. In the case of the example ofFIG.7, the prediction values are less than the actually measured values within the reference period of (x−1)-th day. Thus, the correction coefficient αxthat makes the corrected prediction values of xth day larger than the prediction values is acquired. On xth day, the prediction values are multiplied by the correction coefficient αxto acquire corrected prediction values larger than the prediction values.
FIGS.8A and8Bare a diagram illustrating an example of data of the actually measured values and the prediction values of the reference period used for calculation of the correction coefficient αx. In the example ofFIGS.8A and8B, the reference period includes the time zones between 10:00 and 11:30 and between 14:00 and 15:30 on May 27, 2019, and the unit time is 10 minutes, and thus sampling is performed every 10 minutes. The sampling time points are 10:00/10:10/10:20 . . . /11:30, and 14:00/14:10/14:20 . . . /15:30. The traffic volume for each sampling time point is acquired from the traffic information storage112as an actually measured value. Furthermore, the average of the traffic volumes during the unit time including the sampling time may be acquired as an actually measured value. If the sampling time is 10:00, the traffic volumes at any intervals (for example, every 1 minute, every 2 minutes, etc.) between 9:50 and 10:00 are acquired from the traffic information storage112and the average of the acquired traffic volumes is used as the actually measured value. If the sampling time is 15:30, the average of the traffic volumes at any intervals between 15:20 and 15:30 is acquired from the traffic information storage112as the actually measured value. Furthermore, the prediction value for each unit time of the reference period is acquired from the prediction information storage116. Then, the ratio of the actually measured value to the prediction value is acquired for each sampling time point. At the sampling time of 10:00, the ratio is calculated to be 1.113. Here, data of the actually measured values may include an abnormal value due to a temporary increase or decrease in traffic and the like. Therefore, in order to reduce abnormal values, the acquired ratios are sorted in ascending order and the top 10% and bottom 10% are excluded. Then, the average value of the remaining 80% of the ratios is acquired as the correction coefficient αx. 
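The trimmed-mean computation of the correction coefficient αx described above might be sketched as follows; treating the trim fraction (10% at each end) as a parameter is an assumption made for illustration.

```python
def correction_coefficient(actuals, predictions, trim=0.10):
    # ratios of actually measured value to prediction value, one per
    # sampling time point of the reference period, sorted in ascending order
    ratios = sorted(a / p for a, p in zip(actuals, predictions))
    # exclude the top and bottom `trim` fraction to suppress abnormal
    # values, then average the remaining ratios to get alpha_x
    cut = int(len(ratios) * trim)
    kept = ratios[cut:len(ratios) - cut] if cut else ratios
    return sum(kept) / len(kept)

def corrected_predictions(predictions, alpha):
    # corrected prediction values: prediction values multiplied by alpha_x
    return [p * alpha for p in predictions]
```

With ten samples, one ratio is dropped from each end, so a single spike such as a doubled measurement does not distort αx.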
In the example ofFIGS.8A and8B, the top 10% is an example of the predetermined range including the maximum value of the present disclosure, and the bottom 10% is an example of the predetermined range including the minimum value. Note that the top 10% and the bottom 10% are examples, and other percentages may be used. Furthermore, the ratios may be sorted in descending order instead of ascending order as long as predetermined ranges are excluded similarly. As described above, the ratio of the actually measured value to the prediction value is acquired for each unit time of the reference period, excluding abnormal values, and the correction coefficient αxis acquired as an average of the remaining ratios. Note that the average value of the ratios is an example, and the median value may be used. Furthermore, the time zone and the unit time of this reference period are examples, and other time zones and unit times may be used. Note that in the above description, the case where (actually measured values/prediction values) are used as the ratios of the actually measured values to the prediction values has been described as an example, but the correction coefficient αxmay be acquired by using (prediction values/actually measured values) as the ratios. In this case, the corrected prediction values are acquired by dividing the prediction values by the correction coefficient αx. Furthermore, for the correction coefficient αx, a carry-over setting for each week unit is set in advance. The carry-over setting defines whether to reset the correction coefficient αxwithout carrying it over to the next week or to carry it over and continue using the correction coefficient αx. Resetting means that, for example, when Monday is the target day, the corrected prediction value is not used on Monday. That is, in a case of resetting the correction coefficient αx, the correction coefficient αxis not acquired with any day of the previous week as the reference day.
Furthermore, carrying over means that, for example, when Monday is the target day, the correction coefficient αxacquired using any day of the previous week as the reference day is used to acquire the corrected prediction values of Monday. The reason for introducing the carry-over setting every week as described above is that the trend of traffic fluctuation may differ according to the season. The carry-over setting may be made in advance, or may be automatically made by taking the statistics on the first days of the week. In this way, setting is made such that on a day defined as the first day of the week, the corrected prediction value is not acquired, or the corrected prediction value is acquired using a predetermined day of the previous week as the reference day. Note that Monday, which is the target day, is an example of the day defined as the first day of the week of the present disclosure. FIG.9is a diagram illustrating an example in which the correction coefficient αxis reset on the first day of the week and the correction coefficient αxis acquired on the next day and the following days to acquire the corrected prediction values. In the example illustrated inFIG.9, the trend indicating that the actually measured values of traffic are larger than the prediction values ends when a week is over, and the actually measured values become smaller. In a case where it is expected that the fluctuation trend of traffic does not continue when a week is over as described above, setting may be made such that the correction coefficient αxis reset without being carried over. In the example illustrated inFIG.9, the correction coefficient αxof the previous week is not carried over, but is reset on the first day and the corrected prediction values are acquired from the next day. FIG.10is a diagram illustrating an example in which the correction coefficient αxof the previous week is carried over to acquire the corrected prediction values.
In the example illustrated inFIG.10, the trend indicating that the actually measured values of traffic are larger than the prediction values continues when a week is over. In a case where it is expected that the fluctuation trend of traffic continues when a week is over as described above, setting may be made such that the correction coefficient αxis not reset and is carried over. In the example ofFIG.10, the correction coefficient αxof Monday of the previous week is carried over and used to acquire the corrected prediction values. Note that the correction coefficient αxof any day of the previous week may be carried over, and for example, the correction coefficient αxof the last target day of the previous week may be carried over. Alternatively, a correction coefficient αxsuch as the average or the median value of the correction coefficients αxof the previous week may be calculated and carried over to the first day of the week. The way of acquiring the corrected prediction values by the derivation unit118has been described above. In the corrected information storage120, the correction coefficient αxand the corrected prediction values for each target route are stored. Furthermore, in the corrected information storage120, the setting of the normal range for each unit time for each target route calculated by the calculation unit122described below is stored. The setting of the normal range includes an upper limit threshold value and a lower limit threshold value that define the normal range. The calculation unit122sets the normal range for each unit time for each target route based on the upper limit threshold values and the lower limit threshold values acquired for the prediction values and for the corrected prediction values. The upper limit threshold value and the lower limit threshold value for the prediction values and those for the corrected prediction values may be acquired by using the method of the reference technique and using the standard deviation σ.
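The weekly carry-over setting described above might look like the following sketch; the argument names and the choice of carrying over the coefficient of the last target day are assumptions for illustration (the average or median of the previous week could equally be carried).

```python
def week_start_alpha(prev_week_alphas, carry_over):
    # returns the correction coefficient alpha_x to use on the day defined
    # as the first day of the week, or None when the coefficient is reset
    # (i.e., no corrected prediction values on that day)
    if not carry_over or not prev_week_alphas:
        return None
    # carry over the coefficient of the last target day of the previous week
    return prev_week_alphas[-1]
```

On a reset, monitoring on the first day of the week falls back to the normal range based on the prediction values only.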
For example, regarding the prediction values, “prediction value+3σ” is set as the upper limit threshold value of the prediction value, and “prediction value−3σ” is set as the lower limit threshold value of the prediction value. Regarding the corrected prediction values, “corrected prediction value+3σ” is set as the upper limit threshold value of the corrected prediction value, and “corrected prediction value−3σ” is set as the lower limit threshold value of the corrected prediction value. The normal range for each unit time is set to a range from the upper limit threshold value that is higher among those of the prediction value and the corrected prediction value to the lower limit threshold value that is lower. In a case where the corrected prediction value>the prediction value is satisfied, the range from the upper limit threshold value of the corrected prediction value to the lower limit threshold value of the prediction value is defined as the normal range. In a case where the corrected prediction value<the prediction value is satisfied, the range from the upper limit threshold value of the prediction value to the lower limit threshold value of the corrected prediction value is defined as the normal range. In this way, the normal range is defined by using one of the prediction value and the corrected prediction value as the upper limit value and the other as the lower limit value. Note that when the prediction value and the corrected prediction value are the same, the upper limit threshold value and the lower limit threshold value of either one may be used because the normal range would be the same. The detection unit124detects an anomaly using the normal range set by the calculation unit122for each target route. The anomaly detection method may be based on the method of the above-described reference technique, and the detection unit124determines, for each unit time, whether the actually measured value is within the normal range. 
If the actually measured value is within the normal range, the detection unit124determines that there is no anomaly, and if the actually measured value is not within the normal range, the detection unit124determines that there is an anomaly. The detection unit124performs anomaly detection by determining whether there is an anomaly by thus using the normal range. The traffic management device100can be implemented, for example, by a computer20illustrated inFIG.11. The computer20includes a central processing unit (CPU)21, a memory22as a temporary storage area, and a nonvolatile storage23. Note that the computer20also includes an input/output device24, a read/write (R/W) unit25that controls reading and writing of data to and from a storage medium29, and a communication interface (I/F)26connected to a network such as the Internet. The CPU21, the memory22, the storage23, the input/output device24, the R/W unit25, and the communication I/F26are connected to each other via a bus27. The storage23can be implemented by a hard disk drive (HDD), a solid state drive (SSD), a flash memory, or the like. The storage23as a storage medium stores a management program30for causing the computer20to function as the traffic management device100. The management program30includes a prediction process32, a derivation process33, a calculation process34, and a detection process35. Furthermore, the storage23has an information storage area60for storing information for configuring each of the traffic information storage112, the prediction information storage116, and the corrected information storage120. Note that the management program30is an example of an anomaly detection program of the disclosed technique. The CPU21reads the management program30from the storage23, develops the management program30in the memory22, and sequentially executes the processes included in the management program30. The CPU21executes the prediction process32so as to operate as the prediction unit114illustrated inFIG.6. 
Note that the CPU21executes the derivation process33so as to operate as the derivation unit118illustrated inFIG.6. Furthermore, the CPU21executes the calculation process34so as to operate as the calculation unit122illustrated inFIG.6. The CPU21executes the detection process35so as to operate as the detection unit124illustrated inFIG.6. Furthermore, the CPU21reads information from the information storage area60, and develops each of the traffic information storage112, the prediction information storage116, and the corrected information storage120into the memory22. This enables the computer20that has executed the management program30to function as the traffic management device100. Note that the CPU21that executes the program is hardware. Note that functions implemented by the management program30can also be implemented, for example, by a semiconductor integrated circuit, in more detail, an application specific integrated circuit (ASIC) or the like. Next, operation of the traffic management device100according to the present embodiment will be described. The operation of the traffic management device100is divided into correction processing performed before the start of the target period and detection processing performed during the target period. Note that it is assumed that the processing of the prediction unit114is performed in advance and the prediction values of each target route are stored in the prediction information storage116. The correction processing and the detection processing are included in an example of the anomaly detection method of the disclosed technique. The correction processing will be described with reference to the flowchart ofFIG.12. The following correction processing is performed for each target route.
In step S100, the derivation unit118determines whether it is the correction processing timing, and if it is the correction processing timing, the processing proceeds to step S102, and if it is not the correction processing timing, step S100is repeated at a predetermined time interval. The correction processing timing may be set to any time before the start of the target period of the target route for each target route. In step S102, the derivation unit118acquires actually measured values and prediction values of the reference period of the reference day for the target route. The actually measured values are acquired from the traffic information storage112, and the prediction values are acquired from the prediction information storage116. In step S104, the derivation unit118calculates the correction coefficient αxfor the target route based on the actually measured values and the prediction values of the reference period of the reference day. The correction coefficient αxis calculated from the average value of the ratios of the actually measured values to the prediction values for each unit time of the reference period. In step S106, the derivation unit118multiplies the prediction values of the target period by the correction coefficient αxfor the target route to acquire the corrected prediction values, and stores the corrected prediction values in the corrected information storage120. In step S108, the calculation unit122calculates, for the target route, the upper limit threshold value and the lower limit threshold value for the prediction values and those for the corrected prediction values for each unit time of the target period. In step S110, the calculation unit122sets the normal range for each unit time of the target period for the target route, and stores the setting of the normal range in the corrected information storage120.
When the normal range is set, the higher of the two upper limit threshold values calculated in step S108 (that for the prediction value and that for the corrected prediction value) is selected as the upper limit of the normal range, and the lower of the two lower limit threshold values is selected as the lower limit of the normal range. As described above, the normal range is defined based on the difference between the prediction values and the corrected prediction values. Next, the detection processing will be described with reference to the flowchart ofFIG.13. The detection processing is performed every unit time (for example, every 5 minutes, every 10 minutes, etc.) for each target route. In the following, a case where the detection processing is performed for one target route will be described as an example. In step S200, the calculation unit122determines whether the current time is within the target period, and proceeds to step S202if the current time is within the target period, and ends the processing if the current time is not within the target period. In step S202, the calculation unit122acquires the setting of the normal range of the unit time for the target route. The setting of the normal range is acquired from the corrected information storage120. In step S204, the calculation unit122acquires, from the traffic information storage112, the actually measured value for the unit time corresponding to the setting of the setting method for the target route. For example, when the current time is 10:10, the traffic volume at 10:10 is acquired from the traffic information storage112as an actually measured value.
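The normal-range selection of step S110 (take the higher of the two upper limits and the lower of the two lower limits) can be sketched as follows. How the individual threshold values are derived from the prediction values in step S108 is not detailed in the text, so they are taken here as given inputs:

```python
def normal_range(pred_upper, pred_lower, corr_upper, corr_lower):
    """Per-unit-time normal range: the envelope of the threshold values
    for the prediction values and for the corrected prediction values.
    Returns a list of (lower, upper) pairs, one per unit time."""
    uppers = [max(pu, cu) for pu, cu in zip(pred_upper, corr_upper)]
    lowers = [min(pl, cl) for pl, cl in zip(pred_lower, corr_lower)]
    return list(zip(lowers, uppers))
```

Taking the envelope of both threshold pairs means the normal range widens exactly where the prediction and the corrected prediction disagree, which is the stated intent of defining the range from their difference.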
In step S206, the detection unit124determines whether the actually measured value is within the normal range based on the actually measured value acquired in step S204and the setting of the normal range acquired in step S202. If the actually measured value is within the normal range, the processing proceeds to step S208, and if the actually measured value is not within the normal range, the processing proceeds to step S210. In step S208, the detection unit124determines that no anomaly has occurred in the actually measured value for the unit time, and outputs that there is no anomaly. In step S210, the detection unit124determines that an anomaly has occurred in the actually measured value for the unit time, and outputs that there is an anomaly. Note that the processing of acquiring the actually measured value to detect an anomaly from steps S204to S210may be repeatedly performed at intervals shorter than the unit time for performing the detection processing. The correction processing and the detection processing of the present embodiment have been described above. An experimental example of the method according to the present embodiment will be described.FIG.14is a diagram illustrating an experimental example of the method according to the present embodiment. In the experimental example illustrated inFIG.14, the correction coefficient αxwas reset every week to acquire corrected prediction values for the busy season of the last week of May and the slack season of the first week of June, which is the following month. In this experiment, the corrected prediction values tended to be closer to the actually measured values than the prediction values were, and the effectiveness of performing anomaly detection using the corrected prediction values was confirmed.
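The check of steps S206 to S210 then reduces to a per-unit-time range test (the function name is hypothetical):

```python
def detect(measured_value, lower, upper):
    """Steps S206-S210: report an anomaly when the actually measured
    value for the unit time falls outside the normal range."""
    if lower <= measured_value <= upper:
        return "no anomaly"
    return "anomaly"
```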
Furthermore, an example of a screen on which a network administrator can check the transition of traffic will be described.FIG.15is a diagram illustrating a screen example indicating the transition of the actually measured values of the traffic, the upper limit values of the normal range, and the lower limit values of the normal range. As illustrated in the screen example illustrated inFIG.15, the transition of the actually measured values, the upper limit values of the normal range, and the lower limit values of the normal range are indicated. Thus, whether there is an anomaly can be monitored in real time. InFIG.15, the solid line represents the actually measured values, the dotted line represents the upper limit values of the normal range based on the corrected prediction values, and the broken line represents the lower limit values of the normal range based on the prediction values. Furthermore, as illustrated in the screen example illustrated inFIG.16, when the actually measured values fall below the lower limit of the normal range, a message indicating that an anomaly has occurred is displayed. As described above, the traffic management device100according to the present embodiment calculates the correction coefficient αxbased on the actually measured values and the prediction values of the reference period for each target route, and multiplies the prediction values of the target period by the correction coefficient αxto acquire the corrected prediction values. The traffic management device100calculates the normal range for the prediction values and the corrected prediction values. Furthermore, the traffic management device100acquires an actually measured value of a predetermined period out of the target period, and detects an anomaly using the normal range. Therefore, it is possible to appropriately detect an anomaly even under a fluctuation that is difficult to predict. (Modification) Next, modifications of the present embodiment will be described.
For example, in the above-described embodiment, the case where the correction coefficient αxis used has been described as an example, but the present disclosure is not limited to this case, and instead of the correction coefficient αx, a correction value βx, which is calculated from the difference between the actually measured values and the prediction values of the reference period, may be used. In a case where the correction value βxis used, values acquired by adding the correction value βxto the prediction values are acquired as the corrected prediction values. Furthermore, in this case, differences between the actually measured values and the prediction values are used instead of the ratios. The correction value βxis calculated as the average of the difference values that remain after removing, from the difference values, those included in a predetermined range including the maximum value and those included in a predetermined range including the minimum value. Then, corrected prediction values are acquired by adding the correction value βxto the prediction values. Furthermore, the detection of an anomaly may be performed by, for example, calculating anomaly degrees by the following expressions (1-1) and (1-2) using the differences between the actually measured values and the prediction values and the differences between the actually measured values and the corrected prediction values.

first anomaly degree = ((actually measured value) − (prediction value))^2   (1-1)

second anomaly degree = ((actually measured value) − (corrected prediction value))^2   (1-2)

The first anomaly degree and the second anomaly degree are compared with respective preset threshold values, and if either of them is within the threshold values, it is determined that there is no anomaly, and if neither of them is within the threshold values, it is determined that there is an anomaly.
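A sketch of the correction value βx under this modification follows. The width of the "predetermined range" around the maximum and the minimum is not specified in the text, so trimming a fixed fraction of the sorted difference values at each end is an assumption made here, as are the function names:

```python
def correction_value_beta(measured, predicted, trim_fraction=0.1):
    """Correction value βx: the average of the per-unit-time differences
    (actually measured value - prediction value) over the reference
    period, after removing the difference values near the maximum and
    near the minimum.  The trim width is an assumed parameter."""
    diffs = sorted(m - p for m, p in zip(measured, predicted))
    k = int(len(diffs) * trim_fraction)
    kept = diffs[k:len(diffs) - k] if k else diffs
    return sum(kept) / len(kept)

def corrected_predictions_beta(beta_x, target_predictions):
    """Corrected prediction values under the modification: βx is added
    to (not multiplied with) the prediction values."""
    return [p + beta_x for p in target_predictions]
```

Trimming the extremes keeps a single burst in the reference period (e.g. one unit time measuring far above prediction) from skewing βx for the whole target period.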
In this case, these anomaly degrees are used as references of the present disclosure. Furthermore, the expressions (1-1) and (1-2) may be replaced with the following expressions (2-1) and (2-2).

first anomaly degree = (((actually measured value) − (prediction value)) / (prediction value))^2   (2-1)

second anomaly degree = (((actually measured value) − (corrected prediction value)) / (corrected prediction value))^2   (2-2)

Furthermore, the corrected prediction value may be acquired by storing, in a table or the like, the relationship between the prediction values, the actually measured values, and the correction coefficients αxon the one hand and the corrected prediction values on the other, and reading out a corrected prediction value from the table. Furthermore, the corrected prediction value may be acquired by using a method such as deep learning. When a method such as deep learning is used, a correction model that outputs corrected prediction values is learned using, as learning data, each of the actually measured values acquired in the target period, the prediction values and the corrected prediction values of the target period, and the actually measured values and the prediction values of the reference period. The learning data may be accumulated by the method of the present embodiment for a certain period that is, for example, several weeks. The correction model corresponds to the correction coefficient αxof the above-described embodiment. The correction model is trained so as to minimize the differences between the corrected prediction values and the actually measured values. The derivation unit118may input the actually measured values and the prediction values of the reference period into the trained correction model, and acquire the corrected prediction values as outputs of the correction model. By using the correction model, it is possible to deal with the trend of traffic fluctuation of the target route.
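The anomaly-degree decision described above (squared differences per expressions (1-1)/(1-2), with "no anomaly" when either degree is within its preset threshold) can be sketched as follows; the function names are hypothetical:

```python
def anomaly_degrees(measured, predicted, corrected):
    """First and second anomaly degrees per expressions (1-1) and (1-2):
    squared difference from the prediction value and from the corrected
    prediction value, respectively."""
    return (measured - predicted) ** 2, (measured - corrected) ** 2

def is_anomaly(measured, predicted, corrected, threshold1, threshold2):
    """No anomaly if either degree is within its threshold value;
    an anomaly only if neither is."""
    d1, d2 = anomaly_degrees(measured, predicted, corrected)
    return not (d1 <= threshold1 or d2 <= threshold2)
```

The normalized forms of expressions (2-1) and (2-2) differ only in dividing each difference by the corresponding prediction value before squaring, which makes the thresholds comparable across routes with different traffic volumes.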
All examples and conditional language provided herein are intended for the pedagogical purposes of aiding the reader in understanding the invention and the concepts contributed by the inventor to further the art, and are not to be construed as limitations to such specifically recited examples and conditions, nor does the organization of such examples in the specification relate to a showing of the superiority and inferiority of the invention. Although one or more embodiments of the present invention have been described in detail, it should be understood that the various changes, substitutions, and alterations could be made hereto without departing from the spirit and scope of the invention.
11863419

DESCRIPTION OF EMBODIMENT

Prerequisite Technique

A technique prerequisite for an example will be described in detail with reference toFIGS.1to22G. FIG.1is a drawing showing an example of a computer network24according to an embodiment of the present invention. In the present embodiment, as shown inFIG.1, a marketplace system (MPS)10, a network operating system (NOS)12, a purchaser terminal14, a vendor terminal16, a plurality of core network systems20, and a plurality of base station apparatuses22are connected to the computer network24such as the Internet. The core network system20is a system that is equivalent to the Evolved Packet Core (EPC) in fourth-generation mobile communications systems (hereinafter referred to as 4G) or the 5G Core Network (5GC) including Access and Mobility Management function (AMF), Session Management function (SMF), and User Plane function (UPF) in fifth-generation mobile communications systems (hereinafter referred to as 5G). The core network systems20according to the present embodiment are implemented by server groups disposed at a plurality of data centers provided at various locations. A plurality of servers is disposed at each data center. Although two core network systems20are shown inFIG.1, the number of the core network systems20according to the present embodiment is not limited to two, but may be one, or three or greater. The base station apparatus22is a computer system that is equivalent to an eNodeB (eNB) in 4G or an NR base station (gNB) in 5G and that is equipped with an antenna22a. The base station apparatus22according to the present embodiment includes one or more servers. The base station apparatuses22may be implemented by server groups disposed at data centers. A virtual DU (vDU) and a virtual CU (vCU), components of a radio access network (RAN), in 4G may be disposed at the base station apparatus22or may be built in part of the core network system20.
Similarly, a DU and a CU, components of a RAN, in 5G may be disposed at the base station apparatus22or may be built in part of the core network system20. The MPS10according to the present embodiment is configured, for example, on a cloud infrastructure and as shown inFIG.1, includes a processor10a, a storage10b, and a communicator10c. The processor10ais a program-controlled device, such as a microprocessor, that operates according to a program installed in the MPS10. The storage10bis, for example, a storage element, such as read only memory (ROM) and random-access memory (RAM), a solid-state drive (SSD), or a hard disk drive (HDD). A program and the like run by the processor10ais stored on the storage10b. The communicator10cis, for example, a communication interface such as a network interface card (NIC) and a wireless LAN module. The communicator10cis used to send data to and receive from the NOS12and the purchaser terminal14through the computer network24. The NOS12according to the present embodiment is configured, for example, on a cloud infrastructure and as shown inFIG.1, includes a processor12a, a storage12b, and a communicator12c. The processor12ais a program-controlled device, such as a microprocessor, that operates according to a program installed in the NOS12. The storage12bis, for example, a storage element, such as ROM and RAM, a solid state drive (SSD), or a hard disk drive (HDD). A program and the like run by the processor12ais stored in the storage12b. The communicator12cis, for example, a communication interface such as an NIC and a wireless LAN module. The communicator12cis used to send data to and receive from the MPS10, the purchaser terminal14, the vendor terminal16, the core network systems20, and the base station apparatuses22through the computer network24. 
In the present embodiment, in response to a purchase request for a network service from a purchaser, the network service for which the purchase request has been made is built on any of the core network systems20and the base station apparatuses22. Then, the built network service is provided to the purchaser. A network service such as a voice communication service and a data communication service is provided to the purchaser such as a mobile virtual network operator (MVNO), for example. Any of the voice communication service and the data communication service provided in the present embodiment is ultimately provided to a customer (an end user) of the purchaser (the MVNO in the above example) who uses user equipment (UE)26shown inFIG.1. The end user can have any of voice and data communications with another user through any of the core network systems20and the base station apparatuses22. The network service provided in the present embodiment is not limited to voice communication services and data communication services. The network service provided in the present embodiment may be an Internet of things (IoT) service, for example. An end user who uses a robot arm, a connected vehicle, or other equipment may be a purchaser of a network service according to the present embodiment, for example. In the present embodiment, an application execution environment of a container type, such as Docker, is installed in the servers disposed on the core network systems20and the base station apparatuses22. Containers can be deployed and run on these servers. The network service provided to the purchaser in the present embodiment is implemented by a cloud-native network function (CNF) that is a container-based functional unit. The purchaser terminal14according to the present embodiment is, for example, a general computer, such as a smartphone, a tablet terminal, and a personal computer, used by the purchaser above. 
FIG.2is a drawing showing an example of a purchase screen displayed on the purchaser terminal14according to the present embodiment. Radio buttons on the purchase screen shown inFIG.2allow the purchaser to select a type of a network service the purchaser is to purchase. When the purchaser designates a voice communication service and clicks a Next button30, a service requirement input screen shown inFIG.3is displayed on the purchaser terminal14. The service requirement input screen allows the purchaser to enter service requirements for the network service the purchaser is to purchase. In an example of FIG.3, the purchaser can set the number of subscribers, a correspondent IP address, a monitored target, a monitoring interval, a covered region, and a password. The correspondent IP address refers to an IP address that is an access point to a network system owned by the purchaser. When the purchaser enters these service requirements and clicks the Next button32, items of service requirement data corresponding to the requirements entered in the service requirement input screen are sent to the MPS10. The service requirement data, for example, includes subscriber number data showing the number of subscribers, correspondent IP data showing a correspondent IP address, monitored target data showing a target to be monitored, monitoring interval data showing intervals at which the monitored target is monitored, covered region data showing a region covered by the purchased network service, and password data showing a password. The service requirement data does not necessarily include all of these items of the data but may include an item of data showing a requirement other than these requirements. Based on the service requirement data, the MPS10in coordination with the NOS12checks whether or not this network can reserve a server that meets the service requirements represented by the service requirement data. 
In this example, the MPS determines one of (1) a server that meets the service requirements can be reserved, (2) a server that meets the service requirements can be reserved by setting up a free server, and (3) a server that meets the service requirements cannot be reserved. When a result determined by the MPS is (1) or (2), the purchaser terminal14displays a purchase confirmation screen as shown inFIG.4to show that the service can be promptly provided. When the result determined by the MPS is (3), the purchaser terminal14displays a purchase confirmation screen as shown inFIG.5to show that a predetermined delivery time is required (for example, a delivery time of two weeks is required). When the purchaser then clicks a Purchase button34shown inFIG.4or5, the network service for which the purchase request has been made is built and is provided to the purchaser. Meanwhile, when the purchaser clicks a Cancel button36shown inFIG.4or5, the purchase request is canceled. As described above, according to the present embodiment, a network service that meets various needs of a purchaser is flexibly built. The purchaser can be provided with a desired network service only by specifying several service requirements without being aware of detailed implementation of the network service. The vendor terminal16according to the present embodiment is, for example, a general computer, such as a smartphone, a tablet terminal, and a personal computer, used by a vendor such as a provider of a service related to a network service. In the present embodiment, a continuous integration/continuous delivery (CI/CD) pipeline including a development environment, a verification environment, and a testing environment is provided to a vendor. In the present embodiment, through an on-boarding process using the CI/CD pipeline, a bundled file that is prepared by the vendor and that is verified is on-boarded. The bundled file is compatible with a network service that is a target provided to the purchaser. 
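The three-way determination in the purchase flow, (1) reservable as-is, (2) reservable by setting up a free server, (3) not reservable, might be sketched as follows. The capacity model and the function name are assumptions, since the text does not specify how the check is implemented:

```python
def determine_reservation(required_capacity, available_capacity, free_servers):
    """Return 1, 2, or 3 per the purchase flow: (1) a server meeting the
    service requirements can be reserved, (2) it can be reserved by
    setting up a free server, (3) it cannot be reserved, so a
    predetermined delivery time is required."""
    if required_capacity <= available_capacity:
        return 1
    if free_servers > 0:
        return 2
    return 3
```

Results 1 and 2 lead to the prompt-provision confirmation screen ofFIG.4, while result 3 leads to the delivery-time screen ofFIG.5.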
The bundled file according to the present embodiment is a file (e.g., a tar.gz format file) into which a group of files having a predetermined directory configuration is compressed, for example. FIG.6is a drawing showing an example of a data structure of a bundled file according to the present embodiment. As shown inFIG.6, the bundled file according to the present embodiment includes business section data, technology section data, security section data, and operation section data. The business section data shows, for example, business requirements for the network service, such as a name of the network service, a license requirement, and a definition of a service level agreement (SLA). The business section data according to the present embodiment includes data showing an essential field to be entered and an optional field to be entered about service requirements for the network service. The technology section data shows, for example, a configuration of a functional unit group implementing the network service. The technology section data shows, for example, a configuration of an application and a CNF that make up the network service. The security section data shows, for example, a definition of security on the network service, such as information about a qualification for installation. The operation section data shows, for example, a monitoring policy relating to the network service, such as on metrics of a monitored target and monitoring intervals. FIG.7is a drawing showing an example of an on-boarding screen displayed on the vendor terminal16according to the present embodiment. In the present embodiment, when the vendor specifies a path on which a bundled file is disposed and clicks an On-boarding button40, the bundled file is on-boarded. In the present embodiment, as described above, the vendor can readily on-board the file for the network service without being aware of an actual location where a group of developed files is on-boarded. 
Functions of the MPS10and the NOS12and a process executed by the MPS10and the NOS12according to the present embodiment will be further described below. FIG.8is a functional block diagram showing an example of functions implemented in the MPS10and the NOS12according to the present embodiment. In the MPS10and the NOS12according to the present embodiment, not all the functions shown inFIG.8may need to be implemented and a function other than the functions inFIG.8may be implemented. As shown inFIG.8, the MPS10includes a bundle manager50, a product catalog storage52, and a purchase manager54in terms of function, for example. The bundle manager50and the purchase manager54are implemented primarily by the processor10aand the communicator10c. The product catalog storage52is implemented primarily by the storage10b. A program that includes commands associated with the above functions and that is installed in the MPS10, which is a computer, may be executed by the processor10aand the above functions may be thereby implemented. The program may be supplied, for example, to the MPS10via a computer-readable data storage medium, such as an optical disk, a magnetic disk, a magnetic tape, a magneto-optical disk, and flash memory, or others such as the Internet. As shown inFIG.8, the NOS12includes a bundle expander60, an orchestrator (E2EO: end-to-end-orchestration)62, a service catalog storage64, an inventory manager66, a configuration management as a service (CMaaS) system68, a service manager70, a slice manager72, a monitoring manager74, a security setter76, a plurality of container managers78, a repository80, an inventory database82, and a bare metal as a service (BMaaS) system84in terms of function, for example. The bundle expander60and the E2EO62are implemented primarily by the processor12aand the communicator12c. The service catalog storage64, the repository80, and the inventory database82are implemented primarily by the storage12b. 
The inventory manager66, the CMaaS system68, the service manager70, the slice manager72, the monitoring manager74, the security setter76, and the container managers78are implemented primarily by the processor12aand the storage12b. The BMaaS system84is implemented primarily by the processor12a. A program that includes commands associated with the above functions and that is installed in the NOS12, which is a computer, may be executed by the processor12aand the above functions may be thereby implemented. The program may be supplied, for example, to the NOS12via a computer-readable data storage medium, such as an optical disk, a magnetic disk, a magnetic tape, a magneto-optical disk, and flash memory, or others such as the Internet. FIG.8also shows a plurality of servers90that are disposed and dispersed at various locations and that are included in the core network systems20and the base station apparatuses22shown inFIG.1. The plurality of the container managers78according to the present embodiment are associated with respective server groups that are each a part of the plurality of the servers90. A container management tool such as Kubernetes and a package manager such as Helm, for example, are installed in each of the plurality of the container managers78according to the present embodiment. The container managers78perform container life cycle management, including building containers such as container deployment and settings, on the server groups (the plurality of the servers90) associated with the container managers78. The container managers78are not necessarily included in the NOS12. The container managers78may be disposed, for example, in the servers90(i.e., the core network systems20and the base station apparatuses22) managed by the container managers78or in servers provided next to the servers90. In the present embodiment, the bundle expander60accepts, for example, a bundled file from the vendor terminal16. 
In the present embodiment, based on the accepted bundled file, the bundle expander60generates, for example, a data group, a data structure of which is shown inFIG.9. The data group shown inFIG.9is a data group in which content of the bundled file accepted by the bundle expander60is reconfigured. As shown inFIG.9, the data group generated by the bundle expander60includes product catalog data, service catalog data, inventory template data, CM template data, service template data, slice template data, monitoring script data, security script data, Helm chart data, and container image data. The product catalog data is, for example, data corresponding to the business section data included in the bundled file. As described above, the product catalog data shows information concerning business requirements for the network service, such as a name of the network service, which is displayed on the purchase screen shown inFIG.2, a license requirement, and a definition of a service level agreement (SLA). The product catalog data according to the present embodiment also includes data showing an essential field to be entered and an optional field to be entered about service requirements for the network service. In the present embodiment, for example, based on the product catalog data, the purchase screen shown inFIG.2and the service requirement input screen shown inFIG.3are generated. The service catalog data is, for example, data corresponding to a part of the technology section data included in the bundled file. The service catalog data includes a workflow script used to build the network service. The service catalog data may include requirement-configuration association data showing an association between values in the service requirement data described above and a configuration of a functional unit group (e.g., a CNF group) built in response to a purchase request. 
For example, the service catalog data may include requirement-configuration association data showing an association between a value in the service requirement data and each of types of a group of functional units and the number of the functional units of the types. For example, the requirement-configuration association data may show an association each between “the number of subscribers20000and one Packet Data Network Gateway (P-GW)”, “the number of subscribers20000and one IP Multimedia System (IMS)”, and “the number of subscribers20000and one Home Subscriber Server (HSS)”. Items associated with the service requirement data are not limited to types and the number of 4G components. The service requirement data may be associated with types and the number of 5G components. The requirement-configuration association data may show, for example, an association between a value in the service requirement data and a location at which each functional unit included in a functional unit group is built. The functional unit group is built in response to a purchase request. In this case, in the requirement-configuration association data, the location associated with the value in the service requirement data may differ depending on the functional unit included in the built functional unit group. The inventory template data is, for example, data corresponding to both a part of the technology section data and a part of the security section data included in the bundled file. The inventory template data is, for example, template data showing logic used by the inventory manager66. The CM template data is, for example, data corresponding to both a part of the technology section data and a part of the operation section data included in the bundled file and is, for example, template data showing logic used by the CMaaS system68. 
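The requirement-configuration association can be illustrated with the 20000-subscriber example from the text (one P-GW, one IMS, and one HSS per 20000 subscribers). Scaling by whole started blocks of 20000 subscribers is an assumption made here; the text only states the single association:

```python
SUBSCRIBERS_PER_BLOCK = 20000
# Types and counts of functional units per block, from the example
# associations in the text (4G components).
UNITS_PER_BLOCK = {"P-GW": 1, "IMS": 1, "HSS": 1}

def functional_unit_group(num_subscribers):
    """Derive the types and the number of functional units of each type
    from the subscriber count in the service requirement data."""
    blocks = -(-num_subscribers // SUBSCRIBERS_PER_BLOCK)  # ceiling division
    return {unit: count * blocks for unit, count in UNITS_PER_BLOCK.items()}
```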
The service template data is, for example, data corresponding to a part of the technology section data included in the bundled file and is, for example, template data showing logic used by the service manager70. The slice template data is, for example, data corresponding to a part of the technology section data included in the bundled file and is, for example, template data showing logic used by the slice manager72. The monitoring script data is, for example, data corresponding to a part of the operation section data included in the bundled file and is, for example, data showing a monitoring script run by the monitoring manager74. The security script data is, for example, data corresponding to a part of the security section data included in the bundled file and is, for example, data showing a script about security run by the security setter76. The Helm chart data is, for example, data corresponding to a part of the operation section data included in the bundled file and is data showing a script template (a Helm chart) used by the container managers78. The container image data is, for example, data corresponding to a part of the operation section data included in the bundled file and is, for example, container image data about a container included in the functional unit group implementing the network service. The container image data includes one or more container images. A container image ID, an identifier of a container image, is related to each of the one or more container images. In the present embodiment, in response to acceptance of a bundled file, the bundle expander60determines a bundle ID related to a data group generated based on the bundled file. The bundle ID is uniquely assigned to each generated data group. The bundle expander60sends product catalog data included in the data group, which is related to the bundle ID, to the MPS10, with the product catalog data being related to the determined bundle ID. 
The bundle expander60also relates service catalog data included in the data group to the determined bundle ID and outputs the service catalog data to the E2EO62. Then, the E2EO62causes the service catalog storage64to store the service catalog data. The bundle expander60also relates inventory template data, CM template data, service template data, slice template data, monitoring script data, security script data, Helm chart data, and container image data to the bundle ID, which is related to the data group, and causes the inventory manager66, the CMaaS system68, the service manager70, the slice manager72, the monitoring manager74, the security setter76, the container managers78, and the repository80to store the respective items of the data. In this way, in the present embodiment, the product catalog data, the service catalog data, the inventory template data, the CM template data, the service template data, the slice template data, the monitoring script data, the security script data, the Helm chart data, and the container image data are associated with one another by the bundle ID. In the present embodiment, the vendor can readily provide a network service by conducting a simple operation such as specifying a path for a bundled file. In the present embodiment, the bundle manager50receives, for example, the product catalog data related to the bundle ID and sent from the bundle expander60. Then, the bundle manager50causes the product catalog storage52to store the received product catalog data. In the present embodiment, the product catalog storage52stores, for example, the product catalog data related to the bundle ID as described above. In the present embodiment, the purchase manager54accepts, for example, a request for network service building like a purchase request for a network service from the purchaser terminal14. Such a purchase request is related to a bundle ID and service requirement data. 
The bundle ID related to the purchase request is hereinafter referred to as a purchase bundle ID, and the service requirement data related to the purchase request is referred to as purchase service requirement data. In response to acceptance of the above purchase request, the purchase manager54sends the purchase service requirement data related to the purchase bundle ID to the E2EO62. The purchase manager54, in coordination with the E2EO62and the inventory manager66, specifies a delivery time for the network service the purchaser is to purchase. Then, the purchase manager54informs the purchaser of the specified delivery time. The purchase manager54generates, for example, the purchase confirmation screen in which the specified delivery time is shown and sends the generated purchase confirmation screen to the purchaser terminal14. In the present embodiment, the inventory database82is, for example, a database on which inventory information about the plurality of the servers90, which are managed by the NOS12and disposed on the core network systems20and the base station apparatuses22, is stored. In the present embodiment, inventory data including physical inventory data shown inFIG.10and logical inventory data shown inFIG.11is stored on the inventory database82, for example. The inventory data shows a status of a resource (e.g., a status of resource usage) managed by the NOS12. FIG.10is a drawing showing an example of a data structure of the physical inventory data. The physical inventory data shown inFIG.10is related to any one of the servers90. The physical inventory data shown inFIG.10, for example, includes a server ID, location data, building data, rank data, rack data, an allocation resource pool group ID, an allocation resource pool ID, spec data, network data, and a running container ID list. The server ID included in the physical inventory data is, for example, an identifier of the server90related to the physical inventory data. 
The location data included in the physical inventory data is, for example, data showing a location (e.g., a location address) of the server90related to the physical inventory data. The building data included in the physical inventory data is, for example, data showing a building (e.g., a name of a building) in which the server90related to the physical inventory data is disposed. The rank data included in the physical inventory data is, for example, data showing a rank in which the server90related to the physical inventory data is disposed. The rack data included in the physical inventory data is, for example, an identifier of a rack on which the server90related to the physical inventory data is disposed. The allocation resource pool group ID included in the physical inventory data is, for example, an identifier of a resource pool group to which the server90related to the physical inventory data is allocated. The allocation resource pool ID included in the physical inventory data is, for example, an identifier of a resource pool to which the server90related to the physical inventory data is allocated. The resource pool indicated by the allocation resource pool ID is any resource pool included in the resource pool group related to the allocation resource pool group ID. In the present embodiment, a free server is allocated to a resource pool group. However, to which resource pool included in the resource pool group the free server is allocated has not been determined yet. In the physical inventory data related to such a free server, a value of the allocation resource pool ID is set to null. The spec data included in the physical inventory data is, for example, data showing specs of the server90related to the physical inventory data, such as the number of cores, a memory capacity, and a hard disk capacity of the server90. 
The network data included in the physical inventory data is, for example, data showing features such as an NIC included in the server90related to the physical inventory data and the number of ports included in the NIC. The running container ID list included in the physical inventory data is, for example, data showing a list of the identifiers (container IDs) of the instances of one or more containers running on the server90related to the physical inventory data. FIG.11is a schematic view showing an example of a data structure of the logical inventory data. As shown inFIG.11, the logical inventory data includes network service (NS) data, network function (NF) data, CNF data, pod data, and container data. The NS data is, for example, data showing an identifier of an instance of a network service equivalent to a virtual radio access network (vRAN) or the like and a type and other attributes of the network service. The NF data is, for example, data showing an identifier of an instance of a network function equivalent to eNodeB or the like and a type and other attributes of the network function. The CNF data is, for example, data showing an identifier of an instance of a CNF equivalent to a vCU, a vDU, or the like and a type and other attributes of the CNF. The pod data is, for example, data showing an identifier of an instance of a pod included in the CNF and a type and other attributes of the pod. A pod is the smallest unit by which Kubernetes manages Docker containers. The container data is data showing the container ID of an instance of a container included in the pod and a type and other attributes of the container. Data showing attributes such as a host name and an IP address may be specified in any of the data above included in the logical inventory data. The container data may include data showing the IP address of a container related to the container data, for example. 
The CNF data may include data showing the IP address and the host name of a CNF shown by the CNF data, for example. The data above has a hierarchical structure. The NS data is associated with one or more pieces of NF data corresponding to one or more network functions included in a network service corresponding to the NS data. The NF data is associated with one or more pieces of CNF data corresponding to one or more CNFs included in a network function corresponding to the NF data. The CNF data is associated with one or more pieces of pod data corresponding to one or more pods included in a CNF corresponding to the CNF data. The pod data is associated with one or more pieces of container data corresponding to one or more containers included in a pod corresponding to the pod data. The instance of a container and the server90on which the instance of the container is running are associated with each other by the container ID in the container data included in the logical inventory data and the container ID included in the running container ID list included in the physical inventory data. In the present embodiment, the network service the purchaser purchases (the network service corresponding to the product catalog data) is not necessarily an equivalent of the network service corresponding to the NS data. The network service the purchaser purchases may be implemented, for example, by a group of functional units equivalent to network functions corresponding to one or more pieces of NF data or may be implemented by a group of functional units corresponding to one or more pieces of CNF data. The network service purchased by the purchaser may be implemented, for example, by a group of functional units corresponding to one or more pods or may be implemented by a group of functional units corresponding to one or more containers. 
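The hierarchy just described (NS data down to container data, joined to the physical inventory data by the container ID) can be sketched as follows. This is an illustrative Python sketch, not the embodiment's data model; all class and field names are assumptions.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Illustrative mirror of the logical inventory hierarchy:
# NS -> NF -> CNF -> pod -> container.
@dataclass
class Container:
    container_id: str
    type: str
    ip_address: Optional[str] = None

@dataclass
class Pod:
    pod_id: str
    type: str
    containers: List[Container] = field(default_factory=list)

@dataclass
class Cnf:
    cnf_id: str
    type: str            # e.g. "vCU" or "vDU"
    pods: List[Pod] = field(default_factory=list)

@dataclass
class Nf:
    nf_id: str
    type: str            # e.g. "eNodeB"
    cnfs: List[Cnf] = field(default_factory=list)

@dataclass
class Ns:
    ns_id: str
    type: str            # e.g. "vRAN"
    nfs: List[Nf] = field(default_factory=list)

# Illustrative slice of the physical inventory data.
@dataclass
class PhysicalInventory:
    server_id: str
    running_container_ids: List[str] = field(default_factory=list)

def server_running(container_id: str,
                   servers: List[PhysicalInventory]) -> Optional[str]:
    """Join logical and physical inventory by container ID, as described."""
    for server in servers:
        if container_id in server.running_container_ids:
            return server.server_id
    return None
```

The join in `server_running` is the association stated in the text: the container ID in the container data matches an entry in some server's running container ID list.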
As shown inFIG.11, the logical inventory data according to the present embodiment includes a plurality of pieces of resource pool management data corresponding to resource pool groups. FIG.12is a drawing showing an example of resource pool management data according to the present embodiment. The resource pool management data shows statuses of a plurality of resource pools included in a resource pool group corresponding to the resource pool management data. The resource pool management data shown inFIG.12includes a resource pool group ID, a plurality of pieces of resource pool data, and data about a free server count. The resource pool group ID included in the resource pool management data is an identifier of the resource pool group corresponding to the resource pool management data. The free server count data included in the resource pool management data is data showing a count of free servers allocated to the resource pool group corresponding to the resource pool management data. The resource pool data is data showing the status of a resource pool included in the resource pool group corresponding to the resource pool management data. As shown inFIG.12, a resource pool ID, total core number data, remaining core count data, and CNF type data are included in the resource pool data. The resource pool ID is an identifier of the resource pool. The total core number data is data showing a total number of cores in the server90allocated to the resource pool. The total core number data is a concrete example of total resource quantity data showing a total quantity of a hardware resource included in the resource pool. The remaining core count data is data showing a count of the remaining cores in the server90allocated to the resource pool. The remaining core count data is a concrete example of remaining resource quantity data showing a quantity of the remaining hardware resource included in the resource pool. 
The CNF type data is data showing one or more types of CNFs associated with the resource pool. The CNF type data is a concrete example of functional unit type data showing one or more types of functional units associated with the resource pool. In the present embodiment, a resource pool group spanning a plurality of locations may be set in advance, or a resource pool group associated with only one location may be set in advance. In any case, the resource pool group is associated with one or more locations shown in the physical inventory data. The inventory manager66, in coordination with the container managers78, can ascertain the status of the resource as appropriate. The inventory manager66, based on the latest status of the resource, updates the inventory data stored on the inventory database82as appropriate. In the present embodiment, based on service requirement data sent from the purchase manager54, the E2EO62and the inventory manager66specify, for example, a configuration of a functional unit group implementing a network service to be purchased. The E2EO62acquires, for example, service catalog data related to a purchase bundle ID from the service catalog storage64. The purchase bundle ID is related to the purchase service requirement data sent from the purchase manager54. Then, the E2EO62runs a workflow script shown by the service catalog data. The E2EO62and the inventory manager66, based on the purchase service requirement data sent from the purchase manager54, the service catalog data related to the purchase bundle ID, inventory template data related to the purchase bundle ID, and inventory data, generate planned data illustrated inFIGS.13and14. The planned data is, for example, data showing a configuration of the functional unit group implementing the network service to be purchased. This process is executed, for example, in response to the E2EO62running the workflow script. 
FIG.13is a drawing showing an example of a data structure of planned data according to the present embodiment.FIG.14is a schematic view showing an example of the planned data according to the present embodiment. The planned data according to the present embodiment includes an inventory key that is an identifier of the planned data. The inventory key is uniquely assigned to the planned data when the planned data is generated. The planned data includes a purchase bundle ID (“0010” in an example ofFIG.14). The planned data includes a user ID that is an identifier of a purchaser (a user) who has made a purchase request. The planned data may include values specified in the purchase service requirement data. The planned data shown inFIGS.13and14includes a correspondent IP data value, a monitored target data value, a monitoring interval data value, and a password data value that are included in the purchase service requirement data. In the present embodiment, the planned data includes pieces of functional unit data about functional units included in the functional unit group implementing the network service to be purchased. The functional unit data, for example, includes CNF type data showing types of the functional units, host name data showing host names, IP address data showing IP addresses, and a plurality of pieces of container constituent data corresponding to containers making up the respective functional units. For example, the E2EO62may specify the number of the functional units in the built group based on the purchase service requirement data. The E2EO62may specify, for example, types of the functional units and the number of the functional units of the respective types in the group implementing the network service to be purchased, based on the purchase service requirement data and requirement-configuration association data included in the service catalog data. 
For example, when the number of subscribers shown by the service requirement data is 50000, the E2EO62may specify the built functional unit group to be three P-GW units, three IMS units, and three HSS units based on the requirement-configuration association data described above. The E2EO62may output data showing the types of the functional units and the number of the functional units of the respective types in the group, as well as the service requirement data, to the inventory manager66. The inventory manager66may, based on the data and the inventory data, determine host names and IP addresses assigned to the functional units. In this example, host names and IP addresses may be determined so as not to duplicate host names and IP addresses that have already been used. Then, planned data including host name data showing the host names determined in this way and IP address data showing the determined IP addresses may be generated. As described above, the E2EO62may, based on the purchase service requirement data, specify a location where the functional units included in the built functional unit group are built. The E2EO62may, for example, determine a location for the functional units included in the built functional unit group based on covered region data included in the purchase service requirement data and the requirement-configuration association data included in the service catalog data. Different locations may be determined for the functional units. As for each of the functional units, a host name and an IP address available at the location determined for the functional unit may be determined as the host name and the IP address for the functional unit. Then, planned data including host name data showing the host names determined in this way and IP address data showing the determined IP addresses may be generated. 
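The requirement-configuration association and the duplicate-free host name assignment described above might be sketched as follows. Only the 50000-subscriber row reflects the example in the text; the other threshold, the naming scheme, and the function names are illustrative assumptions.

```python
# Hypothetical requirement-configuration association: thresholds on the
# subscriber count map to counts of functional units of each type. The
# 50000-subscriber row reproduces the example in the text; the 20000 row
# is an invented placeholder.
REQUIREMENT_CONFIG = [
    # (max subscribers, {functional unit type: count})
    (20000, {"P-GW": 1, "IMS": 1, "HSS": 1}),
    (50000, {"P-GW": 3, "IMS": 3, "HSS": 3}),
]

def specify_units(subscribers: int) -> dict:
    for max_subs, units in REQUIREMENT_CONFIG:
        if subscribers <= max_subs:
            return dict(units)
    raise ValueError("no configuration covers this subscriber count")

def assign_hostnames(units: dict, used: set) -> list:
    """Assign host names that do not duplicate ones already in use."""
    names, n = [], 0
    for unit_type, count in units.items():
        for _ in range(count):
            while True:
                n += 1
                name = f"{unit_type.lower()}-{n:03d}"  # invented naming scheme
                if name not in used:
                    break
            used.add(name)
            names.append(name)
    return names
```

The inventory manager's role is reduced here to skipping names already recorded in the inventory data, which is the duplicate-avoidance rule the text describes.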
The E2EO62may specify, based on the purchase service requirement data, the type of a functional unit and the number of the functional units built at each of a plurality of locations. In this case, for each location specified based on the purchase service requirement data, the E2EO62may specify the number of the functional units of the respective types built at the location. The E2EO62may determine the number of the functional units of the respective types built at each location, based on weights set for the respective locations specified based on the purchase service requirement data. The E2EO62may store, for example, estimated busy level data shown inFIG.15. The estimated busy level data shown inFIG.15shows, for example, a population in an area covered by one or more cells subordinate to each of data centers associated with the estimated busy level data. Estimated busy level values in the data are an example of the weights set for the respective locations described above. The estimated busy level data regarding a data center for a core network system20shows, for example, a population in an area covered by one or more cells of base station apparatuses22communicating with the core network system20. The number of functional units deployed to a location may rise, for example, with an increase in population at the location shown by the estimated busy level data. It is assumed that a total number n of deployed vDUs is specified, for example, based on subscriber number data included in the purchase service requirement data. It is also assumed that based on covered region data included in the purchase service requirement data, a plurality of data centers to which the vDUs are deployed in a covered region, which is shown by the covered region data, are specified. 
In this case, the specified total number n of vDUs may be proportionally divided among the specified data centers based on the estimated busy level values in the data about those data centers, and the resulting numbers of vDUs may be deployed to the respective data centers. As shown inFIG.13, the container constituent data, for example, includes a container ID, a container image ID, necessary resource data, a resource pool group ID, a resource pool ID, and a connected container ID list. The container ID is, for example, an identifier uniquely assigned to an instance of a container related to the container constituent data, as described above. The container image ID included in the container constituent data is, for example, set to a container image ID that is assigned to a container image of the container related to the container constituent data. The necessary resource data is, for example, data showing a resource necessary to run the container. In the present embodiment, the inventory template data shows, for each container, a resource necessary to run the container, for example. The inventory manager66sets the necessary resource data to a value based on the inventory template data. The resource pool group ID included in the container constituent data is, for example, set to a resource pool group ID value of the resource pool group to which the container related to the container constituent data is allocated. The inventory manager66may determine, for example, a resource pool group ID with which the container is built, based on the location determined as described above and the inventory data. The resource pool ID included in the container constituent data is, for example, set to a resource pool ID value of the resource pool to which the container related to the container constituent data is allocated. The inventory manager66may determine, for example, a resource pool ID based on a type of the container and the resource pool management data. 
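The proportional division of the total number n of vDUs described above can be sketched with a largest-remainder rule. The rounding rule itself is an assumption, since the text does not state how fractional shares are resolved; it only requires that the counts be proportional to the estimated busy level values and sum to n.

```python
# Sketch of the proportional division: n vDUs are split across data
# centers in proportion to their estimated busy level values, with a
# largest-remainder rule (an assumption) so the counts sum exactly to n.
def divide_proportionally(n: int, busy_levels: dict) -> dict:
    total = sum(busy_levels.values())
    shares = {dc: n * level / total for dc, level in busy_levels.items()}
    counts = {dc: int(share) for dc, share in shares.items()}
    remainder = n - sum(counts.values())
    # Hand the leftover units to the largest fractional parts.
    for dc in sorted(shares, key=lambda d: shares[d] - counts[d],
                     reverse=True)[:remainder]:
        counts[dc] += 1
    return counts
```

For example, 10 vDUs split over busy levels 50/30/20 yields 5, 3, and 2 units, matching the intuition that deployment counts rise with population.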
The connected container ID list is a list of the container IDs of containers connected to the container. In the present embodiment, the inventory template data shows, for each container, a type of a container connected to the container, for example. The inventory manager66determines, for example, a value of the connected container ID list based on the inventory template data and the inventory data. To generate planned data, the E2EO62, in coordination with the inventory manager66, specifies a resource pool to which a new functional unit group is deployed, as well as a necessary resource. The E2EO62may specify a resource pool associated with a functional unit that is specified in response to the acceptance of a request for network service building like the acceptance of a purchase request. The E2EO62may specify a resource pool group based on a region covered by a purchased network service. A resource pool group may be specified, for example, based on a covered region shown by covered region data included in the purchase service requirement data. Then, the E2EO62may specify a resource pool to which a new functional unit is deployed, out of resource pools included in the specified resource pool group. The E2EO62determines whether or not a hardware resource (in this example, a server90) to which the new functional unit group is deployed can be reserved. In this example, the E2EO62determines one of (1) a server90can be reserved, (2) a server90can be reserved by setting up an unassigned hardware resource (in this example, a free server) that is not included in any resource pool, and (3) a server90cannot be reserved. In the case of (2), the E2EO62determines whether or not a predetermined functional unit of a specific type can be deployed to the unassigned hardware resource (in this example, the free server). If the functional unit of the specific type is to be deployed, the E2EO62specifies a resource pool associated with the functional unit of the specific type. 
In this example, the resource pool is specified based on the resource pool management data. In the present embodiment, the container constituent data items are set to, for example, a resource pool group ID of the resource pool group specified as described above and a resource pool ID of the specified resource pool. In the present embodiment, based on a configuration of a functional unit group specified as described above and template data that is enabled to accept the configuration as a parameter, the CMaaS system68, the service manager70, and the slice manager72specify, for example, a procedure for building the functional unit group. The building procedure, for example, includes a procedure for container configuration management such as deploying a container and configuring settings on the deployed container and a container related to the container. This process is executed, for example, in response to the E2EO62running the workflow script. Then, the CMaaS system68, the service manager70, the slice manager72, and the container managers78conduct the specified building procedure and thereby build the functional unit group. This process is also executed, for example, in response to the E2EO62running the workflow script. Each functional unit included in the functional unit group may be built at a location specified for the functional unit. In the group, functional units in a number that is specified based on the purchase service requirement data may be built, for example. Functional units that are, for example, of a type and in a number specified for each of a plurality of locations may be built at each of the locations. The CMaaS system68and the BMaaS system84reserve, for example, a hardware resource (in this example, a server90) to which the new functional unit group is deployed. The CMaaS system68and the BMaaS system84set up system software compatible with the functional unit of a specific type on an unassigned hardware resource. 
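The three-way reservation determination by the E2EO62described earlier might be sketched as follows, using CPU cores as the resource unit; the enum, function, and parameter names are illustrative assumptions.

```python
from enum import Enum

class Reservation(Enum):
    RESERVABLE = 1             # (1) a server can be reserved from the pool
    RESERVABLE_WITH_SETUP = 2  # (2) reservable by setting up a free server
    NOT_RESERVABLE = 3         # (3) a server cannot be reserved

# Hypothetical sketch of the determination, in units of CPU cores.
def determine_reservation(needed_cores: int, remaining_cores: int,
                          free_server_cores: list) -> Reservation:
    if remaining_cores >= needed_cores:
        return Reservation.RESERVABLE
    # Otherwise, check whether an unassigned free server could be set up
    # with compatible system software and added to the resource pool.
    if any(cores >= needed_cores for cores in free_server_cores):
        return Reservation.RESERVABLE_WITH_SETUP
    return Reservation.NOT_RESERVABLE
```

In case (2) the real system would additionally check whether the functional unit of the specific type may be deployed to the free server before specifying the associated resource pool; that check is omitted here for brevity.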
In the present embodiment, a script (e.g., Ansible script) used to perform the setup concerning the functional unit of the specific type is stored, for example, in the CMaaS system68or the BMaaS system84. In the script, a procedure for installing a host OS that is a container execution environment platform and that is of a specific type or a specific version, a host OS kernel setting procedure, and a Basic Input Output System (BIOS) setting procedure, for example, are described. Then, the BMaaS system84runs the script to set up the system software compatible with the functional unit of the specific type on a free server. For example, a setup of the container execution environment host OS and BIOS is performed on the free server. The CMaaS system68and the BMaaS system84update the resource pool management data and add the unassigned hardware resource on which the system software has been set up to the specified resource pool. Such addition of a hardware resource to the resource pool is detected by the container manager78that manages the hardware resource. Then, the inventory manager66updates the inventory data related to the added hardware resource (the server90). In this way, the resource pool includes the hardware resource on which the system software compatible with the functional unit of the specific type has been set up. In this example, a vDU is a functional unit of a specific type. It is also assumed that the number of cores necessary for the vDU is five and the number of cores in the free server is 50. In this case, when a network service including the vDU is purchased, a resource pool associated with the vDU is specified. In an example ofFIG.12, the resource pool with a resource pool ID of C is specified. Then, whether or not the remaining hardware resource of the resource pool is satisfactory is checked. When the remaining hardware resource is unsatisfactory, system software compatible with the vDU is set up on one free server. 
Then, the server90on which the system software has been set up is added to the resource pool C, and the resource pool management data is updated to data shown inFIG.16. In this way, in the present embodiment, on a hardware resource included in a resource pool corresponding to the resource pool data, system software compatible with functional units of one or more types associated with the resource pool is set up. In some cases, general-purpose servers having a general configuration cannot offer satisfactory performance depending on the type of the functional unit. Hence, it is preferred that system software for host OS, BIOS, and other systems that is designed specifically for functional units of such a specific type be set up on a hardware resource such as a server. In this case, it is conceivable that only a predetermined number of hardware resources on which such specifically-designed system software is set up may be prepared in advance of the start of providing a network service, and a functional unit of the type may be deployed to the prepared hardware resource as needed. However, before the start of providing a network service, it is difficult to estimate an optimum quantity of hardware resources on which system software compatible with the functional unit of the specific type is to be set up in advance. If system software compatible with the functional unit of the specific type is set up on many hardware resources to provide a margin, some of the resources are wasted because other functional units are not suited to be deployed to such hardware resources. In the present embodiment, as described above, when a functional unit of a specific type is deployed to an unassigned hardware resource, system software compatible with the functional unit of the specific type is set up on the unassigned hardware resource. 
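The example above (a vDU needing five cores, resource pool C with too few remaining cores, and one 50-core free server) can be traced in a short sketch. The core counts assigned to pool C are illustrative, since FIG.12 is not reproduced here; the field names mirror the resource pool data fields described earlier (total cores, remaining cores, associated CNF types).

```python
# Illustrative state: pool C is short of cores; one free server exists.
pool_c = {"resource_pool_id": "C", "total_cores": 10,
          "remaining_cores": 2, "cnf_types": ["vDU"]}
free_servers = [{"server_id": "sv-9", "cores": 50}]

def ensure_capacity(pool: dict, needed_cores: int, free: list) -> None:
    """If the pool's remaining resource is unsatisfactory, set up one free
    server (system software setup elided) and add it to the pool."""
    if pool["remaining_cores"] >= needed_cores:
        return
    server = free.pop()  # free server becomes assigned to this pool
    pool["total_cores"] += server["cores"]
    pool["remaining_cores"] += server["cores"]

ensure_capacity(pool_c, 5, free_servers)  # the vDU needs 5 cores
```

After the call, the resource pool management data for pool C reflects the added server and the free server count drops by one, mirroring the update from FIG.12 to FIG.16 described in the text.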
Then, the unassigned hardware resource, on which the system software has been set up, is added to a resource pool associated with the functional unit of the specific type. In this way, the technique according to the present embodiment enables efficient use of a hardware resource to which a functional unit of every type implementing a network service is deployed. In the present embodiment, a functional unit may be specified based on a result of demand forecasting. A functional unit forecast to be inadequate in the near future based on a result of demand forecasting may be specified, for example. A resource pool associated with the functional unit specified in this way may be specified. Then, an unassigned hardware resource on which system software compatible with the functional unit has been set up may be added to the resource pool. When the hardware resource to which the new functional unit group is to be deployed is reserved, the service manager70, for example, instructs the container managers78to deploy the new functional unit group based on the planned data described above and service template data that is stored on the service manager70and related to the purchase bundle ID. The service template data is data that is enabled to accept a part or all of the planned data as a parameter. In one example, the service template data described above is a CNF descriptor (CNFD).FIG.17is a drawing showing an example of a CNFD. The service manager70, based on the planned data and the CNFD, generates, for example, a day0 parameter (a CNF instance) shown inFIG.18. The day0 parameter in which the CNFD host name and IP address shown inFIG.17are set to values shown inFIG.18is generated, for example. The CNFD may include templates corresponding to a plurality of deployment flavors. The service manager70may generate, for example, a day0 parameter based on a template corresponding to a deployment flavor that suits the purchase service requirement data. 
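Day0 parameter generation from the CNFD might be sketched as template substitution: the CNFD accepts part of the planned data (host name, IP address) as parameters. The placeholder names and the JSON-like output shape are assumptions, since FIGS.17 and18 are not reproduced here.

```python
from string import Template

# Illustrative stand-in for a CNFD template with host name and IP
# address placeholders; the real CNFD format differs.
CNFD_TEMPLATE = Template(
    '{"hostname": "$hostname", "ip_address": "$ip_address"}'
)

def generate_day0(planned_unit: dict) -> str:
    """Fill the CNFD placeholders from the planned data's functional
    unit data (host name data and IP address data)."""
    return CNFD_TEMPLATE.substitute(
        hostname=planned_unit["host_name"],
        ip_address=planned_unit["ip_address"],
    )
```

A system with several deployment flavors would, as the text notes, first select the template matching the purchase service requirement data and then perform this substitution.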
The service manager70may specify a location to which the day0 parameter is output. For example, one or more container managers78that are each a destination to which the corresponding day0 parameter is output may be specified. The service manager70may specify, for example, a container manager78associated with the server90that is disposed at a location in the resource pool shown in the container constituent data of the planned data. Then, the service manager70may generate a day0 parameter output to each of the specified locations. For example, a day0 parameter output to each of one or more container managers78that are each a destination may be generated. The service manager70outputs each of the one or more generated day0 parameters to the corresponding container manager78that is the destination to which the corresponding day0 parameter is output. The purchase bundle ID is related to the day0 parameter. The container manager78, based on the accepted day0 parameter, deploys the new functional unit group. The container manager78specifies, for example, a container image that is deployed and a resource pool to which a container is deployed, based on Helm chart data related to the purchase bundle ID as well as the accepted day0 parameter. The container manager78acquires the container image from the repository80and deploys the container related to the container image to the specified resource pool. In this example, a manifest file is generated based on the Helm chart data related to the purchase bundle ID and the received day0 parameter. Then, deployment of the container is executed using the manifest file. The CMaaS system68, for example, based on the planned data described above and CM template data stored on the CMaaS system68and related to the purchase bundle ID, generates planned CM data including a day1 parameter. The CM template data is data that is enabled to accept a part or all of the planned data as a parameter. 
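The manifest-generation step described above (Helm chart data combined with the received day0 parameter, with the container image resolved from the repository80) might be sketched as follows. The chart text, registry path, and all names are illustrative assumptions; a real system would render an actual Helm chart rather than this stand-in string template.

```python
from string import Template

# Illustrative stand-in for Helm chart data related to a bundle ID.
HELM_CHART = Template("image: $image\nhostname: $hostname\npool: $pool\n")

# Assumed repository mapping container image IDs to image references.
REPOSITORY = {"img-vdu": "registry.example/vdu:1.0"}

def render_manifest(day0: dict, image_id: str, pool_id: str) -> str:
    """Combine chart data, the day0 parameter, and the resolved image
    into a manifest used to deploy the container."""
    return HELM_CHART.substitute(
        image=REPOSITORY[image_id],
        hostname=day0["hostname"],
        pool=pool_id,
    )
```

The deployed container then lands in the resource pool named in the container constituent data, as the text describes.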
The day1 parameter shows, for example, a configuration management procedure including settings for a deployed functional unit group and at least one functional unit related to the functional unit group (e.g., a functional unit communicating with the deployed functional unit group). The day1 parameter concerning the base station apparatuses22shows, for example, a radio field intensity, a direction and an angle of the antenna22a, and a serial number. The day1 parameter concerning the Serving Gateway (S-GW) shows, for example, information about a correspondent node (information about the Mobility Management Entity (MME) of an opposite communication party, an Access Point Name (APN), etc.) and a host name or an FQDN of a Remote Authentication Dial In User Service (RADIUS) server. The CMaaS system68performs configuration management such as configuring functional unit settings based on the day1 parameter included in the generated planned CM data. This process is executed, for example, in response to the E2EO62running the workflow script. The slice manager72instantiates, for example, a network slice concerning the network service to be purchased, based on the planned data described above and slice template data stored on the slice manager72and related to the purchase bundle ID. The slice template data is data that is enabled to accept a part or all of the planned data as a parameter. This process is executed, for example, in response to the E2EO62running the workflow script. The slice manager72may give the CMaaS system68an instruction for configuration management relating to instantiation of the network slice. Then, the CMaaS system68may perform configuration management such as configuring settings in conformity with the configuration management instruction. 
In this example, the CMaaS system68may perform configuration management relating to the new functional unit group when the deployment of the new functional unit group ends and subsequently perform configuration management relating to instantiation of the network slice. Alternatively, the CMaaS system68may update the generated day1 parameter once, based on the configuration management instruction given by the slice manager72. Then, the CMaaS system68may perform configuration management relating to the new functional unit group and the instantiation of the network slice in one go. In the present embodiment, the monitoring manager74specifies, for example, a monitoring policy shown by the purchase service requirement data, based on the planned data described above and monitoring script data stored on the monitoring manager74and related to the purchase bundle ID. The monitoring manager74implements a monitoring setting in conformity with the specified monitoring policy. Then, the monitoring manager74monitors the built functional unit group in conformity with the specified monitoring policy. In this example, a monitored target shown by the purchase service requirement data may be monitored at monitoring intervals shown by the purchase service requirement data. This process is triggered when the E2EO62runs the workflow script, and is executed, for example. The monitoring manager74may deploy, for example, a sidecar to output a log of the values of the metrics of the monitored object, which are associated with the container being monitored, at the monitoring intervals described above. The sidecar may output the log to the monitoring manager74in conformity with the monitoring setting described above. The monitoring manager74may accumulate the log. The monitoring manager74may send, for example, the log to the purchaser terminal14in response to a request from the purchaser terminal14.
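The sidecar-based monitoring loop described above can be sketched as follows. This is a minimal simulation under stated assumptions: the class name, the simulated clock, and the metric-reading callback are all illustrative; a real sidecar would sample live container metrics and forward log entries over the network to the monitoring manager.

```python
# Hypothetical sketch: a sidecar samples a metric of the monitored
# container at fixed monitoring intervals and accumulates log entries,
# as the monitoring manager's setting prescribes. A simulated clock is
# used so the sketch runs instantly; names are assumptions.

class SidecarMonitor:
    def __init__(self, interval_s, read_metric):
        self.interval_s = interval_s      # monitoring interval from the
                                          # purchase service requirement data
        self.read_metric = read_metric    # callback standing in for a real
                                          # metrics endpoint
        self.log = []                     # accumulated by the monitoring manager

    def run(self, duration_s):
        """Sample the metric once per interval over the given duration."""
        t = 0
        while t < duration_s:
            self.log.append({"t": t, "value": self.read_metric(t)})
            t += self.interval_s
        return self.log

monitor = SidecarMonitor(interval_s=10, read_metric=lambda t: t * 2)
log = monitor.run(duration_s=30)
```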
In the present embodiment, the security setter76implements, for example, a security setting such as a password setting in conformity with a value in the purchase service requirement data, based on the planned data described above and security script data stored on the security setter76and related to the purchase bundle ID. With reference to flow diagrams shown inFIGS.19A and19B, a process executed by the vendor terminal16, the MPS10, and the NOS12when the vendor clicks the On-boarding button40on the on-boarding screen shown inFIG.7will now be described. First, the vendor terminal16sends bundled data, which is disposed on the path specified on the on-boarding screen, to the bundle expander60of the NOS12(S101). The bundle expander60expands the bundled data received in the step indicated with S101and generates a data group shown inFIG.9(S102). The bundle expander60determines a bundle ID related to the data group shown in step S102(S103). The bundle expander60sends product catalog data included in the data group shown in step S102to the bundle manager50of the MPS10, with the product catalog data being related to the bundle ID determined in the step indicated with S103. Receiving the product catalog data, the bundle manager50of the MPS10causes the product catalog storage52to store the received product catalog data (S104). The bundle expander60outputs service catalog data included in the data group shown in step S102to the E2EO62, with the service catalog data being related to the bundle ID determined in the step indicated with S103. Receiving the service catalog data, the E2EO62causes the service catalog storage64to store the received service catalog data (S105). The bundle expander60causes the inventory manager66to store inventory template data that is included in the data group shown in step S102, with the inventory template data being related to the bundle ID determined in the step indicated with S103(S106). 
The bundle expander60causes the CMaaS system68to store CM template data that is included in the data group shown in step S102, with the CM template data being related to the bundle ID determined in the step indicated with S103(S107). The bundle expander60causes the service manager70to store service template data that is included in the data group shown in step S102, with the service template data being related to the bundle ID determined in the step indicated with S103(S108). The bundle expander60causes the slice manager72to store slice template data that is included in the data group shown in step S102, with the slice template data being related to the bundle ID determined in the step indicated with S103(S109). The bundle expander60causes the monitoring manager74to store monitoring script data that is included in the data group shown in step S102, with the monitoring script data being related to the bundle ID determined in the step indicated with S103(S110). The bundle expander60causes the security setter76to store security script data that is included in the data group shown in step S102, with the security script data being related to the bundle ID determined in the step indicated with S103(S111). The bundle expander60causes the container manager78to store Helm chart data that is included in the data group shown in step S102, with the Helm chart data being related to the bundle ID determined in the step indicated with S103(S112). In this example, the bundle expander60may cause the plurality of the container managers78to store a Helm chart that is included in the data group shown in step S102. A piece of the Helm chart data corresponding to a container manager78may be stored on the container manager78. The bundle expander60causes the repository80to store container image data that is included in the data group shown in step S102, with the container image data being related to the bundle ID determined in the step indicated with S103(S113). 
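The fan-out in steps S104 to S113, where the bundle expander60routes each piece of the expanded data group to the component responsible for storing it, can be sketched as a dispatch table. The component keys and the dictionary-backed store are simplifying assumptions; the destinations themselves follow the text.

```python
# Hypothetical dispatch mirroring steps S104-S113: each item of the
# expanded bundle is stored on its destination component, keyed by the
# bundle ID. The storage model is a simplification.

DESTINATIONS = {
    "product_catalog": "bundle_manager",
    "service_catalog": "e2eo",
    "inventory_template": "inventory_manager",
    "cm_template": "cmaas",
    "service_template": "service_manager",
    "slice_template": "slice_manager",
    "monitoring_script": "monitoring_manager",
    "security_script": "security_setter",
    "helm_chart": "container_manager",
    "container_image": "repository",
}

def expand_and_store(bundle_id: str, data_group: dict, stores: dict) -> None:
    """Route each item of the data group to its destination store,
    recorded under the determined bundle ID."""
    for kind, payload in data_group.items():
        dest = DESTINATIONS[kind]
        stores.setdefault(dest, {})[bundle_id] = payload

stores = {}
expand_and_store("bundle-001",
                 {"service_catalog": {"workflow": "deploy.yaml"},
                  "helm_chart": b"chart"},
                 stores)
```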
This ends the process shown in this process example. With reference to a flow diagram shown inFIG.20, a process executed by the purchaser terminal14, the MPS10, and the NOS12when the purchaser clicks the Next button32on the service requirement input screen shown inFIG.3will be described below. First, the purchaser terminal14sends purchase service requirement data related to a purchase bundle ID to the purchase manager54of the MPS10(S201). The purchase bundle ID is a bundle ID of a network service selected by the purchaser on the purchase screen shown inFIG.2. The purchase service requirement data is service requirement data showing requirements entered in the service requirement input screen shown inFIG.3. Then, receiving the purchase service requirement data related to the purchase bundle ID in the step indicated with S201, the purchase manager54of the MPS10sends the received data to the E2EO62of the NOS12(S202). Then, the E2EO62of the NOS12generates availability inquiry data based on service catalog data related to the purchase bundle ID (S203). In this example, the E2EO62generates availability inquiry data that shows types of functional units and the number of the functional units of the respective types in the functional unit group implementing the network service to be purchased. The E2EO62outputs the availability inquiry data generated in the step indicated with S203to the inventory manager66(S204). Then, accepting the availability inquiry data, the inventory manager66generates data about the availability based on the accepted inquiry data, inventory data, and inventory template data (S205). 
In this example, regarding a hardware resource to which the functional unit group shown in the accepted availability inquiry data is deployed, the inventory manager generates data about the availability showing one of (1) a hardware resource can be reserved, (2) a hardware resource can be reserved by adding a free server to the resource pool, and (3) a hardware resource cannot be reserved. The inventory manager66sends the data about the availability, which is generated in the step indicated with S205, to the E2EO62(S206). Then, receiving the availability data in the step indicated with S206, the E2EO62generates response data based on the received availability data (S207). In this example, when the availability data shows the above (1) or (2), response data showing OK is generated. When the availability data shows the above (3), response data showing NG is generated. The E2EO62sends the response data generated in the step indicated with S207to the purchase manager54of the MPS10(S208). Then, receiving the response data in the step indicated with S208, the purchase manager54generates a purchase confirmation screen based on the received response data (S209). In this example, when the received response data shows OK, the purchase manager generates, as shown inFIG.4, a purchase confirmation screen showing that the service can be promptly provided. Meanwhile, when the received response data shows NG, the purchase manager generates, as shown inFIG.5, a purchase confirmation screen showing that a predetermined delivery time is required (for example, a delivery time of two weeks is required). The purchase manager54sends the purchase confirmation screen generated in the step indicated with S209to the purchaser terminal14(S210). Receiving the purchase confirmation screen in the step indicated with S210, the purchaser terminal14shows the received purchase confirmation screen on a display of the purchaser terminal14(S211). This ends the process shown in this process example. 
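The three-way availability decision (1) to (3) and the OK/NG response mapping described above can be sketched as follows. The core-count comparison is an assumed decision rule chosen to match the three outcomes; the actual inventory logic may weigh additional factors.

```python
# Sketch of the availability decision and the E2EO's response mapping.
# The threshold logic is an assumption consistent with outcomes (1)-(3).

def check_availability(required_cores: int, remaining_cores: int,
                       free_server_cores: int) -> int:
    """Return 1 if the hardware resource can be reserved as-is, 2 if it
    can be reserved by adding free servers to the resource pool, and 3 if
    it cannot be reserved."""
    if required_cores <= remaining_cores:
        return 1
    if required_cores <= remaining_cores + free_server_cores:
        return 2
    return 3

def response_for(availability: int) -> str:
    """Response data generation: (1) and (2) yield OK, (3) yields NG."""
    return "OK" if availability in (1, 2) else "NG"
```

An OK response leads to the prompt-provision purchase confirmation screen, while NG leads to the screen showing that a predetermined delivery time is required.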
With reference to a flow diagram shown inFIG.21, a process executed by the purchaser terminal14, the MPS10, and the NOS12when the purchaser clicks the Purchase button34on the purchase confirmation screen shown inFIG.4or5will be described below. First, the purchaser terminal14sends a purchase request for the network service to the purchase manager54of the MPS10(S301). The purchase bundle ID and the purchase service requirement data sent in the step indicated with S201are related to the purchase request. Receiving the purchase request related to the purchase bundle ID and the purchase service requirement data in the step indicated with S301, the purchase manager54sends the received purchase request to the E2EO62(S302). Receiving the purchase request, the E2EO62specifies service catalog data related to the purchase bundle ID, which is related to the received purchase request (S303). The E2EO62acquires the service catalog data specified in the step indicated with S303from the service catalog storage64and runs a workflow script shown by the service catalog data (S304). This ends the process shown in this process example. With reference to flow diagrams shown inFIGS.22A to22G, a process shown in step S304will be described in detail. First, the E2EO62and the inventory manager66generate planned data based on the service requirement data related to the purchase request, the service catalog data, the inventory template data, and the inventory data (S401). A process executed in step S401, for example, includes a process for specifying a resource pool to which a functional unit group is deployed, as well as a necessary resource. The inventory manager66stores the generated planned data on the inventory database82(S402). The inventory manager66outputs an inventory key included in the generated planned data to the E2EO62(S403). Then, accepting the inventory key, the E2EO62outputs the accepted inventory key to the CMaaS system68(S404). 
Then, accepting the inventory key, the CMaaS system68acquires the planned data including the accepted inventory key from the inventory database82(S405). The CMaaS system68, based on the planned data acquired in the step indicated with S405, generates planned CM data including a day1 parameter and keeps the generated data (S406). The CMaaS system68outputs an instruction about a setup including reserving a necessary hardware resource to the BMaaS system84(S407), and the BMaaS system84performs the setup including reserving a necessary hardware resource in accordance with the instruction (S408). At this stage, system software compatible with a functional unit of a specific type may be set up, and a free server may be added to the resource pool, as needed. In the present embodiment, a free server may be added to the resource pool with a margin (a buffer) provided. For example, a plurality of the servers90may be added to the resource pool in one go. When the BMaaS system84outputs a notice of ending to the CMaaS system68(S409), the CMaaS system68updates the resource pool management data (S410). In this example, the number of cores of the reserved hardware resource may be subtracted from the value in the remaining core count data of the resource pool from which the resource is taken. The count of free servers or the value in the total core number data may be updated. In the step indicated with S410, the BMaaS system84rather than the CMaaS system68may update the resource pool management data. Following an instruction from the CMaaS system68, the inventory manager66may update the resource pool management data. The CMaaS system68outputs a notice of ending to the E2EO62(S411). Then, the E2EO62outputs the inventory key, which was accepted in the step indicated with S403, to the service manager70(S412). Then, accepting the inventory key, the service manager70acquires the planned data including the accepted inventory key from the inventory database82(S413).
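The resource pool management data update in step S410 can be sketched as arithmetic on the pool's counters: newly added free servers raise the total and remaining core counts, and the reserved cores are then subtracted. Field names are illustrative assumptions.

```python
# Hypothetical update of resource pool management data after a hardware
# reservation (step S410). Field names are assumptions mirroring the
# remaining core count data, total core number data, and free server count.

def update_resource_pool(pool: dict, reserved_cores: int,
                         added_servers: int = 0,
                         cores_per_server: int = 0) -> dict:
    pool = dict(pool)  # leave the caller's copy untouched
    # Free servers moved into the pool increase its capacity...
    pool["free_server_count"] -= added_servers
    pool["total_cores"] += added_servers * cores_per_server
    pool["remaining_cores"] += added_servers * cores_per_server
    # ...and the reserved hardware resource is subtracted.
    pool["remaining_cores"] -= reserved_cores
    return pool

pool = {"free_server_count": 5, "total_cores": 64, "remaining_cores": 16}
updated = update_resource_pool(pool, reserved_cores=24,
                               added_servers=1, cores_per_server=16)
```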
The service manager70, based on the planned data acquired in the step indicated with S413, specifies a location to which the functional unit group is deployed (S414). The service manager70generates a day0 parameter (a CNF instance) for every location specified in the step indicated with S414(S415). The service manager70outputs the day0 parameter related to the container manager78to the corresponding container manager78associated with the corresponding location each specified in the step indicated with S414(S416). Accepting the day0 parameter, the container manager78deploys a container based on the accepted day0 parameter (S417). The container manager78outputs a notice of ending to the service manager70(S418). Then, the service manager70outputs a notice of ending to the E2EO62(S419). The E2EO62outputs a configuration management instruction based on a day1 parameter to the CMaaS system68(S420). Then, the CMaaS system68performs configuration management for a container group based on the day1 parameter included in the kept planned CM data (S421). The CMaaS system68outputs a notice of ending to the E2EO62(S422). Then, the E2EO62outputs the inventory key, which was accepted in the step indicated with S403, to the slice manager72(S423). Then, accepting the inventory key, the slice manager72acquires the planned data including the accepted inventory key from the inventory database82(S424). The slice manager72, based on the planned data acquired in the step indicated with S424, instantiates a network slice (S425). In the step indicated with S425, the slice manager72, as described above, may give the CMaaS system68an instruction for configuration management relating to instantiation of the network slice, for example. Then, the CMaaS system68may perform configuration management such as configuring settings in conformity with the configuration management instruction.
As described above, without execution of the steps indicated with S420to S422, in the step indicated with S425, the CMaaS system68may update the day1 parameter, based on a configuration management instruction given by the slice manager72. Then, the CMaaS system68may perform configuration management such as configuring settings in conformity with the configuration management instruction. The slice manager72outputs a notice of ending to the E2EO62(S426). Then, the E2EO62outputs the inventory key, which was accepted in the step indicated with S403, to the monitoring manager74(S427). Then, accepting the inventory key, the monitoring manager74acquires the planned data including the accepted inventory key from the inventory database82(S428). The monitoring manager74, based on the planned data acquired in the step indicated with S428, implements a monitoring setting in conformity with the monitoring policy shown by the purchase service requirement data (S429). The monitoring manager74outputs a notice of ending to the E2EO62(S430). Then, the E2EO62outputs the inventory key, which was accepted in the step indicated with S403, to the security setter76(S431). Then, accepting the inventory key, the security setter76acquires the planned data including the accepted inventory key from the inventory database82(S432). The security setter76, based on the planned data acquired in the step indicated with S432, implements a security setting (S433). The security setter76outputs a notice of ending to the E2EO62(S434). This ends the process shown in this process example. The present invention should not be limited to the embodiment described above. The network service provided to the purchaser may be implemented by, for example, a virtualized network function (VNF) that is a virtual machine (VM)-based functional unit using hypervisor- or host-based virtualization technology, rather than the container-based functional unit CNF. 
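The overall orchestration in steps S404 to S434, where the E2EO62hands the inventory key to each subsystem in turn and waits for a notice of ending before moving on, can be sketched as an ordered sequence of calls. The subsystem callables here are stand-ins; real subsystems would acquire the planned data by the inventory key and act on it.

```python
# Sketch of the workflow order driven by the E2EO: CMaaS setup, service
# manager deployment, slice instantiation, monitoring, then security.
# Subsystems are stand-in callables; the order follows the text.

WORKFLOW_ORDER = ["cmaas", "service_manager", "slice_manager",
                  "monitoring_manager", "security_setter"]

def run_workflow(inventory_key: str, subsystems: dict) -> list:
    """Invoke each subsystem with the inventory key, in order; each call
    returning models that subsystem's notice of ending."""
    completed = []
    for name in WORKFLOW_ORDER:
        subsystems[name](inventory_key)
        completed.append(name)
    return completed

calls = []
subs = {n: (lambda key, n=n: calls.append((n, key))) for n in WORKFLOW_ORDER}
done = run_workflow("inv-42", subs)
```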
When a functional unit of a specific type is deployed to an unassigned hardware resource (in this example, a free server), system software compatible with the functional unit of the specific type may be set up on a host OS that is a virtual machine environment platform. The division of roles among the functions is not limited to the one shown inFIG.8, for example. Part of the process executed by the E2EO62may be executed by, for example, the inventory manager66. Part of the process executed by the inventory manager66may be executed by, for example, the E2EO62. In the present embodiment, the NOS12may not include the repository80. The bundle expander60may cause the container manager78to store the Helm chart data and the container image data. The container manager78may deploy the container image stored on the container manager78. In the present embodiment, the NOS12may not include the bundle expander60. A bundled file may be on-boarded from, for example, an external file transfer server. 
Example 
An example of the present disclosure will be described based on the prerequisite technique described above. The following description primarily covers differences between the example and the prerequisite technique, and details that are already described in Prerequisite Technique are omitted as appropriate. In the following description, components identical or equivalent to the components of the prerequisite technique are assigned with the same reference numerals. The challenges that the network service management system of the example is designed to solve are described below. Challenge 1: as described in Prerequisite Technique, a vendor such as a service provider who provides a network service uploads a verified bundled file into a network operating system (NOS). The bundled file defines various items of information about the network service to be provided. The NOS builds the network service shown by the bundled file. 
Before providing the network service to a purchaser, it is necessary to conduct tests on the network service in a plurality of environments such as a test environment and a production environment and check that the network service properly operates. Up to now, a piece of test data that specifies test content for the network service needs to be prepared for each of different environments in which tests are conducted. This takes much time and effort. Challenge 2: as described in Prerequisite Technique, a service in which a company possessing a network platform (e.g., a telecommunications company) allows other companies to use the company's network platform so that the other companies can run their network service on the company's network platform (hereinafter also referred to as a “platform service”) is expected to come into wide use in the future. To promote the use of such a platform service, it is important to help a user of the platform service (the other companies described above, such as a network service provider) to build a network service with less time and effort. A network service management system of an example, designed to solve the challenge 1 described above, includes an accepter, a first generator, a first tester, a second generator, and a second tester. The accepter corresponds to a CI/CD assist device106(a bundle accepter116) described later. The accepter accepts a test template equivalent to data that specifies test content for a network service from a vendor. A part of fields of the test content is set to a variable field in which a value is variable. The first generator corresponds to an NOS102aor an NOS102b(the E2EOs62) described later. The first generator generates test data that specifies test content compatible with a first environment (e.g., a test environment) by setting a value compatible with the first environment in the variable field of the test template. 
The first tester corresponds to a test device104aor a test device104bdescribed later. The first tester conducts a test on the network service in the first environment by using the test data generated by the first generator. The second generator corresponds to an NOS102c(the E2EOs62) described later. The second generator generates test data that specifies test content compatible with a second environment (e.g., a production environment) by setting a value compatible with the second environment in the variable field of the test template. The second tester corresponds to a test device104cdescribed later. The second tester conducts a test on the network service in the second environment by using the test data generated by the second generator. The first environment may be a test environment while the second environment may be a production environment. The first tester may conduct a test on the network service in the test environment in an event that the test template is accepted. The second tester may conduct a test on the network service in the production environment in an event that the network service is purchased by a purchaser. The network service management system may further include a manager to manage testing both in the first environment and in the second environment. The manager corresponds to the CI/CD assist device106described later. The first environment may be a test environment while the second environment may be a production environment. The manager may give the test template to the first generator to enable testing of the network service in the test environment. When a test on the network service in the test environment is passed, the manager may give the test template to the second generator to enable testing of the network service in the production environment. The test template may include a specific field in which data about either of the first generator and the second generator is to be set. 
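The variable-field mechanism described above, in which the first and second generators set environment-compatible values into the same test template, can be sketched with simple placeholder substitution. The placeholder names and test content below are hypothetical; only the mechanism (one template, per-environment values) follows the text.

```python
from string import Template

# Hypothetical test template: part of the test content is left as
# variable fields ($endpoint, $expected_latency_ms) that each generator
# fills with values compatible with its environment.

TEST_TEMPLATE = Template(
    "ping $endpoint and require round-trip under $expected_latency_ms ms"
)

def generate_test_data(template: Template, env_values: dict) -> str:
    """Set environment-compatible values in the template's variable
    fields, producing test data for that environment."""
    return template.substitute(env_values)

# First generator: values compatible with the test environment.
test_env = generate_test_data(
    TEST_TEMPLATE,
    {"endpoint": "upf.test.example", "expected_latency_ms": 50})
# Second generator: values compatible with the production environment.
prod_env = generate_test_data(
    TEST_TEMPLATE,
    {"endpoint": "upf.prod.example", "expected_latency_ms": 20})
```

A single template thus serves both environments, which is the point of the mechanism: the vendor prepares one piece of test content instead of one per environment.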
The specific field corresponds to an environment field described later. The manager may set data about the first generator (e.g., data about the test environment) in the specific field of the test template given to the first generator, and may set data about the second generator (e.g., data about the production environment) in the specific field of the test template given to the second generator. A network service management system of an example, designed to solve the challenge 2 described above, includes an accepter, a builder, and a tester. The accepter corresponds to a CI/CD assist device106(a bundle accepter116) described later. The accepter accepts configuration data and test data collectively from a vendor that provides a network service to a customer by using a platform service. The configuration data specifies a functional unit required to provide the network service, and the test data specifies test content for either of the network service and the functional unit. The builder corresponds to a platform as a service (PaaS) in each of the NOS102a, the NOS102b, and the NOS102cdescribed later. Specifically, the PaaS includes the inventory manager66, the CMaaS system68, the service manager70, the slice manager72, the monitoring manager74, the security setter76, and the container managers78in each of the systems. The builder automatically builds the functional unit specified by the configuration data that is accepted by the accepter. The tester corresponds to any of the test device104a, the test device104b, and the test device104cdescribed later. The tester automatically conducts a test on either of the network service and the functional unit based on the test data. The network service and the functional unit are built by the builder. The network management system may further include a first generator and a second generator. The first generator corresponds to an NOS102aor an NOS102b(the E2EOs62) described later. 
The second generator corresponds to an NOS102c(the E2EOs62) described later. The test data accepted by the accepter may be a test template in which a part of fields is set to a variable field where a value is variable. The builder may include a first builder to build the functional unit specified by the configuration data in the test environment and a second builder to build the functional unit specified by the configuration data in the production environment. The first builder corresponds to the PaaS in either of the NOS102aand the NOS102bdescribed later. The second builder corresponds to the PaaS in the NOS102cdescribed later. The first generator may generate test data for the test environment by setting data about either of the network service and the functional unit in the variable field of the test template, in which the network service and the functional unit are built in the test environment. The second generator may generate test data for the production environment by setting data about either of the network service and the functional unit in the variable field of the test template, in which the network service and the functional unit are built in the production environment. The tester may include: a first tester to conduct a test on either of the network service and the functional unit based on the test data for the test environment, the network service and the functional unit being built in the test environment; and a second tester to conduct a test on either of the network service and the functional unit based on the test data for the production environment, the network service and the functional unit being built in the production environment. The first tester corresponds to a test device104aor a test device104bdescribed later. The second tester corresponds to a test device104cdescribed later. 
The network management system may further include a manager to manage building of and testing of the network service both in the test environment and in the production environment. The manager corresponds to the CI/CD assist device106described later. The manager may give the configuration data and the test data to the test environment to enable building of and testing of the network service in the test environment. When a test on the network service built in the test environment is passed, the manager may give the configuration data and the test data to the production environment to enable building of and testing of the network service in the production environment. At least one of the configuration data and the test data may include a specific field in which data about either of the test environment and the production environment is to be set. The specific field corresponds to an environment field described later. The manager may set data about the test environment in the specific field of at least one of the configuration data and the test data given to the test environment, and may set data about the production environment in the specific field of at least one of the configuration data and the test data given to the production environment. FIG.23is a drawing illustrating test types in an example. A test conducted individually on a single network function such as UPF, SMF, and AMF is referred to as a “unit test”. A test on a system, such as 5GC and gNB, that includes a plurality of network functions is referred to as a “system test”. InFIG.23, a test conducted to check coordination between UPF, SMF, and AMF for normality is equivalent to a system test. A test on systems directly coordinating with each other is referred to as a “connection test”. InFIG.23, connection tests are shown by dashed lines. For example, a test conducted between 5GC and eNB is equivalent to a connection test. 
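The manager's promotion rule described above, giving the configuration and test data to the test environment first and to the production environment only when the test passes, with the environment field set per destination, can be sketched as follows. The function shape and field name are assumptions.

```python
# Sketch of the manager's promotion rule: deliver to the test environment
# first; a passed test permits delivery to the production environment.
# The "environment" field models the specific (environment) field the
# manager sets per destination. Names are assumptions.

def promote(config: dict, test_template: dict, run_test) -> list:
    """Return the list of environments the bundle was delivered to.
    run_test(env, config, data) -> bool reports whether the test passed."""
    delivered = []
    for env in ("test", "production"):
        data = {**test_template, "environment": env}
        delivered.append(env)
        if not run_test(env, config, data):
            break  # a failed test halts promotion to the next environment
    return delivered
```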
A test conducted to check an end-to-end network service for normality is referred to as an “E2E test”. InFIG.23, a test conducted between UE and Internet shown with bold lines is equivalent to an E2E test. The E2E test in a production environment described later is also referred to as a “deployment test”. A configuration of a network service management system of an example will be described in detail. FIG.24shows a configuration of a network service management system100of an example. The network service management system100includes a network operating system (NOS)102a, a test device104a, and a server90athat are disposed in a sandbox environment. The network service management system100also includes an NOS102b, a test device104b, and a server90bthat are disposed in a staging environment. The network service management system100also includes a marketplace system (MPS)10, an NOS102c, a test device104c, and a server90cthat are disposed in a production environment. These devices are connected to each other via a communications network including a LAN and a WAN. The production environment is an environment where an application for a network service provided to a user actually runs. The production environment can also be called an actual environment or a commercial environment. The staging environment is a test environment where a resource available to the server90bis similar (or equal) to that in the production environment. The sandbox environment is a development environment where a resource available to the server90ais greatly limited and can also be called a minimum test environment. The sandbox environment, the staging environment, and the production environment each include a physical information processing device and a communications network. Each of the NOS102a, the NOS102b, and the NOS102ccorresponds to the NOS12described in Prerequisite Technique. 
Each of the NOS102a, the NOS102b, and the NOS102cincludes a functional block similar to that of the NOS12described in Prerequisite Technique (for example, a plurality of functional blocks described inFIG.8). Each of the server90a, the server90b, and the server90ccorresponds to the server90described in Prerequisite Technique and includes one or more information processing devices. The server90a, the server90b, and the server90care implemented with a plurality of network functions (e.g., UPF, SMF, and AMF) that constitute a network service. In the example, the network functions are each virtualized (in other words, turned into software). A plurality of CNFs designed to implement a plurality of network functions are disposed on the server90a, the server90b, and the server90c. Hereinbelow, the server90a, the server90b, and the server90care sometimes collectively referred to as the server90. The test device104ais an information processing device used to conduct a test on a network service (a functional unit group) built in the sandbox environment, based on test data providing test content for the network service. The test device104bis an information processing device used to conduct a test on a network service (a functional unit group) built in the staging environment, based on test data. The test device104cis an information processing device used to conduct a test on a network service (a functional unit group) built in the production environment, based on the test data. The network service management system100further includes a continuous integration/continuous delivery (CI/CD) assist device106disposed across the sandbox environment, the staging environment, and the production environment. The CI/CD assist device106is an information processing device corresponding to the CI/CD pipeline described in Prerequisite Technique. 
The CI/CD assist device 106 communicates with the devices disposed in the sandbox environment, the staging environment, and the production environment to manage building and testing of the network service in each of those environments.

FIG. 25 is a block diagram showing functional blocks of the CI/CD assist device 106 in FIG. 24. The blocks shown in the block diagrams of this specification are implemented by any of devices, electronic circuits, and mechanisms such as computer processors, CPUs, and memory in terms of hardware, and by computer programs and applications in terms of software. The drawings shown herein illustrate functional blocks implemented through coordination of these components. Thus, it will be understood by those skilled in the art that these functional blocks can be implemented in various forms by combinations of the hardware and software.

The CI/CD assist device 106 includes a controller 110 and a storage 112. The controller 110 executes a variety of data processing done by a CI/CD pipeline. The storage 112 stores data referred to or updated by the controller 110. The storage 112 stores a bundled file uploaded from a vendor terminal 16. The storage 112 stores data about the respective devices disposed in the sandbox environment, the staging environment, and the production environment. The storage 112 may store, for example, an IP address and a host name of an E2EO 62 or an IP address and a host name of a CMaaS system 68 in each of the NOS 102a, the NOS 102b, and the NOS 102c. The storage 112 may store respective IP addresses and host names of the test device 104a, the test device 104b, and the test device 104c. The controller 110 includes a bundle accepter 116, a first bundle expander 118, a second bundle expander 120, a third bundle expander 122, and an error handler 124.
These functions may be implemented as application programs that use functions of Jenkins, which is well-known CI/CD pipeline software. A processor (e.g., a CPU) in the CI/CD assist device 106 may read the Jenkins program and the application programs stored on the storage 112 into main memory and run these programs to fulfill the functions of the functional blocks.

The bundle accepter 116 accepts a bundled file uploaded from the vendor terminal 16. In the example, the bundled file is prepared by a vendor who provides a user (i.e., a purchaser) with a network service of the vendor using a platform service provided by the network service management system 100. In the example, the bundled file includes a configuration file template and a test script template. The configuration file template is a file that includes data required to deploy a network service and manage the network service. In the example, the configuration file template includes the product catalog data, service catalog data, inventory template data, CM template data, service template data, slice template data, monitoring script data, security script data, Helm chart data, and container image data shown in FIG. 9 for the prerequisite technique.

The bundled file may include the business section data, technology section data, security section data, and operation section data described in Prerequisite Technique (FIG. 6) instead of the configuration file template. In this case, the bundle accepter 116 may prepare, based on these pieces of data, the product catalog data, service catalog data, inventory template data, CM template data, service template data, slice template data, monitoring script data, security script data, Helm chart data, and container image data. The test script template is equivalent to data that specifies test content for a network service built based on the configuration file template.
The test script template, for example, includes the respective contents of a unit test, a system test, a connection test, an E2E test, and a deployment test. The test script template may, for example, include a command (a ping command, a curl command, etc.) used to check communication between a plurality of network functions and CNFs that constitute test objects, and a parameter, an attribute, or an argument needed to run the command.

In the example, the unit test and the system test are conducted in the sandbox environment, the connection test and the E2E test are conducted in the staging environment, and the deployment test is conducted in the production environment. At least a part of the test contents conducted in the sandbox environment, the staging environment, and the production environment may be shared. In the staging environment and the production environment, a full set of a unit test, a system test, a connection test, and an E2E test (or a deployment test with the same content as the E2E test) may be conducted.

In the test script template, a part of the fields of the test content is set to a variable field in which a value is variable. The variable field may represent a parameter, an attribute, or an argument needed to run a command (a ping command, a curl command, etc.) used to test the network service (the functional unit group). The test script template may include an IP address or a host name of the plurality of the network functions and the CNFs, which constitute test objects, as a variable field. The value in the variable field is not set by the vendor but is set dynamically by the network service management system 100.
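The variable-field mechanism can be illustrated with a small sketch in Python. The template text, the field names (`upf_host`, `amf_host`), and the `{{...}}` marker syntax are assumptions for illustration only, not the actual bundled-file format; the point is that the vendor leaves the fields blank and the management system fills them in dynamically after the functional units are built.

```python
import re

# Hypothetical test script template; {{...}} marks a variable field
# whose value is set by the system, not by the vendor.
TEST_SCRIPT_TEMPLATE = """\
ping -c 10 {{upf_host}}
curl --max-time 5 http://{{amf_host}}/health
"""

def fill_variable_fields(template: str, values: dict) -> str:
    """Replace each {{name}} variable field with its dynamically set value."""
    def resolve(match):
        name = match.group(1)
        if name not in values:
            raise KeyError(f"no value set for variable field '{name}'")
        return str(values[name])
    return re.sub(r"\{\{(\w+)\}\}", resolve, template)

# Values supplied dynamically, e.g. the IP address of a built CNF instance.
script = fill_variable_fields(TEST_SCRIPT_TEMPLATE, {
    "upf_host": "10.0.0.12",
    "amf_host": "amf.sandbox.local",
})
```

The same substitution step could serve the environment fields described below, with per-environment values drawn from the storage of the CI/CD assist device instead of from build results.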
The test script template may include input data entered for the network service or a network function (e.g., the plurality of the CNFs disposed on the server 90) and data (hereinafter also referred to as "correct data") that ought to be output from the network service or the network function in response to the input data. Each of the test device 104a, the test device 104b, and the test device 104c may enter the input data specified by the test script template (the test data described later) into the CNF disposed on the server 90 in each of the environments and may determine whether the test is passed or failed based on whether or not the data output from the CNF matches the correct data.

The configuration file template and the test script template each include a field (hereinafter also referred to as an "environment field") in which data is to be set concerning a subject for building and a subject for testing of the network functions in each of the sandbox environment, the staging environment, and the production environment. The environment fields include IP addresses and host names of the E2EOs 62 in the respective environments. The environment fields include IP addresses and host names of the builders in the respective environments. Each of the builders is a subject that builds the network service and includes, as described above, the inventory manager 66, the CMaaS system 68, the service manager 70, the slice manager 72, the monitoring manager 74, the security setter 76, and the container managers 78. The environment fields include IP addresses and host names of the test devices in the respective environments. In the example, in a similar way to the variable field, each of the environment fields of the configuration file template and the test script template is a field in which a value is variable, and the value is set dynamically.
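The input/correct-data style of judgment described above can be sketched minimally. The "CNF" here is stood in for by any callable, which is an assumption made so the sketch stays self-contained; in the real system the test device would send the input over the network and read back the response.

```python
def run_io_test(cnf, cases):
    """Enter each input into the CNF and judge pass/fail by whether the
    output matches the correct data given in the test script template."""
    failures = []
    for input_data, correct_data in cases:
        output = cnf(input_data)
        if output != correct_data:
            failures.append((input_data, output, correct_data))
    return (len(failures) == 0, failures)

# Toy stand-in "CNF" that upper-cases its input; the (input, correct data)
# pairs play the role of the pairs carried in the test script template.
passed, failures = run_io_test(str.upper, [("abc", "ABC"), ("ok", "OK")])
```

A mismatch on any pair marks the test as failed, mirroring the pass/fail determination made by each test device.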
The first bundle expander 118, the second bundle expander 120, and the third bundle expander 122 each correspond to the bundle expander 60 in Prerequisite Technique. The first bundle expander 118 sends a bundled file to the NOS 102a in the sandbox environment. The second bundle expander 120 sends a bundled file to the NOS 102b in the staging environment. The third bundle expander 122 sends a bundled file to the MPS 10 and the NOS 102c in the production environment. When a result of a test in the sandbox environment or in the staging environment is below a qualifying standard, the error handler 124 executes predetermined error handling.

An operation run on the network service management system 100 configured as described above will be described. FIG. 26 is a flowchart showing an operation run on the network service management system 100 of the example. The vendor terminal 16, in response to an operation by a person in charge at the vendor, sends a bundled file prepared by the vendor to the CI/CD assist device 106. The bundle accepter 116 of the CI/CD assist device 106 is on standby until it receives the bundled file sent from the vendor terminal 16 (N in step S500). When receiving the bundled file sent from the vendor terminal 16 (Y in step S500), the bundle accepter 116 stores the bundled file on the storage 112. The first bundle expander 118 of the CI/CD assist device 106 sets a parameter value for the sandbox environment that is stored on the storage 112 in advance in the environment fields of the configuration file template and the test script template included in the bundled file (S502). The first bundle expander 118 expands the configuration file template and the test script template to the sandbox environment, with the parameter value for the sandbox environment being set in the environment fields (S504).
Specifically, the first bundle expander 118, in a similar way to the bundle expander 60 in Prerequisite Technique, sends a plurality of types of the configuration file template included in the bundled file to the E2EO 62 and the builder in the NOS 102a. The first bundle expander 118 also sends the test script template included in the bundled file to the E2EO 62 in the NOS 102a. The first bundle expander 118 instructs the E2EO 62 of the NOS 102a to start a workflow for building the network service. As described in Prerequisite Technique, the E2EO 62 and the builder in the NOS 102a, by coordinating with each other, automatically build the functional units (e.g., CNFs) specified by the configuration file template to automatically build the network service specified by the configuration file template (S508).

Settings (e.g., monitoring frequency) concerning a monitoring process by the monitoring manager 74 and the number of CNFs (e.g., the number of CNF instances that ought to be run) are basically determined based on service requirements and other information entered by the purchaser. For example, the number of CNF constituents may be determined according to the number of accommodated persons selected by the purchaser on the network service purchase screen. The purchaser may select once per second or once per five seconds as the monitoring frequency on the network service purchase screen. However, since no information is entered by the purchaser for the tests in the sandbox environment and the staging environment, the E2EO 62 in each environment may determine settings (parameters) that suit the environment. The E2EO 62 and the builder in the NOS 102a in the sandbox environment may build CNFs by applying a parameter value for a minimum configuration to the configuration file template, for example. The E2EO 62 and the builder in the NOS 102b in the staging environment may build CNFs by applying a parameter value for a maximum configuration to the configuration file template.
Meanwhile, in the production environment, various parameters are determined based on service requirements and other information selected or entered by the purchaser. As described in Prerequisite Technique, the bundled file may specify, for example, sets of options for the number of accommodated persons at the MPS 10 and the number of CNF constituents associated with the respective options. Then, the number of CNF constituents may be determined according to the option selected by the purchaser at the MPS 10.

The builder in the NOS 102a makes a functional unit group specified by the configuration file template by following the procedure described in Prerequisite Technique. Specifically, the inventory manager 66 generates planned data. The service manager 70 generates day0 data (a day0 parameter), and the container manager 78, in accordance with the day0 data, deploys a CNF group corresponding to the functional unit group to the servers 90. The number of CNF constituents described above may be reflected in the planned data. The CMaaS system 68 generates day1 data (a day1 parameter) and inputs settings based on the day1 data into the CNF group deployed to the servers 90. The slice manager 72 and the CMaaS system 68 configure a network slice and a network slice subnet based on the CNF group deployed to the servers 90. The monitoring manager 74 implements a monitoring setting. A similar procedure is followed when the builder in the NOS 102b builds a functional unit group, as well as when the builder in the NOS 102c builds a functional unit group.

In response to a request from the E2EO 62, the builder in the NOS 102a sends information about a CNF instance generated in the sandbox environment to the E2EO 62. The information about the CNF instance may include information that enables identification of the CNF instance in the sandbox environment, such as an IP address and a host name of the CNF instance.
The information about the CNF instance may include a type of the CNF instance and the number of CNF instances. The builder may send the information about the CNF instance generated in the sandbox environment to the E2EO 62 without waiting for a request from the E2EO 62.

Accepting the test script template from the CI/CD assist device 106, the E2EO 62 of the NOS 102a sets the information about the CNF instance, which is acquired from the builder, in the variable field of the accepted test script template and thereby prepares test data for the sandbox environment, with a value being put in the variable field of the test script template (S510). The E2EO 62 sends the test data for the sandbox environment to the test device 104a. When accepting the test data from the NOS 102a, the test device 104a conducts a test for the sandbox environment, which is specified by the test data, on the CNF group (the functional unit group constituting the network service) built in the sandbox environment in step S508 (S512). The test, for example, includes a unit test and a system test described with reference to FIG. 23. The test device 104a informs the CI/CD assist device 106 about a result of the test in the sandbox environment as well as that the test in the sandbox environment has ended (S514).

The CI/CD assist device 106 stores in advance a pass/fail standard (e.g., a threshold for determining whether or not the test is passed) that is defined for test results in the sandbox environment. The pass/fail standard may be set by the vendor or may be included in the bundled file. When the result of the test in the sandbox environment, which is informed by the test device 104a, is a fail by the pass/fail standard (N in step S516), the CI/CD assist device 106 executes predetermined error handling (S518) and ends the process based on the bundled file.
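The gating logic of steps S512 through S518, and the analogous staging-environment steps, can be summarized in a short sketch. Representing results and thresholds as numeric scores is an assumption made for illustration; the source only says a result is compared against a stored pass/fail standard.

```python
def run_gated_tests(build_and_test, pass_fail_standard, on_error):
    """Build and test in each pre-production environment in order; a
    result below the stored pass/fail standard triggers error handling
    and ends the process, mirroring steps S516/S518 and S534/S540."""
    for env in ("sandbox", "staging"):  # production is tested at purchase time
        result = build_and_test(env)
        if result < pass_fail_standard[env]:
            on_error(env, result)       # e.g., notify the vendor terminal
            return False
    return True

# Hypothetical run: sandbox passes its threshold, staging does not,
# so error handling fires for staging and the pipeline stops.
errors = []
ok = run_gated_tests(
    build_and_test=lambda env: {"sandbox": 0.99, "staging": 0.70}[env],
    pass_fail_standard={"sandbox": 0.95, "staging": 0.95},
    on_error=lambda env, result: errors.append(env),
)
```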
In the error handling, the CI/CD assist device 106 may send the vendor terminal 16 the result of the test in the sandbox environment together with information indicating a test failure in the sandbox environment. When the result of the test in the sandbox environment, which is informed by the test device 104a, is a pass by the pass/fail standard (Y in step S516), the CI/CD assist device 106 goes to a test in the staging environment (A in the drawing).

FIG. 27 is a flowchart following FIG. 26, showing an operation run on the network service management system 100. The flowchart shows an operation that is run when the result of the test in the sandbox environment, which is informed by the test device 104a, is a pass by the pass/fail standard. The second bundle expander 120 of the CI/CD assist device 106 sets a parameter value for the staging environment that is stored on the storage 112 in advance in the environment fields of the configuration file template and the test script template included in the bundled file, which is accepted in step S500 (S520). The second bundle expander 120 expands the configuration file template and the test script template to the staging environment, with the parameter value for the staging environment being set in the environment fields (S522). Specifically, the second bundle expander 120, in a similar way to the bundle expander 60 in Prerequisite Technique, sends a plurality of types of the configuration file template included in the bundled file to the E2EO 62 and the builder in the NOS 102b. The second bundle expander 120 also sends the test script template included in the bundled file to the E2EO 62 in the NOS 102b. The second bundle expander 120 instructs the E2EO 62 of the NOS 102b to start a workflow for building the network service.
As described in Prerequisite Technique, the E2EO 62 and the builder in the NOS 102b, by coordinating with each other, automatically build the functional units (e.g., CNFs) specified by the configuration file template to automatically build the network service specified by the configuration file template (S526). In response to a request from the E2EO 62, the builder in the NOS 102b sends information about a CNF instance generated in the staging environment to the E2EO 62. Accepting the test script template from the CI/CD assist device 106, the E2EO 62 of the NOS 102b sets the information about the CNF instance, which is sent from the builder, in the variable field of the accepted test script template and thereby prepares test data for the staging environment, with a value being put in the variable field of the test script template (S528). The E2EO 62 sends the test data for the staging environment to the test device 104b.

When accepting the test data from the NOS 102b, the test device 104b conducts a test for the staging environment, which is specified by the test data, on the CNF group (the functional unit group constituting the network service) built in the staging environment in step S526 (S530). The test, for example, includes a connection test and an E2E test described with reference to FIG. 23. The test device 104b informs the CI/CD assist device 106 about a result of the test in the staging environment as well as that the test in the staging environment has ended (S532). The CI/CD assist device 106 stores in advance a pass/fail standard that is defined for test results in the staging environment. When the result of the test in the staging environment, which is informed by the test device 104b, is a fail by the pass/fail standard (N in step S534), the CI/CD assist device 106 executes predetermined error handling (S540) and ends the process based on the bundled file.
In the error handling, the CI/CD assist device 106 may send the vendor terminal 16 the result of the test in the staging environment together with information indicating a test failure in the staging environment. When the result of the test in the staging environment, which is informed by the test device 104b, is a pass by the pass/fail standard (Y in step S534), the third bundle expander 122 of the CI/CD assist device 106 sets a parameter value for the production environment that is stored on the storage 112 in advance in the environment fields of the configuration file template and the test script template included in the bundled file, which is accepted in step S500 (S536). The third bundle expander 122 expands the configuration file template and the test script template to the production environment, with the parameter value for the production environment being set in the environment fields (S538). Specifically, the third bundle expander 122, in a similar way to the bundle expander 60 in Prerequisite Technique, sends a plurality of types of the configuration file template included in the bundled file to the E2EO 62 and the builder in the NOS 102c. The third bundle expander 122 also sends the test script template included in the bundled file to the E2EO 62 in the NOS 102c.

FIG. 28 is a flowchart showing an operation run on the network service management system 100. The NOS 102c is on standby until a purchaser purchases a network service at the MPS 10 (N in step S550). When a purchaser purchases a network service on the purchase screen at a marketplace, the MPS 10 informs the NOS 102c about the purchase.
When being informed about the purchase of the network service (Y in step S550), the E2EO 62 and the builder in the NOS 102c, as described in Prerequisite Technique, by coordinating with each other, automatically build the functional units (e.g., CNFs) specified by the configuration file template based on the service requirements set by the purchaser (e.g., parameter values such as the number of accommodated persons and SLA information) as well as the configuration file template, to automatically build the network service specified by the configuration file template (S554). In response to a request from the E2EO 62, the builder in the NOS 102c sends information about a CNF instance generated in the production environment to the E2EO 62. Accepting the test script template from the CI/CD assist device 106, the E2EO 62 of the NOS 102c sets the information about the CNF instance, which is sent from the builder, in the variable field of the accepted test script template and thereby prepares test data for the production environment, with a value being put in the variable field of the test script template (S556). The E2EO 62 sends the test data for the production environment to the test device 104c.

When accepting the test data from the NOS 102c, the test device 104c conducts a test for the production environment, which is specified by the test data, on the CNF group (the functional unit group constituting the network service) built in the production environment in step S554 (S558). The test, for example, includes a deployment test described with reference to FIG. 23. In the example, the test device 104c informs the NOS 102c about a result of the test in the production environment as well as that the test in the production environment has ended. The NOS 102c stores in advance a pass/fail standard that is defined for test results in the production environment.
When the result of the test in the production environment, which is informed by the test device 104c, is a pass by the pass/fail standard (Y in step S560), the NOS 102c starts providing the purchaser with the network service built in the production environment (S562). The NOS 102c may inform the purchaser terminal 14 that the network service has been started. Meanwhile, when the result of the test in the production environment, which is informed by the test device 104c, is a fail by the pass/fail standard (N in step S560), the NOS 102c executes predetermined error handling (S564). In the error handling, the NOS 102c may inform the purchaser terminal 14 that the network service cannot be provided. At the same time, in the error handling, the NOS 102c may send the vendor terminal 16 the result of the test in the production environment together with information indicating a test failure in the production environment. In a modification example, the CI/CD assist device 106 may determine a pass or a fail concerning the test in the production environment and execute the error handling.

Regarding the challenge 1 described above, effects produced by the network service management system 100 of the example will be described. According to the network service management system 100, tests can be automatically conducted in a plurality of environments through a single test template. This helps to improve the efficiency of testing conducted on a network service in a plurality of environments. According to the network service management system 100, a test in a test environment is conducted in response to uploading of a bundled file, while a test in the production environment is conducted in response to purchase of a network service. This allows the tests to be conducted in a timely manner from the viewpoint of prompt network service provision. Regarding the challenge 2 described above, effects produced by the network service management system 100 of the example will be described.
According to the network service management system 100, only by uploading a bundled file, a network service vendor can build a network service on a network platform of a telecommunications company and provide the network service to a customer. In this way, the network service management system 100 of the example helps a user of the platform service to provide a network service to a customer with less time and effort. The network service management system 100 of the example includes a manager (the CI/CD assist device 106) to manage building and testing of a network service in a plurality of environments. This enables building and testing of the network service in the plurality of environments in a timely manner. The manager sets data about the test environment in the test template. This helps to automate and improve the efficiency of testing in the plurality of environments.

The present invention has been described through the example. It will be understood by those skilled in the art that the example is illustrative only, constituent elements or combined processes can be modified, and such modified examples are covered by the scope of the present invention.

In the example described above, the test in the sandbox environment is automated. In a modification example, the test in the sandbox environment may be manually conducted by a person in charge at the vendor. The person in charge at the vendor may upload a bundled file including a configuration file template and a test script template concerning a functional unit group from the vendor terminal 16 to the CI/CD assist device 106, on condition that the result of the test on the functional unit group in the sandbox environment is a pass. In this case, the network service management system 100 may, based on the bundled file, automate the building and testing of the network service in the staging and subsequent environments.
In spite of not being described in the example above, the bundled file may further include a parameter file in which a plurality of parameter values such as IP addresses and host names are recorded. The values in the variable field and the environment field of the test script template may be set by referring to values in the parameter file. In this case, with information concerning each environment being put in the parameter file, the CI/CD assist device 106 may set a value in the environment field of the test script template. With information concerning the functional unit groups built in the environments being put in the parameter file, the E2EOs 62 of the NOS 102a, the NOS 102b, and the NOS 102c may each set a value in the variable field of the test script template.

Any combinations of the prerequisite technique, the example, and the modification examples described above are also effective as embodiments of the present disclosure. A new embodiment resulting from any of the combinations has the effects of each of the combined constituent elements. It will be understood by those skilled in the art that the functions fulfilled by the constituent elements described in the claims are implemented by each single element or coordination of the constituent elements shown in the prerequisite technique, the example, and the modification examples.

INDUSTRIAL APPLICABILITY

The technique of the present disclosure can be applied to a system that manages a network service.

REFERENCE SIGNS LIST

100 network service management system, 102a NOS, 102b NOS, 102c NOS, 104a test device, 104b test device, 104c test device, 106 CI/CD assist device, 116 bundle accepter, 118 first bundle expander, 120 second bundle expander, 122 third bundle expander
11863420

DETAILED DESCRIPTION

The technologies described herein fall into at least three different categories: (i) discovery and identification of layer 2 coax problems in MoCA networks, (ii) automated testing of MoCA networks, and (iii) service-based testing. FIGS. 1-10 mostly relate to the category of discovery and identification of layer 2 coax problems in MoCA networks, FIGS. 11-18 mostly relate to the category of automatically testing MoCA networks, and FIGS. 19-23 mostly relate to the category of service-based testing.

Discovery and Identification Category

Regarding the discovery and identification category, FIG. 1 is a picture of a pixelated image seen on a television. High-speed digital video delivered to a television is sent using a real-time protocol, where there is no time to detect missing packets and resend. The loss of even 1 packet in 50,000 can create a viewer-noticeable glitch on the screen. When a customer experiences a failure such as this on the network, it may be difficult to isolate and correct the problem. Observable failures may be intermittent and difficult to reproduce.

In a MoCA network, unlike with Ethernet, failures are most likely due to a fault in the physical cable layout within the home. That is, failures are more likely to be attributed to a bad cable, a bad connection, a faulty splitter, excessive cable length, or too many cable segments. The usual way of troubleshooting a failure on a MoCA network is to use tools that characterize the physical cable directly. For example, a digital voltage ohm meter, an RF tester, or even a spectrum analyzer may be used to determine whether a particular cable or cable connection is bad. Attenuation may be caused by excessive cable length or cascading of splitters on a single path. Probe testing is used in Ethernet LANs to isolate problems such as networking configuration errors.
As mentioned before, the physical Ethernet itself is unlikely to be the cause of a service failure. IP-based testing is used to diagnose failures at these high levels of the networking stack. The technology disclosed herein relies on probe testing for the purpose of diagnosing the physical cable infrastructure. It is different from prior MoCA troubleshooting techniques in that the cable layout is not directly measured. In fact, it is not necessary to directly access every cable connector in the home in order to perform the test. The disclosed technology is also different from Ethernet testing because it is the physical network that is being diagnosed, not the software configuration of the network.

FIG. 2 is a block diagram of an example MoCA network including both a MoCA WAN and a MoCA LAN, with a test device connecting directly into the MoCA network. The network comprises an Optical Network Terminal (ONT) 211 that connects a fiber optic cable carrying broadband to the home with the MoCA wide area network (WAN). The illustrated network includes a Broadband Home Router (BHR) 235 that connects the MoCA WAN with the MoCA local area network (LAN) within the home. A high definition (HD) set top box (STB) 217 and an HD digital video recorder STB 237 are also on the MoCA LAN.

When a device is plugged into the MoCA network, the device listens for a beacon on a particular frequency to discover the location of a network controller. The network controller allocates time slots for the newly joined device to send and receive data to/from each other device on the network. Each time slot is reserved for traffic from one particular device to another particular device (i.e., one-way point-to-point traffic). In particular, when the test device 231 joins the MoCA network, the network controller creates a schedule for the test device to communicate with the BHR. As can be seen in FIG. 2, there are multiple potential failure spots.
Individual ports on the splitter 215 could fail, connections may be loose, the connection between the coax and a device may be loose, a cable may be defective, or a long coaxial cable may cause signal attenuation resulting in less bandwidth.

FIG. 3 illustrates an example user interface for configuring a test device to troubleshoot a problem on a MoCA LAN, according to an implementation of the invention. The test device may include a display, and the user interface may be provided directly on the test device display. Alternatively, the test device may be communicatively coupled with a test device controller that provides a display for the user interface, and user commands may be sent to the testing device and results may be received from the testing device for display to the user. In an implementation, the test device joins the MoCA network as seen in 311 (the MoCA-RF option is selected at the bottom of the screen). Screen 313 illustrates selecting a frequency band for the test device on the MoCA network. In screen 317, menu item 10: All Devices Packet Loss is selected. In this implementation, the BHR 235 continues to participate in the MoCA network and will be a target for test packets from the test device 231.

The IP addresses of the devices on the MoCA LAN are discovered. In an implementation, the range of IP addresses used by each vendor of network devices is configured into the test device, or delivered to the test device upon request. A ping packet (also referred to herein as a probe or a probe packet) is sent to every IP address within the configured address ranges. Returning acknowledgement packets identify the IP address assigned to a device. The acknowledgement packet includes the MAC address of the responding device. The MAC address may be used to determine which devices are on the MoCA network and to filter out IP addresses for devices not on the MoCA network. The MAC addresses on the MoCA network are known to the testing device.
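The discovery procedure just described can be sketched as follows. The probe function is injected so the sketch stays runnable without network access; in the real test device it would send an ICMP ping and read the MAC address from the acknowledgement packet. The address range and MAC values are made up for illustration.

```python
import ipaddress

def discover_moca_devices(vendor_ranges, probe, moca_macs):
    """Ping every IP address in the configured vendor address ranges and
    keep the responders whose MAC address is known to be on the MoCA LAN."""
    discovered = []
    for cidr in vendor_ranges:
        for ip in ipaddress.ip_network(cidr).hosts():
            mac = probe(str(ip))  # responder's MAC address, or None if no reply
            if mac is not None and mac in moca_macs:
                discovered.append(str(ip))
    return discovered

# Fake probe: two devices answer, but only one MAC is on the MoCA LAN,
# so the other responder is filtered out of the discovery list.
responders = {"192.168.1.2": "aa:bb:cc:00:00:01",   # known MoCA device
              "192.168.1.5": "aa:bb:cc:00:00:99"}   # not on the MoCA network
devices = discover_moca_devices(
    vendor_ranges=["192.168.1.0/28"],
    probe=responders.get,
    moca_macs={"aa:bb:cc:00:00:01"},
)
```

The result is the list the procedure ends with: the IP addresses of every minimally functional and reachable device on the MoCA LAN.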
At the end of the discovery process, the test device has constructed a list of IP addresses of every (minimally functional and reachable) device on the MoCA LAN. FIG. 4 illustrates a flow chart showing a process for determining packet loss, according to an implementation of the invention. At 410, the test device automatically discovers the other devices on the MoCA LAN, which in this example includes HD STB 217, HD/DVR STB 237, and BHR 235. At 430, test packets are sent directly to each of the discovered devices on their respective channels. The transmission of packets attempts to simulate the transmission of IP video traffic, so a very large number of test packets are sent in rapid succession to each device. In an implementation, multiple devices being tested may receive and respond to probe packets concurrently, with the probe packets transmitted asynchronously and interleaved in time. For example, a probe packet may be sent to one device before and after sending a probe packet to a different device. Thus, the devices need not be tested serially. Unlike with Ethernet, using dedicated point-to-point channels for each transmission avoids interference between one packet and another. The packets sent to each device implement a protocol in which the receiving device responds to the test packet. An example of such a protocol is ICMP echo, where a “ping” is sent to a device and a response is expected back. A failure is assumed when no response is received back. Ping may be used to identify a path that includes an unresponsive device, broken cable, and/or loose connection. The technique disclosed herein is different from an administrator or network operator determining the availability of a device. An administrator may use probe packets to verify that a particular device is up and reachable. Usually, knowing that a different device is up and reachable is not helpful in performing the diagnosis.
However, because of the coax cable network topology, test results for multiple devices may be useful for isolating a portion of the cable or connections that are failing. For example, if the cable segment between the splitter and the home router is the only failing component, the test device would observe packet loss for the router, but no packet loss for any of the other devices. Another distinction between a network operator/administrator using ping for diagnosing a network and the technique described herein is that troubleshooting IP video streaming requires sending a large number of packets in rapid succession over the network, which is generally not needed when diagnosing IP connectivity problems. At 450, the number of packets sent for which no corresponding response was received may be totaled and compared to the number of packets that were sent to the device. A packet loss rate is determined. At 470, packet loss information for each MoCA device may be reported to the user. The absolute number of packets transmitted and received may be reported, and/or a proportion of failed or successful packets may be reported. In a different implementation, the test device, having additional functionality, may replace the BHR. FIG. 5 is a block diagram of a MoCA network with the test device 275 connecting directly into the MoCA network and replacing the BHR, according to an implementation of the invention. Device 275 is a MoCA device with combined router and testing functionality. Device 275 may be realized in a variety of ways. The test device, in addition to discovering devices on the network and probing the devices, may be adapted to perform the functions of the BHR, and the test device may replace the BHR temporarily during the test. For example, the BHR responds to DHCP requests to assign an IP address to a device on the network.
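The tally and report of steps 450-470 amount to simple per-device bookkeeping. A minimal sketch, with invented function and field names, using the 60-of-614 figures from the screen example discussed later:

```python
def packet_loss(sent, received):
    """Per-device packet loss tally: count packets with no response and
    derive a loss rate. `sent` and `received` map device IP -> counts."""
    report = {}
    for ip, tx in sent.items():
        rx = received.get(ip, 0)
        lost = tx - rx
        report[ip] = {"sent": tx, "lost": lost,
                      "loss_pct": 100.0 * lost / tx if tx else 0.0}
    return report

report = packet_loss({"192.168.1.1": 614, "192.168.1.100": 614},
                     {"192.168.1.1": 614, "192.168.1.100": 554})
# Device .100 lost 60 of 614 packets, roughly 10% of the traffic;
# device .1 lost none.
print(report["192.168.1.100"]["lost"])
```

Reporting both the absolute counts and the proportion, as the text describes, lets the operator judge severity at a glance.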
Once the test device assembles the list of active IP addresses as a normal procedure in preparation for testing, the test device can use that list for allocating new IP addresses in response to DHCP requests while the router is disconnected. The screen shown in 335 illustrates configuring the test device 275 to replace the BHR. In an alternative implementation, the test device probing functionality may be added into the BHR 235 device so that the testing capability is always available. The BHR already maintains the active IP addresses on the MoCA network, so no additional discovery is needed for the purpose of testing. Having the testing capability built into the router may obviate the need for a repair person to come on site to the home to gather the packet loss information. FIG. 6 is a block diagram of a MoCA network with the test device connecting to the BHR over Ethernet, according to an implementation of the invention. In this configuration, no changes are made to the MoCA network. The test device does not join the MoCA network and need not have a MoCA interface. Instead, test device 231 connects to the BHR 235 over an Ethernet LAN. (Though not shown in the figure, screen 311 would have the 10/100/1G option selected). The test device 231 is able to discover the IP devices through the BHR and send IP traffic to those devices through the BHR. The response messages are received through the BHR by the test device over the Ethernet connection. FIG. 7 illustrates an example user interface for performing the “all devices packet loss” test on the MoCA network, according to an implementation of the invention. In screen 711, a discovery process is conducted. In this example, two devices have been discovered at IP addresses 192.168.1.100 and 192.168.1.101. Screen 733 illustrates continuing to search for IP devices on the network. Screen 755 illustrates that a third device is discovered at IP address 192.168.1.1.
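When the test device stands in for the BHR, it can answer DHCP requests by handing out addresses that do not collide with the active-IP list built during discovery. The following is a hedged sketch of that allocation idea only; the names and the single-subnet assumption are illustrative, not the product design, and real DHCP also involves leases, offers, and acknowledgements.

```python
import ipaddress

def make_allocator(subnet, active_ips):
    """Return an allocate() callable that hands out unused host
    addresses from `subnet`, skipping the already-active list."""
    in_use = set(active_ips)
    pool = (str(ip) for ip in ipaddress.ip_network(subnet).hosts())

    def allocate():
        for ip in pool:
            if ip not in in_use:
                in_use.add(ip)
                return ip
        raise RuntimeError("address pool exhausted")
    return allocate

allocate = make_allocator("192.168.1.0/24",
                          ["192.168.1.1", "192.168.1.100", "192.168.1.101"])
first = allocate()
# .1 is skipped because it is already active, so .2 is handed out first.
print(first)
```

The discovery list thus does double duty: it drives the packet loss test and seeds the temporary router's address bookkeeping.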
Actiontec manufactures routers, so MAC address prefix 68:a4:ad might correspond to the home router. Once the devices are discovered, pressing the “start” button on screen 755 starts the packet loss test. Probe packets are generated and sent to each IP address in the discovered list. FIG. 8 shows example screen shots for viewing detailed test results for each device being tested. Screen 811 shows the status after 58 test packets have been sent to each of the devices. In this example, no packets were lost, and thus the percentage of packets lost is also zero. Screen 855 shows that the operator stopped the test after sending 614 packets, and the device at address 01:fe:04 lost 60 packets, amounting to 10% of packets lost. FIG. 9 illustrates an example user interface for configuring packet loss thresholds, according to an implementation of the invention. Screen 911 shows configuration options. In this example, options for video testing are selected. Screen 933 shows selecting to view and edit packet loss thresholds. The packet loss thresholds may be used to determine a status of all devices on the MoCA LAN based on the absolute number or proportion of packets lost. The test status may be determined by comparing the number or proportion of lost packets to a user-configured threshold that may be specified through a user interface. Example screen 955 shows configuring a packet loss threshold of 0.2%. If 0.2% of the transmitted packets to a device are lost, the test status for the device will be indicated as failed. FIG. 10 is an example screenshot of a quick test results summary. Above the summary remarks, there is one status line for each device being tested. The green check mark indicates that the packet loss, if any, was in an acceptable range below configured thresholds and the device passed the test. Automated Testing Category Regarding the automated testing category, a brief introduction is provided before FIGS. 11-19 are discussed.
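The pass/fail decision against the user-configured threshold reduces to one comparison per device. A minimal sketch, using the 0.2% example threshold from the text (function name assumed):

```python
def loss_status(sent, lost, threshold_pct=0.2):
    """Return PASS/FAIL for a device by comparing its packet loss
    proportion to the configured threshold (0.2% in the example)."""
    loss_pct = 100.0 * lost / sent if sent else 0.0
    return "FAIL" if loss_pct >= threshold_pct else "PASS"

print(loss_status(1000, 1))   # 0.1% loss, below threshold -> PASS
print(loss_status(1000, 2))   # 0.2% loss, threshold reached -> FAIL
```

The quick-summary screen of FIG. 10 is then just one such status line per discovered device.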
Introduction The increasing demands for network bandwidth in the home, for applications such as high definition (HD) television, interactive gaming, video on demand (VOD) and digital video recorders (DVR), require increasingly complex in-home networking. Technologies such as MoCA and data over cable service interface specification (DOCSIS) are being implemented to address these demands. However, incompatibilities between competing technologies and inevitable failure of active and passive devices create a need for expert knowledge in the field to maintain and repair equipment. This growing need challenges technical support organizations. Tens of millions of homes have been equipped with MoCA technology, and the number continues to grow. It will be impractical to scale a support workforce possessing the technical competence currently required to service this growing infrastructure. The technology disclosed reduces the technician expertise needed in the field by providing an expert system, such as a rule-based or directed graph expert system. In the hands of a relatively inexperienced operator, an expert system can select and perform a sequence of complex tests running on special purpose hardware to solve a problem generically identified by the operator. This can decrease the service time for a truck roll, reduce inappropriate equipment replacement, and improve customer satisfaction. Historically, cable delivery of home entertainment has involved the use of MPEG video transport. Cable television providers encode and transmit cable channels using quadrature amplitude modulation (QAM) over coaxial cable or “coax”. As IP-based packet traffic becomes more integrated with video devices in the home, new standards, such as those created by the MoCA organization, have been developed to take advantage of this integration. MoCA is a technology that allows a coax network to carry additional signals that travel on a frequency band mostly unused by cable TV transmissions.
Using the technology disclosed, diagnostic errors by technicians can be reduced by automatically capturing data from the network itself, such as DHCP addressing for devices, technical information about the signal strength and bandwidth of wireless transceivers, and operational frequencies for various MoCA channels. Use of an expert system helps to avoid human bias towards preconceived solutions. Once the disclosed expert system is invoked, it can emulate a variety of devices and perform a number of role-specific tests. Results of tests are immediately and automatically used to select further tests and, ultimately, identify a fault and direct a technician's response to correct the fault. MoCA Home Network FIG. 11 illustrates an example home network in which the technology disclosed can be used to find faults or errors. Home networks of this sort offer integrated wired and wireless Ethernet traffic with coax-based MoCA traffic. Problems to be diagnosed with the MoCA and wireless portions of the network can include replay of videos stored in a DVR, Ethernet device errors in traffic through a Broadband Home Router (BHR) or Access Point (WAP), and physical layer issues with cabling, connectors and splitters. The environment illustrated in FIG. 11 includes a connection to a Wide Area Network (WAN) 1103 with an internet connection 1105 and an entry point adapter such as an Optical Network Terminal (ONT) or coax connection. The WAN can be MoCA-compliant. The home MoCA LAN 1100 typically includes coax cable 1115, illustrated as double lines between devices, and Ethernet cable 1135, drawn as a single line. The WAN 1103 can connect to a Central Office 1101 with a head end that exchanges data with the in-home LAN 1100. The home MoCA LAN 1100 includes a Broadband Home Router (BHR) 1120 connected through a passive first splitter 1110, via a coax cable 1115. The BHR is also coupled to the internet connection 1105.
In one example, the BHR 1120 is an Actiontec MI424WR wireless broadband router, which acts as a server for DHCP, NAT, signal decryption, and other common home router services. The BHR 1120 can integrate both the Wi-Fi and Ethernet networks with QAM in compliance with MoCA standards. In this example, a first computer 1122 is connected to the BHR 1120 by an Ethernet cable. A Wireless Access Point (WAP) 1130, which supports wireless technologies such as IEEE 802.11b, a, g, n, and ac, is also connected to the BHR 1120 via an Ethernet cable or integrated into the BHR. A second computer 1132 is connected wirelessly to the WAP 1130 or BHR 1120. Similarly, an Ethernet switch 1140 can be connected to the BHR 1120 via an Ethernet cable 1135 or integrated into the BHR. A first gaming system 1142 can be connected to the Ethernet switch 1140 via an Ethernet cable. A second splitter 1150 can be connected to the first splitter 1110 via coax cable, and to a Set Top Box (STB) 1154 via coax cable. The set top box 1154 can be connected to a first television 1152 with a technology such as HDMI or Component Video. A MoCA bridge 1160 can be connected to the second splitter 1150 via coax cable, and to a second gaming system 1162 via Ethernet. A third splitter 1170 can be connected to the second splitter 1150. And a Digital Video Recorder (DVR) 1174 can be connected to the third splitter 1170 via a coax cable. A second television 1172 can then be connected to the DVR 1174 with a technology such as HDMI or Component Video. A Flex device (also referred to herein as a Configurable Test Subsystem) 1134 using the technology disclosed can be inserted into the home MoCA LAN 1100 in a plurality of positions, which physically isolate portions of the MoCA LAN 1100 for testing. For example, the Flex device 1134 can assume the role of the first computer 1122 with an Ethernet connection, the second computer 1132 with a wireless connection, the BHR 1120, the WAP 1130, a gaming system 1142, 1162, the STB 1154, a TV 1152, 1172, or the DVR 1174.
The Flex device 1134 is capable of performing a plurality of tests for a home MoCA LAN 1100. These tests can be run individually by an operator, with technical results of each test presented to the operator for expert analysis. In this scenario, known as classic mode, the operator decides which test of the plurality of tests is to be run, the order in which tests are to be run, and the significance of any results obtained by each individual test. These tests can be extremely complicated, with equally complicated results. In an expert system mode, the technology disclosed can direct the operator to the best place in the home MoCA LAN 1100 to insert the Flex device 1134 based on a general problem type identified by the operator. The technology disclosed can then choose one or more tests to run autonomously, process the results of the tests, and instruct the operator as to whether there is a problem in that portion of the home MoCA LAN 1100 and, if so, how to remediate the problem. Details of the Flex device 1134 are shown in the block diagram illustrated in FIG. 12. Software Block Diagram FIG. 12 illustrates a block diagram of software 1200 for an example Flex device. The Flex device 1134 comprises a set of specialized hardware, screen files 1203 that define the text and graphics displayed on the device, and action files 1240 that describe the actions to be taken based on various inputs. Screen display-related files 1203 can include an XSD or other schema file 1210, a display XML or other markup data file 1220, and graphic files 1230. An XSD file 1210 can be used to define the schema of the display XML file 1220. The display XML file 1220 contains the information that is displayed on each screen, and links to the actions that can be followed based on inputs. The actions that can occur include the rendering of other screens defined within the display XML file 1220, the rendering of graphics files 1230, such as animations, or the invocation of action files 1240.
An action file 1240 defines the rules of an expert system. Screen files are rendered on the display by a GUI server 1205, such as a browser or lightweight client app. Action files are processed by a rules and test engine 1209, which can communicate with the GUI server 1205 to request more information from an operator, or to supply information and instructions to the operator. The specialized hardware of the Flex device 1134 is described with reference to FIG. 13. Hardware Description FIGS. 13A & 13B illustrate a form factor (FIG. 13A) and a block diagram (FIG. 13B) of the example Flex device 1134, which is a third generation device on which the expert system disclosed can operate. A first generation device was a dedicated tester device with a display, described without an expert system. An example first generation device is described in U.S. Pat. No. 8,146,125. A second generation device divided functions between a special purpose test device, without a display, and a tablet computing device, as described in the related U.S. patent application Ser. No. 13/353,026. In an example second generation device, sometimes called a brick, the processor of the brick can provide some or all of the logic required to carry out the testing routines, using a tablet as a display for the brick. The technology disclosed can be practiced using either an integrated device with a display or a brick and tablet combination. Using the brick and tablet combination, the expert logic can run either on the brick or the tablet. When running on the brick, the brick can act as a server and the tablet as a client. A browser or app on the tablet can access the server on the brick. The example integrated Flex device 1134 illustrated in FIG. 13A includes a memory 1320 that stores screen files 1203 and action files 1240. It also includes a processor 1330, a touch screen display 1340 that renders text and graphics and that can also act as an input device, input interfaces 1350, and physical (PHY) interfaces 1360.
The hardware can also include a special purpose chipset 1370, such as the Entropic EN2710 chipset, for signal processing. Additional components of the example system include a GUI server 1205 and a rules and test engine 1209 running on the device. The GUI server 1205 renders text and images defined in the screen files 1203 onto the touch screen display 1340. The GUI server can accept inputs from the touch screen display. The touch screen display can be divided into different input areas based on definitions stored in the screen files. The hardware can also include input interfaces 1350 in the form of physical data entry with key and navigation buttons, and physical interfaces 1360 for coax, twisted pair, and Wi-Fi interface connections. A special purpose chipset 1370, such as the Entropic EN2710 chipset, can support MoCA, RF, Ethernet and Wi-Fi testing. Wi-Fi protocols supported by the chipset include 802.11b, g, n, a, and ac networks. The Flex device can be used to solve issues with RF broadcast video tiling; tiling on IPTV, VOD, or DVR; data speeds below SLA; internet connectivity problems; and Wi-Fi connectivity problems. The Flex device 1134 includes hardware that can be used to diagnose video tiling, noise, and faulty wiring and equipment issues. For example, to solve RF broadcast video tiling issues, the Flex supports tests and RF measurements used to isolate a root cause to faulty wiring, splitters, and/or equipment. Tests used in the expert system can diagnose issues with tiling on IPTV, VOD, or DVR playback by evaluating packet loss on MoCA and Ethernet interfaces to isolate a root cause to noise, faulty wiring or faulty equipment. Tests can assess video quality (VMOS) on MoCA and Ethernet interfaces to find issues in the home or upstream, and video can be viewed on the Flex device to isolate faulty TVs in the home.
The system can run speed tests on various interfaces such as MoCA and Ethernet to verify upload and download speeds and segment issues on home or upstream networks to resolve issues with data speeds below Service Level Agreements (SLAs). Internet connectivity problems can be solved by the use of a complete suite of embedded IP tests to verify connectivity to various internet sites and determine the location of failure. Additionally, Wi-Fi connectivity problems can be solved by scanning Wi-Fi signals and testing specific channel performance to isolate issues to interference, coverage or equipment location, or faulty equipment. The hardware illustrated in FIGS. 13A and 13B is used by the expert system on the Flex device 1134 to test MoCA networks. MoCA networking comprises a physical (PHY) layer, which uses 50 MHz channels located in the spectrum 850-1525 MHz, which can overlap with QAM. These MoCA channels are organized into multiple bands. Only one channel per band is used on a physical network, though multiple MoCA networks can be formed over the same coaxial cable plant using different bands. At the physical layer, MoCA uses a technique called Adaptive Constellation Multi-Tone (ACMT), modeled after Orthogonal Frequency-Division Multiplexing (OFDM), to carry data between nodes. Units of data named ACMT symbols are mapped onto 224 discrete orthogonal sub-carriers that occupy the 50 MHz MoCA channel bandwidth. Each of the sub-carriers is modulated independently using 1-8 bits per symbol (BPSK-256 QAM). The MoCA MAC layer, MoCA QoS, and MoCA Link Privacy layers build upon the MoCA PHY layer. MoCA can be an abstraction layer for the discovery and identification of transport protocols such as Wi-Fi, Ethernet, and HomePlug, which allows the transport protocols to be transmitted over coax, and which also further complicates remediation efforts.
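The ACMT numbers quoted above support a back-of-envelope capacity view: 224 sub-carriers, each independently modulated at 1-8 bits per symbol (BPSK up to 256-QAM). Symbol rate and protocol overheads are not given here, and real bit-loading varies per sub-carrier, so this sketch computes raw bits per ACMT symbol only.

```python
SUBCARRIERS = 224  # orthogonal sub-carriers in one 50 MHz MoCA channel

def bits_per_acmt_symbol(bits_per_carrier):
    """Raw bits carried by one ACMT symbol if every sub-carrier used the
    same constellation; uniform loading is a simplifying assumption."""
    assert 1 <= bits_per_carrier <= 8, "BPSK (1 bit) .. 256-QAM (8 bits)"
    return SUBCARRIERS * bits_per_carrier

print(bits_per_acmt_symbol(1))   # BPSK floor: 224 bits per symbol
print(bits_per_acmt_symbol(8))   # 256-QAM ceiling: 1792 bits per symbol
```

The eight-fold spread between the floor and ceiling is why per-carrier adaptation matters: noisy sub-carriers can fall back to sparse constellations without dragging the whole channel down.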
The disclosed expert system can measure communication between components, isolate potentially faulty segments of the network, and emulate components of the system to ensure an efficiently functioning MoCA network. Expert System Interface FIGS. 14A and 14B illustrate an example GUI interface and message map, respectively, supporting user interaction with a rule-based expert diagnostic system. FIG. 14A includes an example graphic image that can be displayed on a Flex device 1134, used by a technician to generically indicate the category of problem being diagnosed, which can indicate which expert system to select. FIG. 14B shows a message map that includes a sequence of messages between an operator and a Flex device 1134. The GUI interface of FIG. 14A can provide access to one or more expert systems that can diagnose electronic video devices, data, and the cabling infrastructure of the in-home network. The display of the Flex device 1134 renders images and text. The number of areas and size of each area on the Flex device display is programmable. In one example, a video testing area 1403 (button) is defined, and labeled with an image of a video device and/or the text “VIDEO”. The video testing area 1403 can be tied to a hardware button such as F1, or can be a touch-sensitive on-screen button serving as an input area in which an operator can touch the display to indicate a selection of video rules and tests. A data testing area 1407 can also be defined, and can be used to render an image of a data device that includes the text “DATA”. The data testing area 1407 can be programmed as an input area that an operator can touch to indicate a selection of data rules and tests. FIG. 14B illustrates an example message map between an operator 1401, a GUI server 1205, and a rules and test engine 1209. The GUI server 1205 can render images on the display, such as video testing area 1403 and data testing area 1407.
The GUI server 1205 can also accept input from operator 1401 and, based on that input, invoke the expert systems through the rules and test engine 1209. The GUI server 1205 can also relay results and instructions from the rules and test engine 1209 to the operator 1401. In one example of an expert system message exchange, the operator 1401 identifies a problem from an initial screen for video 1403 or data 1407 diagnostics 1423. The expert systems for video 1403 and data 1407 can address video and data problems experienced on a home network. In another implementation, a toolbox 1413 can be used to select smaller, discrete tests when the expert system is not needed. And a highly trained field technician can select classic tests 1417 to perform discrete tests and view technical result details. The GUI server 1205 accepts input from the operator 1401, and prompts the rules and test engine 1209 to solve the selected problem or begin the selected test set 1427. In one application, the rules and test engine 1209 begins analysis of a MoCA network. As the test proceeds, the rules and test engine acts based on measurements from or tests performed on the device under test. It can use the GUI server 1205 to prompt the operator to enter more data or to perform additional actions as needed 1433. The rules and test engine 1209 can run a test set 1439 that addresses a selected problem or comes from a selected test regime. It sends instructions to the operator 1401 via the GUI server 1205, requesting additional information or additional user actions as needed 1443. As results from the test set 1439 isolate network segments and identify issues within the home network under test, the GUI server 1205 can report those results to the operator 1453. Unilaterally and automatically, the rules and test engine can choose to run additional test sets until the problem is resolved 1467. For example, a Wi-Fi test can be performed by the rules and test engine 1209 to solve a user identified problem with a Wi-Fi device that is not working.
The Wi-Fi test may, in a first step, find a fault with the Wi-Fi device in a first test set, or may find no fault with the Wi-Fi device and direct a further test setup to isolate a fault in a cable connecting the Wi-Fi device to the home network. MRDVR Message Map Example FIG. 15 illustrates an example of messaging during a Multi-Room Digital Video Recorder (MRDVR) test after operator 1401 has selected a video test button 1403. The video testing 1403 button leads to choices among expert subsystems for diagnosing Video On Demand (VOD), Multi Room Digital Video Recorder (MRDVR), and Live TV. In this section, an expert system controlled MRDVR diagnosis example is described. FIGS. 16A-16G illustrate images presented by a Graphical User Interface (GUI) 1205 that are used to lead a technician through isolating network segments using the messaging illustrated in FIG. 15. FIGS. 15 and 16 walk through an MRDVR diagnostic sequence, illustrating message exchanges and examples of animations that guide an operator through positioning the Flex to isolate segments of the network. Parts of the example (e.g., FIGS. 16A, 16B and 16G) are explained with input files for display screens and screen sequencing. The rules and test engine 1209 is capable of performing automated decision making during analysis of an MRDVR issue with minimal human intervention, in communication via a GUI server 1205 with an operator 1401. FIG. 15 illustrates a testing and messaging sequence directed by the rules and test engine. An MRDVR test is useful when, for example, a Digital Video Recorder 1174 is connected over a MoCA network with at least one Set Top Box (STB) 1154 in another room. Problems with playback from the MRDVR can result from faults in the STB, the DVR, the coax cable 1115 between them, or any splitters 1150, 1170 in the coax infrastructure. Some faults, such as a broken coax connector, are easily recognized. Diagnosis of other problems can require expert analysis.
Quick and efficient diagnosis is accomplished using the technology disclosed, which automatically sequences test steps without requiring operator analysis of intermediate results. In this example, an operator 1401 selects a Multi Room DVR diagnosis 1513 from a GUI. Data indicating this selection is returned to a GUI server 1205. The GUI server 1205 triggers the MRDVR test 1517 to be processed by the rules and test engine 1209. The Flex device 1134 GUI screens illustrated in FIG. 16A and FIG. 16B are examples of instructions and information to the operator that can be displayed. FIG. 16A illustrates one implementation of an MRDVR test, in which the systems under test are a DVR 1604, an STB 1602, and the cabling infrastructure between them. Although this example shows one DVR 1604 and one STB 1602, the MoCA network can contain a plurality of DVR and/or STB devices, all of which can be addressed with the MRDVR test. Each of the primary screens, such as the one shown in FIG. 16A, can have an associated “Tips” screen 1608, as illustrated in FIG. 16B. In this example, the operator can access the “Tips” screen by selecting the Back control 1606. Continuing through FIG. 15, the rules and test engine 1209 uses a query to determine whether the issue is an isolated MRDVR 1527 function, by asking the operator whether the only issue being diagnosed involves DVR recording 1523. If the problem is the recording service of the DVR, the Flex device instructs the operator 1401 to replace the MRDVR 1533, which completes this diagnostic process (subject to restarting diagnosis if the user identifies additional problems). However, if the problem is not with the DVR recording capability, the rules and test engine 1209 guides the operator 1401 through a controlled segmentation of the MoCA network 1100, since a generic MRDVR problem can be traced to the STB, to any segment of infrastructure between the STB and the DVR, or to the DVR itself.
When the user indicates that the DVR recording works, testing can begin from an STB 1602 in a different room from the MRDVR 1543. A reusable portion of the MRDVR test sequence, the STB test set 1545, causes the Flex device to display graphic instructions (FIGS. 16C-16D) for the operator to connect the Flex device 1134 to the STB 1553. FIG. 16C illustrates the beginning of an animation to prepare the STB for the MRDVR test. The graphic instructs the operator to detach the end of the coax cable 1614 that connects the STB to a first splitter while identifying the location of the coax connection on the Flex device 1134. FIG. 16D illustrates the end of the animation to prepare the STB for the MRDVR test, showing the operator a good connection between the Flex device and the STB. The expert system in the Flex device, particularly the STB test set 1559, emulates the DVR offering services to the STB. Once the Flex device has established MRDVR emulation with the STB, a simulated playback is performed. The Flex then prompts the operator to indicate whether an appropriate image appears on a display connected to the STB. If the emulated DVR services to the STB appear correct to the operator, the test advances to the next stage. If the emulated DVR services to the STB do not appear correct to the operator, the Flex instructs the user to connect the Flex device to the STB with a known, good cable, and the test is repeated. In classic mode or in another test sequence, the Flex can perform MoCA-specific tests of the STB, such as response to remote control button presses that are directed to head end equipment. The tests of the STB also can include effective handling by the STB of image distortion, packet or frame loss, freezing, and jitter. If the rules and test engine 1209 discovers a problem with the STB, it informs the operator 1401 of appropriate steps to take to resolve the issue or issues, and the diagnosis is complete.
If no problems are found with the STB, the rules and test engine 1209 leads the operator 1401 through additional steps to prepare for cable segment testing 1567. The Flex plays a graphic instruction asking the operator to substitute the Flex device 1134 for the STB 1563. Throughout the MRDVR test sequence, the Flex device visually guides the operator in positioning the Flex relative to active and passive components. FIG. 16E illustrates a GUI animation of segmentation for infrastructure testing, starting with using the Flex to replace and emulate the STB. In this example, the operator is instructed to detach one end of a coax cable 1622 that connects a first splitter to the STB. FIG. 16F shows the Flex device connected to the coax cable 1624 that was disconnected from the STB. Animation frames (not shown) can illustrate unfastening the STB connection, rerouting the cable, and making the Flex connection. The rules and test engine 1209 can detect that the connection is established, or the user can advise that the connection has been made. The rules and test engine 1209 begins one or more tests of segments between the Flex device 1134 and the MRDVR. It emulates the STB 1154 and, in this example, performs a Packet Loss Test (PLT) to the DVR. A PLT is an example of one of several tests available to the Flex device 1134. Versions of this test apply to MoCA, DOCSIS, Wi-Fi, and Ethernet networks. If the PLT test 1569 from the STB 1154 to the DVR shows no errors 1577, after the STB has passed testing, then the problem can be an intermittent problem with the STB, and further testing of the STB can be performed or the STB can be replaced. If the PLT test 1569 shows an error, then the segment isolation progresses to the next passive or active component. In this example, the infrastructure connecting the STB and MRDVR consists of passive coax cables and splitters. A wired Ethernet example might include diagnosis of an active wired switch, but the cabling in our MoCA example only includes passive components.
FIG. 16G illustrates part of an animation used to direct an operator 1573 to replace the infrastructure coax between the Flex device 1134 and the first splitter 1634 with a good test coax cable 1632. Once the good coax cable 1632 has been installed, the rules and test engine 1209 conducts a PLT test 1579 with the replacement coax. The test will show either packets still being lost, or no packets lost 1587. If no packets are being lost, then the problem was in the replaced coax cable between the first splitter and the STB, and diagnostics are complete. However, if packets continue to be lost, then the operator is instructed to connect the Flex device 1134 bypassing the first splitter 1583. Once the first splitter has been bypassed, the rules and test engine 1209 repeats the PLT test to determine performance of the bypassed splitter 1589. If the PLT or other test after bypassing the splitter 1589 shows no errors, then the bypassed splitter is faulty and can be replaced. The diagnostics are complete. However, if the PLT test 1589 shows errors, the GUI server 1205 instructs the operator 1401 to loop back through the process with the next physical segment 1593, replacing the next coax segment with the good test coax. Any number of coax and splitter segments can be diagnosed in sequence. After the operator has tested the passive coax and splitter components between the STB and the DVR, the final test is to connect the Flex device 1134 directly to the DVR 1174. In some test sequences, a Packet Loss Test can be performed. Or, playback from the MRDVR can be triggered and playback images viewed. In other examples, tests for image distortion, loss, freezing, and jitter, as well as MoCA-specific testing, can be performed by the technology disclosed with minimal input from the operator. These tests can be conducted and video quality measured from packet statistics, without viewing of the playback.
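The loop described above, which substitutes or bypasses one passive component at a time and re-runs the PLT until the errors clear, can be sketched as follows. The component names and the `run_plt` callback are hypothetical stand-ins for the rules and test engine:

```python
def isolate_fault(segments, run_plt):
    """Walk the passive infrastructure (alternating coax runs and splitters)
    from the STB toward the MRDVR, re-running the packet loss test after each
    substitution, and return the first component whose removal clears the errors.

    segments: ordered component names, e.g. ["coax-1", "splitter-1", "coax-2"]
    run_plt:  callable(bypassed) -> True while the PLT still shows errors with
              every component in `bypassed` replaced or bypassed.
    """
    bypassed = []
    for component in segments:
        bypassed.append(component)
        if not run_plt(bypassed):   # errors gone: the last substitution fixed it
            return component
    return None                     # all passive parts ruled out; test the DVR directly
```

Simulating a faulty splitter shows the loop stopping at the right component: with `still_failing = lambda bypassed: "splitter-1" not in bypassed`, the call `isolate_fault(["coax-1", "splitter-1", "coax-2"], still_failing)` returns `"splitter-1"`.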
While this test sequence is described as tracing from the STB to the MRDVR, of course the testing can run in the other direction, from MRDVR to STB.
Screen File and Action Files
In this section, we describe a GUI technology that supports user interaction and graphical animation playback. The screens and animations represented by FIGS. 16A-16G can be defined by one or more screen files 1203, and one or more action files 1240. Other file formats such as JSON or another tagged or key-value representation can be substituted for XML. These files functionally control what is rendered on the display, and what actions are to be taken based on inputs from the operator or from the physical Flex device ports. The order in which the screens appear on the display can be thought of as workflow steps. FIG. 16A illustrates an example of a screen entitled SVA0011. FIG. 16B illustrates another example of a screen entitled SVA0012. In FIG. 16A, the screen ID is "SVA0011". An example XML file 1220 can combine several screen descriptions. The description for SVA0011 invokes a graphic file and assigns responses to function keys F1, F3, and F4. The graphic file 1230 named "Summary_multiDVR.mp4" is rendered as the image for screen SVA0011. An example XML file 1220 section for screen SVA0011 follows:
<Screen ShortDescription="Indicates equipment and wiring tested for Multi Room DVR Service" screenType="SummaryRecommendation" ID="SVA0011">
  <UserInteractionScreen>
    <Title>SUMMARY</Title>
    <Description>Animation showing equipment and wiring tested for Multi Room DVR Service.</Description>
    <FKeys>
      <F1Key>QUIT</F1Key>
      <F3Key>BACK</F3Key>
      <F4Key>NEXT</F4Key>
    </FKeys>
    <Graphic>Summary_multiDVR.mp4</Graphic>
  </UserInteractionScreen>
</Screen>
A style sheet or other program that parses this XML can be used to produce the display in FIG. 16A. The corresponding action file 1240 segment for screen SVA0011 shows the actions to be taken for the values assigned to the function keys in the display XML file 1220.
For example, pressing the F4 key with the screen SVA0011 rendered on the display passes the value "NEXT" to the associated action file 1240 section, which identifies the action as loading a screen titled UVA0026-000. The example action file section follows:
<Screen ScreenID="SVA0011-000">
  <!-- Indicates equipment and wiring tested for Multi Room DVR Service -->
  <Input Value="QUIT">
    <NextScreen ScreenID="FINAL"/>
  </Input>
  <Input Value="BACK">
    <NextScreen ScreenID="SVA0012-000"/>
  </Input>
  <Input Value="NEXT">
    <NextScreen ScreenID="UVA0026-000"/>
  </Input>
</Screen>
In another example, pressing the F3 key during the display of the screen SVA0011 will cause the value of "BACK" to be passed to the action file 1240 as an input value, which is interpreted by the action file as an instruction to load a ScreenID of "SVA0012-000", the tip screen illustrated in FIG. 16B. The display XML file 1220 for screen SVA0012 follows:
<Screen ShortDescription="Tips screen for Multi Room DVR Summary" screenType="SummaryRecommendation" ID="SVA0012">
  <UserInteractionScreen>
    <Title>TIPS</Title>
    <Description>Tips screen for Multi Room DVR Summary</Description>
    <ScreenText>* Multi room DVR test only involves the STB, the DVR, and the coax path between the two * The router is NOT required for MRDVR to work * Flex will perform Packet Loss Test between the STB and the DVR</ScreenText>
    <FKeys>
      <F1Key>QUIT</F1Key>
      <F3Key>BACK</F3Key>
      <F4Key>NEXT</F4Key>
    </FKeys>
  </UserInteractionScreen>
</Screen>
In this example, the tip text 1608 is displayed. Nothing will occur if the operator presses F2, as there are no values associated with that button in the display XML file 1220 section for SVA0012, or in the action file 1240 below. If the operator presses F1 or F3 for values "QUIT" or "BACK" respectively, the action file performs a "FINAL" action that concludes diagnostics, and returns the operator to an initial screen.
If the operator presses the F4 button, then the value "NEXT" is passed from the screen SVA0012 to the action file section "SVA0012-000", and the action file 1240 causes the GUI server 1205 to render the screen "SVA0011-000", thereby returning to the previous screen. The action file 1240 section associated with screen "SVA0012" follows:
<Screen ScreenID="SVA0012-000">
  <!-- Tips screen for Multi Room DVR Summary -->
  <Input Value="QUIT">
    <NextScreen ScreenID="FINAL"/>
  </Input>
  <Input Value="BACK">
    <NextScreen ScreenID="FINAL"/>
  </Input>
  <Input Value="NEXT">
    <NextScreen ScreenID="SVA0011-000"/>
  </Input>
</Screen>
As outlined above, the user can press the F4 button during a display of a ScreenID "SVA0011", which passes the value "NEXT" to its action file 1240 section. The action file then invokes the ScreenID "UVA0026-000", which has the following definition within the XML file 1220:
<Screen ShortDescription="Video Pixelates at DVR or at STB" screenType="UserInteraction" ID="UVA0026">
  <UserInteractionScreen>
    <Title>Select where Video Pixelates</Title>
    <Description>Video Pixelates at DVR or at STB</Description>
    <ScreenText>Select where Video Pixelates</ScreenText>
    <FKeys>
      <F1Key>QUIT</F1Key>
      <F3Key>BACK</F3Key>
    </FKeys>
    <SelectableGraphics>
      <Graphic Value="PIXELATION_DVR" Label="Recording Issue at DVR" graphicFile="pix_DVR.png"/>
      <Graphic Value="PIXELATION_NON_DVR" Label="Recording Issue only at Remote STB" graphicFile="pix_not_DVR.png"/>
    </SelectableGraphics>
  </UserInteractionScreen>
</Screen>
In this example, the operator is offered a choice of selecting a pixelation issue with a DVR, or a pixelation issue with a STB. This is just one example of a plurality of options that can be made available to an operator. As the operator identifies the options for test selection, the expert system invokes a plurality of reusable procedures. These procedures can be illustrated by a directed graph, of which one example is shown in FIG. 17A.
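The screen-to-screen transitions that the action file drives can be resolved with a few lines of standard XML parsing. A sketch; the enclosing `<Actions>` root element is an assumption, since an action file can hold several `<Screen>` sections:

```python
import xml.etree.ElementTree as ET

# Abbreviated action-file content, wrapped in an assumed <Actions> root.
ACTIONS = """
<Actions>
  <Screen ScreenID="SVA0011-000">
    <Input Value="QUIT"><NextScreen ScreenID="FINAL"/></Input>
    <Input Value="BACK"><NextScreen ScreenID="SVA0012-000"/></Input>
    <Input Value="NEXT"><NextScreen ScreenID="UVA0026-000"/></Input>
  </Screen>
</Actions>
"""

def next_screen(action_xml: str, screen_id: str, value: str):
    """Return the ScreenID to load when `value` is passed from `screen_id`,
    or None when the key has no binding (as with F2 on the tips screen)."""
    root = ET.fromstring(action_xml)
    for screen in root.iter("Screen"):
        if screen.get("ScreenID") != screen_id:
            continue
        for inp in screen.findall("Input"):
            if inp.get("Value") == value:
                return inp.find("NextScreen").get("ScreenID")
    return None
```

With the section above, `next_screen(ACTIONS, "SVA0011-000", "NEXT")` resolves to `"UVA0026-000"`, matching the F4 behavior described in the text.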
Rule-Based Expert Systems and Directed Graphs
The expert systems embedded within the Flex device 1134 can be represented as an ordered or directed graph, which is illustrated in FIG. 17A, represented as a set of rules, which is illustrated in FIG. 17B, or in a combination of rules and graphs. The directed graph is a set of connected vertices with edges that are directed from one vertex to another. In one implementation, an Integrated Development Environment (IDE) can be used to create and edit a directed graph, which clearly identifies the possible flows based on conditions specified within the directed graph. The directed graph can then be processed into rule tables for a rule based expert system inference engine by traversing the directed graph of FIG. 17A into an equivalent rules table as illustrated in FIG. 17B. FIG. 17B is a partial example of a rule table generated by a directed graph. A more complete rule table would include branching instructions. For example, node TTA0002-000 1727a has one outcome of "Pass", and another outcome of "Fail". A more complete rule table than FIG. 17B could contain two additional columns: one that identifies branching instructions for a first outcome, and one that identifies branching instructions for a second outcome if one exists. For the sake of brevity, FIG. 17B outlines only those outcomes necessary to illustrate the technology disclosed, and the processing order is from top to bottom of the rule table. In this example, the nodes of the directed graph are named per a naming convention with 7 characters, in which the first 3 characters are alphabetic and the remaining 4 characters are numeric. The naming convention also allows for a suffix, which is used to identify the instance of a set of shapes that constitute a reusable code segment.
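Traversing the directed graph into a rule table is mechanical once the graph is expressed as an adjacency structure with labeled outcomes. A sketch, using the Pass/Fail branches of the MoCA-enable node as sample data:

```python
def graph_to_rules(edges):
    """Flatten a directed diagnostic graph into rule rows.

    edges: {node: {outcome: next_node}}, e.g. the Pass/Fail branches of a
    test node.  Each row pairs a node and an outcome with its branching
    target, i.e. the extra branching columns a complete rule table carries.
    """
    rules = []
    for node in edges:  # insertion order stands in for top-to-bottom processing
        for outcome, target in edges[node].items():
            rules.append((node, outcome, target))
    return rules
```

For example, `graph_to_rules({"TTA0002-000": {"PASS": "TTM0001-000", "FAIL": "TTA0000-003"}})` yields one row per outcome, mirroring the two-column branching described above.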
In the example shown in FIG. 17A, the first character can be a "T", "R", "S", or "U":
T—Test
R—Recommendation
S—Summary
U—User Input
The second character can be a "V", "D", or "T":
V—Video
D—Data
T—Toolbox
The third character can be a "W", "M", "R", "E", or "A":
W—Wi-Fi
M—MoCA
R—RF
E—Ethernet
A—All or more than one of the above
In this example, a node named UVM0001 1701a indicates a screen file for a User Input, Video, and MoCA screen. A node indicating a user interface, such as UVM0001, can have entries in both a screen file and an action file. The remaining 4 digits of the 7 character name can be used to ensure unique labeling. For example, a screen file, represented in display XML file 1220, can include an entry, listed below, for UVM0001 1701a, which contains an animation showing how to replace a device for diagnostic purposes with the Flex device 1134.
<Screen ShortDescription="Animation showing Replace Device Under test with Flex" screenType="UserInteraction" ID="UVM0001">
  <UserInteractionScreen>
    <Title>Flex to BHR eth</Title>
    <Description>Animation showing Replace Device Under test with Flex</Description>
    <ScreenText>Connect Flex as shown then click "NEXT" to proceed</ScreenText>
    <FKeys>
      <F1Key>QUIT</F1Key>
      <F2Key>ACTION</F2Key>
      <F3Key>BACK</F3Key>
      <F4Key>NEXT</F4Key>
    </FKeys>
    <Graphic>LiveTv_B2.mp4</Graphic>
  </UserInteractionScreen>
</Screen>
Action file 1240 can include an entry, listed below, for UVM0001 1701a which contains the actions to be taken, based on the user's entry to the user interface. In this example, node UVM0001 1701a in the directed graph is equivalent to rule UVM0001 1701b in the rule table.
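The three-letter prefix and optional instance suffix can be decoded directly from the convention above. A sketch (the tuple layout is an illustrative choice):

```python
CATEGORY = {"T": "Test", "R": "Recommendation", "S": "Summary", "U": "User Input"}
SERVICE  = {"V": "Video", "D": "Data", "T": "Toolbox"}
NETWORK  = {"W": "Wi-Fi", "M": "MoCA", "R": "RF", "E": "Ethernet", "A": "All"}

def decode_node(name: str):
    """Split a 7-character node name (plus an optional -NNN instance suffix)
    into its category, service, and network-layer labels."""
    base, _, suffix = name.partition("-")
    if len(base) != 7 or not base[:3].isalpha() or not base[3:].isdigit():
        raise ValueError(f"not a valid node name: {name}")
    return (CATEGORY[base[0]], SERVICE[base[1]], NETWORK[base[2]],
            base[3:], suffix or None)
```

So `decode_node("UVM0001")` reads as a User Input, Video, MoCA screen, and `decode_node("TTA0002-000")` reads as a Test, Toolbox, All node with instance suffix "000".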
<Screen ScreenID="UVM0001-000">
  <!-- Animation showing Replace Device Under test with Flex -->
  <Input Value="QUIT">
    <NextScreen ScreenID="FINAL"/>
  </Input>
  <Input Value="ACTION">
    <NextScreen ScreenID="UVA0027-001"/>
  </Input>
  <Input Value="BACK">
    <NextScreen ScreenID="UVA0027-001"/>
  </Input>
  <Input Value="NEXT">
    <NextScreen ScreenID="UVA0034-000"/>
  </Input>
</Screen>
As an illustration, the next code segment shows the value "NEXT" being passed from the screen file to the action file, which sets the FixedValue to "N1" for the VariableName equal to "INSTANCE" in step UVA0034 1731a/b. That is, the screen file entry for UVA0034 1731a/b is used to set the instance variable and no information is presented on the display.
<Screen ShortDescription="Sets Instance variable to 1" screenType="UserInteraction" ID="UVA0034">
  <VariableMapScreen>
    <FixedToVariable FixedValue="N1" VariableName="INSTANCE"/>
  </VariableMapScreen>
</Screen>
In this example, setting the FixedValue variable to "N1" informs subsequent nodes of reusable code of the identity of the calling node. This allows the nodes of reusable code to perform as a subroutine call, with a return location, in a script without a subroutine structure. Action file 1240 also includes an entry for UVA0034 1731a/b that passes control to a node named TTA0000 1725a/b.
<Screen ScreenID="UVA0034-000">
  <!-- Sets Instance variable to 1 -->
  <Input Value="NEXT">
    <NextScreen ScreenID="TTA0000-000"/>
  </Input>
</Screen>
Next, we describe an example cleanup node, which can be used to clean up memory and variables within the Flex device 1134. The node named TTA0000, which is a test node from the toolbox used for all tests, is used to clean up memory and variables as an initial step for a new test, and as a final step for any failed test components.
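The INSTANCE mechanism can be pictured as a hand-rolled return address: the caller stores a token, the shared block runs to completion, and a dispatch table maps the token back to the caller's continuation. A sketch with hypothetical return-target screen IDs:

```python
def run_shared_block(variables, return_map):
    """A reusable node sequence consults the INSTANCE variable set by its
    caller to decide which screen receives control when it finishes,
    emulating a subroutine return in a script format without subroutines."""
    # ... the shared test steps would run here ...
    return return_map[variables["INSTANCE"]]

# Caller UVA0034 sets INSTANCE to "N1" before handing off; the mapping of
# instance tokens to return screens below is hypothetical.
RETURN_MAP = {"N1": "TTA0030-000", "N2": "UVA0010-000"}
```

Calling `run_shared_block({"INSTANCE": "N1"}, RETURN_MAP)` returns `"TTA0030-000"`, so the same shared block serves both call sites without duplicating its steps.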
For example, TTA0000-000 1725a/b is used for initial cleanup for node TTA0002 1727a/b, while TTA0000-003 1745a/b is used to clean up if errors are encountered with nodes such as TTA0002 1727a/b, TTM0001 1747a/b, TTA0021 1767a/b, or TTM0005 1776a/b. The instance TTA0000-000 1725a/b calls the screen file entry for TTA0002-000 1727a/b, which enables the MoCA hardware on the Flex device 1134.
<Screen ShortDescription="Enables MoCA Module" screenType="Active" ID="TTA0002">
  <ActiveTestScreen>
    <Title>Enabling MoCA</Title>
    <Description>Enables MoCA module</Description>
    <FKeys/>
    <EnableHardware>
      <hardware-component>MOCA</hardware-component>
    </EnableHardware>
  </ActiveTestScreen>
</Screen>
Once initial cleanup is complete, the next step in the test sequence activates the MoCA hardware in the Flex device 1134. The action file entry for TTA0002-000 1727a/b addresses a successful activation of the MoCA hardware by passing control to TTM0001 1747a/b, and addresses a failure to activate the MoCA hardware by passing control to TTA0000-003 1745a/b.
<Screen ScreenID="TTA0002-000">
  <!-- Enables MoCA Module -->
  <Input Value="PASS">
    <NextScreen ScreenID="TTM0001-000"/>
  </Input>
  <Input Value="FAIL">
    <NextScreen ScreenID="TTA0000-003"/>
  </Input>
</Screen>
TTM0001 1747a/b passes control to TTA0021 1767a/b so that the Flex device 1134 can acquire a DHCP address, as if it were a STB. Node TTA0021 1767a/b then passes control to node UVA0010 1759a/b. When the test modules that comprise UVA0010 1759a/b have run to completion with no failures, the AutoRoutineCompleted variable is set. If a module within the routines called by UVA0010 1759a/b indicates that the Flex device 1134 needs an update to a screen file, an action file, or device firmware, then node TTA0012 1779a/b is activated. A failure of an update to a screen file, action file, or firmware does not signal the system to stop processing.
Once testing is complete, in order to change the IP filter to the DVR IP address, UVA0010 1759a/b passes control to node TTA0030 1788a/b, which in turn obtains all other IP addresses on the current subnet by passing control to TTM0005 1776a/b. Node TTM0005 1776a/b then passes control to additional nodes that perform additional tests on the subnet.
Data Speed Message Map Example
FIG. 18, like FIG. 15, illustrates an example of messaging during a test. This test looks for bottlenecks in access speed to the Internet from devices connected through the Broadband Home Router (BHR) via various physical layers. The Flex can be positioned in place of a STB, home computer, laptop, or other device that accesses the Internet. Once speedy access to the Internet from the BHR position is confirmed, bottlenecks within the home can be analyzed. Data issues can occur in MoCA, DOCSIS, Ethernet, or Wi-Fi home networks, and can include connectivity, speed, or partial data loss issues. In one implementation, a speed test can be useful in discovering the location of a fault within a network, including a Local Area Network (LAN) and a Wide Area Network (WAN). In some cases, such as with a broken connector or a dead device, the problem is obvious. However, some problems require an expert understanding of the intricacies of a data network. In one implementation, an operator 1401 selects DATA diagnosis 1803 from a GUI server 1205, as illustrated in 1407 of FIG. 14A. The GUI server invokes the DATA test 1809 to be processed by the rules and test engine 1209. In this example, the data test uses a speed test as a first step in analysis of the home network. The speed test can identify if network connectivity is available. The speed test can also be used by the rules and test engine 1209 to analyze the quality of the network connection. The operator 1401 is instructed to connect the Flex device 1134 to a BHR 1120 LAN port with a known good Ethernet cable 1813. This will isolate the issue to the BHR 1817.
The rules and test engine 1209 then runs a speed test 1819, which first validates that a connection to the Internet is available, and second validates that a good connection works at an expected speed 1828. The expert system recognizes that the first step in the data test sequence is to validate the WAN connectivity. If the connection to the Internet through the BHR LAN port is not available, or the quality of the connection is not as expected, the problem can be the BHR. The rules and test engine 1209 can instruct the operator 1401 to replace the BHR with the Flex device 1823, which then simulates the actions of the BHR. If the connection is still not available, or the quality of the connection is still not as expected, the rules and test engine 1209 will send a message to the operator 1401 through the GUI server 1205 to report a bad WAN connection, and the diagnostic is complete. However, if the connection to the Internet by the Flex device performs as expected, the operator is instructed to replace the old BHR with a new BHR and retest. If the connection to the Internet through the BHR LAN performs as expected, the rules and test engine can send a message to the operator to select an additional test 1833. In this example, an additional test can include a Wi-Fi test 1859, an Ethernet test 1869, a DOCSIS test 1879, or a MoCA test 1889. A Wi-Fi test can include a Wi-Fi service embedded in the BHR, or a Wi-Fi service provided by a device such as a Wireless Access Point (WAP) that is connected to the BHR by coax or Ethernet cable. If the operator selects a Wi-Fi test for a Wi-Fi service embedded in the BHR, the operator 1401 is instructed by the rules and test engine 1209 to stand 6 feet from the BHR in a direct line of sight 1853, with the Flex coax connector pointed at the router. The rules and test engine 1209 then connects to the Wi-Fi service within the BHR using information such as SSID and password supplied by the operator, then invokes a speed test 1859.
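The first stage of the data test reduces to two checks, connectivity and measured speed against the expected rate. A sketch of that decision; the returned messages are illustrative, not the GUI server's actual text:

```python
def diagnose_wan(connected: bool, measured_mbps: float, expected_mbps: float) -> str:
    """Stage one of the data test: validate WAN connectivity through the BHR
    LAN port, then validate that the connection runs at the expected speed."""
    if not connected or measured_mbps < expected_mbps:
        # Connection missing or below expectation: suspect the BHR, so the
        # operator substitutes the Flex for the BHR and retests.
        return "replace BHR with Flex and retest"
    return "WAN OK; select an additional test (Wi-Fi, Ethernet, DOCSIS, MoCA)"
```

Both a missing connection and a slow one lead to the same substitution step, which is why the sketch folds them into one branch.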
If the speed test finds that the Wi-Fi service in the BHR is performing as expected, the summary results of the speed test are sent to the operator 1401, and the diagnostic is complete. The operator can also be instructed to move the BHR to a location where the Wi-Fi service offered by the BHR is closer to the devices using the Wi-Fi service. If the speed test finds that the Wi-Fi service in the BHR is not performing as expected, the operator is instructed to replace the BHR with a new BHR and retest. If the operator selects a Wi-Fi test for a Wi-Fi service provided by a WAP, the operator 1401 is instructed by the rules and test engine 1209 to stand 6 feet from the WAP in a direct line of sight, with the Flex coax connector pointed at the WAP. The rules and test engine 1209 then connects to the Wi-Fi service within the WAP using information such as SSID and password supplied by the operator, then invokes a speed test 1859. If the speed test finds that the Wi-Fi service in the WAP is performing as expected, the summary results of the speed test are sent to the operator 1401, and the diagnostic is complete. The operator can also be instructed to move the WAP to a location where the Wi-Fi service offered by the WAP is closer to the devices using the Wi-Fi service. If the speed test finds that the Wi-Fi service in the WAP is not performing as expected, the operator is instructed to replace the WAP with the Flex device and retest. WAP devices can be connected to the LAN with cable such as coax or Ethernet. If the cabling is coax, the infrastructure between the WAP under test and the BHR can have zero or more bridges, one or more segments of coax, and zero or more splitters. For each segment of coax, and for each splitter, segmentation can occur wherein the infrastructure is tested for each segment, moving from the WAP to the BHR.
If the cabling is Ethernet, the infrastructure between the WAP under test and the BHR can have one or more segments of Ethernet cabling, and zero or more switches or hubs. For example, in FIG. 11, the WAP 1130 is connected to the BHR 1120 by one segment of Ethernet cable. In this example, the Wi-Fi speed test 1859 would have only one segment to test after the WAP 1130. If the operator selects a data Ethernet test 1863 for an Ethernet device such as a game console 1162, the rules and test engine 1209 would instruct the operator to substitute the Flex device 1134 for the game console 1162, and test each segment between the game console and the BHR, alternating between Ethernet cable, a MoCA bridge 1160, coax 1115, and splitters 1150, 1110. If the operator selects a DOCSIS test 1879, the operator is instructed to prepare for a DOCSIS test 1873 and a segmentation equivalent to the Wi-Fi speed test 1859 is invoked by the rules and test engine 1209. If the operator selects a MoCA test 1889, the operator is instructed to prepare for a MoCA test 1883 and a segmentation equivalent to the Wi-Fi speed test 1859 is invoked by the rules and test engine 1209.
Example Active and Passive Tests Invoked by Expert System
The following list of tests and other implementations of the technology disclosed can include one or more of the following tests or test features and/or tests or test features described in connection with additional tests disclosed. In the interest of conciseness, the combinations of tests disclosed in this application are not individually enumerated and are not repeated with each base set of tests or test features. The reader will understand how tests or test features identified in this section can readily be combined with sets of base tests or test features identified as implementations impacting details of test implementation and analysis and of setting thresholds based on test results.
Each of the active and passive tests below can serve as a component of expert system tests such as Video On Demand, MRDVR, Live TV, and Data. These tests can be incorporated and executed on the PHY interfaces 1360 identified in FIG. 13.
Active Tests
TST-BURST Provides a network quality test with bursty traffic, possibly useful to evaluate a traffic policing and traffic shaping framework.
TST-ECHO Provides an overall snapshot of basic network health, measuring important characteristics such as jitter, latency, and loss. It is useful for verifying SLA compliance and in determining network integrity for VoIP and other real-time protocols that are heavily affected by these parameters.
TST-ETHDM-Y1731 For Y.1731-enabled networks, provides an Ethernet-layer tool focused on frame delay measurements between the unit and a known endpoint. This type of OAM testing is sometimes referred to as "ETH-DM".
TST-LINKTRACE-xx Provides a layer 2 (MAC/Ethernet) tracing tool for reporting the hops a test frame traverses from source to destination, compliant with a subset of either IEEE 802.1ag or ITU-T Y.1731. It can serve as an important tool for determining the topology of a LAN service, especially for services that involve multiple bridged LANs to complete the end-to-end path.
TST-LPBK-xx Provides a tool for fault verification and isolation, as well as measurement of important network characteristics such as latency and loss. The test operates at the Ethernet layer on 802.1ag and Y.1731-enabled networks, where loopback request frames are sent to a target NE (MEP/MIP), which should then return the frames to the unit for analysis. This type of OAM testing is sometimes referred to as "ETH-LB".
TST-MLPBK-Y1731 Performs a multicast loopback to identify MEPs on the service. The test operates at the Ethernet layer, where a multicast loopback request frame is transmitted into the network and then returned to the originator by all available MEPs.
TST-PING Verifies that a specified IP destination (endpoint) can be reached, as a basic test of network-level connectivity. The destination device must support ICMP ping.
TST-RFC2544 Runs a group of tests originally designed to benchmark the performance characteristics of a network interconnecting device, adapted to refer to overall service or link performance.
TST-THRUPUT Determines the maximum data rate that can be sent between the unit and a remote echo server, up to the CIR specified when the test session was established. This test is most useful when testing data services rather than real-time transport such as VoIP or video. For this reason, the test is best suited for evaluating data flow performance for processes such as file transfers and web page traffic. This test can be used as a speed test, as it measures throughput.
TST-TRACERT Provides a standard ICMP over UDP traceroute function that runs three concurrent traceroute processes and reports every router "hop" along the path. The results provide a topological view of the route that packets are using to reach the destination.
TST-TWAMP Initiates and manages a Two-Way Active Measurement Protocol (TWAMP) client session to analyze and report metrics such as network jitter, latency, and loss.
Passive Tests
MON-ETH-CAPTURE Launches a capture of Ethernet frames according to any filter and/or capture settings provided. Following the capture, a capture file is generated that can be transported off the unit for external analysis.
MON-ETH-CONN Tracks and reports the conversation statistics between either pairs of Ethernet MAC addresses or pairs of IP addresses, including the number of bytes and frames being transmitted in each direction of the conversation.
MON-ETH-FRMSIZE Provides a comparative view of the size of Ethernet frames flowing through the service. Results are presented as comparative frame counts and percentages of total frame count.
MON-ETH-PRTDIST Reports on IP packet distribution based on TCP/IP link, Internet, and transport-level protocols, such as OSPF and TCP, or application-level protocols such as HTTP and SNMP. It provides a comparative view of network usage according to logical protocols.
MON-ETH-PTY Provides a high-level view of traffic on the link sorted by VLANs and VLAN priorities, or VLANs and DSCP (class of service/CoS) values, as detected within Ethernet frame and IP packet headers, as applicable.
MON-ETH-TOPU Reports the top users of the network based on bandwidth consumption. Depending on the test setup, a "user" can be a device sending traffic or an application protocol in use on the network. This information provides a high-level view of the primary users of the network and what the network is being used for.
MON-ETH-VLAN Provides comprehensive layer 2 statistics on the link sorted by VLAN, as detected within Ethernet frame headers. It includes overall data measurements such as frame and data counts, link utilization, frame sizes, and a variety of other information.
Service Based Testing Category
Regarding the service based testing category, a brief introduction is provided before FIGS. 19-23 are discussed.
Introduction
Historically, cable delivery of home entertainment has involved the use of MPEG video transport. Cable television providers encode and transmit cable channels using quadrature amplitude modulation (QAM) over coaxial cable or "coax". As IP-based packet traffic becomes more integrated with video devices in the home, new standards, such as those created by the MoCA organization, have been developed to take advantage of this integration. MoCA is a technology that allows a coax network to carry additional signals that travel on a frequency band mostly unused by cable TV transmissions. Tens of millions of homes have been equipped with MoCA technology, and the number continues to grow.
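Several of the active tests listed above, such as TST-ECHO and TST-TWAMP, reduce to summarizing per-probe round-trip results into latency, jitter, and loss figures. A sketch; the jitter definition used here (mean absolute difference between consecutive RTTs) is one common choice, not necessarily the one the unit implements:

```python
def echo_stats(rtts):
    """Summarize an echo run: `rtts` holds one round-trip time (ms) per probe,
    with None for a probe that never came back."""
    answered = [r for r in rtts if r is not None]
    loss = 1 - len(answered) / len(rtts)
    latency = sum(answered) / len(answered) if answered else None
    diffs = [abs(b - a) for a, b in zip(answered, answered[1:])]
    jitter = sum(diffs) / len(diffs) if diffs else 0.0
    return {"loss": loss, "latency_ms": latency, "jitter_ms": jitter}
```

For instance, a run of `[10.0, 12.0, None, 11.0]` yields 25% loss, 11.0 ms mean latency, and 1.5 ms jitter.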
The increasing demands for network bandwidth in the home, for applications such as high definition (HD) television, interactive gaming, video on demand (VOD), and digital video recorders (DVR), require increasingly complex in-home networking. Technologies such as MoCA and data over cable service interface specification (DOCSIS) are being implemented to address these demands. However, incompatibilities between competing technologies and inevitable failure of active and passive devices create a need for expert knowledge in the field to maintain and repair equipment. This growing need challenges technical support organizations. It will be impractical to scale a support workforce possessing the technical competence currently required to service this growing infrastructure. On-premises testing methods typically entail technicians using special purpose devices visiting the installation to measure the contributions of various packet errors to signal degradation, including packet loss, jitter, and latency. These devices made it possible to monitor traffic between two active devices, tap into the line to measure throughput, and provide raw statistics to technically trained personnel. Technicians with strong technical insight were capable of deciphering the statistics delivered and determining what actions to take to identify and repair faults in a home network. One such example device, used to test MoCA signals routed to a set-top box and evaluate performance of the set-top box, employs portions of the technology disclosed in commonly owned U.S. Pat. No. 8,146,125 entitled, "Computerized Device and Method for Analyzing Signals in a Multimedia over Coax Alliance (MoCA) Network and Similar TDM/Encrypted Networks," which is hereby incorporated by reference for all purposes.
The disclosed technology describes a Configurable Test Subsystem in which tests can be implemented as services; e.g., a cable company has a customer with an issue, and the services running in "Docker" (or similar technology) containers can be invoked to perform testing and provide results for analysis to a central office of the cable company, for example. Containers are useful to encapsulate processes and the filesystem environment of the process, enabling the process to be deployed and executed in a variety of environments. (Docker is a product of Docker, Inc. of San Francisco, CA.) A request to run the test comes from a central office through the Internet into the containers. Alternatively, a local trigger, such as can be provided by a scheduler on board a device, can be used to initiate running a test. In one implementation, the central office will "push" test cases to a container once a problem is reported. Additionally, there are cases where it may be desirable to block PC traffic, in which case the container can receive instructions to block traffic from specific machines. Such techniques can be useful when, for example, a network card fails and fills a network with miscellaneous traffic. This can serve as an interim patch until service personnel can come out to replace the card. Devices that are targets for containerized tests can be Linux based. In another scenario, the technology disclosed provides for automatically scheduling a test set for the home MoCA LAN 1100 housed by the Configurable Test Subsystem 1134 based on a scheduling paradigm selected under control of the central office. The technology disclosed can then initiate one or more tests to run autonomously according to a schedule, process the results of the tests, and provide results to an operator back at the home office.
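The local-trigger idea, running a test set autonomously according to a schedule, can be sketched with the standard library scheduler. The test names, the cadence, and the result shape below are illustrative:

```python
import sched
import time

def schedule_test_set(run_test, tests, interval_s, runs):
    """Queue `runs` rounds of the named tests, one round every `interval_s`
    seconds, then execute the schedule and collect each test's result."""
    s = sched.scheduler(time.monotonic, time.sleep)
    results, prio = [], 0
    for round_no in range(runs):
        for name in tests:
            # Distinct priorities keep same-time events in submission order.
            s.enter(round_no * interval_s, prio,
                    lambda n=name: results.append(run_test(n)))
            prio += 1
    s.run()
    return results
```

With `interval_s=0` the schedule drains immediately, which is convenient for trying the sketch; a deployment would use the cadence pushed down from the central office.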
Results can instruct the operator as to whether there is a problem in that portion of the home MoCA LAN 1100, and potentially whether an on-site follow-up visit is required to remediate the problem. Details of the Configurable Test Subsystem 1134 are shown in the block diagram illustrated in FIG. 19.
Active and Passive Testing
Testing regimes can include implementations having one or both of active monitoring and passive monitoring of network components, links, devices, and the like. Active monitoring typically entails injecting test traffic onto a network and monitoring the flow of that traffic. Active monitoring techniques are especially useful for conducting simple test protocols; for example, timing the latency between two devices on a wide area network, as well as more complex tasks such as collecting measurements to verify quality of service (QoS) agreements are being met. Active monitoring techniques that provide some control over an experimental scenario can enable collection of data on particular aspects of network performance. Passive monitoring techniques typically involve less control over an experimental scenario and more observation. For example, rather than injecting an artificial traffic flow into a MoCA network under test, passive monitoring techniques can include monitoring traffic that is already present on the network. Passive monitoring techniques are typically conducted using a device on the network to capture network packets for analysis. This can be done using one or more probes configured to capture network data, or with built-in capabilities on switches or other network devices, or combinations thereof. Passive network monitoring can collect large volumes of data which can be used to derive a wide range of information; for example, TCP headers contain information that can be used to derive network topology, identify services and operating systems running on networked devices, and detect potentially malicious activity by hackers or the like.
Configurable Test Subsystem System Stack

FIG. 19 illustrates a block diagram of a configurable test subsystem for an example MoCA device. In FIG. 19, a Configurable Test Subsystem 1134 is illustrated with reference to the component portions making up the system stack of the subsystem. Hardware layer 1910 includes one or more network connections, a processor and memory, and various input and output devices. Specific components, interconnections and configurations can vary; however, for a more detailed view of one example implementation of hardware 1910 components, reference may be had to FIG. 23 herein and accompanying text. Single-instance OS 1912 resides in storage, transitory and/or non-transitory, and provides control over access to system resources, such as the devices making up hardware layer 1910, manages storage, and provides resources accessible to other programs and applications of higher layers. A Test Applications Manager 1914 is deployed as part of configurable test subsystem 1134 during the manufacturing process, pre-installation, and enables receiving and installing of test sets, such as test case A 1920A, test case B 1920B, test case N 1920N, etc., as well as other applications 1922 that can have non-test-related functionality. Test Applications Manager 1914 supports common libraries 1916 to provide functionality accessible by any of test case A 1920A, test case B 1920B, test case N 1920N, and so forth. Further, with continued reference to FIG. 19, in an example scenario of conducting testing, test case A 1920A during execution generates and sends outbound test traffic 1937 via OS 1912 and network interface 1976 of device hardware 1910. A second network device 160, reachable by the MoCA, serves as a "target" that receives the outbound test traffic 1937 sent by test case A 1920A and replies with response traffic 1939. Response traffic 1939 is received by network interface 1976 and copied into ring buffer 1932 residing in a shared memory 1998 portion of the device hardware 1910.
Test case A 1920A retrieves response traffic 1939 from the ring buffer 1932. Direct retrieval of response traffic 1939 from the ring buffer 1932 in the shared memory portion of hardware 1910 advantageously provides test case A 1920A the capability to circumvent delays from accessing response traffic via system utilities. For example, and with reference to FIG. 11 and FIG. 19, in order to test connectivity between device BHR 1120 and bridge 1160, test case A 1920A, which resides in a container deployed in configurable test subsystem 1134 of BHR 1120, could generate a "ping" as outbound traffic 1937 directed to bridge 1160 of the MoCA. Bridge 1160 would respond with a ping response, the response traffic 1939. Depending on the network conditions, e.g., the "health" of the connections and devices 1110 and 1150 as well as the target device, bridge 1160, and the originating device BHR 1120, the ping response will exhibit a certain latency to traverse the network path. Depending upon the latency measured, and the latency expected, a health of the devices along the path can be inferred. If no ping response is received (timeout), a failure of one or more of these devices is implicated. While the foregoing has used a relatively simple "ping" test scenario to illustrate the operation of just one example configuration of select devices, other more complex test scenarios are readily incorporated into containerized entities for deployment onto network devices of various types and configurations without significantly departing from the techniques described herein.

Ring Buffer

FIG. 20 illustrates a block diagram of a ring buffer for an example MoCA device. Ring buffer 1932 of FIG. 20 is depicted schematically and includes a read pointer 2002 that points to the next object to be read from the buffer, in this case a first test result 2012. A second test result 2014 would be read subsequently in FIFO order when, after reading the first test result, the ring buffer read pointer 2002 is incremented to point to the second test result 2014.
Ring buffer 1932 also includes a write pointer 2004 that points to a first empty position 2016, into which a next test result can be written. After writing the next test result into position 2016, write pointer 2004 would also be "incremented" "counterclockwise" to point to the next available location for storing results. While the foregoing has used a relatively simple "ring" schematic block diagram scenario to illustrate the operation of just one example configuration of a shared memory buffer, many more complex shared memory configurations not illustrated by FIG. 20 for clarity, such as linked lists, doubly linked lists, queues, and so forth, can be used to realize the operation of a circular queue implemented in shared memory space and are readily incorporated into containerized entities for deployment onto network devices of various types and configurations without significantly departing from the techniques described herein.

Test Process Latency Reduction

FIG. 21A is a flowchart showing a method 2100A of removing latency in performance tests conducted on an in-home network by deployable containerized tests. Flowchart 2100A can be implemented at least partially with a computer or other data processing system, e.g., by one or more processors configured to receive or retrieve information, process the information, store results, and transmit the results. Other implementations may perform the actions in different orders and/or with different, fewer or additional actions than those illustrated in FIG. 21A. Multiple actions can be combined in some implementations. For convenience, this flowchart is described with reference to the system that carries out the method. The system is not necessarily part of the method. At action 2102, a test is deployed onto an active element of the home network via a network interface. The test can be containerized into an open platform distributed system container that shares with one or more other containers a single instance of an operating system.
At action 2104, the test is executed to generate traffic and to send the traffic to one or more other active elements in the home network that provide responses to the active element. At action 2106, the responses are retrieved from a ring buffer managed by the single-instance operating system by directly accessing a mapped memory portion implementing the ring buffer. Retrieving the responses from the ring buffer by directly accessing the mapped memory portion implementing the ring buffer enables the responses to be retrieved with less latency than retrieving responses using system calls. This method and other implementations of the technology disclosed can include one or more of the following features and/or features described in connection with additional methods disclosed. Other implementations can include a non-transitory computer readable storage medium storing instructions executable by a processor to perform any of the methods described above. Yet another implementation can include a system including memory and one or more processors operable to execute instructions, stored in the memory, to perform any of the methods described above. FIG. 22A illustrates one example 2200A of messaging that includes a sequence of messages between a central office 1101, a test case 1920A, ring buffer 1932 and a device under test 2210. The Central Office 1101 can invoke test cases, such as test case 1920A deployed on one or more devices comprising a MoCA network. Test case 1920A includes a containerized test set that can be deployed on a device as part of the manufacturing process or downloaded by the Central Office after customer install. Central Office activation of the one or more test cases can occur under human operator control, as part of an expert-system-initiated test operation, in response to a scheduler process activating the test case, or various combinations thereof.
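The pointer discipline of the ring buffer of FIG. 20, which the test case reads from directly at action 2106, can be sketched in miniature. This is an illustrative single-process model only; in the subsystem described, the buffer lives in a shared-memory mapping managed by the single-instance operating system, and the class and method names below are assumptions for illustration.

```python
class RingBuffer:
    """Minimal FIFO ring buffer mirroring FIG. 20: a read pointer on the
    next result to be read and a write pointer on the first empty slot."""

    def __init__(self, capacity):
        self._slots = [None] * capacity
        self._read = 0    # next position to read (cf. read pointer 2002)
        self._write = 0   # first empty position (cf. write pointer 2004)
        self._count = 0

    def write(self, result):
        if self._count == len(self._slots):
            raise BufferError("ring buffer full")
        self._slots[self._write] = result
        # "Increment" the pointer around the ring, wrapping at capacity.
        self._write = (self._write + 1) % len(self._slots)
        self._count += 1

    def read(self):
        if self._count == 0:
            return None   # nothing pending
        result = self._slots[self._read]
        self._read = (self._read + 1) % len(self._slots)
        self._count -= 1
        return result
```

Because reads and writes only move indices within a preallocated region, the same layout works when the slots are a memory-mapped shared region, which is what lets the test case bypass system-call overhead when retrieving responses.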
The test cases, such as test case 1920A, can operate to perform self-testing of the device upon which the test case resides, on other devices on the MoCA network, or combinations thereof. The test case 1920A can relay results from conducting the testing to the central office 1101. In one example, the central office 1101 identifies a problem with a particular customer installation of a MoCA from a customer contact or data diagnostics and invokes test set 2201. Problem reports can include example video displays, data, diagnostics or combinations thereof. Some implementations can include an expert system deployed at the central office 1101 to examine video and data problems experienced on the home network and autonomously invoke an appropriate test set. Alternatively, a system operator can invoke discrete tests when the expert system is not used. The test case 1920A accepts input from the central office 1101 commanding the container to begin the selected test set 1920A. In one application, test set 1920A begins analysis of a MoCA network by conducting tests in the test set that generate test traffic to a target device (under test) 2210 in the MoCA network, then retrieves and analyzes responses. As testing proceeds, the test set 1920A proceeds based on measurements from, or tests performed on, various devices, such as target device 2210, in the network under test. While many test sets are configured to run autonomously without human intervention, in some instances the test case 1920A can prompt a human operator to enter data or to perform additional actions as needed. The test case 1920A can include a test set that addresses a selected problem or a selected test regime. Test case 1920A can generate and send traffic to one or more active elements (e.g., 2210) in the home network to conduct testing 2203. Results from the test(s) are received 2205 from the target device under test 2210 at a ring buffer 1932 implemented in a mapped memory portion of an operating system hosting the containerized test 1920A.
The test case 1920A includes logic to receive 2207 responses from the ring buffer 1932, thereby reducing latency compared to returning the results via the operating system of the device hosting test case 1920A. As results from the test case 1920A isolate network segments and identify issues within the home network under test, the test case 1920A can report 2209 those results to the central office 1101. Unilaterally and automatically, some test case 1920A implementations can, via use of rules, choose to run additional test sets to gather further information and/or further isolate the problem. For example, a Wi-Fi test can be performed by the test case 1920A to solve a user-identified problem with a Wi-Fi device that is not working. The Wi-Fi test may, in a first step, find a fault with the Wi-Fi device in a first test set, or may find no fault with the Wi-Fi device and direct a further test setup to isolate a fault in a cable connecting the Wi-Fi device to the home network.

Deploying Test Sets Using Provisioning

A test set is deployed onto a device being configured as an active element of the home network via a provisioning server. Test sets can be deployed by a manufacturer (or other supply chain actor). The test sets can be provided by a network provider (e.g., Verizon, Comcast or the like) to the manufacturer (or other supply chain actor) of devices under an order for the devices to be configured. Orders can be sent to the manufacturer (or other supply chain actor) via a network or the like and provisioned onto devices being configured by a provisioning server. FIG. 21B is a flowchart showing a method 2100B depicting operation of a provisioning process for provisioning containerized test sets onto devices being configured as active elements for use in a MoCA network.
Flowchart 2100B can be implemented at least partially with a computer or other data processing system, e.g., by one or more processors configured to receive or retrieve information, process the information, store results, and transmit the results. Other implementations may perform the actions in different orders and/or with different, fewer or additional actions than those illustrated in FIG. 21B. Multiple actions can be combined in some implementations. For convenience, this flowchart is described with reference to the system that carries out the method. The system is not necessarily part of the method. At action 2112, a provisioning server receives via a network interface an order for devices including at least one test set to be provisioned onto the devices being configured for a home network. At action 2114, the provisioning server establishes a connection to one device selected from the devices being configured. At action 2116, the provisioning server provisions the test set onto the one device prior to installation of the one device into a home network. Such provisioning can enable the device, after installation to the home network, to execute the test set responsive to a triggering input made via a network interface substantially independently of further intervention. This method and other implementations of the technology disclosed can include one or more of the following features and/or features described in connection with additional methods disclosed. Other implementations can include a non-transitory computer readable storage medium storing instructions executable by a processor to perform any of the methods described above. Yet another implementation can include a system including memory and one or more processors operable to execute instructions, stored in the memory, to perform any of the methods described above. FIG. 22B illustrates one example 2200B of messaging during provisioning of test sets onto devices as part of manufacturing.
Message map 2200B includes a sequence of messages between a central office 1101, a provisioning server 2220, and a target device 2210 onto which test sets can be provisioned. The test sets can be provided by a network provider (e.g., Verizon, Comcast or the like) to the manufacturer (or other supply chain actor) of devices under an order for the devices to be configured. Orders can be received 2211 from the central office 1101 of the network provider by one or more provisioning servers 2220 of the manufacturer (or other supply chain actor) via a network or the like and provisioned onto devices 2230 being configured by a provisioning server. A provisioning server 2220 establishes 2214 a connection with the target device 2230 being provisioned. Once the connection is established, the provisioning server 2220 provisions 2215 the test set appropriate to the device and the order onto the target device 2230. When provisioning server 2220 detects that provisioning is complete, either through tracking state or receiving an acknowledgement 2217 from the target device 2230, the provisioning server 2220 can repeat the process as needed to provision additional devices until the order is completed. Once the order is completed, the provisioning server 2220 can report 2219 that the order is complete to the central office 1101.

Test Process Examples

FIG. 21C is a flowchart showing a method 2100C depicting an example operation of a testing process implementable using containerized test sets provisioned onto devices configured as active elements for use in a MoCA network. Flowchart 2100C can be implemented at least partially with a computer or other data processing system, e.g., by one or more processors configured to receive or retrieve information, process the information, store results, and transmit the results. Other implementations may perform the actions in different orders and/or with different, fewer or additional actions than those illustrated in FIG. 21C. Multiple actions can be combined in some implementations.
For convenience, this flowchart is described with reference to the system that carries out the method. The system is not necessarily part of the method. At action 2122, a passive testing program is conducted. Passive testing of the network can be conducted using a containerized test case that monitors regular network traffic (e.g., does not generate specific test traffic). Passive monitoring can be initiated at installation, in response to a triggering event, such as a scheduler process activation, or occurrence of specific detected events affecting the network (e.g., installation of new equipment), or combinations thereof. At action 2124, based on a result of the passive testing program, a determination is made whether to conduct active testing. At action 2126, if it is determined to conduct active testing, an appropriate active testing program is instantiated and run to obtain a second result. At action 2128, based on the second result, a determination can be made whether to conduct further active testing, return to passive testing, schedule an on-site service visit to conduct more invasive or extensive testing, ship replacement equipment to the site, or combinations thereof. Such flexible and layered testing architectures provide the device, after installation to the home network, with the ability to execute the test set(s) responsive to one or multiple triggering inputs, detection of event(s), or combinations thereof. This method and other implementations of the technology disclosed can include one or more of the following features and/or features described in connection with additional methods disclosed. Other implementations can include a non-transitory computer readable storage medium storing instructions executable by a processor to perform any of the methods described above. Yet another implementation can include a system including memory and one or more processors operable to execute instructions, stored in the memory, to perform any of the methods described above.
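The layered flow of method 2100C, in which passive monitoring escalates to active testing and the active result drives the next step, can be sketched as a decision function. The result categories and action names below are illustrative assumptions, not values from the specification.

```python
def next_action(passive_result, active_result=None):
    """Sketch of the FIG. 21C decision flow.

    With only a passive result (action 2124), decide whether to escalate
    to active testing; with an active (second) result (action 2128),
    decide whether to escalate further or de-escalate.
    """
    if active_result is None:
        # Action 2124: decide from passive observation alone.
        return "active_test" if passive_result == "anomaly" else "passive_monitor"
    # Action 2128: decide from the second (active) result.
    if active_result == "fault_isolated":
        return "schedule_service_visit"   # or ship replacement equipment
    if active_result == "inconclusive":
        return "further_active_test"
    return "passive_monitor"              # no fault found: de-escalate
```

A containerized test case could evaluate such a function after each test set completes, matching the rule-driven escalation described for test case 1920A above.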
In an example implementation, a new network component is installed to the network. The network component can be an upgrade, a new installation, or a replacement of a prior failing component. A determination can be made to conduct a passive testing program for some period of time to verify the performance of the new component. For example, a component failure can be identified using an active testing program; one or more failing components can be replaced and a passive testing program initiated to verify that the service has successfully repaired the problem by replacing the unit.

Computer System

FIG. 23 is a block diagram of an example computer system 2300, implemented as a networked device, that can implement device hardware 1910 of FIG. 19, according to one implementation. The processor can be an ASIC or RISC processor. It can be an FPGA or other logic or gate array. It can include graphics processing unit (GPU) resources. Computer system 2310 typically includes at least one processor 2372 that communicates with a number of peripheral devices via bus subsystem 2350. These peripheral devices may include a storage subsystem 2326 including, for example, memory devices and a file storage subsystem, user interface input devices 2338, user interface output devices 2378, and a network interface subsystem 2376. The input and output devices allow user interaction with computer system 2310. Network interface subsystem 2376 provides an interface to outside networks, including an interface to corresponding interface devices in other computer systems. User interface input devices 2338 may include a keyboard; pointing devices such as a mouse, trackball, touchpad, or graphics tablet; a scanner; a touch screen incorporated into the display; audio input devices such as voice recognition systems and microphones; and other types of input devices.
In general, use of the term "input device" is intended to include all possible types of devices and ways to input information into computer system 2310. User interface output devices 2378 may include a display subsystem, a printer, a fax machine, or non-visual displays such as audio output devices. The display subsystem may include a cathode ray tube (CRT), a flat-panel device such as a liquid crystal display (LCD), a projection device, or some other mechanism for creating a visible image. The display subsystem may also provide a non-visual display such as audio output devices. In general, use of the term "output device" is intended to include all possible types of devices and ways to output information from computer system 2310 to the user or to another machine or computer system. Storage subsystem 2324 stores programming and data constructs that provide the functionality of some or all of the modules and methods described herein. These software modules are generally executed by processor 2372 alone or in combination with other processors. Memory 2322 used in the storage subsystem can include a number of memories, including a main random access memory (RAM) 2334 for storage of instructions and data during program execution and a read only memory (ROM) 2332 in which fixed instructions are stored. A file storage subsystem 2336 can provide persistent storage for program and data files, and may include a hard disk drive, a floppy disk drive along with associated removable media, a CD-ROM drive, an optical drive, or removable media cartridges. The modules implementing the functionality of certain implementations may be stored by file storage subsystem 2336 in the storage subsystem 2326, or in other machines accessible by the processor. Bus subsystem 2350 provides a mechanism for letting the various components and subsystems of computer system 2310 communicate with each other as intended.
Although bus subsystem 2350 is shown schematically as a single bus, alternative implementations of the bus subsystem may use multiple busses. Computer system 2310 can be of varying types, including a workstation, server, computing cluster, blade server, server farm, or any other data processing system or computing device. Due to the ever-changing nature of computers and networks, the description of computer system 2310 depicted in FIG. 23 is intended only as one example. Many other configurations of computer system 2310 are possible having more or fewer components than the computer system depicted in FIG. 23.

Particular Implementations

In one implementation, a method of removing latency in performance tests conducted on an in-home network by deployable containerized tests is described. The method can be used to diagnose faults in a Multimedia over Coax Alliance (MoCA) local area network (LAN) coupled to a wide area network (WAN) using devices already existing as part of the MoCA network, e.g., without the addition of special purpose test devices. In one implementation, existing device(s) on the MoCA network include a processor coupled to a tangible computer readable storage medium that stores computer instructions. The computer instructions, when executed, cause the processor to perform the method. The method includes deploying a test onto an active element of the home network via a network interface. Tests are deployed on the device as part of the manufacturing process or downloaded by a central office via a network connection after installation of the device to the MoCA. Tests can be containerized into an open platform distributed system container. The open platform system container shares a single instance of an operating system with one or more other containers. Executing the test generates traffic that is sent to one or more other active elements in the home network that provide responses to the active element.
The responses are retrieved from a ring buffer managed by the operating system of the device hosting the containerized test by directly accessing a mapped memory portion implementing the ring buffer. Retrieving the responses from the ring buffer by directly accessing the mapped memory portion implementing the ring buffer enables the responses to be retrieved with less latency than retrieving responses using system calls. In some implementations a container manager can provide an interface between the operating system and the open platform distributed system container executing the test. For example, the container manager can provide the test case with access to the ring buffer. In some implementations the ring buffer can be shared between the operating system and the open platform distributed system container executing the test. Some implementations also include introducing the test to the home network via a home network access point. Some implementations also include dispatching a service representative to conduct further testing when the test provides results indicating a need for on-site service. In some implementations the test can conduct a test of the home network for Wi-Fi interference. In some implementations the test can conduct a test of the home network for packet loss. In some implementations the test can conduct a test of the home network for loopback support. In some implementations the test can conduct a test of the home network for Two-Way Active Measurement Protocol (TWAMP). In some implementations the test can conduct a preemptive testing regimen on the home network. In some implementations, preemptive testing can include packet loss testing, speed tests, and others. In another implementation, a method of provisioning containerized test sets onto devices being configured as active elements for use in a MoCA network is described.
The method can be used to provision test sets that diagnose faults onto devices for use in a Multimedia over Coax Alliance (MoCA) local area network (LAN) coupled to a wide area network (WAN) as part of the MoCA network, e.g., without the addition of special purpose test devices. In one implementation, a provisioning server includes a processor coupled to a tangible computer readable storage medium that stores computer instructions. The computer instructions, when executed, cause the processor to perform the method. The method includes receiving via a network interface an order for devices including at least one test set to be provisioned onto the devices being configured for a Multimedia over Coax Alliance (MoCA) local area network (LAN). A connection is established to one device selected from the devices being configured according to the order. The test set is provisioned onto the one device prior to installation of the one device into a home network. Such provisioning can enable the device, after installation to the home network, to execute the test set responsive to a triggering input made via a network interface substantially independently of further intervention. In some implementations the provisioning is performed by a server located at a manufacturer's facility that configures device(s) for use on MoCA networks prior to installation. In some implementations the provisioning is performed by a server located at a third-party service provider facility that configures device(s) for use on MoCA networks prior to installation. In some implementations the order for devices is received from a network provider (e.g., Comcast, Verizon, T-Mobile). In a further implementation, a method of provisioning devices being configured as active elements for use in a MoCA network is described.
The method can be used to provision devices to receive containerized test sets that diagnose faults for use in a Multimedia over Coax Alliance (MoCA) local area network (LAN) coupled to a wide area network (WAN) as part of the MoCA network, e.g., without the addition of special purpose test devices. In one implementation, a provisioning server includes a processor coupled to a tangible computer readable storage medium that stores computer instructions. The computer instructions, when executed, cause the processor to perform the method. The method includes receiving via a network interface an order for devices including at least one test application manager to be provisioned onto the devices being configured for a Multimedia over Coax Alliance (MoCA) local area network (LAN). A connection is established to one device selected from the devices being configured according to the order. The test application manager is provisioned onto the one device prior to installation of the one device into a home network. Such provisioning can enable the device, after installation to the home network, to execute the test application manager to receive a containerized test case via a network interface responsive to a triggering input. Some implementations also can enable the device, after installation to the home network, to execute the test application manager to delete the containerized test case and instead install a second containerized test case received via the network interface responsive to a second triggering input. Some implementations also can enable the device, after installation to the home network, to execute the test application manager to detect a faulty containerized test case installation and delete the containerized test case at fault.
Some implementations also can enable the device, after installation to the home network, to execute the application manager to install, in place of the detected containerized test case at fault, a prior containerized test case, thereby recovering from the detected fault by reverting to a previous containerized test case. In some implementations the provisioning is performed by a server located at a manufacturer's facility that configures device(s) for use on MoCA networks prior to installation. In some implementations the provisioning is performed by a server located at a third-party service provider facility that configures device(s) for use on MoCA networks prior to installation. In some implementations the order for devices is received from a network provider (e.g., Comcast, Verizon, T-Mobile). In yet a further implementation, a method of conducting testing using containerized test sets installed on devices configured as active elements for use in a MoCA network is described. The method can be used to conduct testing that diagnoses faults in connections and/or devices used in a Multimedia over Coax Alliance (MoCA) local area network (LAN) coupled to a wide area network (WAN) as part of the MoCA network, e.g., without the addition of special purpose test devices. In one implementation, a passive testing program is conducted using a containerized test case that monitors regular network traffic (e.g., does not generate specific test traffic). Based on a result of the passive testing program, a determination is made whether to conduct active testing. If it is determined to conduct active testing, an appropriate active testing program is instantiated and run to obtain a second result.
Based on the second result, a determination can be made whether to conduct further active testing, or return to passive testing, or schedule an on-site service visit to conduct more invasive or extensive testing, or ship replacement equipment to the site, or combinations thereof. In some implementations the passive monitoring is initiated at installation time. In some implementations the passive monitoring is initiated in response to a triggering event. In some implementations the passive monitoring is triggered by a scheduler process activation. In some implementations the passive monitoring is triggered by occurrence of a specific detected event affecting the network. In some implementations the specific event includes installation of new equipment. In an implementation, test cases can be deployed, invoked, changed, deleted or otherwise managed by transactions conforming in whole or in part with a standardized protocol, such as the CPE WAN Management Protocol (CWMP) published by the Broadband Forum as technical specification TR-069 for remote management of end-user devices. TR-069 describes a protocol that addresses the growing number of different Internet access devices such as modems, routers, and gateways, as well as end-user devices which connect to the Internet, such as set-top boxes and VoIP phones. In one implementation, a method for troubleshooting a pixelated video image transmitted over a Multimedia over Coax Alliance (MoCA) LAN is described from the perspective of a probing device. The method includes automatically iterating over a plurality of MoCA devices discovered on the MoCA LAN and transmitting packets to each of the discovered devices. The packets require a response from each of the devices. Packets are transmitted to the devices concurrently such that first and second packets are transmitted to a first device and a third packet is transmitted to a second device in between transmission of the first and second packets.
The disclosed method includes detecting a number of lost packets that did not receive a required response from at least one packet-dropping device among the plurality of MoCA devices and reporting identities of one or more packet-dropping devices that have packet loss rates exceeding a preconfigured threshold. This method and other implementations of the technology disclosed can each optionally include one or more of the following features and/or features described in connection with additional methods disclosed. In the interest of conciseness, the combinations of features disclosed in this application are not individually enumerated and are not repeated with each base set of features. The reader will understand how features identified in this section can readily be combined with sets of base features identified as implementations. The probing device may join the MoCA LAN by establishing point-to-point communication channels with other devices on a MoCA network that includes a home router that couples the MoCA LAN in communication with a WAN. The probing device may discover IP devices on the MoCA LAN by sending probe packets to IP addresses within a configured range of addresses and receiving a response to each probe packet that includes an IP address and the MAC address of each device. The list of discovered IP devices may be filtered based on the MAC address of each device in the list so that only IP addresses of devices having a MAC address known to be on the MoCA LAN remain on the list. A discovered device may be a set-top box, a digital video recorder (DVR) set-top box, or a television. In an alternative implementation, the probe device replaces the home router in the network, assuming its role by receiving DHCP requests and responding to the DHCP requests by sending an available IP address in the network. 
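The discovery-and-filter step described above, keeping only IP addresses whose MAC address is known to be on the MoCA LAN, can be sketched as a pure function. This is a minimal illustration in Python; the argument names and data shapes are assumptions for the sketch, not part of the specification:

```python
def filter_discovered_devices(discovered, known_moca_macs):
    """Keep only IP addresses whose MAC address is known to be on the MoCA LAN.

    `discovered` maps IP address -> MAC address as collected from probe
    responses; `known_moca_macs` is the set of MAC addresses reported by the
    MoCA network itself. Both names are illustrative assumptions.
    """
    known = {mac.lower() for mac in known_moca_macs}
    return {ip: mac for ip, mac in discovered.items() if mac.lower() in known}
```

In a real probing device the `discovered` mapping would be built by sweeping a configured IP address range with probe packets, as the passage describes; here it is supplied directly so the filtering logic stands alone.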
In another implementation, a test controller device may connect to a separate probing device and cause the probing device to perform the automatically iterating, transmitting packets, and detecting lost packets actions. In addition, the test controller device may receive packet loss data from the probing device detecting the number of lost packets. The test controller may report identities of one or more packet-dropping devices. The test controller device may connect to the probing device over an Ethernet physical port, and packets may be transmitted over an Ethernet connection through a broadband home router (BHR) on the MoCA LAN. The test controller device may receive from the probing device addresses of MoCA devices on the MoCA network. In an implementation, at least 10,000 packets may be transmitted over the MoCA LAN to each of the plurality of discovered devices. A predetermined threshold for packet loss may be configured by a user before packets are transmitted. The packet loss may be determined as a proportion of the number of packets sent. The identity of and packet loss rate for each of the plurality of discovered devices may be reported. Other implementations may include a probing device that includes a processor, network interface, and storage device storing instructions for performing variations of the disclosed method. Another implementation is a test controller device that includes a processor, network interface, and storage device storing instructions for connecting to a probing device and causing the probing device to perform automatically iterating over discovered devices, transmitting packets, and detecting lost packets, receiving from the probing device packet loss data from detecting the number of lost packets, and reporting identities of one or more packet-dropping devices.
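The reporting step described above, where packet loss is computed as a proportion of packets sent and only devices exceeding the preconfigured threshold are reported, can be sketched as follows. The function and field names are illustrative assumptions:

```python
def report_packet_droppers(sent_counts, lost_counts, threshold):
    """Return {device: loss_rate} for devices whose loss rate (lost/sent)
    exceeds `threshold` (e.g. 0.01 for one percent). `sent_counts` and
    `lost_counts` map a device identity to packet counts; all names are
    illustrative assumptions, not the patent's API.
    """
    report = {}
    for device, sent in sent_counts.items():
        lost = lost_counts.get(device, 0)
        rate = lost / sent if sent else 0.0
        if rate > threshold:
            report[device] = rate
    return report
```

With 10,000 packets per device, as the passage suggests, a 1% threshold corresponds to more than 100 lost packets before a device is reported.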
Yet other implementations include a non-transitory computer readable storage medium storing instructions executable by a processor to perform any of the methods described above. In one implementation, a device is described that will often be operated by a cable network dispatched service person. This device can be used to diagnose faults in a multimedia over coax alliance (MoCA) local area network (LAN) coupled to a wide area network (WAN). The device includes test hardware that implements MoCA protocols coupled to at least one coaxial fitting and a processor coupled to the test hardware and to a tangible computer readable storage medium storing computer instructions. The computer instructions, when executed, cause the processor to perform an expert system method. Responsive to selection of a test sequence, the expert system method includes repeatedly causing display of multiple instructional images, invoking the test hardware to perform a test, automatically evaluating results returned by the test hardware, and either reporting a result or initiating an additional test. Often, the operator will select the test sequence. At least two test sequences, for diagnosing multi-room DVRs and for data speed access to the WAN, are described and illustrated above. On this device, the multiple instructional images depict how an operator manually couples the test hardware in connection with MoCA components of the LAN in a position that isolates a portion of the LAN for evaluation. The multiple instructional images can be animated. Invoking the test hardware includes selecting a test to perform from the test sequence, without intervening user selection of the test from the test sequence. In the examples above, multiple tests or test steps are included in a test sequence. The sequence of tests is designed to run without user selection of individual tests in the test sequence.
Running the test from the test sequence invokes the test hardware with parameters that control interaction with the MoCA components, for MoCA-related tests. Evaluating results returned by the test hardware includes using evaluation criteria of the test sequence, without user interpretation of the results returned, to determine whether to report a recommendation to replace or repair an identified component. Having the device evaluate results returned by the test hardware relieves the operator of the need to understand technical details of test protocols and of the need to understand acceptable and unacceptable test results. It also relieves the operator of the need to choose the next test to perform in a test sequence. Alternatively, the device can determine to repeat the cycle above, including the causing display of multiple instructional images and, for an additional physical location in the LAN, the invoking the test hardware to perform an additional test, and the automatically evaluating results returned by the additional test. In the course of the test sequence, the device can instruct an operator to move the device from one physical location in the LAN to another, isolating segments to evaluate. Both active and passive elements of the LAN can be evaluated. In one implementation, the device proceeds as determined and makes a report or repeats the cycle described above to perform an additional test in the test sequence. The report can be to the operator on a display. This device and other implementations of the technology disclosed can include one or more of the following features and/or features described in connection with additional implementations disclosed. In the interest of conciseness, the potential combinations of features disclosed in this application are not individually enumerated and are not repeated with each base set of features.
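The expert system cycle described above, displaying instructional images, invoking a test, evaluating the result against the sequence's own criteria, and either reporting or moving on, can be sketched as a driver loop. This is a minimal sketch; the step fields (`images`, `params`, `evaluate`) and callback names are illustrative assumptions:

```python
def run_test_sequence(sequence, show_instructions, invoke_hardware):
    """Drive an expert-system test sequence without operator test selection.

    `sequence` is a list of steps; each step carries instructional `images`,
    hardware `params`, and an `evaluate` callable mapping the raw result to
    either ('report', message) or ('next', None). All step fields are
    illustrative assumptions, not the patent's API.
    """
    for step in sequence:
        show_instructions(step["images"])          # how to couple the tester
        result = invoke_hardware(step["params"])   # run test, no user pick
        verdict, message = step["evaluate"](result)
        if verdict == "report":
            return message                         # e.g. replace/repair advice
    return "all tests passed"
```

Because each step bundles its own evaluation criteria, the operator never interprets raw results or chooses the next test, matching the division of labor the passage describes.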
The reader will understand how features identified in this section can readily be combined with the sets of these features identified for other implementations such as Wi-Fi testing or generally for testing a local area network. The test sequence operating on the device can include at least one test or step that tests connection of the LAN to the WAN. In this LAN-to-WAN test, the multiple instructional images depict how an operator manually couples the test hardware in connection with the WAN in a position that isolates the LAN from the WAN for evaluation. Invoking the test hardware involves the device automatically selecting a test to perform from the test sequence, without intervening user selection of the test from the test sequence, and invoking the test hardware with parameters that control interaction with the WAN. The evaluation of results returned by the test hardware involves using evaluation criteria of the test sequence, without user interpretation of the results returned, to determine whether to report an evaluation of the WAN link. The test hardware and/or computer instructions can further implement emulation of a variety of active devices on the MoCA LAN. Devices emulated include one or more of a basic home router, a set top box, and a digital video recorder. It also can include a MoCA to Ethernet adapter or a MoCA to Wi-Fi adapter. We refer to the test hardware and/or computer instructions because there is an ongoing migration from hardware- and firmware-implemented technologies to software-defined technologies running on hardware. For instance, software defined radios and software defined networks are now available. This term is not meant to include software per se, for US patent purposes. In another feature, the device can test a Wi-Fi link on the LAN. In this context, link means a wireless connection between two antennas. Typically, this link is broadcast on a particular channel using a particular SSID.
The user may be asked to select the SSID of the link to test. This feature can include test hardware that implements WiFi protocols and is coupled to an antenna. This feature includes computer instructions that implement at least one pair of WiFi steps in the test sequence that tests a WiFi link on the LAN, and that pair of WiFi steps optionally includes user selection of a particular WiFi link to test. A pair of steps tests the Wi-Fi link in both directions. For the pair of WiFi steps, the elements described above are applied to Wi-Fi instead of MoCA. The multiple instructional images depict how an operator manually couples the test hardware in connection with a wired component and a wireless component of the LAN in a position that isolates a portion of the LAN for evaluation. Invoking the test hardware includes selecting a test (or pair of tests) to perform from the test sequence, without intervening user selection of the test from the test sequence. Performing the test invokes the test hardware with parameters that control interaction with the wireless component. Evaluating results returned by the test hardware includes using evaluation criteria of the test sequence, without user interpretation of the results returned, to determine whether to report an evaluation of the WiFi link. A detailed evaluation of the Wi-Fi link may be reserved for a failed test. Evaluation may include recommended remediation steps. As above, the test hardware and/or computer instructions can further implement emulation of a variety of active devices on the MoCA LAN. Test hardware can provide RF capabilities and implement a monitor mode access to MoCA hardware for packet and signal statistics. The device can be configured to provide protocol-level access to MoCA commands for network surveys. Passive components of the MoCA network can include coaxial cable and splitters.
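The paired Wi-Fi steps that test a link in both directions can be sketched as a small helper that runs the same link test twice with the endpoints swapped. The direction labels, the throughput return value, and the pass criterion are all illustrative assumptions for the sketch:

```python
def run_wifi_step_pair(run_link_test, ssid):
    """Run the paired Wi-Fi steps over one link, once in each direction.

    `run_link_test(src, dst, ssid)` is assumed to return a throughput figure
    in Mbps; the 50 Mbps floor is an assumed pass criterion, not a value
    from the specification.
    """
    downstream = run_link_test("wired", "wireless", ssid)
    upstream = run_link_test("wireless", "wired", ssid)
    return {"downstream": downstream, "upstream": upstream,
            "pass": min(downstream, upstream) >= 50.0}
```

Testing both directions separately matters because Wi-Fi links are frequently asymmetric; a detailed evaluation and remediation advice could then be reserved for the failing direction, as the passage suggests.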
The test hardware can rapidly sequence multiple tests, more quickly than if an operator were selecting, starting, and evaluating the tests. The test device described can be embedded in another device, such as a general-purpose network analyzer. Tests performed using the device can include a packet loss test, an RF single channel test, a Wi-Fi quick test, and a speed test. The device can be configured to identify passive network components, for instance, by reflectometry. The technology disclosed also can be described as a method of diagnosing faults in a multimedia over coax alliance (MoCA) local area network (LAN) coupled to a wide area network (WAN). The method implementation generally follows the outline of actions carried out by the device described above. These actions necessarily are performed on digital hardware, such as test hardware and processors. In a method implementation running on a processor, responsive to selection of a test sequence, the method includes repeatedly causing display of multiple instructional images, invoking test hardware to perform a test, automatically evaluating results returned by the test hardware, and either reporting a result or initiating an additional test. The method's actions are generally as described above. The multiple instructional images depict how an operator manually couples the test hardware in connection with MoCA components of the LAN in a position that isolates a portion of the LAN for evaluation. Invoking the test hardware involves selecting a test to perform from the test sequence, without intervening user selection of the test from the test sequence, and invokes the test hardware with parameters that control interaction with the MoCA components.
Evaluating results returned by the test hardware uses evaluation criteria of the test sequence, without user interpretation of the results returned, to determine whether to report a recommendation to replace or repair an identified component, or to repeat the causing display of multiple instructional images and, for an additional physical location in the LAN, the invoking the test hardware to perform an additional test, and the automatically evaluating results returned by the additional test. Some implementations include proceeding as determined with the report or the repeat. Features of the method can include most or all of the features of the device implementation described above and additional features described throughout this disclosure. For the sake of brevity, we forgo repeating those features and instead incorporate them by reference. The technology disclosed also can be practiced as a tangible computer readable media impressed with computer instructions that, when executed on a processor, cause the processor and the test hardware to carry out the method described above or, when combined with appropriate hardware and a processor, produce the device described above. Again, features implemented using the tangible computer readable media can include most or all of the features of the device implementation described above and additional features described throughout this disclosure. In jurisdictions outside the United States, the technology disclosed also can be practiced as software per se or as computer instructions carried by electromagnetic transmission. For claim purposes, tangible computer readable media is not intended to extend to software per se or computer instructions carried by electromagnetic transmission without a tangible media that persists the computer instructions well beyond signal transit time.
The technology disclosed also can implement a device to diagnose faults in at least one WiFi segment of a local area network (LAN) coupled to a wide area network (WAN). In this implementation, the test hardware implements WiFi and LAN protocols, and is coupled to a wired connector and to an antenna. This device further includes a processor coupled to the test hardware and to a tangible computer readable storage medium storing computer instructions that, when executed, cause the processor to perform an expert system method. Responsive to selection of a test sequence, the expert system method includes repeatedly causing display of multiple instructional images, invoking the test hardware to perform a test, automatically evaluating results returned by the test hardware, and either reporting a result or initiating an additional test. The actions applied above to testing MoCA segments of a LAN are adapted in this implementation to testing Wi-Fi segments. In this implementation, at least one pair of WiFi steps in the test sequence tests a WiFi link on the LAN, and that pair of WiFi steps optionally includes user selection of a particular WiFi link to test. The multiple instructional images depict how an operator manually couples the test hardware in connection with a wired component and, for at least a pair of tests, also to a wireless component of the LAN in a position that isolates a portion of the LAN for evaluation. Invoking the test hardware selects a test to perform from the test sequence, without intervening user selection of the test from the test sequence, and invokes the test hardware with parameters that control interaction with the LAN components.
Evaluating results returned by the test hardware uses evaluation criteria of the test sequence, without user interpretation of the results returned, to determine whether to report a recommendation to replace or repair an identified component, or to repeat the causing display of multiple instructional images and, for an additional physical location in the LAN, the invoking the test hardware to perform an additional test, and the automatically evaluating results returned by the additional test. Some implementations further include proceeding as determined with the report or the repeat. Features of the Wi-Fi testing implementation can include most or all of the features of the MoCA testing implementation described above and additional features described throughout this disclosure. For the sake of brevity, we forgo repeating those features and instead incorporate them by reference. The Wi-Fi testing implementation also can be practiced as a method or as a tangible computer readable media impressed with computer instructions. The computer instructions, when executed, either implement any of the methods described or, when combined with suitable hardware and a processor, produce any of the devices described. Once again, the features described above can be combined with the Wi-Fi testing implementation. For the sake of brevity, the features are incorporated by reference, instead of being repeated. In one implementation, a method of removing latency in performance tests conducted on an in-home network by deployable containerized tests is described. The method can be used to diagnose faults in a multimedia over coax alliance (MoCA) local area network (LAN) coupled to a wide area network (WAN) using devices already existing as part of the MoCA network, e.g., without the addition of special purpose test devices. In one implementation, existing device(s) on the MoCA network include a processor coupled to a tangible computer readable storage medium that stores computer instructions.
The computer instructions, when executed, cause the processor to perform the method. The method includes deploying a test onto an active element of the home network via a network interface. Tests are deployed on the device as part of the manufacturing process or downloaded by a central office via a network connection after installation of the device to the MoCA network. Tests can be containerized into an open platform distributed system container. The open platform system container shares a single instance of an operating system with one or more other containers. Executing the test generates traffic that is sent to one or more other active elements in the home network that provide responses to the active element. The responses are retrieved from a ring buffer managed by the operating system of the device hosting the containerized test by directly accessing a mapped memory portion implementing the ring buffer. The retrieving of the responses from the ring buffer by directly accessing the mapped memory portion implementing the ring buffer enables the responses to be retrieved with less latency than retrieving responses using system calls. In some implementations a container manager can provide an interface between the operating system and the open platform distributed system container executing the test. For example, the container manager can provide access to the ring buffer to the test case. In some implementations the ring buffer can be shared between the operating system and the open platform distributed system container executing the test. Some implementations also include introducing the test to the home network via a home network access point. Some implementations also include dispatching a service representative to conduct further testing when the test provides results indicating a need for on-site service. In some implementations the test can conduct a test of the home network for WiFi interference.
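The low-latency retrieval described above, reading responses straight out of a memory-mapped ring buffer rather than making a system call per response, can be sketched as follows. The record layout (a length/sequence header followed by the payload) is an illustrative assumption, and for simplicity records are assumed not to wrap past the end of the region:

```python
import struct

RECORD = struct.Struct("!HH")  # assumed header: (payload length, sequence)

def drain_ring_buffer(mapped, head, tail, size):
    """Read response records directly out of a mapped ring-buffer region.

    `mapped` is any buffer-protocol object (an mmap.mmap on the device, a
    bytearray in tests); `head` and `tail` are byte offsets maintained by
    consumer and producer. Returns the drained (sequence, payload) records
    and the advanced head offset. Layout details are assumptions.
    """
    responses = []
    while head != tail:
        length, seq = RECORD.unpack_from(mapped, head)
        start = head + RECORD.size
        payload = bytes(mapped[start:start + length])
        responses.append((seq, payload))
        head = (start + length) % size
    return responses, head
```

Because the consumer only reads shared memory and advances an offset, draining many responses costs no per-record kernel transition, which is the latency saving the passage attributes to direct mapped-memory access.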
In some implementations the test can conduct a test of the home network for packet loss. In some implementations the test can conduct a test of the home network for loopback support. In some implementations the test can conduct a test of the home network for Two-Way Active Measurement Protocol (TWAMP). In some implementations the test can conduct a preemptive testing regimen on the home network. In some implementations, preemptive testing can include packet loss testing, speed tests, and others. In another implementation, a method of provisioning containerized test sets onto devices being configured as active elements for use in a MoCA network is described. The method can be used to provision test sets that diagnose faults onto devices for use in a multimedia over coax alliance (MoCA) local area network (LAN) coupled to a wide area network (WAN) as part of the MoCA network, e.g., without the addition of special purpose test devices. In one implementation, a provisioning server includes a processor coupled to a tangible computer readable storage medium that stores computer instructions. The computer instructions, when executed, cause the processor to perform the method. The method includes receiving via a network interface an order for devices including at least one test set to be provisioned onto the devices being configured for a multimedia over coax alliance (MoCA) local area network (LAN). A connection is established to one device selected from the devices being configured according to the order. The test set is provisioned onto the one device prior to installation of the one device into a home network. Such provisioning can enable the device, after installation to the home network, to execute the test set responsive to a triggering input made via a network interface substantially independently of further intervention.
In some implementations the provisioning is performed by a server located at a manufacturer's facility that configures device(s) for use on MoCA networks prior to installation. In some implementations the provisioning is performed by a server located at a third party service provider facility that configures device(s) for use on MoCA networks prior to installation. In some implementations the order for devices is received from a network provider (e.g., Comcast, Verizon, T-Mobile). In a further implementation, a method of provisioning devices being configured as active elements for use in a MoCA network is described. The method can be used to provision devices to receive containerized test sets that diagnose faults for use in a multimedia over coax alliance (MoCA) local area network (LAN) coupled to a wide area network (WAN) as part of the MoCA network, e.g., without the addition of special purpose test devices. In one implementation, a provisioning server includes a processor coupled to a tangible computer readable storage medium that stores computer instructions. The computer instructions, when executed, cause the processor to perform the method. The method includes receiving via a network interface an order for devices including at least one test application manager to be provisioned onto the devices being configured for a multimedia over coax alliance (MoCA) local area network (LAN). A connection is established to one device selected from the devices being configured according to the order. The test application manager is provisioned onto the one device prior to installation of the one device into a home network. Such provisioning can enable the device, after installation to the home network, to execute the test application manager to receive a containerized test case via a network interface responsive to a triggering input.
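The provisioning flow described above, receiving an order, connecting to each device selected from the order, and installing the ordered test set before the device ships to a home network, can be sketched as a simple driver. The order fields and the `connect`/`install` callbacks stand in for the provisioning server's device I/O and are illustrative assumptions:

```python
def provision_order(order, connect, install):
    """Provision the ordered test set onto each device prior to installation.

    `order` is assumed to carry a device list and a test-set identifier;
    `connect` and `install` abstract the server-to-device connection and
    transfer. Returns the device ids provisioned, in order.
    """
    provisioned = []
    for device_id in order["devices"]:
        session = connect(device_id)          # connect to one selected device
        install(session, order["test_set"])   # provision before home install
        provisioned.append(device_id)
    return provisioned
```

The same shape serves whether the server sits at a manufacturer's facility or at a third party service provider, since only the `connect` callback would differ.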
Some implementations also can enable the device, after installation to the home network, to execute the test application manager to delete the containerized test case and install instead a second containerized test case received via the network interface responsive to a second triggering input. Some implementations also can enable the device, after installation to the home network, to execute the test application manager to detect a faulty containerized test case installation; and delete the containerized test case at fault. Some implementations also can enable the device, after installation to the home network to execute the application manager to install instead of the detected containerized test case at fault, a prior containerized test case thereby recovering from the detected containerized test case fault by reverting to a previous containerized test case. In some implementations the provisioning is performed by a server located at a manufacturer's facility that configures device(s) for use on MoCA networks prior to installation. In some implementations the provisioning is performed by a server located at a third party service provider facility that configures device(s) for use on MoCA networks prior to installation. In some implementations the order for devices is received from a network provider (e.g., Comcast, Verizon, T-Mobile). In a yet further implementation, a method of conducting testing using containerized test sets installed to devices configured as active elements for use in a MoCA network is described. The method can be used to conduct testing that diagnoses faults in connections and/or devices used in a multimedia over coax alliance (MoCA) local area network (LAN) coupled to a wide area network (WAN) as part of the MoCA network, e.g., without the addition of special purpose test devices. 
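The test application manager behavior described above, installing a containerized test case, detecting a faulty installation, deleting the test case at fault, and reverting to the prior test case, can be sketched as a small state holder. The class and method names are illustrative assumptions, not the patent's API:

```python
class TestApplicationManager:
    """Minimal sketch of the on-device test application manager: tracks the
    active containerized test case plus the prior one kept for rollback."""

    def __init__(self):
        self.current = None    # active containerized test case
        self.previous = None   # prior test case retained for recovery

    def install(self, test_case):
        """Install a new test case, retaining the current one for rollback."""
        self.previous, self.current = self.current, test_case

    def handle_faulty_install(self):
        """Delete the test case at fault and revert to the previous one."""
        faulty = self.current
        self.current, self.previous = self.previous, None
        return faulty
```

In the described system the fault would be detected by the manager itself and the replacement test case received over the network interface; here only the delete-and-revert bookkeeping is shown.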
In one implementation, a passive testing program is conducted using a containerized test case that monitors regular network traffic (e.g., does not generate specific test traffic). Based on a result of the passive testing program, a determination is made whether to conduct active testing. If it is determined to conduct active testing, an appropriate active testing program is instantiated and run to obtain a second result. Based on the second result, a determination can be made whether to conduct further active testing, or return to passive testing, or schedule an onsite service visit to conduct more invasive or extensive testing, or ship replacement equipment to the site or combinations thereof. In some implementations the passive monitoring is initiated at installation time. In some implementations the passive monitoring is initiated in response to a triggering event. In some implementations the passive monitoring is triggered by a scheduler process activation. In some implementations the passive monitoring is triggered by occurrence of a specific detected event affecting the network. In some implementations the specific event includes installation of new equipment. In an implementation, test cases can be deployed, invoked, changed, deleted or otherwise managed by transactions conforming all or in part with a standardized protocol, such as CPE WAN Management Protocol (CWMP) published by the Broadband Forum as technical specification TR-069 for remote management of end-user devices. The TR-069 describes a protocol that addresses the growing number of different Internet access devices such as modems, routers, gateways, as well as end-user devices which connect to the Internet, such as set-top boxes and VoIP-phones. In one implementation, containerized tests are configured and deployed according to a protocol standard. A standard setting body called Cable Labs specifies an example protocol for use with Linux and Docker in some implementations. 
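The passive-then-active escalation described above can be sketched as a decision function: stay in passive monitoring, start active testing, return to passive, ship replacement equipment, or schedule an onsite visit. The thresholds and result fields are illustrative assumptions, not values from the specification:

```python
def next_action(passive_result, active_result=None):
    """Decide the next testing step from passive and optional active results.

    `passive_result` and `active_result` are assumed to carry a `loss_rate`
    observed on monitored traffic; `fault_isolated` marks an active test
    that pinned the fault to replaceable equipment.
    """
    if active_result is None:
        # Passive stage: escalate only when monitored traffic looks bad.
        return "run_active" if passive_result["loss_rate"] > 0.01 else "stay_passive"
    if active_result["fault_isolated"]:
        return "ship_replacement"
    if active_result["loss_rate"] > 0.05:
        return "schedule_onsite_visit"
    return "return_to_passive"
```

A scheduler process or a triggering event such as new-equipment installation, as the passage notes, would decide when the passive stage runs at all; this function only encodes the escalation choice once results exist.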
In one implementation, a “Virtual Home Network” is provided that includes moving router functionality into a computing “cloud.” Docker implementations can include an API framework to create containers. Linux allows access by a container to hardware for testing. In some implementations Kubernetes, an environment provided by Google that executes on Docker, or another equivalent environment, can be employed to manage a cluster of Linux containers. Some implementations also include a development environment. Some implementations are compatible with the Apple Mac. In an implementation, a method of removing latency in performance tests conducted on a home network by deployable containerized tests is described. The method can include deploying a test onto an active element of the home network via a network interface, the test containerized into an open platform distributed system container that shares with one or more other containers a single instance of an operating system; executing the test to generate traffic and sending the traffic to one or more other active elements in the home network that provide responses to the active element; and retrieving the responses from a first-in, first-out (FIFO) ordered storage managed by the operating system by directly accessing a mapped memory portion implementing the first-in, first-out (FIFO) ordered storage, whereby the retrieving the responses from the first-in, first-out (FIFO) ordered storage by directly accessing the mapped memory portion implementing the first-in, first-out (FIFO) ordered storage enables the responses to be retrieved with less latency than retrieving responses using system calls. In an implementation a container manager provides an interface between the operating system and the open platform distributed system container executing the test. In an implementation the container manager provides access to the first-in, first-out (FIFO) ordered storage to the test.
In an implementation the first-in, first-out (FIFO) ordered storage is shared between the operating system and the open platform distributed system container executing the test. In an implementation the method can include introducing the test to the home network via a home network access point. In an implementation the method can include dispatching a service representative to conduct further testing when the test provides results indicating a need for on-site service. In an implementation the test conducts testing of the home network for WiFi interference. In an implementation the test conducts testing of the home network for packet loss. In an implementation the test conducts testing of the home network for loopback support. In an implementation the test conducts testing of the home network for Two-Way Active Measurement Protocol (TWAMP). Similar system and non-transitory computer-readable storage medium implementations are also provided. | 135,910 |
11863421 | DETAILED DESCRIPTION Some general terminology descriptions may be helpful and are included herein for convenience and are intended to be interpreted in the broadest possible interpretation. Elements that are not imperatively defined in the description should have the meaning as would be understood by the person skilled in the art. User Device 102—a user device can be any suitable user computing device including, but not limited to, a smartphone, a tablet computing device, a personal computing device, a laptop computing device, a gaming device, a vehicle infotainment device, a smart appliance (e.g., smart refrigerator or smart television), a cloud server, a mainframe, a notebook, a desktop, a workstation, a mobile device, or any other electronic device used for connecting to Primary VPN Server 104. A VPN application installed and executed within the user device initiates and establishes the encrypted VPN connection to a VPN server. Primary VPN Server (PVPNS) 104—a computing device attached to a computer network that accepts VPN users' requests for establishing an encrypted connection, or tunnel, and is the endpoint of such encrypted connections from multiple VPN users. Service Gateway 106—a computing device and a constituent of Primary VPN Server 104. It accepts User Device 102 requests for establishing an encrypted connection, or tunnel, and is the endpoint of such encrypted connections from multiple User Devices 102. As is standard with VPN tunneling protocol endpoints, on establishing a VPN connection, or tunnel, with User Device 102, Service Gateway 106 becomes the default gateway for User Device 102. Routing Processor 108—a logical unit and a constituent of Primary VPN Server 104 that is configured to perform complex operations of identifying the optimal secondary VPN servers from a plurality of VPN Servers. Routing Processor 108 is capable of querying API 116 for routing strategies and available servers.
Packet Processor110—processing unit within Routing Processor108that processes or aggregates user traffic for further analysis. Traffic Analyzer112—processing unit within Routing Processor108that analyzes user traffic and matches it with existing routing strategies. Route Controller114—processing unit within Routing Processor108that sets and implements routing strategies suggested by Traffic Analyzer112or transmitted from Comm Listener115. Comm Listener115—processing unit within Routing Processor108that is preset to receive routing preferences from User Device102. API116—VPN service provider infrastructure component providing a collection of service endpoints exposing the functionality necessary for VPN customers to authenticate against VPN Service Provider, as well as to obtain the prerequisites necessary for establishing the encrypted connection to a VPN server. API also acts as a centralized hub for routing strategies and server information that is accessible to Primary and Secondary VPN Servers as well as VPN Application present in User Device102. Secondary VPN Server One (SVPNS One)122—a first instance of a computing device attached to a computer network that relays VPN users' requests for establishing encrypted connection, or tunnel, and is the last endpoint of such encrypted connection that connects to a target. Secondary VPN Server Two (SVPNS Two)124—a second instance of a computing device attached to a computer network that relays VPN users' requests for establishing encrypted connection, or tunnel, and is the last endpoint of such encrypted connection that connects to a target. Secondary VPN Server Three (SVPNS Three)126—a third instance of a computing device attached to a computer network that relays VPN users' requests for establishing encrypted connection, or tunnel, and is the last endpoint of such encrypted connection that connects to a target. 
Target One130—a first instance of a server serving any kind of content accessible over multiple protocols over the Internet. Most often a device placed within a datacenter network of high reliability and capability. However, it can constitute any endpoint on the network, for example, another consumer device, router or other network connected device. Target Two132—a second instance of a server serving any kind of content accessible over multiple protocols over the Internet. Most often a device placed within a datacenter network of high reliability and capability. However, it can constitute any endpoint on the network, for example, another consumer device, router or other network connected device. Target Three134—a third instance of a server serving any kind of content accessible over multiple protocols over the Internet. Most often a device placed within a datacenter network of high reliability and capability. However, it can constitute any endpoint on the network, for example, another consumer device, router or other network connected device. Network120—a digital telecommunications network that allows network-attached nodes to communicate as well as share and consume resources. Examples of a network are local-area networks (LANs), wide-area networks (WANs), campus-area networks (CANs), metropolitan-area networks (MANs), home-area networks (HANs), Intranet, Extranet, Internetwork, Internet. First, second, third traffic—a simple denomination of a sequence of data packets in time. The traffic in a VPN session is not divided into sequential parts. However, one can distinguish traffic that comes before or after a different traffic. These denominations simply indicate that some traffic has reached an endpoint or was sent by an endpoint before or after some other traffic. Configuration strategy—a set of parameters that indicate a particular behaviour pattern of User Device102that suggests optimal routing through an exit VPN server. 
For example, a configuration strategy can indicate that User Device102accesses pages from a certain location and can identify an exit VPN server close to it. Configuration strategies can be centralized across entry VPN servers so that they all use the same and updated configuration strategies. A single configuration strategy can consist of the times and intervals at which specific content (websites or services in a specific language or location) are accessed or types of services (HTTP traffic, gaming, gambling, streaming services and others) requested to indicate an optimal exit VPN server. The location of targets can be identified through their IP address. Types of activities can be identified through ports that are being used for network communication. Other criteria can be added or subtracted from the current embodiments without changing the method overall. The present embodiments now will be described more fully hereinafter with reference to the accompanying drawings, in which some, but not all embodiments of the invention are shown. Indeed, these inventions may be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will satisfy applicable legal requirements. Like numbers refer to like elements throughout. FIG.1shows an exemplary overall architecture of the current embodiment that comprises User Device102, which can be any computing or a networking device (e.g., a personal computer, mobile phone, a tablet computer, router, smart home device) having access (e.g. Internet connection) to a particular network, a Primary VPN Server104, API116, a Secondary VPN Server One122, and Target One130. Different embodiments can have additional components, required for additional actions, for example, there might be two secondary VPN servers employed in the cases in which a connection is split over two exit servers. 
The split traffic can be directed at two different target servers, therefore the targets would be represented by two or more instances of target servers. Likewise, more than one user device, and often more than 100, 1,000, or 10,000 user devices, can be connected to a primary server simultaneously. There are internal components contained in all elements but the description of them has been foregone for clarity since they are not relevant to current embodiments. However, it must be noted that at least some software and hardware components are prerequisites for a VPN connection to function, for example a VPN client application must be present in User Device102to access API116and Primary VPN Server104. All the mentioned components of the embodiments have access to the Network120and are able to interact with each other through it. Here, Network120can be any digital telecommunication network that permits several nodes to share and access resources, e.g. local-area network (LAN), wide-area networks (WANs), campus-area networks (CANs), metropolitan-area networks (MANs), home-area networks (HANs), Intranet, Extranet, Internetwork, Internet. Primary VPN Server104contains the following sub-elements: Service Gateway106and Routing Processor108. Routing Processor108also contains the following exemplary sub-elements: Packet Processor110, Traffic Analyzer112, Route Controller114, Comm Listener115. Service Gateway106has direct access to Network120and is able to communicate with external components, like User Device102, Secondary VPN Server One122, and API116. In addition, Traffic Analyzer112within Routing Processor108also has access to Network120and in the current embodiments is able to communicate with API116directly. In some embodiments, the communication can be relayed over Service Gateway106instead of being direct but this does not change the overall functioning of the embodiments. 
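For illustration only, the sub-element composition described above can be sketched as follows; the class names are hypothetical and simply mirror the labeled components ofFIG.1, with no behavior implied beyond containment:

```python
from dataclasses import dataclass, field

# Illustrative stand-ins for the Routing Processor (108) sub-elements.
@dataclass
class PacketProcessor: ...
@dataclass
class TrafficAnalyzer: ...
@dataclass
class RouteController: ...
@dataclass
class CommListener: ...

@dataclass
class RoutingProcessor:
    # Routing Processor (108) bundles Packet Processor (110),
    # Traffic Analyzer (112), Route Controller (114), Comm Listener (115).
    packet_processor: PacketProcessor = field(default_factory=PacketProcessor)
    traffic_analyzer: TrafficAnalyzer = field(default_factory=TrafficAnalyzer)
    route_controller: RouteController = field(default_factory=RouteController)
    comm_listener: CommListener = field(default_factory=CommListener)

@dataclass
class ServiceGateway:
    # Service Gateway (106) is the externally reachable endpoint.
    public_ip: str

@dataclass
class PrimaryVPNServer:
    # Primary VPN Server (104) bundles the gateway and the routing processor.
    gateway: ServiceGateway
    routing: RoutingProcessor = field(default_factory=RoutingProcessor)

pvpns = PrimaryVPNServer(gateway=ServiceGateway(public_ip="198.51.100.7"))
```

As noted below, several of these components can equally be consolidated into a single logical unit without changing the flow of information.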
While the elements shown inFIG.1implement the exemplary embodiment, some elements in other embodiments can have different titles or can be combined into a single element instead of two separate elements (for example, Traffic Analyzer112and Route Controller114can be combined into a single hardware, software infrastructure to form a single logical unit or can also be combined into a single logical unit on a cloud. Likewise, Service Gateway106and Routing Processor108can also be combined into a single software infrastructure that is run on single or shared hardware or can be combined to a single unit on a cloud). However, the functionality of elements and the flow of information between the elements is not impacted generally by such combinations or consolidations. Therefore,FIG.1as shown should be interpreted as exemplary only, and not restrictive or exclusionary of other features, including features discussed in other areas of this disclosure herein. The infrastructure shown here is represented as to reveal the logical structure and technological action flow of the embodiments. InFIG.1all occurrences of communication between the various components of the current embodiment take place through Network120. The instances of communication between User Device102and the Primary VPN Server104can happen through an encrypted tunneling protocol provided by Service Gateway106which can include the process of authentication and authorization to enable data exchange between User Device102and Primary VPN Server104. 
Likewise, the instances of communication between the Primary VPN Server104and Secondary VPN Server One122can be over proxy protocol or any of the following protocols: IP in IP (Protocol 4): IP in IPv4/IPv6; SIT/IPv6 (Protocol 41): IPv6 in IPv4/IPv6; GRE (Protocol 47): Generic Routing Encapsulation; OpenVPN (UDP port 1194); SSTP (TCP port 443): Secure Socket Tunneling Protocol; IPSec (Protocol 50 and 51): Internet Protocol Security; L2TP (Protocol 115): Layer 2 Tunneling Protocol; VXLAN (UDP port 4789): Virtual Extensible Local Area Network; WireGuard; or Quic. A current state of the art information flow would generally consist of User Device102receiving authentication and server information from API116over Network120and initiating a connection to Primary VPN Server104, Service Gateway106providing point to point contact with User Device102and establishing a secure connection with it. VPN connectivity is established by an encrypted tunneling protocol. All requests from User Device102are sent through this encrypted tunnel where the request packets are encoded and secure. This encoding of packets is known as encapsulation and enables data packets to appear as though they are of a public nature to a public network but in fact they are actually private data packets, allowing them to pass unnoticed. During the establishment of this point to point tunneling connection, Service Gateway106assigns a private IP address to User Device102that is entirely different from the original IP address. All requests originating from User Device102have this new private IP address assigned to it. The private IP address is exclusive to the individual user device within the VPN server but it is not globally unique—other users on other servers might have the same private IP address. However, since the private IP address is only used for communications between a particular Primary VPN Server104and User Device102, there is no ambiguity. 
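The per-server private IP assignment described above can be sketched as follows. This is a minimal illustration under assumed values (the pool `10.8.0.0/24` and the class shape are hypothetical, not taken from the embodiments): each gateway draws from its own pool, so the same private address can recur on different servers without ambiguity.

```python
import ipaddress

class ServiceGatewaySketch:
    """Illustrative sketch of private IP assignment during tunnel setup."""

    def __init__(self, pool="10.8.0.0/24"):          # assumed private pool
        self._free = iter(ipaddress.ip_network(pool).hosts())
        self.gateway_ip = str(next(self._free))       # first host for the gateway itself
        self._assigned = {}                           # device public IP -> private IP

    def establish_tunnel(self, device_public_ip):
        # A reconnecting device keeps its previously assigned private address.
        if device_public_ip not in self._assigned:
            self._assigned[device_public_ip] = str(next(self._free))
        # The response carries both private endpoints of the tunnel.
        return {"device_private_ip": self._assigned[device_public_ip],
                "gateway_private_ip": self.gateway_ip}

gw_a, gw_b = ServiceGatewaySketch(), ServiceGatewaySketch()
t1 = gw_a.establish_tunnel("203.0.113.5")
t2 = gw_b.establish_tunnel("198.51.100.9")
# Two independent servers may hand out the same private address without conflict,
# because each address is only meaningful inside its own server's tunnels.
assert t1["device_private_ip"] == t2["device_private_ip"]
```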
Once User Device102establishes a secure connection with Primary VPN Server104, all requests originating from User Device102are sent through the Primary VPN Server104on behalf of User Device102. The following detailed description of figures will show how the embodiments disclosed herein improve upon the state of the art functionality. The main focus of the improvements is to find an optimal routing track for the whole or a part of user traffic. FIG.2Ashows an exemplary flow diagram of user defined multiserver routing. In step201, User Device102initiates a process to authenticate with the API116. User Device102can be authenticated using a variety of methods consisting of providing a username and a password or other identifying information. In step203, API116confirms the identity of User Device102and authenticates it. API116also provides a list of VPN servers available for connection at that time. One of those servers available to User Device102is PVPNS104. In step205, User Device102initiates a VPN connection with PVPNS104and more specifically by addressing VPN Gateway106. This action on User Device102can happen through a software application installed on User Device102that has a dashboard or other user interface. However, User Device102can engage in a VPN connection with the VPN Gateway106by configuring their system network settings more directly. In step207, once VPN Gateway106receives the request to connect, it creates a VPN tunnel between itself and User Device102. The tunnel is established by VPN Gateway106receiving User Device102requests from its public IP address, then returning a response with a newly assigned private IP address and a private IP address of the VPN Gateway106through which User Device102can communicate with VPN Gateway106in a private way. All the subsequent communication is done through the tunnel created by User Device102and VPN Gateway106. 
The connection is private (secure) because symmetric cryptography is used to encrypt the data transmitted. Usually, the keys for this symmetric encryption are generated uniquely for each connection and are based on a shared secret that was negotiated at the start of the session. The server and client negotiate the details of which encryption algorithm and cryptographic keys to use before the first byte of data is transmitted. The negotiation of a shared secret is both secure (the negotiated secret is unavailable to eavesdroppers and cannot be obtained, even by an attacker who places themselves in the middle of the connection) and reliable (no attacker can modify the communications during the negotiation without being detected). The identity of the communicating parties can be authenticated using public-key cryptography. This authentication can be optional but is generally required for at least one of the parties (typically the server). The connection is reliable because each message transmitted includes a message integrity check using a message authentication code to prevent undetected loss or alteration of the data during transmission. In step209, after the VPN tunnel is established and secured, User Device102is able to make requests and access the target servers privately without its public IP being revealed. User's individual traffic consists of sending requests and receiving responses as well as other protocol-specific information exchange, like TCP handshakes, cryptographic key exchange, and others. Since the primary VPN server does not analyze the user's traffic to the detail of individual requests, it deals with traffic more generally, i.e. the flow of information from and to its user. However, for the sake of clarity we will use requests and responses as discrete entities to illustrate the flow of actions. However, actions by the primary VPN server are not limited to these individual datagrams. These actions are also not limited by a particular protocol. 
For example, the traffic can consist of TCP or UDP packets and datagrams. User Device102makes a request to access a domain Target One130(for example, a web page, a video streaming service, a gaming or gambling platform) and sends it to VPN Gateway106. In step211, VPN Gateway106receives the data request from User Device102and selects a default routing to an exit VPN server that will ultimately make a request to Target One130. The choice by default is made upon some preset rules or rules updated manually at PVPNS104. The choice can also be made by considering several factors, for example server proximity to Target One130or User Device102but this does not change the overall functionality of the embodiments. In this case, VPN Gateway106can choose SVPNS One122as the exit VPN server and forward the request from User Device102to SVPNS One122. The internal communication between entities of the VPN service provider infrastructure can be exchanged in a variety of ways and protocols, including but not limited to IP in IP (Protocol 4): IP in IPv4/IPv6; SIT/IPv6 (Protocol 41): IPv6 in IPv4/IPv6; GRE (Protocol 47): Generic Routing Encapsulation; OpenVPN (UDP port 1194); SSTP (TCP port 443): Secure Socket Tunneling Protocol; IPSec (Protocol 50 and 51): Internet Protocol Security; L2TP (Protocol 115): Layer 2 Tunneling Protocol; VXLAN (UDP port 4789): Virtual Extensible Local Area Network; WireGuard; Quic. However, it is desirable that the communication takes place over an encrypted protocol, like a VPN tunnel, so that the data exchanged is encrypted and cannot be intercepted. In step213, SVPNS One122makes a request to Target One130for the data specified in the original request from User Device102. The type of data can be an HTTP response, a streaming service or any other media or data entity. The applications of the current embodiments are not limited by a particular protocol or the type of target that is being accessed. 
In step215, Target One130returns the data specified in the original request from User Device102to SVPNS One122. In step217, SVPNS One122forwards the data received from Target One130to VPN Gateway106. In step219, VPN Gateway106returns the request data to User Device102over the existing VPN tunnel. The data exchange described above in steps209and219forms a complete cycle and can be reiterated multiple times before any changes are made to the existing communication. The rates of the cycle are flexible and can include, for example, one, one hundred, one thousand or ten thousand instances. During the cycle, all elements remain stable and User Device102sends all requests to PVPNS104and then they are routed via SVPNS One122. FIG.2Bshows the continuation of an exemplary flow diagram of user defined multiserver routing. In step221, User Device102sends a preference to VPN Gateway106that a different exit server should be used. That preference might be a freely formed request from User Device102but it can also be made available to User Device102via a dashboard or as a list of potential exit VPN servers. The routing preference request is sent from a user with a destination address that belongs to the VPN service provider infrastructure, and more specifically to Comm Listener115which is a listening device or software that has a specific port open to specifically receive such requests from users. Thus, a routing preference request is different from other requests from User Device102in that it is addressed not to an external target server but to an element of PVPNS104. The routing preference request is sent via the original VPN tunnel and thus is firstly addressed to VPN Gateway106. In step223, VPN Gateway106receives the routing preference request from User Device102and forwards it to Comm Listener115. 
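The distinction drawn in steps221-223, where a routing preference request is told apart from ordinary traffic by its destination, can be sketched as follows. The internal address and port of the Comm Listener are assumed values for illustration only:

```python
# Assumed internal endpoint of the Comm Listener (115); any packet whose
# destination matches it is a routing preference request rather than
# traffic bound for an external target.
COMM_LISTENER_ADDR = ("10.8.0.1", 7505)   # hypothetical address and port

def classify(packet):
    """Return how the gateway should handle an incoming tunnel packet."""
    dst = (packet["dst_ip"], packet["dst_port"])
    if dst == COMM_LISTENER_ADDR:
        return "routing_preference"        # hand off to Comm Listener (step 223)
    return "forward_to_exit"               # ordinary request, route to exit server

assert classify({"dst_ip": "10.8.0.1", "dst_port": 7505}) == "routing_preference"
assert classify({"dst_ip": "93.184.216.34", "dst_port": 443}) == "forward_to_exit"
```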
In step225, Comm Listener115receives the routing preference request from User Device102. It then forwards this request to Route Controller114. In step227, Route Controller114receives the routing preference request from User Device102and formulates an appropriate routing rule that corresponds to User Device102preference. In one embodiment, User Device102made a preference to reach Target One130not via SVPNS One122as defined in the default routing and executed in the previous data exchange cycles in steps209-219, but via SVPNS Two124. In other words, User Device102expressed a preference to reroute its traffic via SVPNS Two124. Route Controller114identifies this preference and makes a routing rule to that effect that it forwards to VPN Gateway106. In one example, the embodiments begin the routing with the first SYN packet received or, in the case of UDP, a stateful NAT can be employed to route traffic. In step229, after the routing preference has been implemented at VPN Gateway106, User Device102makes a request to access a domain Target One130(the same target as in the previous data exchange cycles) and sends it to VPN Gateway106. In step231, VPN Gateway106receives the data request from User Device102and selects the preferred routing to an exit VPN server that was selected by User Device102. In this data exchange cycle, VPN Gateway106cannot choose SVPNS One122as the exit VPN server and must route the request to SVPNS Two124. VPN Gateway106can employ the underlying functionalities of its operating system to execute intermediate steps necessary to complete the routing. However, this does not alter the overall flow of actions within the embodiments. 
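The effect of steps227-231 can be sketched as a per-device routing rule that overrides the default exit server; the rule shape and server labels below are illustrative assumptions, not the embodiments' actual data structures:

```python
class RouteControllerSketch:
    """Illustrative routing rule table: preference overrides the default exit."""

    def __init__(self, default_exit="SVPNS One"):
        self.default_exit = default_exit
        self.rules = {}                        # device private IP -> exit server

    def apply_preference(self, device_ip, exit_server):
        # Step 227: formulate a rule corresponding to the user's preference.
        self.rules[device_ip] = exit_server

    def select_exit(self, device_ip):
        # Step 231: the gateway consults the rule, falling back to the default.
        return self.rules.get(device_ip, self.default_exit)

rc = RouteControllerSketch()
assert rc.select_exit("10.8.0.2") == "SVPNS One"    # default routing, steps 209-219
rc.apply_preference("10.8.0.2", "SVPNS Two")        # preference from step 221
assert rc.select_exit("10.8.0.2") == "SVPNS Two"    # rerouted cycle, steps 229-239
```

Once the rule is in place, the gateway can no longer choose the default exit for that device's traffic, which matches the behavior described for step231.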
The internal communication between entities of the VPN service provider infrastructure can be exchanged in a variety of ways and protocols, including but not limited to IP in IP (Protocol 4): IP in IPv4/IPv6; SIT/IPv6 (Protocol 41): IPv6 in IPv4/IPv6; GRE (Protocol 47): Generic Routing Encapsulation; OpenVPN (UDP port 1194); SSTP (TCP port 443): Secure Socket Tunneling Protocol; IPSec (Protocol 50 and 51): Internet Protocol Security; L2TP (Protocol 115): Layer 2 Tunneling Protocol; VXLAN (UDP port 4789): Virtual Extensible Local Area Network; WireGuard; Quic. However, it is desirable that the communication takes place over an encrypted protocol, like a VPN tunnel, so that the data exchanged is encrypted and cannot be intercepted. In step233, SVPNS Two124makes a request to Target One130for the data specified in the request from User Device102. The type of data can be an HTTP response, a streaming service or any other media or data entity. The applications of the current embodiments are not limited by a particular protocol or the type of target that is being accessed. In step235, Target One130returns the data specified in the original request from User Device102to SVPNS Two124. In step237, SVPNS Two124forwards the data received from Target One130to VPN Gateway106. In step239, VPN Gateway106returns the request data to User Device102over the existing VPN tunnel. The data exchange described in steps229and239forms a complete cycle and can be reiterated multiple times before any changes are made to the existing communication. The rates of the cycle are flexible and can include, for example, one, one hundred, one thousand or ten thousand instances. However, during this cycle, User Device102sends all requests to PVPNS104and they are routed via SVPNS Two124. This type of connection constitutes a multiserver VPN connection. FIG.3shows an exemplary flow diagram of routing strategy update and synchronization. 
The method described here shows an exemplary instance of updating and synchronizing routing information among various VPN servers in the VPN service provider infrastructure. The nature of the said infrastructure is such that any physical or software defined (virtual) server can be a primary (entry) VPN server in some communication instances but a secondary (exit) VPN server in others. These roles can also be taken up at the same time, so the same server can be an entry server for some communications and an exit server for others simultaneously. Optimal routing strategies are also necessary for Traffic Analyzer112to correctly match analyzed traffic with an existing strategy. Existing strategies need to be updated and synchronized among servers so that they represent optimal routing paths. Routing paths can change when new servers are added, others are removed or suspended or experience performance or load issues. In step301, Traffic Analyzer112makes an API call to API116requesting routing strategy data to achieve the needed synchronization with the rest of the infrastructure. In step303, API116responds and provides the needed data to Traffic Analyzer112. This exemplary flow of information can happen at preset intervals of time or when triggered by an event or a series of events described in the functioning rules of Traffic Analyzer112. The particular time when strategies are updated does not change the overall functionality of the embodiments. It can happen before, after or during any of the other action flows described in the remaining diagrams. This method of updating and synchronizing primary VPN servers with the rest of the VPN service provider infrastructure is only meant as an example of implementation. Without changing the overall functionality of the embodiments, the same goal could be achieved by API116initiating the communication with Traffic Analyzer112and pushing the routing strategies once they have been updated. 
Such a flow would skip step301and perform step303directly. Also, routing strategies could be customized manually by a system administrator by inputting or changing routing strategies present in the primary VPN servers. There can be other methods for updating routing information without involving API116. Other communication means could also be implemented without changing the overall functionality of the embodiments, for example Traffic Analyzer112could communicate with API116through VPN Gateway106or other similar component on the VPN server. Such implementations do not change the overall structure or method of the embodiments. Examples of routing strategies can include but are not limited to finding the optimal route to the target server, optimizing network latency for gaming or gambling services, distributing the load of the service provider infrastructure, and others. In some instances, traffic can be rerouted to the closest secondary (exit) VPN server to the target thus reducing latency. FIG.4Ashows an exemplary flow diagram of server defined multiserver routing. In step401, User Device102initiates a process to authenticate with the API116. User Device102can be authenticated using a variety of methods consisting of providing a username and a password or other identifying information. In step403, API116confirms the identity of User Device102and authenticates it. API116also provides a list of VPN servers available for connection at that time. One of those servers available to User Device102is PVPNS104. In step405, User Device102initiates a VPN connection with PVPNS104and more specifically by addressing VPN Gateway106. This action on User Device102can happen through a software application installed on User Device102that has a dashboard or other user interface. However, User Device102can engage in a VPN connection with the VPN Gateway106by configuring their system network settings more directly. 
In step407, once VPN Gateway106receives the request to connect, it creates a VPN tunnel between itself and User Device102. The tunnel is established by VPN Gateway106receiving User Device102requests from its public IP address, then returning a response with a newly assigned private IP address and a private IP address of the VPN Gateway106through which User Device102can communicate with VPN Gateway106in a private way. All the subsequent communication is done through the tunnel created by User Device102and VPN Gateway106. The connection is private (secure) because symmetric cryptography is used to encrypt the data transmitted. Usually, the keys for this symmetric encryption are generated uniquely for each connection and are based on a shared secret that was negotiated at the start of the session. The server and client negotiate the details of which encryption algorithm and cryptographic keys to use before the first byte of data is transmitted. The negotiation of a shared secret is both secure (the negotiated secret is unavailable to eavesdroppers and cannot be obtained, even by an attacker who places themselves in the middle of the connection) and reliable (no attacker can modify the communications during the negotiation without being detected). The identity of the communicating parties can be authenticated using public-key cryptography. This authentication can be optional but is generally required for at least one of the parties (typically the server). The connection is reliable because each message transmitted includes a message integrity check using a message authentication code to prevent undetected loss or alteration of the data during transmission. In step409, after the VPN tunnel is established and secured, User Device102is able to make requests and access the target servers privately without its public IP being revealed. 
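The message integrity check described above can be illustrated with a standard HMAC construction. This is a generic sketch, not the embodiments' specific scheme; key derivation and the cipher itself are out of scope, and the session key here simply stands in for the negotiated shared secret:

```python
import hashlib
import hmac
import os

session_key = os.urandom(32)   # stands in for the negotiated shared secret

def seal(payload: bytes):
    """Attach a message authentication code to an outgoing tunnel message."""
    tag = hmac.new(session_key, payload, hashlib.sha256).digest()
    return payload, tag

def verify(payload: bytes, tag: bytes) -> bool:
    """Detect loss or alteration: recompute the MAC and compare in constant time."""
    expected = hmac.new(session_key, payload, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

msg, tag = seal(b"GET / HTTP/1.1")
assert verify(msg, tag)                  # unmodified message passes
assert not verify(b"GET /tampered", tag) # altered message is detected
```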
The user's individual traffic consists of sending requests and receiving responses as well as other protocol-specific information exchange, like TCP handshakes, cryptographic key exchange, and others. Since the primary VPN server does not analyze the user's traffic to the detail of individual requests, it deals with traffic more generally, i.e. the flow of information from and to its user. However, for the sake of clarity we will use requests and responses as discrete entities to illustrate the flow of actions. However, actions by the primary VPN server are not limited to these individual datagrams. These actions are also not limited by a particular protocol. For example, the traffic can consist of TCP or UDP packets and datagrams. User Device102makes a request to access a domain Target One130(for example, a web page, a video streaming service, a gaming or gambling platform) and sends it to VPN Gateway106. In step411, VPN Gateway106receives the data request from User Device102and selects a default routing to an exit VPN server that will ultimately make a request to Target One130. The choice by default is made upon some preset rules or rules updated manually at PVPNS104. The choice can also be made by considering several factors, for example server proximity to Target One130or User Device102but this does not change the overall functionality of the embodiments. In this case, VPN Gateway106can choose SVPNS One122as the exit VPN server and forward the request from User Device102to SVPNS One122. 
The internal communication between entities of the VPN service provider infrastructure can be exchanged in a variety of ways and protocols, including but not limited to IP in IP (Protocol 4): IP in IPv4/IPv6; SIT/IPv6 (Protocol 41): IPv6 in IPv4/IPv6; GRE (Protocol 47): Generic Routing Encapsulation; OpenVPN (UDP port 1194); SSTP (TCP port 443): Secure Socket Tunneling Protocol; IPSec (Protocol 50 and 51): Internet Protocol Security; L2TP (Protocol 115): Layer 2 Tunneling Protocol; VXLAN (UDP port 4789): Virtual Extensible Local Area Network; WireGuard; Quic. However, it is desirable that the communication takes place over an encrypted protocol, like a VPN tunnel, so that the data exchanged is encrypted and cannot be intercepted. In step413, SVPNS One122makes a request to Target One130for the data specified in the original request from User Device102. The type of data can be an HTTP response, a streaming service or any other media or data entity. The applications of the current embodiments are not limited by a particular protocol or the type of target that is being accessed. In step415, Target One130returns the data specified in the original request from User Device102to SVPNS One122. In step417, SVPNS One122forwards the data received from Target One130to VPN Gateway106. In step419, VPN Gateway106returns the request data to User Device102over the existing VPN tunnel. The data exchange described above in steps409and419forms a complete cycle and can be reiterated multiple times before any changes are made to the existing communication. The rates of the cycle are flexible and can include, for example, one, one hundred, one thousand or ten thousand instances. During the cycle, all elements remain stable and User Device102sends all requests to PVPNS104and then they are routed via SVPNS One122. FIG.4Bshows the continuation of the exemplary flow diagram of server defined multiserver routing. 
In step421, VPN Gateway106forwards unencrypted data packets to Packet Processor110. These data packets can be forwarded individually, as a constant stream or they can be aggregated and only forwarded when a certain number of packets is reached or at preset intervals. The forwarding and the further analysis of the packets and the aggregated traffic can take place asynchronously with the previous data exchange cycle. This means that the transfer of data for analysis can be constant or happen at the same time or independently from the data exchange cycle that forwards and executes requests by User Device102. The differentiation of data exchange cycles into asynchronous processes ensures that the requests of User Device102can be executed without waiting for any other processes to finish and thus no delays occur. Likewise, analysis can happen at the same time that the requests are executed. In step423, Packet Processor110processes the data packets. Processing of data packets refers to extracting certain information from data packets. The extracted information can be indicative of locations or countries of origin and target, network connection type, as well as dynamic parameters such as timestamps, session duration, timestamps of idleness, a session's total traffic, response time, latency; such data can also include aggregated dynamic parameters over any period of time (average speed, average data packet size, average response time, average latency, most/least visited targets, error rate, variations in which median and percentile groups are used instead of average values, and others) in any combination and with any weights associated with the parameters. In most applications of the current embodiments, the goal of processing data packets is to arrive at an aggregated data model that is suitable for matching with existing routing strategies. Packet Processor110can save aggregation conclusions about the data packets as metadata. 
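The aggregation performed in step423 can be sketched as a reduction of raw packets to summary metadata; the field names below are illustrative assumptions rather than the embodiments' actual packet schema:

```python
from statistics import mean

def aggregate(packets):
    """Reduce raw packets to metadata suitable for strategy matching (step 423)."""
    return {
        "total_bytes": sum(p["size"] for p in packets),
        "avg_packet_size": mean(p["size"] for p in packets),
        "targets": {p["dst_ip"] for p in packets},     # locations via IP addresses
        "ports": {p["dst_port"] for p in packets},     # service types via ports
    }

packets = [
    {"size": 1200, "dst_ip": "93.184.216.34", "dst_port": 443},
    {"size": 400,  "dst_ip": "93.184.216.34", "dst_port": 443},
]
meta = aggregate(packets)
assert meta["total_bytes"] == 1600 and meta["ports"] == {443}
```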
In step425, Packet Processor110forwards the processed data packets or metadata regarding them to Traffic Analyzer112. Traffic Analyzer112uses the received data (whether in processed data packet form or metadata form) to match it with existing routing strategies that will determine the secondary (exit) VPN server. Traffic Analyzer112should have a set of routing strategies performed in a flow of information exemplified inFIG.3, steps301and303. The processing load is distributed between Packet Processor110and Traffic Analyzer112. In some embodiments, the needed data features can be extracted at one entity or the other without changing the overall functionality of the embodiments. For example, Packet Processor110can aggregate data without extracting features and Traffic Analyzer112can extract features or Packet Processor110can already have relevant features processed before forwarding it to Traffic Analyzer112. Traffic Analyzer112matches traffic with strategies based on an algorithm that does not change the overall functionality of the embodiments. Analysis can take as parameters protocols used for the connections within the traffic information, target IP addresses, their ranges and locations, and target ports can be indicative of particular types of services accessed. Any type of algorithm could be implemented that matches particular traffic with the optimal routing strategy. Examples of algorithmic operations include but are not limited to grouping data in categories, forming series of data (ordered, partially ordered or unordered), aggregating data, extracting aggregated results, performing statistical analysis, running machine learning and deep learning algorithms, forming predictive models, and other processing functions. Traffic Analyzer112can run multiple related mechanisms that determine the outcome together. Steps421-427form a complete cycle of information transfer in the case in which a strategy match is not found for the relevant traffic. 
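One simple way to realize the matching step of Traffic Analyzer112 is a rule table of predicates evaluated against the aggregated metadata. The strategy and threshold below are hypothetical examples; as noted above, any matching algorithm could take their place:

```python
def match_strategy(metadata, strategies):
    """Return the first routing strategy whose predicate matches the
    aggregated metadata, or None when no strategy matches (in which
    case the default routing remains valid)."""
    for strategy in strategies:
        if strategy["predicate"](metadata):
            return strategy
    return None

# Hypothetical strategy: route streaming-like traffic (large average
# packets) via SVPNS Two; anything else keeps the default routing.
strategies = [
    {"name": "streaming", "exit_server": "SVPNS Two",
     "predicate": lambda m: m["avg_packet_size"] > 1000},
]

match = match_strategy({"avg_packet_size": 1400}, strategies)
no_match = match_strategy({"avg_packet_size": 200}, strategies)
```

A `None` result corresponds to the no-match cycle described above: no routing change is made and packets continue to be analyzed for future potential matches.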
In other words, if no match is found by Traffic Analyzer112for processed data packets, then no decision to change routing is made and the default routing remains valid. In such a case, however, packets will continue to be analyzed for future potential matches and steps421-427can be reiterated. If a match is found, then the following actions are performed. In step429, Traffic Analyzer112communicates the matching strategy to Route Controller114. The information communicated from Traffic Analyzer112to Route Controller114can be indicative of the secondary (exit) VPN server that should be used for optimal routing of this particular traffic. In the current embodiment, the optimal secondary (exit) VPN server is SVPNS Two124. In step431, Route Controller114receives the routing strategy from Traffic Analyzer112and formulates an appropriate routing rule that corresponds to the routing strategy. In one embodiment, the strategy indicates to reach Target One130not via SVPNS One122as defined in the default routing and executed in the previous data exchange cycles in steps409-419, but via SVPNS Two124. In other words, Traffic Analyzer112found a strategy that matched the analyzed traffic that shows that it is optimal to reroute the traffic via SVPNS Two124. Route Controller114identifies this strategy and makes a routing rule to that effect that it forwards to VPN Gateway106. In one example, the embodiments begin the routing with the first SYN packet received or, in the case of UDP, a stateful NAT can be employed to route traffic. In step433, after the routing strategy has been implemented at VPN Gateway106, User Device102makes a request to access a domain Target One130(the same target as in the previous data exchange cycles) and sends it to VPN Gateway106. In step435, VPN Gateway106receives the data request from User Device102and selects the preferred routing to an exit VPN server that was indicated in a routing strategy. 
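Route Controller114's translation of a matched strategy into a routing rule, and the gateway's subsequent per-flow lookup, might look like the following sketch. The class and rule format are assumptions for illustration; the document above does not specify the actual rule representation:

```python
class RouteController:
    """Holds the routing rules currently in force at the gateway.
    A rule maps a (user, target) flow to a secondary (exit) VPN server."""

    def __init__(self, default_exit):
        self.default_exit = default_exit
        self.rules = {}

    def install(self, user, target, exit_server):
        # Formulate a rule corresponding to the matched strategy.
        self.rules[(user, target)] = exit_server

    def select_exit(self, user, target):
        # The VPN Gateway consults the rules; flows without a rule
        # fall back to the default routing.
        return self.rules.get((user, target), self.default_exit)

controller = RouteController(default_exit="SVPNS One")
controller.install("User Device 102", "Target One", "SVPNS Two")
```

After the rule is installed, requests for Target One are routed via SVPNS Two while all other traffic keeps the default exit server, matching the rerouting described in steps 429-435.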
In this data exchange cycle, VPN Gateway106does not choose SVPNS One122as the exit VPN server and routes the request to SVPNS Two124. It is desirable that the communication takes place over an encrypted protocol, like a VPN tunnel, so that the data exchanged is encrypted and cannot be intercepted. FIG.4Cshows the continuation of the exemplary flow diagram of server defined multiserver routing. In step437, SVPNS Two124makes a request to Target One130for the data specified in the request from User Device102. The type of data can be an HTTP response, a streaming service or any other media or data entity. The applications of the current embodiments are not limited by a particular protocol or the type of target that is being accessed. In step439, Target One130returns the data specified in the original request from User Device102to SVPNS Two124. In step441, SVPNS Two124forwards the data received from Target One130to VPN Gateway106. In step443, VPN Gateway106returns the request data to User Device102over the existing VPN tunnel. The data exchange described in steps433and443forms a complete cycle and can be reiterated multiple times before any changes are made to the existing communication. The rates of the cycle are flexible and can include, for example, one, one hundred, one thousand or ten thousand instances. However, during this cycle, User Device102sends all requests to PVPNS104and they are routed via SVPNS Two124. This type of connection constitutes a multiserver VPN connection. FIG.5Ashows an exemplary flow diagram of server defined multiserver routing with split traffic. In step501, User Device102initiates a process to authenticate with the API116. User Device102can be authenticated using a variety of methods consisting of providing a username and a password or other identifying information. In step503, API116confirms the identity of User Device102and authenticates it. API116also provides a list of VPN servers available for connection at that time. 
One of those servers available to User Device102is PVPNS104. In step505, User Device102initiates a VPN connection with PVPNS104and more specifically by addressing VPN Gateway106. This action on User Device102can happen through a software application installed on User Device102that has a dashboard or other user interface. However, User Device102can engage in a VPN connection with the VPN Gateway106by configuring their system network settings more directly. In step507, once VPN Gateway106receives the request to connect, it creates a VPN tunnel between itself and User Device102. The tunnel is established by VPN Gateway106receiving User Device102requests from its public IP address, then returning a response with a newly assigned private IP address and a private IP address of the VPN Gateway106through which User Device102can communicate with VPN Gateway106in a private way. All the subsequent communication is done through the tunnel created by User Device102and VPN Gateway106. The identity of the communicating parties can be authenticated using public-key cryptography. This authentication can be optional but is generally required for at least one of the parties (typically the server). The connection is reliable because each message transmitted includes a message integrity check using a message authentication code to prevent undetected loss or alteration of the data during transmission. In step509, after the VPN tunnel is established and secured, User Device102is able to make requests and access the target servers privately without its public IP being revealed. User's individual traffic consists of sending requests and receiving responses as well as other protocol-specific information exchange, like TCP handshakes, cryptographic key exchange, and others. Since the primary VPN server does not analyze the user's traffic to the detail of individual requests, it deals with traffic more generally, i.e. the flow of information from and to its user. 
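The private-address assignment in the tunnel setup (step 507) can be pictured as below. The address pool, the specific addresses, and the handshake shape are illustrative assumptions only; real tunnel establishment also involves the key exchange and integrity checks described above:

```python
from ipaddress import ip_network

def establish_tunnel(public_ip, pool=ip_network("10.8.0.0/24")):
    """Sketch of step 507: the gateway receives a connect request from
    the user's public IP and answers with a newly assigned private IP
    for the client plus the gateway's own private IP, after which all
    subsequent communication flows between those private addresses."""
    hosts = pool.hosts()
    gateway_private = next(hosts)   # first usable host for the gateway
    client_private = next(hosts)    # next usable host for this client
    return {
        "client_public": public_ip,
        "client_private": str(client_private),
        "gateway_private": str(gateway_private),
    }

tunnel = establish_tunnel("203.0.113.7")
```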
However, for the sake of clarity we will use requests and responses as discrete entities to illustrate the flow of actions. Nonetheless, actions by the primary VPN server are not limited to these individual datagrams. These actions are also not limited by a particular protocol. For example, the traffic can consist of TCP or UDP packets and datagrams. User Device102makes a request to access a domain Target One130(for example, a web page, a video streaming service, a gaming or gambling platform) and sends it to VPN Gateway106. In step511, VPN Gateway106receives the data request from User Device102and selects a default routing to an exit VPN server that will ultimately make a request to Target One130. The choice by default is made upon some preset rules or rules updated manually at PVPNS104. The choice can also be made by considering several factors, for example server proximity to Target One130or User Device102but this does not change the overall functionality of the embodiments. In this case, VPN Gateway106can choose SVPNS One122as the exit VPN server and forward the request from User Device102to SVPNS One122. The internal communication between entities of the VPN service provider infrastructure can be exchanged in a variety of ways and protocols, including but not limited to IP in IP (Protocol 4): IP in IPv4/IPv6; SIT/IPv6 (Protocol 41): IPv6 in IPv4/IPv6; GRE (Protocol 47): Generic Routing Encapsulation; OpenVPN (UDP port 1194); SSTP (TCP port 443): Secure Socket Tunneling Protocol; IPSec (Protocol 50 and 51): Internet Protocol Security; L2TP (Protocol 115): Layer 2 Tunneling Protocol; VXLAN (UDP port 4789): Virtual Extensible Local Area Network; WireGuard; Quic. However, it is desirable that the communication takes place over an encrypted protocol, like a VPN tunnel, so that the data exchanged is encrypted and cannot be intercepted. In step513, SVPNS One122makes a request to Target One130for the data specified in the original request from User Device102. 
The type of data can be an HTTP response, a streaming service or any other media or data entity. The applications of the current embodiments are not limited by a particular protocol or the type of target that is being accessed. In step515, Target One130returns the data specified in the original request from User Device102to SVPNS One122. In step517, SVPNS One122forwards the data received from Target One130to VPN Gateway106. In step519, VPN Gateway106returns the request data to User Device102over the existing VPN tunnel. The data exchange described above in steps509and519forms a complete cycle and can be reiterated multiple times before any changes are made to the existing communication. One significant exception to this is that the target server can change. This means that User Device102can access different targets via the default routing, for example, including Target Two132. The rates of the cycle are flexible and can include, for example, one, one hundred, one thousand or ten thousand instances. During the cycle, all elements remain stable and User Device102sends all requests to PVPNS104and then they are routed via SVPNS One122. FIG.5Bshows the continuation of the exemplary flow diagram of server defined multiserver routing with split traffic. In step521, VPN Gateway106forwards unencrypted data packets to Packet Processor110. These data packets can be forwarded individually, as a constant stream or they can be aggregated and only forwarded when a certain number of packets is reached or at preset intervals. The forwarding and the further analysis of the packets and the aggregated traffic can take place asynchronously with the previous data exchange cycle. This means that the transfer of data for analysis can be constant or happen at the same time or independently from the data exchange cycle that forwards and executes requests by User Device102. 
The differentiation of data exchange cycles into asynchronous processes ensures that the requests of User Device102can be executed without waiting for any other processes to finish and thus no delays occur. Likewise, analysis can happen at the same time that the requests are executed. In step523, Packet Processor110processes the data packets. Processing of data packets refers to extracting certain information from data packets. The extracted information can be indicative of locations or countries of origin and target, network connection type, also dynamic parameters, like timestamps, session duration, timestamps of idleness, a session's total traffic, response time, latency; such data can also include aggregated dynamic parameters over any period of time (average speed, average data packet size, average response time, average latency, most/least visited targets, error rate, variations in which median and percentile groups are used instead of average values, and others) in any combination and with any weights associated with the parameters. In most applications of the current embodiments, the goal of processing data packets is to arrive at an aggregated data model that is suitable for matching with existing routing strategies. Packet Processor110can save aggregation conclusions about the data packets as metadata. 
In step525, Packet Processor110forwards the processed data packets or metadata regarding them to Traffic Analyzer112. Traffic Analyzer112uses the received data (whether in processed data packet form or metadata form) to match it with more than one existing routing strategy that will determine the secondary (exit) VPN server. More than one strategy is matched with characteristics of traffic indicative of different services used or content accessed on the web or other entities accessible via a network. This means that different parts of the traffic correspond to different strategies that can be implemented simultaneously. Therefore, the same traffic incoming from User Device102can be routed via more than one secondary (exit) VPN server at the same time or in close succession. This can be called split traffic multiserver routing. Traffic Analyzer112should have a set of routing strategies performed in a flow of information exemplified inFIG.3, steps301and303, and match parts of the traffic with the strategies. The processing load is distributed between Packet Processor110and Traffic Analyzer112. In some embodiments, the needed data features can be extracted at one entity or the other without changing the overall functionality of the embodiments. For example, Packet Processor110can aggregate data without extracting features and Traffic Analyzer112can extract features or Packet Processor110can already have relevant features processed before forwarding it to Traffic Analyzer112. Traffic Analyzer112matches parts of traffic with more than one strategy based on an algorithm that does not change the overall functionality of the embodiments. Any type of algorithm could be implemented that matches particular traffic with the optimal routing strategy. 
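Split traffic matching differs from the single-strategy case in that different parts of the same user's traffic can match different strategies at the same time. A minimal sketch, with a hypothetical per-part service classification and strategy table:

```python
def match_split(traffic_parts, strategies):
    """Match each part of the traffic (keyed by target) with its own
    strategy, so the same user's traffic can be routed via more than
    one secondary (exit) VPN server simultaneously.  Parts that match
    no strategy keep the default routing (they are omitted from the plan)."""
    plan = {}
    for part in traffic_parts:
        for strategy in strategies:
            if strategy["predicate"](part):
                plan[part["target"]] = strategy["exit_server"]
                break
    return plan

# Hypothetical strategies for two kinds of service.
strategies = [
    {"exit_server": "SVPNS Two",
     "predicate": lambda p: p["service"] == "streaming"},
    {"exit_server": "SVPNS Three",
     "predicate": lambda p: p["service"] == "gaming"},
]

plan = match_split(
    [{"target": "Target One", "service": "streaming"},
     {"target": "Target Two", "service": "gaming"}],
    strategies,
)
```

The resulting plan corresponds to the split strategy described below, where Target One traffic exits via SVPNS Two and Target Two traffic via SVPNS Three.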
Examples of algorithmic operations include but are not limited to grouping data in categories, forming series of data (ordered, partially ordered or unordered), aggregating data, extracting aggregated results, performing statistical analysis, running machine learning and deep learning algorithms, forming predictive models, and other processing functions. Traffic Analyzer112can run multiple related mechanisms that determine the outcome together. Steps521-527form a complete cycle of information transfer in the case in which a strategy match is not found for the relevant traffic. In other words, if no match is found by Traffic Analyzer112for processed data packets, then no decision to change routing is made and the default routing remains valid. In such a case, however, packets will continue to be analyzed for future potential matches. In step529, Traffic Analyzer112communicates the matching strategies to Route Controller114. The information communicated from Traffic Analyzer112to Route Controller114can be indicative of the secondary (exit) VPN server that should be used for optimal routing of this particular traffic. In the current embodiment, the two optimal secondary (exit) VPN servers are SVPNS Two124and SVPNS Three126. In step531, Route Controller114receives the routing strategy from Traffic Analyzer112and formulates appropriate routing rules that correspond to the routing strategies. In one embodiment, the strategy indicates to reach Target One130not via SVPNS One122as defined in the default routing and executed in the previous data exchange cycles in steps509-519, but via SVPNS Two124and to reach Target Two132not via SVPNS One122but via SVPNS Three126. 
In other words, Traffic Analyzer112found two strategies that matched the analyzed traffic for optimal routing of the traffic via SVPNS Two124and SVPNS Three126. Route Controller114identifies this strategy and makes a routing rule to that effect that it forwards to VPN Gateway106. In one example, the embodiments begin the routing with the first SYN packet received or, in the case of UDP, a stateful NAT can be employed to route traffic. In step533, after the routing strategy has been implemented at VPN Gateway106, User Device102makes a request to access a domain Target One130and sends it to VPN Gateway106. FIG.5Cshows the continuation of the exemplary flow diagram of server defined multiserver routing with split traffic. In step535, VPN Gateway106receives the data request from User Device102and selects the optimal routing to an exit VPN server that was indicated in a routing strategy optimized for Target One130. As indicated in the split strategy, VPN Gateway106routes the request to Target One130via SVPNS Two124. It is desirable that the communication takes place over an encrypted protocol, like a VPN tunnel, so that the data exchanged is encrypted and cannot be intercepted. In step537, SVPNS Two124makes a request to Target One130for the data specified in the request from User Device102. The type of data can be an HTTP response, a streaming service or any other media or data entity. The applications of the current embodiments are not limited by a particular protocol or the type of target that is being accessed. In step539, Target One130returns the data specified in the original request from User Device102to SVPNS Two124. In step541, SVPNS Two124forwards the data received from Target One130to VPN Gateway106. In step543, VPN Gateway106returns the request data to User Device102over the existing VPN tunnel. FIG.5Dshows the continuation of the exemplary flow diagram of server defined multiserver routing with split traffic. 
In step545, after the routing strategy has been implemented at VPN Gateway106, User Device102makes a request to access a domain Target Two132and sends it to VPN Gateway106. In step547, VPN Gateway106receives the data request from User Device102and selects the optimal routing to an exit VPN server that was indicated in a routing strategy optimized for Target Two132. As indicated in the split strategy, VPN Gateway106routes the request to Target Two132via SVPNS Three126. It is desirable that the communication takes place over an encrypted protocol, like a VPN tunnel, so that the data exchanged is encrypted and cannot be intercepted. In step549, SVPNS Three126makes a request to Target Two132for the data specified in the request from User Device102. The type of data can be an HTTP response, a streaming service or any other media or data entity. The applications of the current embodiments are not limited by a particular protocol or the type of target that is being accessed. In step551, Target Two132returns the data specified in the original request from User Device102to SVPNS Three126. In step553, SVPNS Three126forwards the data received from Target Two132to VPN Gateway106. In step555, VPN Gateway106returns the request data to User Device102over the existing VPN tunnel. The data exchange described in steps533and555forms a complete cycle and can be reiterated multiple times before any changes are made to the existing communication. The rates of the cycle are flexible and can include, for example, one, one hundred, one thousand or ten thousand instances. However, during this cycle, User Device102sends all requests to PVPNS104and they are routed via SVPNS Two124and SVPNS Three126. This type of connection constitutes a split traffic multiserver VPN connection. The sequence in the data exchange described in steps533and555is not strict and the two requests from User Device102(for Target One130and Target Two132) can be serviced simultaneously. 
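Because the two requests of steps 533-555 can be serviced simultaneously, their completion order is not strict. The following sketch simulates this with hypothetical per-target latencies; it is an illustration of concurrent servicing, not the patented mechanism:

```python
from concurrent.futures import ThreadPoolExecutor
import time

def service(target, exit_server, delay):
    """Simulate one request routed via its strategy-selected exit
    server; 'delay' stands in for network latency, so the first
    request made can even finish after the second."""
    time.sleep(delay)
    return (target, exit_server)

with ThreadPoolExecutor() as pool:
    # Both requests are in flight at the same time, each routed via
    # the exit server its part of the split strategy selected.
    first = pool.submit(service, "Target One", "SVPNS Two", 0.05)
    second = pool.submit(service, "Target Two", "SVPNS Three", 0.01)
    results = {f.result()[0]: f.result()[1] for f in (first, second)}
```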
In some embodiments, the first request made can even be serviced later than the second request because of network latency. FIG.6Ashows an exemplary flow diagram of server defined and user defined multiserver routing. In step601, User Device102initiates a process to authenticate with the API116. User Device102can be authenticated using a variety of methods consisting of providing a username and a password or other identifying information. In step603, API116confirms the identity of User Device102and authenticates it. API116also provides a list of VPN servers available for connection at that time. One of those servers available to User Device102is PVPNS104. In step605, User Device102initiates a VPN connection with PVPNS104and more specifically by addressing VPN Gateway106. This action on User Device102can happen through a software application installed on User Device102that has a dashboard or other user interface. However, User Device102can engage in a VPN connection with the VPN Gateway106by configuring their system network settings more directly. In step607, once VPN Gateway106receives the request to connect, it creates a VPN tunnel between itself and User Device102. The tunnel is established by VPN Gateway106receiving User Device102requests from its public IP address, then returning a response with a newly assigned private IP address and a private IP address of the VPN Gateway106through which User Device102can communicate with VPN Gateway106in a private way. All the subsequent communication is done through the tunnel created by User Device102and VPN Gateway106. The identity of the communicating parties can be authenticated using public-key cryptography. This authentication can be optional but is generally required for at least one of the parties (typically the server). 
The connection is reliable because each message transmitted includes a message integrity check using a message authentication code to prevent undetected loss or alteration of the data during transmission. In step609, after the VPN tunnel is established and secured, User Device102is able to make requests and access the target servers privately without its public IP being revealed. User's individual traffic consists of sending requests and receiving responses as well as other protocol-specific information exchange, like TCP handshakes, cryptographic key exchange, and others. Since the primary VPN server does not analyze the user's traffic to the detail of individual requests, it deals with traffic more generally, i.e. the flow of information from and to its user. However, for the sake of clarity we will use requests and responses as discrete entities to illustrate the flow of actions. Nonetheless, actions by the primary VPN server are not limited to these individual datagrams. These actions are also not limited by a particular protocol. For example, the traffic can consist of TCP or UDP packets and datagrams. User Device102makes a request to access a domain Target One130(for example, a web page, a video streaming service, a gaming or gambling platform) and sends it to VPN Gateway106. In step611, VPN Gateway106receives the data request from User Device102and selects a default routing to an exit VPN server that will ultimately make a request to Target One130. The choice by default is made upon some preset rules or rules updated manually at PVPNS104. The choice can also be made by considering several factors, for example server proximity to Target One130or User Device102but this does not change the overall functionality of the embodiments. In this case, VPN Gateway106can choose SVPNS One122as the exit VPN server and forward the request from User Device102to SVPNS One122. 
The internal communication between entities of the VPN service provider infrastructure can be exchanged in a variety of ways and protocols, including but not limited to IP in IP (Protocol 4): IP in IPv4/IPv6; SIT/IPv6 (Protocol 41): IPv6 in IPv4/IPv6; GRE (Protocol 47): Generic Routing Encapsulation; OpenVPN (UDP port 1194); SSTP (TCP port 443): Secure Socket Tunneling Protocol; IPSec (Protocol 50 and 51): Internet Protocol Security; L2TP (Protocol 115): Layer 2 Tunneling Protocol; VXLAN (UDP port 4789): Virtual Extensible Local Area Network; WireGuard; Quic. However, it is desirable that the communication takes place over an encrypted protocol, like a VPN tunnel, so that the data exchanged is encrypted and cannot be intercepted. In step613, SVPNS One122makes a request to Target One130for the data specified in the original request from User Device102. The type of data can be an HTTP response, a streaming service or any other media or data entity. The applications of the current embodiments are not limited by a particular protocol or the type of target that is being accessed. In step615, Target One130returns the data specified in the original request from User Device102to SVPNS One122. In step617, SVPNS One122forwards the data received from Target One130to VPN Gateway106. In step619, VPN Gateway106returns the request data to User Device102over the existing VPN tunnel. The data exchange described above in steps609and619forms a complete cycle and can be reiterated multiple times before any changes are made to the existing communication. The rates of the cycle are flexible and can include, for example, one, one hundred, one thousand or ten thousand instances. During the cycle, all elements remain stable and User Device102sends all requests to PVPNS104and then they are routed via SVPNS One122. FIG.6Bshows the continuation of the exemplary flow diagram of server defined and user defined multiserver routing. 
In step621, User Device102sends a preference to VPN Gateway106that a different exit server should be used. That preference might be a freely formed request from User Device102but it can also be made available to User Device102via a dashboard or as a list of potential exit VPN servers. The routing preference request is sent from a user with a destination address that belongs to the VPN service provider infrastructure, and more specifically to Comm Listener115which is a listening device or software that has a specific port open to specifically receive such requests from users. Thus, a routing preference request is different from other requests from User Device102in that it is addressed not to an external target server but to an element of PVPNS104. The routing preference request is sent via the original VPN tunnel and thus is firstly addressed to VPN Gateway106. In step623, VPN Gateway106receives the routing preference request from User Device102and forwards it to Comm Listener115. In step625, Comm Listener115receives the routing preference request from User Device102. It then forwards this request to Route Controller114. In step627, Route Controller114receives the routing preference request from User Device102and formulates an appropriate routing rule that corresponds to User Device102preference. In one embodiment, User Device102made a preference to reach Target One130not via SVPNS One122as defined in the default routing and executed in the previous data exchange cycles in steps609-619, but via SVPNS Two124. In other words, User Device102expressed a preference to reroute its traffic via SVPNS Two124. Route Controller114identifies this preference and makes a routing rule to that effect that it forwards to VPN Gateway106. In one example, the embodiments begin the routing with the first SYN packet received or, in the case of UDP, a stateful NAT can be employed to route traffic. 
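The user-defined preference path (steps 621-627) amounts to a dispatch at the gateway: a request addressed to the provider's own listener is treated as a routing preference rather than forwarded to an external target. The names, addresses, and message format below are assumptions for illustration:

```python
COMM_LISTENER_ADDR = ("comm-listener.internal", 9000)  # hypothetical address

def handle_at_gateway(message, routing_rules):
    """If the message is addressed to Comm Listener it is a routing
    preference request: it is handed on so the Route Controller can
    formulate a corresponding rule.  Otherwise it is an ordinary data
    request to be routed to an exit VPN server."""
    if message["dest"] == COMM_LISTENER_ADDR:
        user, target, exit_server = message["preference"]
        routing_rules[(user, target)] = exit_server
        return "rule installed"
    return "forward to exit server"

rules = {}
outcome = handle_at_gateway(
    {"dest": COMM_LISTENER_ADDR,
     "preference": ("User Device 102", "Target One", "SVPNS Two")},
    rules,
)
```

Once the rule is in place, subsequent requests for Target One are routed via the user-preferred SVPNS Two, as in steps 629-639.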
In step629, after the routing preference has been implemented at VPN Gateway106, User Device102makes a request to access a domain Target One130(the same target as in the previous data exchange cycles) and sends it to VPN Gateway106. In step631, VPN Gateway106receives the data request from User Device102and selects the preferred routing to an exit VPN server that was selected by User Device102. In this data exchange cycle, VPN Gateway106cannot choose SVPNS One122as the exit VPN server (as per default routing) and routes the request to SVPNS Two124. In step633, SVPNS Two124makes a request to Target One130for the data specified in the request from User Device102. The type of data can be an HTTP response, a streaming service or any other media or data entity. The applications of the current embodiments are not limited by a particular protocol or the type of target that is being accessed. In step635, Target One130returns the data specified in the original request from User Device102to SVPNS Two124. In step637, SVPNS Two124forwards the data received from Target One130to VPN Gateway106. In step639, VPN Gateway106returns the request data to User Device102over the existing VPN tunnel. The data exchange described in steps629and639forms a complete cycle and can be reiterated multiple times before any changes are made to the existing communication. The rates of the cycle are flexible and can include, for example, one, one hundred, one thousand or ten thousand instances. However, during this cycle, User Device102sends all requests to PVPNS104and they are routed via SVPNS Two124. This type of connection constitutes a multiserver VPN connection. FIG.6Cshows the continuation of the exemplary flow diagram of server defined and user defined multiserver routing. In step641, VPN Gateway106forwards unencrypted data packets from User Device102to Packet Processor110. 
These data packets can be forwarded individually, as a constant stream or they can be aggregated and only forwarded when a certain number of packets is reached or at preset intervals. The forwarding and the further analysis of the packets and the aggregated traffic can take place asynchronously with the previous data exchange cycle. This means that the transfer of data for analysis can be constant or happen at the same time or independently from the data exchange cycle that forwards and executes requests by User Device102. The differentiation of data exchange cycles into asynchronous processes ensures that the requests of User Device102can be executed without waiting for any other processes to finish and thus no delays occur. Likewise, analysis can happen at the same time that the requests are executed. In step643, Packet Processor110processes the data packets. Processing of data packets refers to extracting certain information from data packets. The extracted information can be indicative of locations or countries of origin and target, network connection type, also dynamic parameters, like timestamps, session duration, timestamps of idleness, a session's total traffic, response time, latency; such data can also include aggregated dynamic parameters over any period of time (average speed, average data packet size, average response time, average latency, most/least visited targets, error rate, variations in which median and percentile groups are used instead of average values, and others) in any combination and with any weights associated with the parameters. In most applications of the current embodiments, the goal of processing data packets is to arrive at an aggregated data model that is suitable for matching with existing routing strategies. Packet Processor110can save aggregation conclusions about the data packets as metadata. In step645, Packet Processor110forwards the processed data packets or metadata regarding them to Traffic Analyzer112. 
Traffic Analyzer 112 uses the received data (whether in processed data packet form or metadata form) and matches it with existing routing strategies that will determine the secondary (exit) VPN server. Traffic Analyzer 112 should already have a set of routing strategies, formed in a flow of information exemplified in FIG. 3, steps 301 and 303. The processing load is distributed between Packet Processor 110 and Traffic Analyzer 112. In some embodiments, the needed data features can be extracted at one entity or the other without changing the overall functionality of the embodiments. For example, Packet Processor 110 can aggregate data without extracting features and Traffic Analyzer 112 can extract the features, or Packet Processor 110 can have the relevant features already processed before forwarding them to Traffic Analyzer 112. Traffic Analyzer 112 matches traffic with strategies based on an algorithm whose particular choice does not change the overall functionality of the embodiments. The analysis can take as parameters the protocols used for the connections within the traffic information, target IP addresses (including their ranges and locations), and target ports, which can be indicative of particular types of services accessed. Any type of algorithm could be implemented that matches particular traffic with the optimal routing strategy. Examples of algorithmic operations include, but are not limited to, grouping data into categories, forming series of data (ordered, partially ordered, or unordered), aggregating data, extracting aggregated results, performing statistical analysis, running machine learning and deep learning algorithms, forming predictive models, and other processing functions. Traffic Analyzer 112 can run multiple related mechanisms that determine the outcome together. Steps 641-645 form a complete cycle of information transfer in the case in which a strategy match is not found for the relevant traffic.
In other words, if no match is found by Traffic Analyzer 112 for the processed data packets, then no decision to change routing is made and the default routing remains valid. In such a case, however, packets will continue to be analyzed for future potential matches, and steps 641-645 can be reiterated. If a match is found, then the following actions are performed. In step 647, Traffic Analyzer 112 communicates the matching strategy to Route Controller 114. The information communicated from Traffic Analyzer 112 to Route Controller 114 can be indicative of the secondary (exit) VPN server that should be used for optimal routing of this particular traffic. In the current embodiment, the optimal secondary (exit) VPN server is SVPNS Three 126. In step 649, Route Controller 114 receives the routing strategy from Traffic Analyzer 112 and formulates an appropriate routing rule that corresponds to the routing strategy. In one embodiment, the strategy indicates to reach Target One 130 not via SVPNS Two 124, as defined in the user preference and executed in the previous data exchange cycles in steps 629-639, but via SVPNS Three 126. In other words, Traffic Analyzer 112 found a strategy matching the analyzed traffic that shows that it is optimal to reroute the traffic via SVPNS Three 126. Route Controller 114 identifies this strategy, makes a routing rule to that effect, and forwards the rule to VPN Gateway 106. In one example, the embodiments begin the routing with the first SYN packet received or, in the case of UDP, a stateful NAT can be employed to route traffic. In step 651, after the routing strategy has been implemented at VPN Gateway 106, User Device 102 makes a request to access a domain, Target One 130 (the same target as in the previous data exchange cycles), and sends it to VPN Gateway 106. In step 653, User Device 102 makes a data request for Target One 130.
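The match-or-keep-default behaviour of steps 645-649 can be sketched as a strategy table of predicates over the aggregated metadata. The predicates, metadata keys, and the pairing of conditions to particular servers are illustrative assumptions; only the "no match keeps default routing" rule comes from the text above.

```python
# Each strategy pairs a predicate over aggregated traffic metadata with the
# secondary (exit) VPN server it selects. Predicates and pairings are
# hypothetical examples, not taken from the embodiments.
STRATEGIES = [
    (lambda m: 443 in m["target_ports"] and m["avg_packet_size"] > 900,
     "SVPNS Three 126"),
    (lambda m: m["error_rate"] > 0.05, "SVPNS Two 124"),
]


def select_exit_server(metadata, default_exit="SVPNS One 122"):
    """Return the exit server of the first matching strategy.

    If no strategy matches, no routing change is made and the default
    routing remains valid, mirroring steps 641-645."""
    for predicate, exit_server in STRATEGIES:
        if predicate(metadata):
            return exit_server
    return default_exit
```

A matched result would then be handed to the route controller (step 647) to be turned into a concrete routing rule.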
In step 655, VPN Gateway 106 receives the data request from User Device 102 and selects the preferred routing to the exit VPN server that was indicated in the routing strategy. In this data exchange cycle, VPN Gateway 106 does not choose SVPNS Two 124 as the exit VPN server and routes the request to SVPNS Three 126. It is desirable that the communication takes place over an encrypted protocol, like a VPN tunnel, so that the data exchanged is encrypted and cannot be intercepted. In step 657, SVPNS Three 126 makes a request to Target One 130 for the data specified in the request from User Device 102. The type of data can be an HTTP response, a streaming service, or any other media or data entity. The applications of the current embodiments are not limited by a particular protocol or the type of target that is being accessed. In step 659, Target One 130 returns the data specified in the original request from User Device 102 to SVPNS Three 126. In step 661, SVPNS Three 126 forwards the data received from Target One 130 to VPN Gateway 106. In step 663, VPN Gateway 106 returns the requested data to User Device 102 over the existing VPN tunnel. The data exchange described in steps 653 through 663 forms a complete cycle and can be reiterated multiple times before any changes are made to the existing communication. The number of repetitions is flexible and can be, for example, one, one hundred, one thousand, or ten thousand instances. However, during this cycle, User Device 102 sends all requests to PVPNS 104 and they are routed via SVPNS Three 126. This type of connection constitutes a multiserver VPN connection. The current exemplary embodiments describe how multiple decisions can be implemented within the VPN service provider infrastructure. However, in a different configuration, the decision order could be reversed.
For example, the data exchange cycle could begin with server defined multiserver routing (steps 641-651) and then be changed by user defined multiserver routing (steps 621-627). The setting of which preference should have priority is a simple configuration in the VPN service infrastructure, and it does not change the overall functionality of the embodiments. The order of these steps can be synchronous, asynchronous, or partially synchronous, depending on the configuration of the VPN service provider infrastructure. In at least some embodiments, the user preference is not changed by server defined traffic analysis. In at least one instance, the user preference disables the traffic analysis and steps 641-663 are not performed. Likewise, both user defined and server defined routing preferences can be applied to full traffic or partial traffic, as described in FIG. 5, steps 521-555. For example, the user could select a preference to redirect part of the traffic through routing other than the optimal multiserver routing identified by a strategy or by default routing. However, these are trivial changes to the overall structure of the embodiments, which show how multiple decisions can be incorporated into a single unified method and system. A combination of the flows of action described above comprises a method for multihop or multiserver routing for a VPN connection. Parts of the flows can happen synchronously or asynchronously.
The method overall comprises receiving, at an entry VPN server from a user device, a first request for connection to a first target, forwarding, at the entry VPN server, the first request through a first exit VPN server, aggregating, at the entry VPN server, data activity packets from the first request into traffic information, matching, at the entry VPN server, the user device's traffic information with a routing strategy to route traffic through a second exit VPN server, different from the first exit VPN server, assigning, at the entry VPN server, the routing strategy to a second request to route traffic through the second exit VPN server, receiving, at the entry VPN server from the user device, the second request for connection to the first target or to a second target, different from the first target, and forwarding, at the entry VPN server, the second request through the second exit VPN server. The method is also adapted for and compatible with two or more routing strategies that are matched simultaneously with different parts of traffic from the user device, part of the traffic from the user device being routed via the first exit VPN server and another part, which does not include traffic from the first part, being routed via the second exit VPN server, routing strategies being generated based on at least one of the following: used protocols, target IP addresses, and target ports, receiving, at the entry VPN server, an instruction from the user device to route requests through a third exit VPN server, different from the first exit VPN server, receiving, at the entry VPN server from the user device, a third request for connection to the first target or to a second target, different from the first target, forwarding, at the entry VPN server, the third request through the third exit VPN server, different from the first exit VPN server, the instructions including a reference to a geographic location of the third exit VPN server, the entry VPN server receiving updated routing strategies
from an application programming interface, data transfer between the entry VPN server and any of the exit VPN servers happening over an encrypted connection, the second request being associated with a different domain than the domain of the first request, and traffic analysis taking place at the same time that the first request and the second request are routed. The embodiments herein may be combined in a variety of ways as a matter of design choice. Accordingly, the features and aspects herein are not intended to be limited to any particular embodiment. Furthermore, the embodiments can take the form of hardware, firmware, software, and/or combinations thereof. In one embodiment, such software includes but is not limited to firmware, resident software, microcode, etc. FIG. 7 illustrates a computing system 700 in which a computer readable medium 706 may provide instructions for performing any of the methods and processes disclosed herein. Furthermore, some aspects of the embodiments herein can take the form of a computer program product accessible from the computer readable medium 706 to provide program code for use by or in connection with a computer or any instruction execution system. For the purposes of this description, the computer readable medium 706 can be any apparatus that can tangibly store the program code for use by or in connection with the instruction execution system, apparatus, or device, including the computing system 700. The computer readable medium 706 can be any tangible electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system (or apparatus or device). Some examples of a computer readable medium 706 include solid state memories, magnetic tapes, removable computer diskettes, random access memories (RAM), read-only memories (ROM), magnetic disks, and optical disks. Some examples of optical disks include read only compact disks (CD-ROM), read/write compact disks (CD-R/W), and digital versatile disks (DVD).
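The routing decision described above, in which a user defined preference is not overridden by server defined traffic analysis and default routing serves as the fallback, can be sketched as a small state machine at the entry server. The class, server names, and the strategy-matching callback are illustrative assumptions; the priority order shown is the one used in at least some embodiments above.

```python
class EntryVPNServer:
    """Minimal sketch of the entry server's routing decision.

    Not a full implementation: real traffic analysis happens
    asynchronously on aggregated packet metadata."""

    def __init__(self, default_exit, match_fn):
        self.default_exit = default_exit  # e.g. first exit VPN server
        self.match_fn = match_fn          # traffic so far -> exit server or None
        self.user_preference = None       # user defined multiserver routing
        self.strategy_exit = None         # server defined multiserver routing
        self.traffic = []

    def set_user_preference(self, exit_server):
        self.user_preference = exit_server

    def route(self, request):
        """Return the exit server a request is forwarded through."""
        self.traffic.append(request)      # analysed asynchronously in practice
        matched = self.match_fn(self.traffic)
        if matched is not None:
            self.strategy_exit = matched
        # Priority in this sketch: user preference, then matched strategy,
        # then default routing.
        return self.user_preference or self.strategy_exit or self.default_exit
```

For example, a server whose strategy matcher fires after two requests would route the first request by default, reroute the second via the matched exit server, and honour a user preference over both.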
The computing system 700 can include one or more processors 702 coupled directly or indirectly to memory 708 through a system bus 710. The memory 708 can include local memory employed during actual execution of the program code, bulk storage, and/or cache memories, which provide temporary storage of at least some of the program code in order to reduce the number of times the code is retrieved from bulk storage during execution. Input/output (I/O) devices 704 (including but not limited to keyboards, displays, pointing devices, I/O interfaces, etc.) can be coupled to the computing system 700 either directly or through intervening I/O controllers. Network adapters may also be coupled to the computing system 700 to enable the computing system 700 to couple to other data processing systems, such as through host systems interfaces 712, printers, and/or storage devices, through intervening private or public networks. Modems, cable modems, and Ethernet cards are just a few examples of network adapter types. Although several embodiments have been described, one of ordinary skill in the art will appreciate that various modifications and changes can be made without departing from the scope of the embodiments detailed herein. Accordingly, the specification and figures are to be regarded in an illustrative rather than a restrictive sense, and all such modifications are intended to be included within the scope of the present teachings. The benefits, advantages, solutions to problems, and any element(s) that may cause any benefit, advantage, or solution to occur or become more pronounced are not to be construed as a critical, required, or essential feature or element of any or all the claims. The invention is defined solely by the appended claims, including any amendments made during the pendency of this application and all equivalents of those claims as issued.
Moreover, in this document, relational terms such as first and second, and the like, may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. The terms “comprises”, “comprising”, “has”, “having”, “includes”, “including”, “contains”, “containing”, or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises, has, includes, or contains a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. An element preceded by “comprises . . . a”, “has . . . a”, “includes . . . a”, or “contains . . . a” does not, without additional constraints, preclude the existence of additional identical elements in the process, method, article, and/or apparatus that comprises, has, includes, and/or contains the element. The terms “a” and “an” are defined as one or more unless explicitly stated otherwise herein. The terms “approximately”, “about”, or any other version thereof, are defined as being close to, as understood by one of ordinary skill in the art. A device or structure that is “configured” in a certain way is configured in at least that way, but may also be configured in ways that are not listed. For the indication of elements, a singular or plural form can be used, but this does not limit the scope of the disclosure, and the same teaching can apply to multiple objects, even if in the current application an object is referred to in its singular form. It will be appreciated that some embodiments describe the use of one or more generic or specialized databases (such as an “Exit Nodes Database”, or similar), each of which contains a collection of information that is organized so that it can be easily accessed, managed, and updated.
Computer databases typically contain aggregations of data records or files; in the current case, the databases usually store different information and statistics about the proxies or exit nodes, and information about the utilization threshold of the exit node provider. Such databases can also contain information about the users, requests performed, networks used, exit nodes used, types of exit nodes requested, and similar data. Databases are structured to facilitate the storage, retrieval, modification, and deletion of data in conjunction with various data-processing operations. The Disclosure is provided to allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the foregoing Detailed Description, it can be seen that multiple features are grouped together in various embodiments for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. While the disclosure has been described in connection with certain embodiments, it is to be understood that the disclosure is not to be limited to the disclosed embodiments but, on the contrary, is intended to cover various modifications and equivalent arrangements included within the scope of the appended claims, which scope is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures as is permitted under the law.
11863422 | In the present application, the term “and/or” is intended to cover all possible combinations and sub-combinations of the listed elements, including any one of the listed elements alone, any sub-combination, or all of the elements, and without necessarily excluding additional elements. In the present application, the phrase “at least one of . . . or . . . ” is intended to cover any one or more of the listed elements, including any one of the listed elements alone, any sub-combination, or all of the elements, without necessarily excluding any additional elements, and without necessarily requiring all of the elements. Reference will first be made to FIG. 1, which illustrates, in block diagram form, an example network associated with a blockchain, which may be referred to herein as a blockchain network 100. The blockchain network 100 is a peer-to-peer open membership network which may be joined by anyone, without invitation or without consent from other members. Distributed electronic devices running an instance of the blockchain protocol under which the blockchain network 100 operates may participate in the blockchain network 100. Such distributed electronic devices may be referred to as nodes 102. The blockchain protocol may be a Bitcoin protocol or another cryptocurrency protocol, for example. The electronic devices that run the blockchain protocol and that form the nodes 102 of the blockchain network 100 may be of various types including, for example, computers such as desktop computers, laptop computers, tablet computers, and servers; mobile devices such as smartphones; and wearable computers such as smart watches or other electronic devices. Nodes 102 of the blockchain network 100 are coupled to one another using suitable communication technologies, which may include wired and wireless communication technologies. In many cases, the blockchain network 100 is implemented at least partly over the Internet, and some of the nodes 102 may be located in geographically dispersed locations.
Nodes 102 maintain a global ledger of all transactions on the blockchain, grouped into blocks, each of which contains a hash of the previous block in the chain. The global ledger is a distributed ledger, and each node 102 may store a complete copy or a partial copy of it. Transactions by a node 102 affecting the global ledger are verified by other nodes 102 so that the validity of the global ledger is maintained. The details of implementing and operating a blockchain network, such as one using the Bitcoin protocol, will be appreciated by those ordinarily skilled in the art. Each transaction typically has one or more inputs and one or more outputs. Scripts embedded into the inputs and outputs specify how and by whom the outputs of the transactions can be accessed. The output of a transaction may be an address to which value is transferred as a result of the transaction. That value is then associated with that output address as an unspent transaction output (UTXO). A subsequent transaction may then reference that address as an input in order to spend or disperse that value. Nodes 102 can fulfil numerous different functions, from network routing to wallet services, to maintain a robust and secure decentralized public ledger. “Full nodes” contain a complete and up-to-date copy of the blockchain, and can therefore verify any transactions (spent or unspent) on the public ledger. “Lightweight nodes” (or SPV nodes) maintain a subset of the blockchain and can verify transactions using a “simplified payment verification” technique. Lightweight nodes only download the headers of blocks, and not the transactions within each block. These nodes therefore rely on peers to verify their transactions. “Mining nodes,” which can be full or lightweight nodes, are responsible for validating transactions and creating new blocks on the blockchain. “Wallet nodes,” which are typically lightweight nodes, handle the wallet services of users.
Nodes 102 communicate with each other using a connection-oriented protocol, such as TCP/IP (Transmission Control Protocol/Internet Protocol). When a node wishes to send a transaction to a peer, an “INVENTORY” message is sent to the peer, transmitting one or more inventory objects known to the transmitting node. If the peer replies with a “GETDATA” message, i.e., a full transaction request, the transaction is sent using a “TRANSACTION” message. The node receiving the transaction forwards it in the same manner, provided that it is a valid transaction, to its peers. Reference is now made to FIG. 2, which diagrammatically shows an example node 200 with an input buffer 202 and an output buffer 204. The example node 200 has network interfaces with multiple peer nodes, referenced as intA, intB, intC, intD, etc. The input buffer 202 shows incoming transactions from the various peer nodes, and the output buffer 204 shows output network packets, corresponding to transactions, for transmission to peer nodes over the respective interfaces. Network packets are serially sent and received at an application level according to the primitives provided by the operating system of the node 200. Assuming that a transaction x fits in a single Ethernet/IP packet, its transmission to m peers requires the buffering of m different output network packets. Both input and output network packets, along with other information, will contain a serialized transaction and a logical interface ID representing the TCP/IP connection to the sending/receiving peer. Once a transaction is generated, the source node broadcasts the transaction message over the network. Generally, when a client generates a transaction, it is put in the output buffer 204. The transaction may or may not be forwarded immediately to the peers.
In some implementations of node networks, transactions are propagated by a mechanism known as “diffusion propagation”, whereby each transaction source transmits the transaction to its neighbours with an independent, exponential delay. The delays in propagation are random and are useful for introducing uncertainty into timing estimates for a malicious attacker. Once a peer receives a certain transaction, the peer may not accept future relays of the same transaction; for example, the transaction hash may be stored in the peer's memory pool, allowing the peer to reject identical transactions. The “diffusion” of transactions through the network is symmetric, meaning that a forwarding node does not use information about the IP addresses of the neighbouring nodes to influence the transaction broadcast. For example, in “standard” diffusion processes, the peers of a broadcasting node all receive the same transaction, and in each relay instance only one transaction at a time is relayed per peer. The symmetric nature of this “diffusion” may be exploited by malicious third parties having knowledge of the peer-to-peer graph structure of the network in conducting de-anonymizing attacks. The present disclosure provides alternative techniques for transaction relay on blockchain networks to improve protection against traffic analysis attacks. More particularly, the proposed relay protocols may be used to disguise, conceal, or obfuscate connections between source nodes of transactions and their IP addresses. A transactions relay protocol, the Diffusion Mixer Protocol (DMP), is proposed. DMP includes two independent diffusion stages. The first stage (“random differential relay”, or RDR) allows for relayed transaction mixing and obfuscation of transaction sources. During the random differential relay stage, each node waits a predefined amount of time before broadcasting a transaction to the network, in order to receive and collect a plurality of transactions from its peers.
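The independent exponential delays of “diffusion propagation” can be sketched as follows. The rate parameter and the use of a seedable generator are illustrative assumptions made for this sketch.

```python
import random


def diffusion_delays(peers, rate=2.0, rng=None):
    """Assign each peer an independent exponentially distributed relay
    delay (in seconds) before the transaction is sent to it.

    Independent random delays blur the timing information an observer
    could use to trace a transaction back to its source."""
    rng = rng or random.Random()
    return {peer: rng.expovariate(rate) for peer in peers}
```

A relaying node would schedule the transmission to each peer after its assigned delay rather than sending to all peers at once.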
The node then creates outgoing connections to its “entry nodes” and sends different transactions, with approximately the same timestamps, to an arbitrarily (e.g., randomly) selected subset of these entry nodes. The entry nodes of a node are those neighbouring nodes to which direct outgoing connections can be established from the node. The randomness in the choice of entry nodes and the diversity in the relayed transactions may make the reconstruction of the network topology more difficult for an attacker. The second stage (“standard diffusion”) ensures a timely and reliable propagation of transactions within the network. In the standard diffusion stage, each node relays the same transaction to all its entry nodes, and in each relay instance only one transaction at a time is relayed per entry node. It should be noted that in a network of nodes, such as a blockchain network, one or more of the nodes may be capable of implementing the DMP. Specifically, one or more of the nodes of the network may be able to relay received data packets to their entry nodes by participating in the DMP. A participating node may, for example, select between an RDR process and a standard diffusion process for propagating a particular data packet. The nodes of the network may elect to participate in the DMP, joining the protocol either in a decentralized manner or through inclusion in a group of participating nodes assembled by a central authority. A participating node relays its output network packets according to the DMP. In particular, if a participating node receives a data packet, the node may forward the received data packet according to a mode of propagation that is selected for that node, using the rules stipulated by the DMP. The proposed DMP for transactions relay is described with reference to FIGS. 3 to 7. A schematic visualization of the DMP is provided in FIG. 3. An example blockchain network 300 of nodes is shown.
Each node represents a network terminal (i.e., a blockchain node), while edges represent links between nodes. For the purposes of this illustration, it is supposed that for each link it is possible to send or receive a single bit at a time. In this example network 300, each node maintains a set of unconfirmed transactions, so that when a node receives a new transaction, it is propagated through the network to all other nodes. Each node is to validate and store the new transactions in its respective local set and forward the new transactions to any peer nodes that do not yet have them. Due to the peer-to-peer nature of the blockchain network 300, all nodes do not receive a new transaction at the same time, meaning it will take some time for a new transaction to reach all nodes in the network 300. FIG. 3 illustrates the two stages of the DMP for propagating a particular transaction Tx1, namely the random differential relay 302 and the standard diffusion 304 for Tx1. The source node 310 of transaction Tx1 may either generate the transaction Tx1 or receive it from a peer node at a time t1. In accordance with the DMP, source node 310 waits to receive at least one more incoming transaction from its neighbouring nodes prior to initiating broadcast of the received/queued transactions. In the example of FIG. 3, once transaction Tx2 is received by source node 310 at time t2, the transactions Tx1 and Tx2 are sent to an arbitrarily selected subset of source node 310's entry nodes at time t3. Transaction Tx1 is forwarded to entry nodes 310c and 310d, while transaction Tx2 is forwarded to entry nodes 310a and 310b. The example of FIG. 3 is only illustrative; in particular, the source node 310 may wait to receive more than two incoming transactions before propagating any of its received transactions. The entry nodes relay the received transactions to their own peers. For example, nodes 310b and 310d forward transactions Tx2 and Tx1, respectively, to one or more of their neighbouring nodes.
In the DMP, each recipient of a transaction independently selects a mode of propagating the received transaction. Node 320 is an example of a node which selects standard diffusion as its diffusion mode. As shown in FIG. 3, node 320 forwards the same transaction, Tx1, to all its entry nodes, namely 320a, 320b, 320c, 320d, and 320e. Reference is now made to FIG. 5, which shows, in flowchart form, an example method 500 for propagating data packets in a network in the RDR stage of DMP. The method 500 is implemented by a node of, for example, a blockchain network, such as network 100. A node may be understood, in this context, to refer to a mining node, full node, validator node, or other type of discrete blockchain node in the blockchain network. The node is a computing device with network connection(s) and computing resources, executing software implementing the blockchain protocol. In operation 502, the client associated with the node generates at least one data packet of a first type. In the context of a blockchain network, the data packet of a first type may comprise a blockchain transaction. That is, the client may generate a blockchain transaction which is to be propagated to the other nodes of the network. In operation 504, the node collects a set of data packets of the first type during a first time period, T. That is, the node accumulates data packets of the first type over a period of time. The set includes the at least one generated data packet and at least one data packet of the first type that is received from one or more peer nodes in the network. In this way, the data packets generated by the node are mixed with data packets of the same type that are received from neighbouring nodes. In a blockchain network, during the time period T, the node accumulates a set of transactions by monitoring the network for incoming transactions to be relayed. The length of the time period T may be predefined.
In some example implementations, the length of time may vary based on parameters such as average connection times, the average number of transactions received per unit of time, or the node's centrality (i.e., the number of incoming connections to the node) within the network. During the time period T, the node may only be permitted to accumulate data packets of the first type, and may therefore be prevented from transmitting any data packets of the first type for the duration of time period T. In operation 506, the node arbitrarily selects a subset of its entry nodes to which different sets of the collected data packets will be forwarded. More specifically, for each data packet in the set of collected data packets, the node arbitrarily selects two or more of its entry nodes (i.e., neighbouring nodes with which the node has outgoing connections) and assigns the data packet to the selected entry nodes. For example, the entry nodes may be selected randomly. The node may, in some implementations, query the network to obtain fresh addresses of its peers. For example, in the Bitcoin network, the node may query one or more database source names (DSN) embedded in Bitcoin Core, BitcoinJ, or another blockchain protocol, and maintained by Bitcoin (or other blockchain) community members. As a response, the node will get one or more DSN records showing the IP addresses of available full nodes which may accept incoming connections. A decentralized version of peer discovery may be implemented by having peers send “ADDR” messages containing their IP addresses and port numbers to a new node that joins the network. In some implementations, as part of operation 506, one or more of the nodes in a network may maintain a table or other data structure tracking its assignment of each collected data packet to an entry node that the data packet should be relayed to. FIG. 4 shows an example of transactions relay for source node 410 in the RDR stage of the DMP in a blockchain network.
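The accumulation window of operation 504 can be sketched as follows. The window length and the non-blocking polling interface `receive_fn` are assumptions made for this sketch; a real node would monitor its peer connections.

```python
import time


def collect_transactions(receive_fn, generated, window_s=0.05):
    """Accumulate own and incoming transactions for a fixed period T
    before any of them may be relayed (operation 504).

    receive_fn is a hypothetical non-blocking poll that returns the next
    incoming transaction, or None when nothing is pending."""
    collected = list(generated)           # the node's own transactions
    deadline = time.monotonic() + window_s
    while time.monotonic() < deadline:
        tx = receive_fn()
        if tx is not None:
            collected.append(tx)
    return collected
```

Mixing self-generated transactions with those received during the window is what later makes the relayed set ambiguous as to its origin.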
Table 1 is an example assignment of the collected transactions, Tx1-Tx5, to the entry nodes of source node 410. The entry nodes are indicated as nodes A, B, C, D, E, F, G, and H. As shown in FIG. 4 and Table 1, the source node 410 relays each transaction to at least two entry nodes, and multiple transactions can be relayed via the same node. For example, transactions Tx3, Tx4, and Tx5 are all simultaneously relayed via entry node E. More generally, in the RDR process, multiple data packets can be simultaneously relayed to the same peer node by a forwarding node. Not all entry nodes receive transactions from source node 410 in a given instance of the DMP. In the example of Table 1, entry nodes C and G do not receive any transactions from source node 410.

TABLE 1
Transaction    Relay 1    Relay 2    Relay 3
Tx1            Node A     Node D     Node H
Tx2            Node E     Node B     Node F
Tx3            Node E     Node A     Node H
Tx4            Node B     Node E
Tx5            Node E     Node F

Referring again to FIG. 5, for each collected data packet, in operation 508, the node transmits the data packet to each of the (arbitrarily or randomly) selected entry nodes. Each selected entry node is configured to relay the data packet to one or more second nodes (e.g., peers of the entry node) in the network using a mode of data propagation that is randomly selected for that entry node. That is, each selected entry node forwards the received data packet to one or more of its own peers using a propagation mode that is independently chosen for that entry node. In the example transactions relay of FIG. 4, each of transactions Tx1-Tx5 is forwarded to the entry nodes to which the transaction is assigned. Each node receiving a transaction from source node 410 then randomly selects a mode of propagation/diffusion to use in forwarding the received transaction to one or more of its peer nodes (if any). In particular, an entry node that receives a transaction selects, on a random basis, between relaying the transaction according to the standard diffusion process or the RDR process.
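By way of a non-limiting sketch, the random per-transaction assignment of operations 504-508 may be illustrated as follows. The function name and the two-to-three relay bound are illustrative assumptions, not requirements of the protocol; the protocol only requires that each data packet be assigned to at least two arbitrarily selected entry nodes.

```python
import random

def allocate_transactions(transactions, entry_nodes, min_relays=2, max_relays=3):
    """Randomly assign each collected transaction to at least two entry nodes.

    Mirrors the Table 1 allocation: each transaction is relayed to a randomly
    chosen subset of entry nodes, the same entry node may serve several
    transactions at once, and some entry nodes may receive nothing.
    """
    allocation = {}
    for tx in transactions:
        k = random.randint(min_relays, min(max_relays, len(entry_nodes)))
        allocation[tx] = random.sample(entry_nodes, k)
    return allocation

# Five collected transactions and entry nodes A-H, as in FIG. 4 / Table 1.
alloc = allocate_transactions(
    ["Tx1", "Tx2", "Tx3", "Tx4", "Tx5"],
    ["A", "B", "C", "D", "E", "F", "G", "H"],
)
```

Each run produces a different allocation table; a node may persist the result in the tracking data structure mentioned in operation 506.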
The choice between the two options is random. Thus, in the DMP, the two diffusion processes alternate probabilistically, i.e. there is not a clear separation between the RDR stage and the standard diffusion stage. As a result of this “mixing” of diffusion processes, it becomes more difficult for an attacker to reconstruct a topology of the network based on identifying a separation between the sets of nodes relaying via random data propagation or via standard diffusion. In some implementations, the random selection by an entry node of the diffusion mode may involve receiving, from the source node, a message in addition to the relayed data packet. The entry node may then generate a random value (e.g. random number), append it to the received message, and hash the result, for example, using SHA-256. The entry node can then check the hash value and subsequently obtain the diffusion mode based on predetermined rules regarding the hash value (e.g., if the final character of the hash is a digit, select the RDR as mode of diffusion). Alternatively or additionally, the selection of the diffusion mode can be done using any randomized process (e.g. random number generator), where the probability of selecting one of the modes may be greater than that of selecting the other of the modes, depending on factors such as number of incoming and/or outgoing connections, average number of data packets received per unit of time, etc. In propagating a particular data packet, it may be desirable to balance the level of anonymity protection for the propagating nodes with the overall speed of propagation. If the measures to ensure a certain level of anonymity are too cumbersome (e.g. requires too many network resources, nodes of the network are intentionally underutilized in relaying data packets, etc.), the efficacy of the network in timely spreading data may be impaired. 
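The hash-based selection of a diffusion mode described above may be sketched as follows. The nonce length and function name are illustrative assumptions; the "final character is a digit" rule is the example rule from the text. Note that under this rule 10 of the 16 possible final hexadecimal characters select RDR, so the example rule is itself already weighted toward RDR.

```python
import hashlib
import secrets

def select_diffusion_mode(message: bytes) -> str:
    """Append a random value to the received message, hash the result with
    SHA-256, then derive the diffusion mode from a predetermined rule on the
    hash value (here: final hex character is a digit -> RDR, else standard
    diffusion)."""
    nonce = secrets.token_bytes(8)  # random value generated by the entry node
    digest = hashlib.sha256(message + nonce).hexdigest()
    return "RDR" if digest[-1].isdigit() else "standard"

mode = select_diffusion_mode(b"relayed-data-packet")
```

Because the rule partitions the hash output space, the probabilities of the two modes can be tuned by choosing a different partition (e.g., comparing the final byte against a threshold), which corresponds to the weighted selection discussed below.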
Accordingly, in some implementations, the random selection of the mode of propagation by a relaying node may be weighted. In particular, different probabilities may be assigned to each of the two or more modes of propagation (i.e., RDR, standard diffusion, etc.) so that the probabilities reflect the proportional significance of anonymity and speed of data propagation. For example, in some instances, a higher predefined probability may be associated with the RDR mode for the nodes of a particular network, reflecting a proportionally greater emphasis on preserving anonymity of the propagated data. The method 500 of FIG. 5 is implemented by a node which generates its own data packet of a first type. In particular, a node that participates in the DMP and generates a data packet for propagation to the rest of the network performs the method 500. FIG. 6 shows an example process performed by a relay node, or a node which forwards or relays a data packet that is generated by a different node. That is, a relay node is a node that does not itself generate data to transfer during the relay of a specific data packet, instead serving the function of “relaying” the data packet. In operation 550, the relay node independently selects its own mode of data propagation. A relay node may, for example, select between an RDR mode and a standard diffusion mode. If the standard diffusion mode is selected (which may be determined at operation 552), the relay node forwards the data packet to all of its entry nodes in operation 554. In the example of FIG. 6, the selection of propagation mode is between two possible options; this example is not limiting, and in other examples there may be three or more possible modes of propagation. If the selected mode is RDR (which may be determined at operation 552), the relay node performs steps 556, 558 and 560, which correspond to operations 504, 506 and 508 of FIG. 5.
Reference will now be made to FIG. 7, which shows, in flowchart form, an example process 600 for propagating data packets in a network. The process 600 may be implemented at a blockchain node having a plurality of incoming and outgoing connections to other nodes of a blockchain network. Operations 602, 604, 606 and 610 of process 600 correspond to operations 502, 504, 506 and 508 of method 500, respectively. In operation 608, the node determines whether a triggering condition has been met, prior to transmitting a collected data packet to its assigned entry node in operation 610. In particular, the transmitting of the data packet is performed in response to detecting that a suitable triggering condition has been satisfied. When the triggering condition has not been met, the node continues to collect data packets of the first type without relaying any of said data packets to its entry/peer nodes. A triggering condition may be employed to direct the node to collect a sufficient number of incoming data packets and/or to collect incoming data packets for a sufficient amount of time. For example, sufficiency may be determined based on a defined threshold. By collecting a plurality of incoming data packets prior to, for example, simultaneously propagating them to peer nodes in the network, an attacker that monitors the relay traffic originating from the node may not be able to easily identify the node as the correct source of the relayed data packets. In some implementations, the triggering condition may be the expiry of a predetermined duration since the time of generation of the at least one data packet of the first type by the node in operation 602. That is, the node may be designed to monitor and collect incoming data packets (e.g., transactions) for a predetermined period of time that begins when the node generates a data packet of the same type, before any of said data packets are propagated by the node.
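A triggering condition of this kind may be sketched as follows, assuming a simple duration-or-count rule; the class and parameter names are illustrative, and an implementation could use either criterion alone.

```python
import time

class PacketCollector:
    """Accumulate incoming data packets of the first type until a triggering
    condition is met: a predetermined duration has elapsed since collection
    began, or a threshold number of packets has been collected."""

    def __init__(self, max_wait_s, max_packets):
        self.max_wait_s = max_wait_s
        self.max_packets = max_packets
        self.packets = []
        self.start = None

    def add(self, packet):
        if self.start is None:
            self.start = time.monotonic()  # first packet starts the clock
        self.packets.append(packet)

    def triggered(self):
        if self.start is None:
            return False
        elapsed = time.monotonic() - self.start
        return elapsed >= self.max_wait_s or len(self.packets) >= self.max_packets

collector = PacketCollector(max_wait_s=60.0, max_packets=3)
for tx in ("tx1", "tx2", "tx3"):
    collector.add(tx)
```

Until `triggered()` returns true, the node only accumulates packets and relays nothing, which is the behaviour described for operation 608.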
This condition may be useful in trying to ensure that a data packet that is generated by the node is propagated after having collected more data packets of the same type that can be simultaneously broadcasted, thereby rendering it difficult for an attacker to correctly identify the node as the source of the generated data packet. In some implementations, the triggering condition may be the expiry of a predetermined duration since the time of receipt of a first of the at least one incoming data packet of the first type from the node's peers. That is, the node may be designed to monitor and collect incoming data packets for a predetermined period of time that begins when a first of such incoming data packets is received. This condition may be useful in trying to ensure that more data packets, either data packets generated by the node itself or received from other peers, are collected by the node prior to any broadcast to the rest of the network. In some implementations, the triggering condition may be the number of collected data packets during the first time period reaching a threshold number. In particular, the node may be designed to monitor and collect incoming data packets until the earlier of the expiry of the first time period or a predetermined threshold number of data packets being collected by the node.

Heuristics for Random Differential Relay

As described above, random differential relay represents a departure from the “standard diffusion” protocol for propagating transactions in a network of nodes. In implementing RDR, a propagating node relays different transactions simultaneously to a randomly selected subset of entry nodes. The propagating node may create a data structure, such as the data structure illustrated in Table 1, by randomly assigning to each collected transaction one or more entry nodes that the transaction should be relayed to.
More generally, a network node that relays data packets to its peers may maintain its own internal routing data structures which specify the type of relay to perform for each of a plurality of data packets collected (i.e., received or locally generated) by the node. In the context of the Diffusion Mixer Protocol proposed herein, each node in the blockchain network that implements RDR may build its own routing data structure, or “RDR table”, independently. An RDR table defines a transaction allocation scheme for each node that adopts the RDR protocol. That is, an individual node's RDR table is used to manage what transactions are to be relayed to which peer and when. The RDR table may keep track of all the transactions received or generated in a given amount of time, ΔT_RDR, as well as the source peers of transactions. An RDR table may include additional information, such as: time of arrival of the first instance of a transaction (“ToA timestamp”); times chosen for relaying a transaction (“ToR timestamp”); and/or a counter of the number of instances of the same transaction received by the node. An example RDR table is provided below.

TABLE 2
Transaction ID    Sources    Destinations    Data
tx1               a, b, d    c, e            . . .
tx2               [local]    a, c, e         . . .
tx3               d, e       a, b            . . .

A node's local RDR table may be updated dynamically (i.e., in real-time) as new information (timeouts, transactions received or generated) becomes available. The present disclosure provides various heuristics, or “sub-systems”, which contribute to the building and updating of individual RDR tables. These sub-systems can be considered as sets of rules or guidelines which may be applied to update transaction allocations as specified in RDR tables. The strategies encompassed by these sub-systems may be useful in enhancing transaction source obfuscation and balancing network traffic generated by the relay operations of an individual node.
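An RDR table of the kind shown in Table 2 may be sketched as a keyed record store; the class layout below is an illustrative assumption, showing only the fields named in the text (sources, destinations, ToA timestamp, ToR timestamps, instance counter).

```python
import time
from dataclasses import dataclass, field

@dataclass
class RDREntry:
    sources: set = field(default_factory=set)       # peers the tx came from ("[local]" if generated)
    destinations: set = field(default_factory=set)  # peers the tx is allocated to
    toa: float = 0.0                                # ToA: arrival time of the first instance
    tor: list = field(default_factory=list)         # ToR: times chosen for relaying
    count: int = 0                                  # instances of this tx received

class RDRTable:
    """Per-node routing structure tracking transactions collected in dT_RDR."""

    def __init__(self):
        self.entries = {}

    def record_arrival(self, tx_id, source):
        entry = self.entries.setdefault(tx_id, RDREntry(toa=time.time()))
        entry.sources.add(source)
        entry.count += 1
        return entry

table = RDRTable()
table.record_arrival("tx1", "a")
table.record_arrival("tx1", "b")
```

The sub-systems described below would then read and rewrite the `destinations` and `tor` fields of such entries as allocations are updated.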
The proposed set of sub-systems, namely source mixing, relay mixing, destination mixing, time-of-arrival mixing, and source control, may work in parallel, while a load balancing module can be used to merge the transaction relay information collected and provide an optimized allocation of network resources. Reference is now made to FIG. 8, which shows, in flowchart form, an example method 700 for transmitting data packets that are either generated or received at a node in a network. The method 700 represents a technique of propagating data in a network according to a transaction allocation scheme that complies with the rules of at least one of the proposed sub-systems/heuristics. The method 700 is implemented by a node of, for example, a blockchain network, such as network 100 of FIG. 1. More specifically, the method 700 is performed by a node that participates in the DMP and is configured to generate or receive data packets of a first type (e.g., transactions) for propagation to the rest of the network. In operation 702, the client associated with the node generates at least one data packet of a first type. The data packet may, for example, comprise a blockchain transaction. In operation 704, the node collects a set of data packets of the first type during a first time period, T. That is, the node accumulates data packets of the first type over a period of time. The set includes the at least one generated data packet and at least one data packet of the first type that is received from one or more peer nodes in the network. In this way, the data packets generated by the node are mixed with those data packets of the same type that are received from neighbouring nodes. In operation 706, a mapping of the data packets of the collected set to a plurality of neighbouring nodes connected to the node is determined. The mapping indicates an expected time of relay of each data packet of the set to the neighbouring nodes.
This “mapping” is used to construct the individual local RDR tables for nodes of the network. One or more of the sub-systems/heuristics described in the present disclosure may contribute (in parallel or independently) to construction of the RDR tables. In particular, one or more different sub-mappings may be applied in determining the mapping of the collected data packets to neighbouring nodes. The sub-mappings may be of at least two different types. A first type of sub-mapping allocates any two data packets having a same source (i.e., originating node) for relay to different subsets of the neighbouring nodes. The “source mixing” and “relay mixing” sub-systems described in greater detail below are examples of this first type of sub-mapping. A second type of sub-mapping assigns different expected times of relay to any two data packets that are generated at the node or received by the node from peer nodes in a same time interval. The “time-of-arrival mixing” sub-system is an example of this second type of sub-mapping. In operation 708, once the mapping of the data packets of the collected set to neighbouring nodes is determined, said data packets are transmitted to neighbouring nodes in accordance with the determined mapping. It will be understood that the individual sub-systems may be independently implemented to update the transaction allocations defined in an RDR table. That is, each sub-system can be adopted separately for an RDR table, independently of the other sub-systems. Accordingly, the individual sub-systems may provide different ways of allocating transactions to relay nodes and, consequently, different techniques for propagating transactions.

Source Mixing

The principle underlying the source mixing sub-system is that transactions generated locally at a node should be transmitted to non-overlapping subsets of peers.
By way of illustration, if node x generates two transactions tx_i and tx_{i+1}, the sets of peers selected for relay of those transactions, denoted S(tx_i) and S(tx_{i+1}), respectively, satisfy

S(tx_i) ≠ S(tx_{i+1})

That is, the sets of peers for two subsequent transactions differ by at least one peer. This inequality can help to complicate any malicious search for patterns for the initial relay of transactions generated at a node. This concept can be extended to a source mixing of degree δ_SM as follows:

S(tx_{i+a}) ≠ S(tx_{i+b}), ∀(a, b) ∈ [0, δ_SM − 1], a ≠ b

Reference is now made to FIG. 9, which shows, in flowchart form, an example method 800 for transmitting data packets generated at a node in a network. The method 800 represents a technique of propagating data in a network according to a transaction allocation scheme that complies with the rules of a source mixing sub-system/heuristic. The method 800 is implemented by a node of, for example, a blockchain network, such as network 100 of FIG. 1. More specifically, the method 800 is performed by a node that participates in the DMP and generates data packets of a first type (e.g., transactions) for propagation to the rest of the network. In operation 802, the client associated with the node generates at least one data packet of a first type. The data packet may, for example, comprise a blockchain transaction. The node determines a first mapping of the at least one generated data packet to its neighbouring nodes (i.e., peers). In particular, a plurality of subsets of peers are selected for relaying the data packets that are generated at the node. Each data packet is associated with a specific subset of relay nodes by the first mapping. For each data packet, in operation 804, a predetermined number of first data packets of the first type that were previously generated by the node are identified.
These may be data packets which have already been transmitted to peers by the node, or data packets which were previously generated but have yet to be relayed to the node's peers. In operation 806, a list of relay node sets associated with the first data packets is obtained. The relay node sets comprise those neighbouring nodes (peers) to which the first data packets are respectively relayed (or allocated for relaying). That is, the relay node sets indicate the subsets of peers of the node to which individual ones of the first data packets are allocated. In operation 808, a first set of relay nodes is selected based on identifying a set of neighbouring nodes that is different from the relay node sets in the list obtained in operation 806. For example, the first set of relay nodes may be chosen by arbitrarily selecting a set of two or more neighbouring nodes that is not included in the obtained list of relay node sets. In some implementations, a requirement may be imposed that the selected first set be different from the relay node sets in the obtained list by two or more peers. That is, an upper limit may be set on the number of elements belonging to the intersecting set between the selected first set of relay nodes and any one of the relay node sets in the obtained list. The method 800 may be performed by a node after a single data packet is generated at the node, or after the node collects a plurality of generated data packets. In particular, the node may generate and accumulate data packets of a first type over a period of time (similar to the RDR stage of DMP) and determine a first mapping of the accumulated data packets to relay node sets. In these cases, the data packets may be respectively allocated to arbitrarily selected subsets of relay nodes, ensuring that no two such subsets are equal to each other. The number of neighbouring nodes that are selected for inclusion in the first set of relay nodes may be arbitrarily determined.
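The selection in operations 806-808 may be sketched as follows: a candidate relay set is drawn at random and accepted only if it differs from each of the recent relay sets, which enforces the source mixing inequality. The function name is illustrative, and the rejection loop assumes that fewer recent sets exist than there are m-element subsets of peers, so a distinct set is always available.

```python
import random

def pick_relay_set(peers, recent_sets, m):
    """Select a relay set for a newly generated transaction that differs
    from each of the last delta_SM relay sets (those in `recent_sets`)."""
    while True:
        candidate = frozenset(random.sample(peers, m))
        if all(candidate != frozenset(s) for s in recent_sets):
            return candidate

peers = ["A", "B", "C", "D", "E"]
recent = [{"A", "B", "C"}, {"B", "C", "D"}]  # relay sets of recent transactions
chosen = pick_relay_set(peers, recent, m=3)
```

A stricter variant, as the text notes, would also reject candidates whose intersection with any recent set exceeds an upper limit, rather than only rejecting exact duplicates.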
In at least some implementations, the number of peers selected for the first set is bounded according to the bandwidth requirements (e.g., cumulative amount of incoming and outgoing data within fixed timeframes) of the propagating node. In particular, the number of peers selected for relay of locally generated transactions may be adjusted in order to address network load issues or to improve source obfuscation. For example, the number of peers included in the first set may be defined by

m(tx_i) = m_SM ± rnd(ξ_SM)

where m_SM is a nominal value representing the average number of peers selected for relay in the source mixing sub-system and rnd(ξ_SM) represents a random integer between 0 and ξ_SM − 1. The selection of the first set of relay nodes can then be set in the first mapping in association with the respective data packet. In other words, the first mapping may indicate that the data packet is associated with (i.e., allocated to) the first set of relay nodes. In operation 810, the data packet is transmitted according to the determined first mapping.

Relay Mixing

The relay mixing sub-system is premised on the concept that transactions received by a node should be relayed to non-overlapping subsets of the node's peers. Using the parameter λ to represent the number of elements belonging to the intersecting set between the relaying peers selected for two different transactions received by the same node, the idea behind relay mixing can be captured by

|S(tx_{j+a}) ∩ S(tx_{j+b})| ≤ λ, ∀(a, b) ∈ [0, δ_RM − 1], a ≠ b   (1)

where δ_RM is the degree of relay mixing. The inequality (1) defines a transaction allocation problem of finding allocations of transactions to relay nodes that satisfy the inequality. The relay mixing strategy can thus be controlled by varying the parameter λ in (1). Once λ is set, an iterative search for a suboptimal solution to the transaction allocation problem is performed.
The relay mixing sub-system may require that the inequality (1) be satisfied for each peer p_i from which the node receives one or more transactions. For example, the last δ_RM transactions received (tx_j, tx_{j+1}, . . . , tx_{j+δ_RM−1}) from peer p_i may be used to implement the relay mixing by requiring inequality (1) to be satisfied for those transactions. Accordingly, in some implementations, an individual parameter λ_i may be defined for each peer p_i, respectively. In this way, source obfuscation may be implemented by creating an independent data structure for transaction relay for each peer p_1, p_2, . . . , p_m from which the node receives transactions, identifying allocations of the received transactions to relay nodes. Alternatively, in other implementations, the parameter λ may be a unique system parameter; a time-varying parameter λ_t updated using a specific time window and information stored in the RDR table; or a time-varying parameter λ_{i,t} for each peer, updated using a specific time window and information stored in the RDR table. The number of combinations of transaction allocations for a generic peer is

C = C(m, x)^δ_RM

where m is the number of peers of the node, δ_RM is the degree of relay mixing, x is an average number of peers selected for relay, and C(m, x) denotes the binomial coefficient (the number of ways of choosing x relay peers from m).
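The size of this search space is easy to compute directly, which helps when deciding whether exhaustive search is feasible or an iterative search is required. The function name below is illustrative.

```python
from math import comb

def allocation_space(m, x, delta_rm):
    """Number of candidate transaction allocations for a generic peer:
    C(m, x) choices of relay set per transaction, over delta_RM
    transactions -> C(m, x) ** delta_RM."""
    return comb(m, x) ** delta_rm

# e.g. 8 peers, 2 relays per transaction, relay mixing degree 3
n = allocation_space(8, 2, 3)
```

Even for these small parameters the space holds 28^3 = 21,952 allocations, which motivates the bounded iterative search described next.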
The iterative search for a suboptimal solution may proceed in several possible ways:
- Set a maximum number of iterations and select the transaction allocation with the smallest number of intersecting peers
- Set a maximum number of iterations but interrupt the process earlier if a given threshold of intersecting peers is reached
- Set a maximum number of iterations and increase the value of λ if the requirements are not met, then restart the process
- Set a maximum number of iterations and modify the value of x if the requirements are not met, then restart the process
- Set a maximum number of iterations and reduce the value of m if the requirements are not met, then restart the process

Another set of approaches can be considered if the maximum number of iterations is substituted with a fixed time window ΔT_RM. The number of neighbouring nodes that are selected for inclusion in the set of relay nodes may be arbitrarily determined. In at least some implementations, the number of peers selected for the set is bounded according to the bandwidth requirements (e.g., cumulative amount of incoming and outgoing data within fixed timeframes) of the propagating node. In particular, the number of peers selected for relay of locally generated transactions may be adjusted in order to address network load issues or to improve source obfuscation. For example, the number of peers included in the first set may be defined by

m(tx_i) = m_RM ± rnd(ξ_RM)

where m_RM is a nominal value representing the average number of peers selected for relay in the relay mixing sub-system and rnd(ξ_RM) represents a random integer between 0 and ξ_RM − 1. In some embodiments, ξ_SM and ξ_RM may have the same value. Reference is now made to FIG. 10, which shows, in flowchart form, an example method 900 for relaying data packets received at a node in a network.
The method 900 represents a technique of propagating data in a network according to a transaction allocation scheme that complies with the rules of a relay mixing sub-system/heuristic. The method 900 is implemented by a node of, for example, a blockchain network, such as network 100 of FIG. 1. More specifically, the method 900 is performed by a node that participates in the DMP and receives data packets of a first type (e.g., transactions) for propagation to the rest of the network. In operation 902, the client associated with the node receives at least one data packet of a first type. The data packet may, for example, comprise a blockchain transaction. The node determines a second mapping of the at least one received data packet to its neighbouring nodes (i.e., peers). In particular, a plurality of subsets of peers are selected for relaying the data packets that are generated at the node. Each data packet is associated with a specific subset of relay nodes by the second mapping. For each data packet, in operation 904, a predetermined number of second data packets of the first type that were most recently received by the node are identified. These may be data packets which have already been transmitted to peers by the node, or data packets which were previously received but have yet to be relayed to the node's peers. In operation 906, a first allocation of the second data packets to a fixed set of neighbouring nodes is determined. In particular, the first allocation is selected from one or more allocations of the second data packets to neighbouring nodes that satisfy a predetermined condition. This operation corresponds to the iterative search for a suboptimal solution to inequality (1) described above. That is, of the allocations of data packets to relay nodes that satisfy (1), a unique allocation (e.g., an allocation with fewest intersecting peers) is determined.
As captured by (1), an allocation of second data packets to a fixed set of neighbouring nodes satisfies a predetermined condition if, for any two of the second data packets, the number of neighbouring nodes to which both said second data packets are allocated (for relaying) is less than or equal to a predefined threshold value. The unique allocation of the second data packets to neighbouring nodes identified in operation 906 can then be set in the second mapping. In other words, the second mapping may indicate the relay nodes to which the second data packets (i.e., data packets received by the node from its peers) are respectively allocated. In operation 908, the at least one received data packet is relayed according to the determined second mapping. The method 900 may be performed by a node after a single data packet is received at the node, or after the node collects a plurality of received data packets. In particular, the node may receive and accumulate data packets of a first type over a period of time (similar to the RDR stage of DMP) and determine a mapping of the accumulated data packets to relay node sets. In these cases, the data packets may be respectively allocated to arbitrarily selected subsets of relay nodes, ensuring that no two such subsets are equal to each other.

Destination Mixing

The destination mixing heuristic captures the idea that an outbound connection of a node should carry transactions relayed by different peers. This heuristic may be considered as a special case of the relay mixing sub-system, since the latter involves the creation of non-overlapping subsets of peers for relay from the same source peers. In method 900, destination mixing may be implemented by ensuring that, at operation 906, for any two of the first nodes (i.e., nodes from which the node receives data packets), the set of all second data packets received from said two first nodes is allocated to at least two different neighbouring nodes in the first allocation.
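One possible realization of the iterative search in operation 906 is a bounded randomized search that keeps the allocation with the fewest intersecting peers and interrupts early once the threshold λ is met, matching the first two search strategies listed above. The function name and parameter defaults are illustrative assumptions.

```python
import random

def relay_mixing_allocation(tx_ids, peers, x=2, lam=1, max_iters=1000):
    """Randomized iterative search for an allocation of transactions to relay
    sets in which any two transactions share at most `lam` relaying peers
    (inequality (1)). Returns the best allocation found and its worst-case
    pairwise overlap."""
    def max_overlap(alloc):
        sets = list(alloc.values())
        return max((len(a & b) for i, a in enumerate(sets) for b in sets[i + 1:]),
                   default=0)

    best, best_score = None, float("inf")
    for _ in range(max_iters):
        candidate = {tx: set(random.sample(peers, x)) for tx in tx_ids}
        score = max_overlap(candidate)
        if score < best_score:
            best, best_score = candidate, score
        if best_score <= lam:
            break  # early interrupt: intersecting-peer threshold reached
    return best, best_score

best, score = relay_mixing_allocation(["tx1", "tx2", "tx3"],
                                      ["A", "B", "C", "D", "E", "F"])
```

If the loop exhausts its iterations without reaching the threshold, a caller could relax λ, change x, or reduce m and restart, as in the remaining strategies listed above.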
For example, FIG. 11 illustrates an example of destination mixing for a node i. The destination mixing sub-system ensures that node a does not receive, in a given time window ΔT_DM, two transactions relayed by the same node c. Thus, only one of the two transactions received at node i from node c is relayed to node a. In some implementations, the destination mixing may be enabled on a different subset of peers for each time window ΔT_DM. For example, the subsets may be allocated in a similar way to the one described for source mixing, with parameters (m_DM, δ_DM, ξ_DM). This strategy may contribute to de-correlation of source and destination for a given transaction.

Time-of-Arrival Mixing

The time-of-arrival mixing heuristic implements a delayed relay of data packets, in order to help de-correlate source and destination information about a data packet relay. For example, data packets (e.g., transactions) that are collected (or generated) within a time window ΔT_i (e.g., in the RDR stage of DMP) may be scheduled for relay at the end of ΔT_i (RDR_i in FIG. 12). The time-of-arrival mixing sub-system delays the relay past RDR_i. In some implementations, the relay of data packets may be delayed by a multiple qΔT_i, e.g. to RDR_i, RDR_{i+1}, RDR_{i+2}, etc. Thus, in accordance with the time-of-arrival heuristic, relaying a received (or generated) data packet by a node includes determining a next scheduled time for relay of received data packets to neighbouring nodes and relaying the data packet a predetermined amount of time after the next scheduled time for relay. All transactions collected within ΔT_i may be relayed at ΔT_i + qΔT, or each transaction j collected within ΔT_i may be relayed at a given ΔT_i + q_j ΔT. The random variable q may, in some examples, have a negative exponential probability density function,

pdf_q(x) = c × e^−(x+g)

where c and g are a multiplicative and an additive constant, respectively.
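Sampling such a delay may be sketched as follows. The use of `random.expovariate` for the negative-exponential shape, the `rate` parameter, and the function name are illustrative assumptions; the additive constant g shifts the minimum delay.

```python
import random

def relay_time(next_rdr, dt, g=0.0, rate=1.0):
    """Schedule a packet's relay after the next RDR boundary: sample the
    delay multiplier q from a negative-exponential distribution (pdf
    proportional to e^{-(x+g)}), then relay at next_rdr + q * dt."""
    q = g + random.expovariate(rate)
    return next_rdr + q * dt

# A packet collected in the current window, next boundary at t = 10.0, dt = 1.0
t_relay = relay_time(next_rdr=10.0, dt=1.0)
```

Because q is continuous, two packets collected in the same window receive different relay times with probability one, which is the de-correlation the heuristic aims for.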
Source Control

A malicious peer may attempt to push the same data packet (or group of data packets) multiple times to a given node i to try to find a pattern in the local relay strategy of i. For example, a malicious peer node may create two connections to node i and monitor how incoming and outgoing traffic for i are correlated. The source control sub-system is implemented by setting a particular threshold for the number of data packets that can be received from each peer. If a peer exceeds the threshold for a given data packet, its connection will be permanently or temporarily closed. The number of instances in which a node receives a given data packet, such as a blockchain transaction, may be stored in the RDR table.

Load Balancing

Load balancing may be used to periodically perform a shuffle of data packets already allocated for relay to peers by the other sub-systems. The purpose of the load balancing module is to average the relay distribution among the peers, to avoid traffic overload on some peer connections or single points of failure. Two different approaches to load balancing may be implemented:
- Each data packet j has the same weight w_j regardless of its size (i.e., number of inputs, number of outputs, unlocking and locking script size)
- Each data packet j has its own weight w_j, proportional to its size in bytes

For example, in method 800, a second allocation of the second data packets to the fixed set of neighbouring nodes may be determined, the second allocation being a re-arrangement of the first allocation to account for balancing traffic at output interfaces of the node. A cumulative value c_i can be computed for each peer i over the number of data packets n_i scheduled for relay:

c_i = Σ_{k=1..n_i} w_k^(i)

Subsequently, an iterative method is performed to shuffle the data packets to relay and obtain an average value c* for each peer:

c* = (Σ_{i=1..m} c_i) / m

Various different heuristics addressing this shuffle of data packets may be available.
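The two load statistics may be computed as follows; the function name and the dictionary-based schedule layout are illustrative assumptions, and either weighting approach is expressed simply by the values placed in `weights`.

```python
def load_balance_stats(schedule, weights):
    """Compute the per-peer cumulative load c_i (sum of the weights w_j of
    the packets scheduled for relay to peer i) and the target average c*
    that the shuffling step tries to approach for every peer."""
    c = {peer: sum(weights[tx] for tx in txs) for peer, txs in schedule.items()}
    c_star = sum(c.values()) / len(c)
    return c, c_star

schedule = {"A": ["tx1", "tx2"], "B": ["tx3"]}
weights = {"tx1": 1, "tx2": 1, "tx3": 2}  # e.g. weight proportional to size
c, c_star = load_balance_stats(schedule, weights)
```

A shuffling heuristic would then move packets from peers with c_i above c* to peers below it, subject to the constraints imposed by the other sub-systems.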
For example, different priorities may be assigned to different sub-systems, in order to anticipate the relay of a subset of data packets or enhance the load balancing for the outgoing traffic. Moreover, the execution of different sub-systems can introduce duplicates or inconsistent allocations of data packets, which need to be resolved before the activation of the relay. Reference will now be made to FIG. 13, which shows, in block diagram form, a simplified example of a participating node 1000. The node 1000 includes a processor 1002, which may include one or more microprocessors, application specific integrated chips (ASICs), microcontrollers, or similar computer processing devices. The node 1000 further includes memory 1004, which may include persistent and non-persistent memory, to store values, variables, and in some instances processor-executable program instructions, and a network interface 1006 to provide network connectivity over wired or wireless networks. The node 1000 includes a processor-executable blockchain application 1008 containing processor-executable instructions that, when executed, cause the processor 1002 to carry out one or more of the functions or operations described herein. It will be understood that the devices and processes described herein, and any module, routine, process, thread, application, or other software component implementing the described method/process for configuring the blockchain node, may be realized using standard computer programming techniques and languages. The present application is not limited to particular processors, computer languages, computer programming conventions, data structures, or other such implementation details. It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be capable of designing many alternative embodiments without departing from the scope of the invention as defined by the appended claims.
In the claims, any reference signs placed in parentheses shall not be construed as limiting the claims. The words "comprising" and "comprises", and the like, do not exclude the presence of elements or steps other than those listed in any claim or the specification as a whole. In the present specification, "comprises" means "includes or consists of" and "comprising" means "including or consisting of". The singular reference of an element does not exclude the plural reference of such elements and vice-versa. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In a device claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.
11863423

DETAILED DESCRIPTION OF THE INVENTION

To overcome the problems faced by the conventional network routing technologies, the present invention provides a decentralized system in which distributed nodes self-organize into a peer-to-peer computer network. Data transfer latencies and stabilities between nodes are continually measured and evaluated. When a data transport need arises between two nodes in the network, better-performing paths between nodes are dynamically determined in the peer-to-peer computer network based on the up-to-date measured latencies and network stability. In some embodiments, referring to FIG. 1, a peer-to-peer computer network 100 includes a plurality of nodes A, B, C, V1, R, P, V2, Z, etc. Some of the nodes (e.g., A, B, C, R, P, Z) can be physical computer devices or systems which are connected on the Internet. Some of the nodes (e.g., V1, V2 . . . ) can be virtual nodes, that is, virtual machines or virtual agents defined in a software-defined network. The peer nodes in the peer-to-peer computer network 100 can communicate with each other in encrypted messages using public/private key pairs. The public key of a node can be obtained from the node ID of the node, which is available to all peer nodes in the peer-to-peer computer network 100. All the nodes in the peer-to-peer computer network 100 are pre-installed with computer codes which contain protocols that govern the communications among the nodes; the set-up, maintenance, and governance within the peer-to-peer computer network 100; and the measurements, data path selection, and data routing within the peer-to-peer computer network 100. FIG. 2 shows detailed components of two exemplified nodes, node A 210 and node V1 250, in the peer-to-peer computer network 100. Node A 210 includes a communication module 220, a processor 225, and computer memory 230.
The computer memory 230 stores computer codes that include instructions that define a distributed autonomous routing protocol (DARP), which can be executed by the processor 225 and the communication module 220. The components in the DARP are the same as those stored in a virtual node such as node V1 250, and their details are described below in conjunction with node V1 250. The node V1 250 is a self-contained virtual system which resides in a host system or host device but is isolated from the host by a firewall 255. A virtual node can run any executable or script that is supported by the operating system environment of the host system or host device. The node V1 250 includes a remote access module 260 that is configured to communicate with other nodes in the peer-to-peer computer network 100. The pre-installed DARP defines several applications or modules: network self-organization protocols 270, a peer-node hash table 275, data path discovery protocols 280, and a smart contract 290. Analogously, these protocols and a peer-node hash table are stored in the computer memory 230 in the node A 210, which can be accessed and executed by the processor 225. The peer-node hash table 275 can store IP addresses, port numbers, and protocols (such as TCP, UDP, DNS, etc.), which are the information used to communicate with the nodes identified by the node IDs. The nodes may support multiple network protocols that can be used to exchange messages based on network parameters. Nodes can choose which protocol is best suited for a particular situation and switch when needed. Each node must have a public/private key pair in order to be able to join the network. A node ID is derived from the public key. The public key of a node can also be obtained from its node ID, which allows other peer nodes to verify the authenticity of messages signed by this node. Thus, a node ID is not only an identifier for the node but can also be used to obtain the public key for decrypting messages sent by this node.
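A minimal sketch of a node-ID scheme with the property described above — the public key is recoverable from the node ID alone — is a reversible encoding of the raw key bytes. The Base32 encoding below is an assumption for illustration; the patent does not specify the derivation:

```python
import base64

def node_id_from_pubkey(pubkey: bytes) -> str:
    # Reversibly encode the raw public key so that peers can recover
    # the key from the node ID alone and verify messages signed by it.
    return base64.b32encode(pubkey).decode("ascii").rstrip("=")

def pubkey_from_node_id(node_id: str) -> bytes:
    # Restore the Base32 padding stripped from the ID, then decode.
    return base64.b32decode(node_id + "=" * (-len(node_id) % 8))
```

A hash-derived ID (as in plain Kademlia) would not have this property, since a hash cannot be inverted to recover the key; hence the reversible encoding here.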
Moreover, secure messages sent from other peer nodes to this node can be encrypted with the public key of this node, and can only be decrypted and read using the private key of this node. The peer-node hash table 275 at each node contains information for a portion of the peer nodes (i.e., a portion of the global node ID hash table) in the whole peer-to-peer computer network. Importantly, other peer nodes can also query a peer node even if it is not stored in their own peer-node hash tables. Given that each node is connected to the peer-to-peer computer network 100 and its node ID is stored in the peer-node hash tables at some peer nodes, any other node within the peer-to-peer computer network 100 may find it one way or another. Thus, with the sharing of information stored in peer-node hash tables, nodes in the peer-to-peer computer network 100 are not required to be directly connected for them to find each other. The node IDs and queries of the node IDs can be defined by the Kademlia protocol. The network self-organization protocols 270 store instructions for tasks for autonomously setting up and maintaining the peer-to-peer computer network 100. Since there is no centralized command center, the peer-to-peer computer network 100 is formed and maintained solely by the distributed nodes therein, which makes the disclosed network more resilient against attacks and network failures. The disclosed peer-to-peer computer network 100 adopts a node-centric approach in organizing the relationships between a node and other nodes. Referring to FIG. 1, node A is connected to node B, node C, node V1, and node R via connections 11, 12, 13, and 15, respectively. These nodes that node A is connected to are stored as neighbor nodes at node A. Node A sends pulse messages to node B, node C, node V1, and node R, and some of the nodes reply and send return pulses back to node A.
Using the time stamps of the pulse messages sent out and the reception time stamps of the return messages, node A can calculate round-trip times (RTTs) to the respective nodes. In some embodiments, the pulse messages can be based on the User Datagram Protocol (UDP), TCP, or DNS protocols. Node A organizes its neighbor nodes according to the measured values of the respective RTTs: for example, neighbor nodes having RTTs within [0, 10 ms] are placed in a first orbital bin; neighbor nodes having RTTs within (10 ms, 20 ms] are placed in a second orbital bin, and so on. Graphically, the nodes can be visualized as located at different orbits around node A: node B and node C are on orbit 10 (~10 ms RTT) relative to node A, while node V1 and node R are located on orbit 20 (~20 ms RTT) around node A, and so on. In addition to data-transfer latencies, each node also measures jitter in its communication with other nodes. Details about latency measurements based on sending and reception time stamps, and details about jitter in data transfer latencies between nodes, are discussed in commonly assigned pending U.S. patent application Ser. No. 17/237,026, titled “Autonomously routing data using relay nodes pre-selected from a group of distributed computer nodes based on measured one-way latencies”, filed Apr. 21, 2021, the content of which is incorporated herein by reference. Since the peer-to-peer computer network 100 is a distributed system without a center, each of node B, node C, node V1, and node R measures RTTs to their respective neighbor nodes and organizes the respective neighbor nodes in a similar fashion as node A does, as described above. For example, node R is connected to neighbor node P via connection 32 and to neighbor node V2 via connection 31. Node P is located on orbit 30 relative to node R and node V2 is located on orbit 40 relative to node R.
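The orbital-bin bookkeeping can be sketched as follows (the 10 ms bin width matches the example above; the node names and dictionary layout are illustrative assumptions):

```python
import math

def sort_into_orbits(rtts_ms, bin_width_ms=10.0):
    """Group neighbor nodes into orbital bins by measured RTT.

    rtts_ms maps node -> RTT in milliseconds. Bin 0 covers [0, 10] ms,
    bin 1 covers (10, 20] ms, and so on, matching the half-open
    intervals used in the text.
    """
    orbits = {}
    for node, rtt in rtts_ms.items():
        idx = max(0, math.ceil(rtt / bin_width_ms) - 1)
        orbits.setdefault(idx, []).append(node)
    return orbits
```

With the example RTTs above, nodes B and C (≤10 ms) land in the first bin and nodes V1 and R (10–20 ms) in the second.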
In a cascading fashion, all the updated nodes (current members) in the peer-to-peer computer network 100 are connected to each other: a first node is connected to its neighbors, and each of the neighbors is connected to their respective neighbors. Under the instructions of DARP, the RTTs between nodes are continually measured, the orbital bins around each node are regularly updated, and the nodes in the peer-to-peer computer network 100 are updated. A distinct advantage of the presently disclosed system and method is that the latency measurements in the peer-to-peer computer network 100 do not require clock synchronization between peer nodes. Local clocks at different nodes can generally have skews or clock rate differences. The RTT measurement involves the subtraction of the reception time of a pulse message received by a neighbor node (or a candidate node) from the sending time (measured at the same node) of the return message back to the origination node. Thus, a skew in the clock at the neighbor node (or the candidate node) is cancelled out in the RTT measurement. In other words, offsets between the clocks of a node and its neighbor nodes do not affect RTT measurements between peer nodes in the peer-to-peer computer network 100. Details about the independence of latency measurements from clock offset or skew in the disclosed decentralized network are discussed in commonly assigned pending U.S. patent application Ser. No. 17/237,026, titled “Autonomously routing data using relay nodes pre-selected from a group of distributed computer nodes based on measured one-way latencies”, filed Apr. 21, 2021, the content of which is incorporated herein by reference. Each node (e.g., A, B, C, V1, R, P, V2, Z) in the peer-to-peer computer network 100 is represented by a unique node identification (ID).
Each node (physical or virtual) in the peer-to-peer computer network 100 stores a hash table of hash values of the node IDs of some neighbor nodes (current members, or the updated nodes) in the peer-to-peer computer network 100, along with those nodes' IP addresses, port numbers, and protocols. The hash values in the peer-node hash table allow the node to quickly query some current members (mostly connected neighbor nodes, as well as candidate nodes that may be selected to be connected to the current node) of the peer-to-peer computer network 100. For example, node V1 250 can query some current members of the peer-to-peer computer network 100 using the hash values stored in the peer-node hash table 275 (FIG. 2). Moreover, node V1 can send requests to its neighbor nodes to query a node using the peer-node hash tables at the neighbor nodes. Since the nodes in the peer-to-peer computer network 100 are interconnected in the above-described cascading fashion, node V1 250 can find any node in the peer-to-peer computer network, send messages or data to another node within the peer-to-peer computer network 100, and manage its relationships with the other nodes in the peer-to-peer computer network 100. Referring to FIGS. 1 and 2, the data path discovery protocols 280 guide the operational tasks of identifying, evaluating, and selecting data routing paths and sending data from a source node to a destination node along a selected relayed data path within the peer-to-peer computer network 100. For example, when a need arises for node A (source node) to send data to node Z (destination node) within the peer-to-peer computer network 100, DARP can discover multiple candidate relayed data paths from node A to node Z by sending path packages, as described below in relation to FIG. 5, wherein each of the relayed data paths includes at least one relay node that is a current member of the peer-to-peer computer network 100.
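The cascading lookup — consulting the local peer-node table first, then the tables of reachable neighbors — can be sketched as a breadth-first search. This is a simplified stand-in for the Kademlia-style query named in the text; the table layout and contact tuples are assumptions:

```python
from collections import deque

def find_contact(start, target_id, tables):
    """Resolve target_id to (IP, port, protocol) by querying the local
    peer-node table, then the tables of reachable neighbors, breadth-first.

    tables: dict node -> that node's peer-node table
            (dict node_id -> (ip, port, protocol)).
    """
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        table = tables.get(node, {})
        if target_id in table:
            return table[target_id]
        # Not found locally: fan out to every peer this node knows about.
        for peer in table:
            if peer not in seen:
                seen.add(peer)
                queue.append(peer)
    return None  # target not reachable through any known peer
```

This illustrates why direct connectivity is not required: a node two hops away is still found through the neighbors' tables.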
Under the guidance of DARP, a distributed node in the peer-to-peer computer network 100 can evaluate the data-transmission latencies and jitters of the multiple candidate relayed data paths from node A to node Z. For example, a relayed data path from node A to node R to node V2 to node Z is identified and selected if the latencies and jitter meet preset criteria. This particular relayed data path includes two relay nodes (node R and node V2) and three routing segments in between: node A to node R; node R to node V2; and node V2 to node Z. The latency of a relayed data path can be characterized by the total one-way latency (OWL), which is the sum of the OWLs of all the routing segments of the relayed data path. The data jitter in the relayed data path can be represented by an average of the data jitter in the routing segments that constitute the relayed data path. In parallel, node A sends one or more path packages directly to node Z along a direct path as defined by conventional network routing protocols, which results in a measurement of the one-way latency for the direct path. If the total OWL of a relayed data path is shorter than the OWL of the direct path and the jitter in the relayed data path is below a threshold, that relayed data path can be selected to route data from node A to node Z, which gives better data-transport performance than the conventional method along the direct path. Another advantage of the presently disclosed methods and systems is that the total measured OWL of a relayed data path in the peer-to-peer network is independent of the clock skews or offsets at the relay nodes along the relayed data path. The total measured OWL is determined by the sending time of the path package at the source node (e.g., node A) and the reception time of the path package at the destination node (e.g., node Z).
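The clock-independence of the total OWL can be made concrete with a small calculation. Each relay reads both of its timestamps from the same local clock, so per-relay clock offsets cancel when the relay dwell times are subtracted from the end-to-end difference. The timestamp layout below is an assumption for illustration:

```python
def total_owl(hops):
    """Total one-way latency of a relayed path.

    hops is a list of (reception_time, sending_time) pairs: the source
    contributes (None, send_time), each relay contributes both times read
    from its own clock, and the destination contributes (recv_time, None).
    Each relay's clock offset appears in both its reception and sending
    times, so it cancels in the (send - recv) dwell term.
    """
    end_to_end = hops[-1][0] - hops[0][1]          # destination recv - source send
    dwell = sum(send - recv for recv, send in hops[1:-1])  # time spent inside relays
    return end_to_end - dwell
```

Note that the two remaining timestamps come from the source and destination clocks, consistent with the text: only the relay clocks drop out.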
Details about one-way latencies along a relayed data path comprising one or more relay nodes, and their independence from the clocks of the relay nodes, are discussed in commonly assigned pending U.S. patent application Ser. No. 17/219,884, titled “Automated formation and optimization of a group of nodes for distributed data routing over computer networks”, filed Apr. 1, 2021, the content of which is incorporated herein by reference. Referring to FIG. 2, the smart contract 290 defines obligations and incentives for each node relative to the peer-to-peer computer network 100 and relative to each other. For example, after a successful data transfer via a relayed data path, the relay nodes can be paid in tokens, typically by the source node that initiated the data transfer. The successful completion of data transfers and token transactions can be validated and recorded by peer nodes on a blockchain. In addition, those peer nodes that function as relay nodes can be validated and rewarded with tokens for continuing to be up and available to route data for their peers. These and other conditions are defined in the smart contract, which is pre-agreed when nodes install the DARP codes. Details about the governance and utility of a decentralized data routing system, including obligations and incentives of the peer nodes, are disclosed in commonly assigned pending U.S. patent application Ser. No. 17/237,026, titled “Autonomously routing data using relay nodes pre-selected from a group of distributed computer nodes based on measured one-way latencies”, filed Apr. 21, 2021, and commonly assigned pending U.S. patent application Ser. No. 17/463,883, titled “Utility and governance for secure, reliable, sustainable, and distributed data routing over the Internet”, filed Sep. 1, 2021. The content of these patent applications is incorporated herein by reference.
Referring to FIG. 3, the method for autonomously routing data in a peer-to-peer computer network (e.g., 100) can include two processes, each comprising multiple steps: self-organizing a peer-to-peer computer network comprising a plurality of nodes each associated with a unique node ID (step 310), and automatically routing data from a first node to a second node via one or more relay nodes in the peer-to-peer computer network (step 320). Step 310 is related to setting up and maintaining a functional peer-to-peer computer network capable of routing data within the network. Each node in the peer-to-peer computer network is represented by a unique ID. Hash values of these node IDs are stored in a peer-node hash table (e.g., 275 in FIG. 2). Step 320 involves the process of identifying, evaluating, and selecting relayed data paths for routing data between peer nodes in the peer-to-peer computer network. As described below in relation to FIGS. 4 and 5, the relay node is an updated node in the peer-to-peer computer network. The process of self-organizing a peer-to-peer computer network comprising a plurality of nodes each associated with a unique node ID (step 310) can include one or more of the following steps. Referring to FIG. 4, the first node in a peer-to-peer computer network stores information about its neighbor nodes in the peer-to-peer computer network (step 410). In the example shown in FIG. 1, node A stores information about its neighbor nodes, such as node B, node C, node V1, and node R, that node A is connected to in the peer-to-peer computer network. The information can include node IDs and other properties (such as IP addresses, port numbers, and protocols) of the neighbor nodes, which, as described above, can be stored in a peer-node hash table (e.g., 275 in FIG. 2). Optionally, the first node can also store information about candidate nodes that are currently not neighbor nodes of the first node but can become neighbor nodes of the first node in the future (step 420).
The candidate nodes are nodes that the first node is aware of and has incrementally stored previously. In some embodiments, the candidate nodes can be shared by the neighbor nodes of the first node. For example, in FIG. 1, node A's neighbor nodes, i.e., node B, node C, node V1, and node R, are in communication with node A. Under the DARP protocols, these neighbor nodes of node A can share with node A the nodes they are respectively connected to and aware of. For instance, the candidate nodes stored at node A can include nodes that are connected to node B, node C, node V1, and node R, such as node P and node V2, which are connected to node R. The candidate nodes allow node A to explore a larger pool of nodes and to expand its network of neighbor nodes in each update. At the same time, some of the nodes that node A has been connected to may become unstable, non-responsive, or non-performing (e.g., increased data latencies or increased data jitter); these nodes may be dropped from node A's connections (i.e., node A's list of neighbor nodes, with more details described below). The balance of expansion and trimming of neighbor nodes (i.e., updated connections with the first node) assures a healthy, operational peer-to-peer computer network. In general, nodes are self-managed and self-organized in the peer-to-peer computer network based on the performance of the data connections between the nodes. Thus, the nodes in the peer-to-peer computer network are required by the DARP protocols to continually measure the performance characteristics (e.g., latency, jitter, etc.) of their connections. Based on the most updated performance measurements, the peer-to-peer computer network dynamically refreshes its members: some well-performing nodes are added as neighbor nodes, and some non-responsive or poorly performing nodes are removed from neighbor nodes. The updated neighbor nodes for all nodes in the peer-to-peer computer network form the updated nodes for the peer-to-peer computer network.
To this end, pulse messages are regularly and automatically sent from the first node to the neighbor nodes and the candidate nodes (step 430). Each of the pulse messages is characterized by a sending time stamp at the first node. In response to the pulse messages, the first node receives return pulses from at least some of the neighbor nodes and the candidate nodes (step 440). Each of the return pulses is characterized by a reception time stamp at the first node. Next, round-trip times (RTTs) between the first node and its neighbor nodes or its candidate nodes are calculated based on the pulse messages and the return pulses (step 450). Since both sending and reception times are measured at the first node, the RTT calculations are independent of the clocks at the neighbor nodes and the candidate nodes. A neighbor node or a candidate node receives a pulse message from the first node at a reception time and sends a return message back to the first node at a transmittance time. The reception time and transmittance time cancel each other out in the calculation of the RTT at the first node, which uses the transmittance time of the pulse message at the first node and the reception time of the return message at the first node. However, RTT measurement may be affected by clock rate differences between the first node and the neighbor node or the candidate node. In some embodiments, the RTT calculations between the first node and the neighbor nodes or the candidate nodes in step 450 can compensate for the clock rate differences between different nodes. The first node can send pulse messages to a neighbor node or a candidate node at regular time intervals and receive return messages at regular time intervals.
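Step 450 can be illustrated numerically. The first node reads only its own two timestamps, and the remote node's two timestamps are subtracted from each other, so any constant offset in the remote clock drops out (variable names are illustrative):

```python
def round_trip_time(send_local, recv_local, recv_remote, send_remote):
    """RTT on the first node's clock, with the remote dwell time removed.

    send_local / recv_local:  pulse sent and return pulse received,
                              both read from the first node's clock.
    recv_remote / send_remote: pulse received and return sent, both read
                              from the remote node's clock. Their
                              difference is a dwell time, so a constant
                              remote clock offset cancels.
    """
    return (recv_local - send_local) - (send_remote - recv_remote)
```

In the test below the remote clock runs 1000 s ahead, yet the measured RTT is exactly the 14 ms of network transit (7 ms each way, 6 ms dwell removed).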
The return messages include the transmittance times at the neighbor node or the candidate node. The clock rate of the neighbor node or the candidate node can be calculated using the transmittance times. In the RTT calculations, the time gap between the reception time and the transmittance time at the neighbor node or the candidate node can be adjusted according to the difference between the clock rates at the first node and the neighbor or candidate node. In other words, the RTT measurements and calculations can be independent of the clock skews or clock rate discrepancies at the counterpart testing nodes. In the presently disclosed method, RTTs are used for monitoring connection performance between pairs of neighboring nodes in the peer-to-peer computer network. The neighbor nodes and the candidate nodes are then sorted into a plurality of orbital bins, each comprising nodes characterized by RTTs relative to the first node within a specific interval (step 460). As noted above, each orbital bin is defined by a range of RTTs such as [0 ms, 5 ms], (5 ms, 10 ms], etc. In one respect, nodes in different orbital bins can be considered to be at different distances from the first node in relation to data transport. The spread in "data transport distances" between the orbital bins assures an optimal reach of the first node's connections with its neighbor nodes. The nodes that have not been successfully updated with RTTs are not sorted into the orbital bins. From each of the orbital bins, at least one node is automatically selected based on the RTTs associated with the node, and the selected node is added to the updated neighbor nodes for the first node (step 470). The sum of the updated neighbor nodes of all the nodes in the peer-to-peer computer network forms the updated nodes in the peer-to-peer computer network (step 470). Within an orbital bin, a node having a shorter RTT can be selected, which gives faster data transport within the RTT range of that orbital bin.
Moreover, the node selection within each orbital bin can also take into account jitters, bandwidths, clock rate differences, and other performance parameters measured via the pulse messages and the return pulses at the first node. A node will not be selected if its measured jitters, bandwidths, clock rate differences, or other performance parameters exceed a respective threshold. It should be noted that the neighbor nodes and the candidate nodes that are non-responsive to the pulse messages from the first node do not lead to updated RTT calculations and are not sorted into the orbital bins. These non-responsive nodes are thus discarded if they were previously members of the peer-to-peer computer network. Furthermore, those nodes whose recently measured jitter exceeds a predetermined threshold can also be removed from the list of updated nodes in the peer-to-peer computer network if they had previously been included. In some embodiments, when two nodes in the same orbital bin have similar performance (in latencies and jitter), the node that has been an updated node in the peer-to-peer computer network for the longer duration is selected. This criterion is based on the observation that nodes that have shown a longer period of reliable performance are more likely to provide reliable performance in the future. Steps 410-470 are repeated for other nodes (e.g., B, C, V1, R, P, V2, Z, etc.) in the peer-to-peer computer network. In this way, node connections are regularly evaluated between pairs of neighboring nodes, and the neighbor nodes are regularly updated. These node updating steps are repeated and propagated throughout the peer-to-peer computer network. The process of automatically routing data from a first node to a second node in the peer-to-peer computer network (step 320 in FIG. 3) can include one or more of the following steps. Referring to FIG. 5, an order or a need is first identified to send data from a first node to a second node in a peer-to-peer computer network (step 510).
The IP address of the second node is looked up using the second node's ID in the peer-node hash table (275 in FIG. 2) stored at the first node. One or more path packages are sent from the first node to the second node along a direct data path (step 520) as defined by conventional Internet routing. Each path package records all the timestamps from the first node, all the intermediate hops along the direct path, and the second node. One-way latency (OWL) and jitter are measured on the direct path between the first node and the second node using the one or more path packages received at the second node (step 530). The OWL of the direct path is the reception time at the second node minus the sending time recorded at the first node. The conventional direct data path is used as a benchmark for the improved performance of the relayed data paths. Next, relayed paths between the first node and the second node are searched for and selected. One or more path packages are sent from the first node to the second node via relay nodes (step 540). Each path package records the reception time and the sending time at each relay node along its path, as well as the sending time at the first node. Each of the relayed data paths includes one or multiple relay nodes that are from the updated nodes in the peer-to-peer computer network (step 540). Using FIG. 1 as an example, when node A wants to find relayed paths to node Z, node A sends path packets to its neighbor nodes in the orbital bins (e.g., nodes B, C, R, V1, etc.). These updated neighbor nodes have been recently updated using pulse messages and RTT and jitter measurements as described above. Each of the neighbor nodes receiving a path packet adds a reception timestamp and a sending timestamp to the path package. Then, node A's neighbor node transmits this updated path packet forward to its own neighbor nodes (e.g., from node R to node P and node V2).
The relaying operation is repeated until the destination node is reached, or until certain constraints are no longer met (e.g., the number of hops has exceeded the maximum number of hops allowed along each relayed path). Thus, a path packet that successfully arrives at the destination node Z includes the timestamps of all the intermediate hops for the specific relayed path. An important aspect of the presently disclosed cascaded path packages is their network security. At each hop, a relay node cryptographically signs the path packet with its private key, which is paired with a public key of the relay node. Thus, the destination node (or the second node) can cryptographically verify the integrity and authenticity of all the hops (or routing segments) along the relayed path. Thus, no intermediate node can alter hop timestamps or the list of hops. In some embodiments, the construction of a path packet along the data path (a potential data relay path) can include the following steps: the source node builds a path packet describing the constraints (e.g., the maximum number of hops allowed along the relayed path) and the destination node; the source node cryptographically signs the path packet using the node ID of the source node, the node ID of the destination node, and the node ID of the first hop node (i.e., the first hop), and sends this path packet to the first relay node along with the signature; the first hop node records the OWL, jitter, etc. of this hop; the first hop node cryptographically signs the path packet using the source node's signature, the recorded OWL, jitter, etc., and the node ID of the second hop node, and sends the updated path package to the second hop node; the second hop node repeats the steps of the first hop node; and these steps are repeated until the path package is received by the destination node.
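The hop-by-hop signing can be sketched as a hash chain. Real DARP hops would use public-key signatures over the node IDs and recorded measurements; the SHA-256-with-secret construction below is only an illustrative stand-in showing why each signature depends on all earlier ones, so tampering with any earlier hop record invalidates every later link:

```python
import hashlib

def sign_hop(prev_sig: bytes, hop_record: bytes, secret: bytes) -> bytes:
    # Toy stand-in for a digital signature: bind this hop's record and
    # key material to everything that was signed before it.
    return hashlib.sha256(secret + prev_sig + hop_record).digest()

def chain_signature(records, secrets, seed=b"path-packet"):
    """Fold the per-hop records into one chained signature, in hop order."""
    sig = seed
    for record, secret in zip(records, secrets):
        sig = sign_hop(sig, record, secret)
    return sig
```

Because each link hashes the previous signature, an intermediate node that rewrites an earlier timestamp cannot produce the later links without the later hops' keys.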
The destination node receives a chain of signatures, each of which depends on the previous signatures as well as the recorded measurements along each routing segment, which prevents the content of the path packet from being altered by malicious intermediate nodes. (When a data path is indeed selected for data routing, its hop nodes will function as relay nodes for data routing.) In the above-described method, the first node (the source node) can find the second node (the destination node) even if they are not directly connected or the second node is not listed in the peer-node hash table of the first node. Moreover, the relay nodes may or may not be directly connected to the first node (the source node) or to the second node (the destination node). Additionally, these relay nodes have been recently or currently updated by their respective neighbor nodes, which means that they provide good data transfer performance via their connections. In some embodiments, the search for the destination node is enabled by the Kademlia protocol, which allows a node to find information (node ID, etc.) about a previously unseen node that is connected to the whole peer-to-peer computer network, and to send path packets to that node. For each path package that originated from the first node and is received by the second node, the total OWL for each of the relayed data paths between the first node and the second node is calculated (step 550). Since the sending time and reception time are recorded in the path package for each routing segment, the OWL for each routing segment is simply the reception time at the receiving node minus the sending time at the sending node for that routing segment. The total OWL for the relayed path from the first node to the second node is the sum of all the OWLs of the routing segments along the relayed path.
Since each relay node resends the next path package right after it receives one, the clock skew or clock discrepancy is cancelled out between the reception time and the sending time at the relay node. In other words, the total OWL is independent of the clock discrepancies at the relay nodes along the relayed path. Details about one-way latencies along a relayed path and their independence from the clocks of the relay nodes are discussed in commonly assigned pending U.S. patent application Ser. No. 17/237,026, titled “Autonomously routing data using relay nodes pre-selected from a group of distributed computer nodes based on measured one-way latencies”, filed Apr. 21, 2021, the content of which is incorporated herein by reference. One of the relayed data paths is automatically selected if the total OWL and the average jitter associated with the relayed data path satisfy predetermined criteria in comparison to the direct path (step 560). The selected relayed data path is the best performing among all the relayed paths, having the lowest total OWL and a data transfer jitter below a threshold. The selected relayed data path also has a total OWL shorter than the OWLs of the other identified relayed data paths and the direct data path. The average jitter associated with a relayed data path from the first node to the second node is calculated as the mean of the jitters measured at all routing segments along the relayed data path. Details about jitters in data transfer latencies between nodes are disclosed in commonly assigned pending U.S. patent application Ser. No. 17/237,026, titled “Autonomously routing data using relay nodes pre-selected from a group of distributed computer nodes based on measured one-way latencies”, filed Apr. 21, 2021, the content of which is incorporated herein by reference. Once a relayed data path is selected within the peer-to-peer computer network, the first node can send data to the second node along the selected one of the relayed data paths (step 570).
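The selection rule of step 560 can be sketched as: among the candidate relayed paths, keep those whose average jitter is under the threshold and whose total OWL beats the direct path, then take the lowest-OWL survivor. The jitter threshold value and path encoding below are illustrative assumptions:

```python
def select_relayed_path(direct_owl, candidates, jitter_limit):
    """Pick the best relayed path, or None to fall back to the direct route.

    candidates: list of (path, total_owl, avg_jitter) tuples, where
    total_owl and avg_jitter come from the path-package measurements.
    """
    viable = [(owl, path) for path, owl, jitter in candidates
              if jitter < jitter_limit and owl < direct_owl]
    # Lowest total OWL among the viable relayed paths wins.
    return min(viable)[1] if viable else None
```

Returning None models the fallback in the text: when no relayed path beats the direct path's OWL with acceptable jitter, conventional routing is used.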
It should be noted that the relay nodes can be physical nodes or SDN-defined virtual nodes in the peer-to-peer computer network. After successful relayed data routing, the relay nodes can be subsequently rewarded by the party (typically the first node or the source node) that has requested the data transport. The reward can be in the form of a transfer of tokens. The transactions can be recorded on a blockchain. Details about the rewards, validation of transactions, and related tokenomics are disclosed in commonly assigned pending U.S. patent application Ser. No. 17/237,026, titled “Autonomously routing data using relay nodes pre-selected from a group of distributed computer nodes based on measured one-way latencies”, filed Apr. 21, 2021, and commonly assigned pending U.S. patent application Ser. No. 17/463,883, titled “Utility and governance for secure, reliable, sustainable, and distributed data routing over the Internet”, filed Sep. 1, 2021. The content of these patent applications is incorporated herein by reference. In some embodiments, referring to FIG. 6, the process of autonomously self-organizing nodes and autonomously finding the best data routing paths between nodes in a peer-to-peer computer network can include one or more of the following steps: when a source node needs to send data to a destination node in a peer-to-peer computer network, the destination node is identified to receive a data transfer in the peer-to-peer computer network (step 600). As described above, the nodes in the peer-to-peer computer network are identified by their node IDs. The node ID of a node can be derived from the public key of that node. The public key of a node can also be obtained from its node ID. Other peer nodes can use the public key to authenticate a message cryptographically signed by this node using a private key (that is paired with the public key). 
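The derivation of a node ID from a public key can be illustrated as below. SHA-256 is an assumed digest choice, not necessarily the one DARP uses; and since a plain hash is one-way, recovering the public key from the ID as described above implies the ID encodes or indexes the key rather than only hashing it:

```python
import hashlib

def node_id(public_key: bytes) -> str:
    """Derive a stable, collision-resistant node ID as a digest of the
    node's public key (illustrative: the actual derivation is protocol-
    specific)."""
    return hashlib.sha256(public_key).hexdigest()

pub = b"-----BEGIN PUBLIC KEY----- example key bytes -----END PUBLIC KEY-----"
nid = node_id(pub)
print(len(nid))  # 64 hex characters
```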
The node ID (and the IP addresses, port numbers, and protocols) of a node in the peer-to-peer network is stored in the peer-node hash tables (275, FIG. 2) of some other peer nodes (e.g., neighbor nodes). Since the nodes in the peer-to-peer computer network are interconnected in a cascading fashion (to neighbors, and in turn to neighbors' neighbors), a node can find any current peer node in the peer-to-peer computer network using the Kademlia protocol and can send messages or data packages to any other peer node within the peer-to-peer computer network. Optionally, constraints for the data transfer from the source node to the destination node are defined (step 605). Such constraints can include a maximum latency (defined by the total one-way latency along a routing path), a maximum jitter for the data transfer (i.e., variations in the data transfer latencies), and the maximum number of hops (i.e., the number of relay nodes) allowed in a relayed data path from the source node to the destination node. The constraints can also be based on bandwidths, clock rate differences, etc. As disclosed in detail in relation to FIGS. 1 and 2 and steps 410-460 in FIG. 4, the source node stores a list of neighbor nodes associated with the source node in orbital bins according to round-trip times (RTTs) between the source node and the neighbor nodes (step 610). The list of neighbor nodes stored at the source node can be sorted into orbital bins ranked by RTT values such as [0, 10 ms], (10 ms, 20 ms], etc. It should be noted, as described above in relation to step 470 (FIG. 4), that the neighbor nodes can be sorted into orbital bins based on other parameters such as jitters, bandwidths, and clock rate differences measured by pulse messages and return messages between the source node and the neighbor nodes. Furthermore, as described above in relation to step 450 (FIG. 4), RTT calculations can compensate for clock rate differences between the source node and the neighbor nodes. 
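The orbital-bin sorting of step 610 can be sketched as follows, using the 10 ms bin width from the example intervals above (helper names and the exact binning rule are our assumptions):

```python
import math
from collections import defaultdict

def orbital_bin(rtt_ms, bin_width_ms=10.0):
    """Bin 0 covers [0, 10 ms], bin 1 covers (10 ms, 20 ms], and so on."""
    return 0 if rtt_ms <= bin_width_ms else math.ceil(rtt_ms / bin_width_ms) - 1

def sort_into_bins(neighbor_rtts):
    """Group neighbor nodes into orbital bins keyed by RTT interval."""
    bins = defaultdict(list)
    for node, rtt in neighbor_rtts.items():
        bins[orbital_bin(rtt)].append(node)
    return dict(bins)

bins = sort_into_bins({"B": 4.0, "C": 9.9, "V1": 12.5, "R": 31.0})
print(bins)  # {0: ['B', 'C'], 1: ['V1'], 3: ['R']}
```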
The list of the neighbor nodes can be updated by removing nodes based on predetermined performance criteria (step 615). For example, if recently measured RTTs and/or jitters between the source node and some of the nodes do not satisfy the performance criteria (RTT too long or data-transfer jitter too large), these nodes can be removed from the list of neighbor nodes at the source node. Furthermore, new nodes can also be added to the list of neighbor nodes associated with the source node as previously described (step 470 in FIG. 4). The source node can send one or more path packages to the destination node in a direct data path (step 620) from the source node to the destination node. The direct path is defined by conventional network routing protocols. One-way latency (OWL) and jitter in the direct path are measured using the one or more path packages received by the destination node (step 625). Each path package is associated with a sending time recorded by the source node and a reception time recorded at the destination node. An OWL can be calculated using the reception time and the sending time independent of any clock skew that may exist between the destination node and the source node, as described in step 530 (FIG. 5) and step 675 below. The OWL and jitter measured in the direct path are used as a benchmark for the candidate relayed data paths between the destination node and the source node. To find relayed data paths, path packages are sent from the source node to its neighbor nodes, which include a first hop node (step 630). Each path package can contain the sending time recorded by the source node as well as a signature of the source node. The signature of the source node, as described above, can be verified with the public key (which can be obtained from the node ID) of the source node. 
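The pruning of step 615 can be sketched as a filter over the latest measurements (a minimal sketch; the thresholds and data layout are illustrative assumptions):

```python
def prune_neighbors(neighbors, max_rtt_ms, max_jitter_ms):
    """Drop neighbors whose latest measurements violate the performance
    criteria (RTT too long or data-transfer jitter too large).
    `neighbors` maps node -> (rtt_ms, jitter_ms)."""
    return {node: m for node, m in neighbors.items()
            if m[0] <= max_rtt_ms and m[1] <= max_jitter_ms}

kept = prune_neighbors({"B": (8.0, 0.5), "C": (45.0, 0.4), "V1": (12.0, 9.0)},
                       max_rtt_ms=30.0, max_jitter_ms=2.0)
print(sorted(kept))  # ['B']  -- C exceeds the RTT limit, V1 the jitter limit
```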
As discussed previously in relation to step 540 (FIG. 5), a node in the peer-to-peer network such as the source node may only be connected to a subset of all the nodes in the peer-to-peer network. But using the Kademlia protocol, a node in the peer-to-peer network can find and reach another peer node by querying for the other peer node in the peer-node hash tables at different nodes and by sending cascaded path packages through the peer-to-peer network. In this step, the source node can send path packages simultaneously to all the updated neighbor nodes stored in the peer-node hash table (275, FIG. 2) at the source node. Optionally, for security purposes, the neighbor nodes can verify the path packages received from the source node (step 635). The neighbor nodes such as the first hop node can verify a cryptographic signature in the path package signed by the source node. If the path package is signed using a private key of the source node, the signature can be authenticated using the public key of the source node that is paired with its private key. As discussed above, the ID and the public key of the source node can be queried (e.g., using the peer-node hash tables 275 in FIG. 2) by the neighbor nodes in the peer-to-peer network. For multi-hop path packages (step 665), a neighbor node can also verify the hop number and the signatures of the source node and all the intermediate hop nodes associated with the path package. The first hop node can update the path packet with associated hop information (step 640). The updated hop information can include the reception time at the first hop node, the sending time of the path package to the next hop node or the destination node (step 645 and step 660 below), as well as a signature cryptographically signed by the first hop node. The updated hop information is inserted into the path packet to be sent to the next hop node or the destination node. 
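The hop update of step 640 can be sketched as below. Note a deliberate simplification: DARP uses asymmetric signatures verified with the signer's public key, whereas this runnable sketch substitutes an HMAC as a stand-in for signing; packet fields and helper names are our assumptions:

```python
import hashlib
import hmac
import json

def sign(key: bytes, payload: dict) -> str:
    """Stand-in for a node's public-key signature: an HMAC over the
    serialized payload (a real node would sign with its private key)."""
    blob = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(key, blob, hashlib.sha256).hexdigest()

def add_hop(packet: dict, hop_id: str, recv_t: float, send_t: float, key: bytes):
    """Append this hop's reception/sending times plus a signature over
    everything recorded so far, chaining it to the prior hop entries."""
    hop = {"node": hop_id, "recv": recv_t, "send": send_t}
    hop["sig"] = sign(key, {"hops": packet["hops"], "this": hop})
    packet["hops"].append(hop)
    return packet

pkt = {"src": "A", "sent": 100.0, "hops": []}
add_hop(pkt, "R", 112.0, 112.0, b"R-key")
print(len(pkt["hops"]), pkt["hops"][0]["node"])  # 1 R
```

Because each hop signs over the previously accumulated entries, tampering with an earlier segment invalidates every later signature in the chain.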
Next, one or more path packages can be sent from the first hop node to the destination node in a second direct data path (step 645) from the first hop node to the destination node. This step terminates additional hops and will be used to evaluate a relayed data path comprising only one relay node: the first hop node. As discussed above in relation to FIGS. 1 and 2 and steps 410-460 in FIG. 4, and similar to step 610 relating to the source node, the first hop node can store information about a list of neighbor nodes associated with it in orbital bins according to RTTs between the first hop node and its neighbor nodes (step 650). Similar to step 615, neighbor nodes can be removed from the list based on predetermined performance criteria (step 655), which can include removal of nodes having RTTs or data-transfer jitters over the respective allowed thresholds. Furthermore, new nodes can also be added to the list of neighbor nodes associated with the first hop node as previously described. Moreover, as described above in relation to step 470 (FIG. 4), the neighbor nodes can be sorted into orbital bins based on other parameters such as jitters, bandwidths, and clock rate differences measured by pulse messages and return messages between the first hop node and its neighbor nodes. Furthermore, as described above in relation to step 450 (FIG. 4), RTT calculations can compensate for clock rate differences between the first hop node and its neighbor nodes. Step 660 and step 665 can be skipped if the constraints defined in step 605 specify a maximum of one hop node (that is, only the first hop node, or one relay node, is allowed in a relayed data path). Furthermore, path packages updated with the hop information at the first hop node can be sent from the first hop node to its neighbor nodes, including a second hop node (step 660). These path packages are used to evaluate relayed data paths that include additional relay nodes (e.g., the second hop node, etc.). 
Then, steps 635-660 described above relating to the first hop node can be repeated for the second hop node or additional hop nodes (step 665). Using FIG. 1 as an example, node A can be the source node, node R can be the first hop node, node V2 can be the second hop node, and, without limiting to only two hop nodes, the destination node can be node Z. In the cascading manner described above, steps 630-665 can reach all the peer nodes that are currently on the updated lists of neighbor nodes of one or more nodes in the peer-to-peer network. Under the Kademlia protocol, because each peer node is connected to multiple of its neighbors, all peer nodes are inter-connected; the source node will always have one or more pathways to reach the destination node in the same peer-to-peer network. The destination node receives all the path packages sent from the source node (in the first direct path), from the first hop node (one hop, then in the second direct path), and from other hop nodes (multiple hops) (step 670). The path packages include information recorded at the source node as well as updated information recorded at the intermediate hop nodes. Each of the path packages includes the IDs of the source node and the intermediate hop nodes, the sending times and the reception times from the source node through all the hop nodes, as well as cryptographic signatures by all the nodes along the path. The signatures can be used for verification using the public keys of the associated nodes. These path packages represent possible relayed data routing paths between the source node and the destination node, with the first direct path being the benchmark. The total OWLs and other performance metrics are then calculated for the potential data routing paths associated with the path packages (step 675) received by the destination node. 
As described above in relation to step 550 in FIG. 5, the total OWL for the relayed path from the source node to the destination node is the sum of the OWLs of all the routing segments along the relayed data path (via one or more hop nodes). Since each hop node resends the updated path package right after the last version of the path package is received, the clock skew cancels out between the reception time and the sending time at the relay node. In other words, the total OWL is independent of the clock skews at the hop nodes along a relayed data path that is being evaluated. Details about one-way latencies along a relayed path and their independence of the clocks of the relay/hop nodes are discussed in commonly assigned pending U.S. patent application Ser. No. 17/237,026, titled “Autonomously routing data using relay nodes pre-selected from a group of distributed computer nodes based on measured one-way latencies”, filed Apr. 21, 2021, the content of which is incorporated herein by reference. Other performance metrics calculated at the destination node can include jitter or variations in data-transfer times, bandwidths of data throughput, clock rate differences, and the number of hops in a relayed data path. A relayed data path can be automatically selected for transferring data from the source node to the destination node based on the path packages received by the destination node if the associated total OWL and other performance metrics satisfy predetermined criteria (step 680). The selected relayed path includes one or more relay nodes, which are the hop nodes (such as the first hop node, the second hop node, etc.) used in finding data routing paths from the source node to the destination node. Typically, the data routing path having the lowest OWL and jitter can be selected. The predetermined criteria can require each relayed data path to have an OWL and a jitter below respective thresholds (that is, low latency and low variation). 
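The clock-skew cancellation described above can be checked numerically. In this sketch (all names and values are illustrative), relay R's clock runs 500 ms ahead of the clocks at A and Z, yet the summed OWL still equals the true end-to-end latency:

```python
def segment_owl(send_t, recv_t):
    """One-way latency of a single routing segment."""
    return recv_t - send_t

# True one-way latencies: A->R is 12 ms, R->Z is 9 ms (21 ms total).
skew_r = 500.0                    # relay R's clock offset from true time
a_send = 100.0                    # stamped by A's clock (true time)
r_recv = a_send + 12.0 + skew_r   # stamped by R's fast clock
r_send = r_recv                   # R resends immediately, same clock
z_recv = a_send + 12.0 + 9.0      # stamped by Z's clock (true time)

total = segment_owl(a_send, r_recv) + segment_owl(r_send, z_recv)
print(total)  # 21.0 -- R's 500 ms skew enters with opposite signs and cancels
```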
The predetermined criteria can include a comparison of a potential relayed data path against the (first) direct path from the source node to the destination node: the relayed path should outperform the direct path in at least one of OWL and jitter. The predetermined criteria can also be related to the constraints for the data transfer described in step 605. For example, the constraints can specify a maximum number of hops of 2, so all potential relayed data paths having more than two hop nodes can be discarded from the evaluation. Using the data path packages received, the destination node can maintain a list of potential data routing paths including the currently selected data routing path. The extra data routing paths can be used as alternative routing paths to the first selected path. One or more of the above steps (610-615, 640-645) can be implemented by or under the data path discovery and routing protocols 280 (in FIG. 2). One or more of the above steps (600, 605, 620-635, 650-680) can be implemented by or under the network self-organization protocols 270 (in FIG. 2). Once a relayed data path is selected within the peer-to-peer computer network, the source node can send data to the destination node along the selected one of the relayed data paths, similar to step 570. It should be noted that the source node, the destination node, as well as the relay nodes can be physical nodes or SDN-defined virtual nodes in the peer-to-peer computer network. After successful relayed data routing, the relay nodes can be subsequently rewarded by the party (typically the first node or the source node) that has requested the data transport. The reward can be in the form of a transfer of tokens. These transactions can be recorded on a blockchain. Details about the rewards, validation of transactions, and related tokenomics are disclosed in commonly assigned pending U.S. patent application Ser. No. 
17/237,026, titled “Autonomously routing data using relay nodes pre-selected from a group of distributed computer nodes based on measured one-way latencies”, filed Apr. 21, 2021, and commonly assigned pending U.S. patent application Ser. No. 17/463,883, titled “Utility and governance for secure, reliable, sustainable, and distributed data routing over the Internet”, filed Sep. 1, 2021. The content of these patent applications is incorporated herein by reference. In some embodiments, referring to FIG. 7, a hybrid decentralized data routing method in a peer-to-peer computer network can include one or more of the following steps: when a need arises to route data from a source node to a destination node in a peer-to-peer computer network, multiple paths are identified from the source node to the destination node in the peer-to-peer computer network. Each of the multiple paths can include two or more routing segments that each include a sending node and a receiving node (step 710). In the presently disclosed method, the protocols for selecting paths in a peer-to-peer computer network (such as measurements and evaluations of latencies and other data transfer metrics, and the encryption of the path packages) and for maintaining connections between peer nodes (such as measuring round-trip times between nodes and the selection and organization of neighbor nodes) are pre-installed in the peer nodes within the peer-to-peer computer network. In identifying the multiple paths, the receiving node in one of the routing segments in one of the multiple paths is selected among a plurality of nodes in the peer-to-peer computer network based on round-trip times (RTTs) measured between the sending node and the plurality of nodes (step 720). As described above in relation to FIGS. 4-6, each node in the peer-to-peer computer network, such as the sending node in one of the routing segments, can maintain a list of neighbor nodes. 
The neighbor nodes associated with the sending node in the routing segment are selected among a plurality of nodes based on the RTTs between the sending node and the plurality of nodes. The RTT between the sending node and one of the plurality of nodes is measured using pulse messages sent between the sending node and that node. The RTT is calculated using a sending time stamp of a pulse message sent from the sending node and a reception time stamp of a return pulse message, received by the sending node, in response to the pulse message. Even if some computer clocks at the plurality of nodes in the peer-to-peer computer network have skews relative to each other, the RTT calculations are independent of the skews between the computer clocks at the plurality of nodes in the peer-to-peer computer network. As previously described (270 in FIG. 2, steps 420-470 in FIG. 4, and step 610 in FIG. 6), the neighbor nodes are sorted into a plurality of orbital bins according to the RTTs between the sending node and the neighbor nodes (steps 460-470 in FIG. 4). Each of the orbital bins is associated with a specific interval of RTT values. In identifying one of the multiple paths, the receiving node in one of the routing segments is selected from the neighbor nodes associated with the sending node in the same routing segment. In some embodiments, peer-node hash tables (275 in FIG. 2) are stored in the peer-to-peer computer network. Each of the peer-node hash tables includes hash values of the node IDs of the neighbor nodes associated with a potential sending node (275 in FIG. 2). The step of identifying multiple paths from a source node to a destination node can include querying for the destination node using the peer-node hash tables stored at the source node and other potential sending nodes in the peer-to-peer computer network. In some embodiments, the receiving node or the sending node (which can be a relay node) along a routing path can be a virtual node. 
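The skew-independent RTT calculation can be sketched as follows (names are ours; the subtraction of the remote turnaround uses the two remote timestamps, whose shared clock offset cancels):

```python
def rtt(pulse_sent, return_received, remote_recv, remote_sent):
    """Round-trip time measured entirely on the sender's clock, minus the
    remote node's turnaround time.  The two remote timestamps are taken on
    the same remote clock, so any offset of that clock cancels out."""
    return (return_received - pulse_sent) - (remote_sent - remote_recv)

# Remote clock is 300 ms off true time; turnaround takes 1 ms of remote time.
offset = 300.0
print(rtt(pulse_sent=0.0, return_received=21.0,
          remote_recv=10.0 + offset, remote_sent=11.0 + offset))  # 20.0
```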
Path packages are sent along the multiple paths from the source node to the destination node (step 730). As described previously (280 in FIG. 2, step 540 in FIG. 5, step 620 in FIG. 6), the path packages are for quantitatively measuring and evaluating different routing path options from the source node to the destination node (steps 630-660 in FIG. 6). A path packet can include a sending time stamp recorded at the source node. At each receiving node, the path packet can be updated to include a reception time stamp recorded at the receiving node and an identification of the receiving node. Moreover, the path packet can be updated to include a cryptographic signature at the receiving node. The cryptographic signature can be signed with a private key paired with a public key associated with the receiving node. In some embodiments, the public key of the receiving node can be obtained from the node identification (ID) of the receiving node. Next, total one-way latencies (OWLs) associated with the multiple paths are measured using the path packages from the source node to the destination node (step 740). The total OWL for one of the multiple paths is obtained by summing the OWLs measured by one of the path packages along all routing segments in that path (280 in FIG. 2, step 550 in FIG. 5, step 675 in FIG. 6). Even if some computer clocks at the plurality of nodes have skews relative to each other, the total OWLs measured in the multiple paths are independent of the skews between the computer clocks at the plurality of nodes (i.e., the relay nodes along the multiple paths) in the peer-to-peer computer network because offsets in the reception time and the sending time of the path package at the relay nodes cancel each other out. A relayed data path can then be selected from the multiple paths at least in part based on the total OWLs respectively associated with the multiple paths from the source node to the destination node (step 750). 
As discussed previously (280 in FIG. 2, step 560 in FIG. 5, step 680 in FIG. 6), the selected relayed data path has a total OWL lower than at least one other path in the multiple paths. In most situations, a selected relayed data path has the shortest, or among the shortest, total OWL of all evaluated paths from the source node to the destination node. In some embodiments, multiple relayed paths can be selected from the source node to the destination node, which can serve as alternative data routing paths that provide redundant routing pathways in case one of them fails for some reason. The selection of relayed data path(s) can also include sending one or more path packages from the source node to the destination node in a direct data path from the source node to the destination node (steps 520-530 in FIG. 5). The total OWL of the relayed data path is compared to that of the direct data path. The relayed data path is selected when it provides a lower total OWL than the direct data path (280 in FIG. 2, step 560 in FIG. 5, step 680 in FIG. 6). In some embodiments, jitters associated with the multiple paths are also measured using the path packages from the source node to the destination node. The selection of the relayed data path from the multiple paths can further take into account the jitters associated with the multiple paths from the source node to the destination node (steps 625, 675 in FIG. 6). For example, a path is not selected if it is characterized by high data jitter even if it has a low total OWL. In some embodiments, the relayed data path is selected from the multiple paths further based on the numbers of routing segments respectively associated with the multiple paths from the source node to the destination node. In general, fewer routing segments (i.e., fewer relay nodes) are preferred for a routing path because a path with fewer relay nodes represents a more reliable routing option with fewer failure mechanisms. 
The selection of a relayed data path can be based on an optimization of a shorter total OWL and a smaller number of routing segments (or relay nodes). For example, if two routing paths, path A and path B, have similar total OWLs, but path B has one relay node (i.e., two routing segments) while path A has two relay nodes (i.e., three routing segments), then path B is preferred and can be selected due to its smaller number of relay nodes. Data can then be routed along the selected relayed data path from the source node to the destination node (step 760) in the peer-to-peer computer network. The above disclosed system and method provide a novel hybrid approach: nodes in a peer-to-peer network are qualified and maintained largely based on round-trip pulse measurements between peer nodes, while data routing paths are measured and selected based on one-way latency measurements. In other words, round-trip pulse measurements are used in peer node selection, and one-way latency measurements are used in routing path selection. In some embodiments, a hybrid decentralized data routing method includes self-organizing and maintaining a peer-to-peer computer network (related to step 310 in FIG. 3). Neighbor nodes and candidate nodes of a node in a peer-to-peer computer network are measured, evaluated, selected, and maintained using pulse messages and return pulses (steps 430, 440 in FIG. 4). The measurements and evaluations can be based on round-trip times (RTTs), jitter of the return messages, reliability of the neighbor nodes or the candidate nodes, etc. In some embodiments, the pulse messages sent from a node in the peer-to-peer computer network to the neighbor nodes and the candidate nodes (step 430 in FIG. 4) can be scheduled at a regular time interval, such as 1 second apart. Each of these pulse messages can contain a sending time and the node ID of the neighbor node or the candidate node that receives the pulse message. 
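The path A/path B tie-break described above can be sketched as a two-key optimization (the "similar OWL" tolerance is our illustrative knob, not a value from the disclosure):

```python
def pick_path(paths, owl_tolerance_ms=1.0):
    """Among candidate paths (name -> (total_owl_ms, n_segments)), prefer
    the lowest total OWL; when OWLs are within `owl_tolerance_ms` of the
    best, break the tie by the smaller number of routing segments."""
    best_owl = min(owl for owl, _ in paths.values())
    near_best = {n: v for n, v in paths.items()
                 if v[0] - best_owl <= owl_tolerance_ms}
    return min(near_best, key=lambda n: (near_best[n][1], near_best[n][0]))

# Path A: two relay nodes (3 segments); path B: one relay node (2 segments).
print(pick_path({"A": (20.3, 3), "B": (20.8, 2)}))  # B
```

With similar OWLs, path B wins on its smaller segment count; when one path's OWL is clearly lower, OWL alone decides.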
The return pulse can include the reception time of the pulse message and the node ID of the neighbor node or the candidate node that sent the return pulse. In addition, pulse messages and return pulses can contain other information: 1) estimated RTT, OWL, jitter, packet loss, and clock skew between the pair of nodes that sends and receives a pulse message in the peer-to-peer computer network; and 2) estimated RTTs, OWLs, jitters, packet losses, and clock skews of nodes other than the sending node and the receiving node in the peer-to-peer computer network. Such information about other nodes can be shared with trusted nodes or when a node has just joined the peer-to-peer computer network. Such neighbor node information can be shared randomly or periodically, e.g., every 60 s. In some embodiments, the pulse messages sent from a node in the peer-to-peer computer network to the neighbor nodes and the candidate nodes (related to step 430 in FIG. 4, step 610 in FIG. 6, and step 750 in FIG. 7 described above) can be customized based on a few of the following exemplified parameters:

1) Sparse measurements. The intervals between, or the frequency of, the pulse messages can be dynamically adjusted to result in sparse measurements of RTTs. The sparse measurements can reduce data traffic, i.e., the overhead produced by the pulse messages and their associated return pulses on the computer network. The intervals between the pulse messages can be aperiodic, which can for example follow a pattern of 1 s, 5 s, 20 s, 1 s, 5 s, 20 s . . . or can be irregular or random values (within a predetermined range such as [1 s, 60 s]). 
Such sparse measurements can properly evaluate and maintain stable links between a node in the peer-to-peer network and its neighbor nodes at a lower cost to the network.

2) Seasonality in pulse messages and RTT measurements. The intervals between, or the frequency of, the pulse messages can be based on the time of the day, the time of the week, the time of the year, and event times such as holidays, sports events, new streaming releases, etc. More frequent pulse messages (i.e., having shorter intervals) can be conducted if critical communications are required: peak work time and peak entertainment time (such as streaming for a sports event). High data traffic often creates congestion in some routes, which creates more opportunities for relaying data at shorter OWLs and lower jitter.

3) Dependence on network location or network performance. Pulse messages can be sent, and return pulses measured, at a higher frequency in network locations where the Internet or the computer network is less stable. In these areas, neighbor nodes may be measured and updated more frequently to remove nodes that no longer perform.

4) Node role dependence. Pulse messages can also be sent to different types of nodes at different frequencies or average pulse intervals. The top relay nodes (those nodes that have often been selected to relay data) can be measured more frequently than the other neighbor nodes in the orbital bins (described in relation to step 460 in FIG. 4) because the relay nodes are the ones most often used for the next data routing tasks and are the most mission critical. The neighbor nodes in orbital bins can also be measured more frequently by pulse messages than candidate nodes. Since it is not frequent for a candidate node to be promoted to a neighbor node, it is not necessary to measure RTTs with candidate nodes too frequently. 
In one implementation, the frequency or pulse intervals of the pulse messages can be based on several tiers of node roles: a) relay nodes (nodes that have been relaying data for other peer nodes); b) neighbor nodes in the orbital bins (pre-selected neighbor nodes); and c) candidate nodes (nodes that may potentially be selected as neighbor nodes). In some embodiments, the pulse message interval PMI can be a function of a few parameters as expressed in the formula below:

PMI=f(time, r, nr),  (1)

wherein time is the time of the day, the time of the week, the time of the year, and event times as described above; r is the geographic or topological location of a node in the peer-to-peer network, characterized by the general network performance (e.g., stability or jitter, bandwidths, the amount of traffic, downtime, density of peer nodes within the peer network, historic relay statistics, etc.) at that locality; and nr, or node role, is the role of the recipient node of the pulse messages, which may include relay nodes, neighbor nodes selected in the orbital bins, and candidate nodes. These nodes can all receive pulse messages from a node in the peer-to-peer network, but as described above, they do not need to be evaluated at the same intensity or frequency. In some embodiments, referring to FIG. 8, a first node in a peer-to-peer computer network stores information about its neighbor nodes in the peer-to-peer computer network. In the example shown in FIG. 1, node A stores information about its neighbor nodes, such as node B, node C, node V1, and node R, to which node A is connected in the peer-to-peer computer network. The information can include node IDs and other properties (such as IP addresses, port numbers, and protocols) of the neighbor nodes, which, as described above, can be stored in a peer-node hash table (e.g., 275 in FIG. 2). 
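One possible shape of PMI = f(time, r, nr) from Eq. (1) is sketched below; the base intervals and the halving multipliers are purely illustrative assumptions, chosen only to show the relay < neighbor < candidate ordering and the effect of time and locality:

```python
def pulse_interval_s(peak_hours: bool, stable_region: bool, node_role: str) -> float:
    """Illustrative PMI = f(time, r, nr): shorter intervals at peak times,
    in unstable network localities, and for mission-critical relay nodes."""
    base = {"relay": 1.0, "neighbor": 5.0, "candidate": 30.0}[node_role]
    if peak_hours:
        base *= 0.5       # measure more often when traffic is congested
    if not stable_region:
        base *= 0.5       # unstable locality: refresh measurements faster
    return base

print(pulse_interval_s(peak_hours=True, stable_region=True, node_role="relay"))      # 0.5
print(pulse_interval_s(peak_hours=False, stable_region=True, node_role="candidate")) # 30.0
```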
Optionally, the first node can also store information about candidate nodes that are currently not neighbor nodes of the first node but may be selected to become neighbor nodes of the first node in the future. The candidate nodes are nodes that the first node is aware of and has incrementally stored previously. In some embodiments, the candidate nodes can be shared by the neighbor nodes of the first node. For example, in FIG. 1, node A's neighbor nodes, i.e., node B, node C, node V1, and node R, are in communication with node A. Under DARP protocols, these neighbor nodes of node A can share with node A the nodes they are respectively connected to and aware of. For instance, the candidate nodes stored at node A can include nodes that are connected to node B, node C, node V1, and node R, such as node P and node V2, which are connected to node R. The candidate nodes allow node A to explore a larger pool of nodes and to expand its network of neighbor nodes in each update. At the same time, some of the nodes that node A has been connected to may become unstable, non-responsive, or non-performing (e.g., increased data latencies or increased data jitter); these nodes may be dropped from node A's connections (i.e., node A's list of neighbor nodes, with more details described below). The balance of expansion and trimming of neighbor nodes (i.e., updated connections with the first node) assures a healthy operational peer-to-peer computer network. In general, nodes are self-managed and self-organized in the peer-to-peer computer network based on the performance of the data connections between the nodes. Thus, the nodes in the peer-to-peer computer network are required by DARP protocols to continually measure performance characteristics (e.g., latency, jitter, etc.) of their connections. 
Based on the most updated performance measurements, the peer-to-peer computer network dynamically refreshes its members: some good-performing nodes are added to the neighbor nodes, and some non-responsive or poorly performing nodes are removed from the neighbor nodes. The updated neighbor nodes for all nodes in the peer-to-peer computer network form the updated nodes for the peer-to-peer computer network. To this end, pulse messages are regularly and automatically sent from the first node to the neighbor nodes and the candidate nodes (step 830). Each of the pulse messages is characterized by a sending time stamp at the first node. In some embodiments, the time intervals between the pulse messages sent out from the first node can be kept at a regular interval such as 0.5 second, 1 second, or 2 seconds, etc. In some embodiments, the time intervals between the pulse messages can be dynamically adjusted (step 820). One motivation to increase the interval between pulse messages (or decrease the average frequency of the pulse messages) is to reduce the number of pulse messages and thereby reduce the burden or overhead of DARP measurements on the computer network. In many situations, sparse pulse-message measurements can be made without sacrificing the ability to evaluate and maintain peer nodes in the peer-to-peer computer network. The intervals between the pulse messages can be determined based on time (e.g., time of a day or a week, or schedules of events such as holidays, sports games, etc.), network location, network performance, or the roles of the nodes receiving the pulse messages (step 830). The intervals between the pulse messages can be determined based on one, or a combination of two or more, of the factors described above. In response to the pulse messages, the first node receives return pulses from at least some of the nodes in the neighbor nodes and the candidate nodes (step 840). Each of the return pulses is characterized by a reception time stamp at the first node.
Similarly, each of the pulse messages sent from the first node to one of the neighbor nodes or the candidate nodes is associated with a sending time stamp. Next, round-trip times (RTTs) between the first node and its neighbor nodes or its candidate nodes are calculated based on the pulse messages and the return pulses (step 850). Each of the return messages is characterized by a reception time stamp. Since both the sending and reception times are measured at the first node, the RTT calculations are independent of the clocks at the neighbor nodes and the candidate nodes. A neighbor node or a candidate node receives a pulse message from the first node at a reception time and sends a return message back to the first node at a transmittance time. The reception time and the transmittance time cancel each other out in the calculation of the RTT at the first node, which uses the transmittance time of the pulse message at the first node and the reception time of the return message at the first node. However, the RTT measurement may be affected by clock rate differences between the first node and the neighbor node or the candidate node. In some embodiments, the RTT calculations between the first node and the neighbor nodes or the candidate nodes in step 850 can compensate for the clock rate differences between different nodes. The first node can send pulse messages to a neighbor node or a candidate node at regular time intervals and receive return messages at regular time intervals. The return messages include transmittance times at the neighbor node or the candidate node. The clock rate of the neighbor node or the candidate node can be calculated using the transmittance times. In RTT calculations, the time gap between the reception time and the transmittance time at the neighbor node or the candidate node can be adjusted according to the difference between the clock rates at the first node and the neighbor or candidate node.
In other words, the RTT measurements and calculations can be independent of the clock skews or clock rate discrepancies at the counterpart testing nodes. In the presently disclosed method, RTTs are used for monitoring connection performance between pairs of neighboring nodes in the peer-to-peer computer network. The neighbor nodes and the candidate nodes are then sorted into a plurality of orbital bins, each comprising nodes characterized by RTTs relative to the first node within a specific interval (step 860). As noted above, each orbital bin is defined by a range of RTTs such as [0 ms, 5 ms], [5 ms, 10 ms], etc. In one respect, nodes in different orbital bins can be considered to be at different distances from the first node in relation to data transport. The spread in "data transport distances" between the orbital bins assures an optimal reach of the first node's connections with its neighbor nodes. The nodes that have not been successfully updated with RTTs are not sorted into the orbital bins. From each of the orbital bins, at least one node is automatically selected based on the RTTs associated with the node. The selected node is added to the updated neighbor nodes for the first node (step 870). The updated neighbor nodes of the first node can include nodes that were previously neighbor nodes or previously candidate nodes relative to the first node. In other words, some neighbor nodes are reaffirmed in their neighbor-node status, and some candidate nodes are upgraded to become neighbor nodes in this update cycle. Those previous neighbor nodes that are not confirmed are removed from the list of neighbor nodes associated with the first node. The updated neighbor nodes are characterized by their orbital bins and RTTs in relation to the first node, which are used in routing path finding as described in relation to FIGS. 5-7.
The sum of the updated neighbor nodes of all the nodes in the peer-to-peer computer network forms the updated nodes in the peer-to-peer computer network (step 870). Within an orbital bin, a node having a shorter RTT can be selected, which gives faster data transport within the RTT range of that orbital bin. Moreover, the node selection within each orbital bin can also take into account jitter, bandwidths, clock rate differences, and other performance parameters measured by the pulse messages and the return pulses at the first node. A node will not be selected if its measured jitter, bandwidth, clock rate difference, or other performance parameters exceed a respective threshold. It should be noted that the neighbor nodes and the candidate nodes that are non-responsive to the pulse messages from the first node do not lead to updated RTT calculations and are not sorted into the orbital bins. These non-responsive nodes are thus discarded even if some of them were members of the peer-to-peer computer network. Furthermore, those nodes whose recently measured jitter exceeds a predetermined threshold can also be removed from the list of updated nodes in the peer-to-peer computer network if they have been on that list. In some embodiments, when two nodes in the same orbital bin have similar performance (in latencies and jitter), the node that has been an updated node in the peer-to-peer computer network for the longer duration is selected. This criterion is based on the observation that nodes that have shown a longer period of reliable performance are more likely to provide reliable performance in the future. The process of automatically routing data from a first node to a second node in the peer-to-peer computer network has been disclosed in FIGS. 5-7 and the related discussions described above. The selected routing path includes a neighbor node stored in one of the orbital bins associated with the first node.
In other words, the data is relayed by the neighbor node from the first node to the second node. For example, as illustrated in FIG. 1, node R in orbital bin 20 can relay data from node A to node Z. In some embodiments, the neighbor nodes connected to a node in a peer-to-peer computer network can be thoroughly established and then lightly updated or confirmed using different types of pulse messages at different intervals. At a high level, RTTs and jitter can be measured between a first node and other associated nodes using bursts of short messages to quickly evaluate the neighbor nodes and candidate nodes of the first node. Once a set of neighbor nodes is established, sparse, longer pulse messages can be used to update the neighbor nodes for the first node and to maintain other peer nodes in the network. One or more of these regularly updated neighbor nodes can be used to relay data from the first node to another node in the peer-to-peer computer network. One advantage of the presently disclosed method is to discover, update, and maintain reliable and well-performing (based on RTTs, jitter, packet loss, etc.) neighbor nodes while minimizing traffic overhead on the computer network. In other words, the disclosed method is aimed at increasing the benefits of faster, more secure, and more reliable data routing while minimizing the burden on the Internet. Referring to FIG. 9A, neighbor nodes associated with a first node in a peer-to-peer computer network are automatically discovered using bursts of short pulse messages (step 900). The bursts of short pulse messages are the first type of pulse messages, which are used to more thoroughly measure the performance of the nodes in the peer-to-peer network. Then the neighbor nodes connected to the first node in the peer-to-peer computer network are automatically updated using sparse long pulse messages (step 905). The long pulse messages are the second type of pulse messages.
Data from the first node to a second node is automatically relayed by one of the neighbor nodes associated with the first node in the peer-to-peer computer network (step 910). The process of automatically routing data from a first node to a second node in the peer-to-peer computer network has been disclosed in FIGS. 5-8 and the related discussions described above. The selected routing path includes a neighbor node stored in one of the orbital bins associated with the first node. In other words, the data is relayed by the neighbor node from the first node to the second node. For example, as illustrated in FIG. 1, node R in orbital bin 20 can relay data from node A to node Z. The short pulse messages used in step 900 only include the basic information necessary for enabling time and jitter measurements. Such short pulse messages can include just the sending time, the sender node identification (of the first node), and the receiving node's identification (an existing neighbor node or a candidate node). Similarly, a return pulse can include the sender node's identification (the existing neighbor node or the candidate node that has received a series of short pulse messages from the first node) and, optionally, the sending time of the associated short pulse message, the reception time of the short pulse message by the receiving node, and the sending time of the return pulse by the receiving node. The reception time of the short pulse message and the sending time of the return pulse at the receiving node are optional because there is normally only a very minute lapse between the reception of a pulse message and the transmission of the associated return pulse. In calculating an RTT, these two times approximately cancel each other out. An RTT can be calculated solely using the sending time of a pulse message from a first node and the reception time of a corresponding return pulse at the first node.
Therefore, the RTT calculation can be independent of the clock and its skew at the receiving node (an existing neighbor node or a candidate node) that receives a pulse message (short or long) from the first node. This advantageous feature has been discussed several times previously in relation to FIGS. 4-8. Moreover, jitter can be calculated using the RTTs of the short pulse messages and associated return pulses within a burst. Additionally, packet loss can be obtained from the number of return pulses received in response to the pulse messages. As described below, the nodes involved can be sorted into orbital bins and certain nodes can be selected as neighbor nodes of the first node. Once the neighbor nodes for a first node are established, these nodes and other candidate nodes can be evaluated at much sparser frequencies using long pulse messages. The long pulse messages include the information contained in the short pulse messages and additional information for maintaining a peer-to-peer network. The long pulse messages and corresponding return pulses can contain additional information such as: 1) estimated RTT, OWL, jitter, packet loss, and clock skew between a pair of nodes that sends and receives a pulse message in the peer-to-peer computer network; and 2) estimated RTTs, OWLs, jitter, packet loss, and clock skews of nodes other than the sending node and the receiving node. Such information about other nodes can be shared with trusted nodes or with a node that has just joined the peer-to-peer computer network, which allows these nodes to establish their own neighbor nodes and candidate nodes. Such neighbor-node information can be shared randomly or periodically, e.g., every 60 s. In general, the short pulse messages and associated return pulses serve the specific purpose of measuring latencies (e.g., RTT) and jitter between a pair of nodes in the peer-to-peer computer network.
The additional information in the long pulse messages and associated return pulses can be shared with and used by a third node to allow the third node to expand its lists of candidate nodes and neighbor nodes. Moreover, information about any peer node can be used to maintain the peer-to-peer computer network, including validating the performance of nodes that have just joined and removing nodes that no longer perform (e.g., based on uptime, jitter, packet loss, latencies, etc.). The step 900 in FIG. 9A can include one or more detailed steps in FIG. 9B. Bursts of short pulse messages are automatically sent from a first node to neighbor nodes and candidate nodes associated with the first node in a peer-to-peer computer network (step 920). Each of the short pulse messages is characterized by a sending time stamp recorded at the first node. Return pulses are received, by the first node, from at least some of the neighbor nodes and the candidate nodes in response to the bursts of short pulse messages (step 925). For example, a burst of 10 short pulse messages can be sent out from the first node to another node within 5 seconds (e.g., every 0.5 second). Round-trip times (RTTs) and jitter between the first node and the neighbor nodes or the candidate nodes are calculated based on the bursts of short pulse messages and the associated return pulses (step 930). Each of the return pulses is characterized by a reception time stamp recorded at the first node. In addition, packet loss can be conveniently obtained by comparing the return pulses to the pulse messages in the burst. If only eight return pulses are received in response to ten pulse messages sent to a neighbor node or a candidate node, there is a 20% packet loss from this node at this time. Since both the sending and reception times are measured at the first node, the RTT calculations are independent of the clocks at the neighbor nodes and the candidate nodes.
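The burst arithmetic described above (one RTT per received return pulse, jitter from their spread, packet loss from missing replies) can be condensed into a small helper. Using the median as the representative RTT and the max-min spread as jitter are illustrative choices among those the disclosure permits:

```python
import statistics

def burst_stats(rtts_ms, probes_sent):
    """Summarize one burst of short pulse messages.

    rtts_ms holds one computed RTT (milliseconds) per received return
    pulse. Returns (representative_rtt, jitter, packet_loss): the median
    RTT, the max-min spread, and the fraction of probes that drew no
    return pulse."""
    loss = 1.0 - len(rtts_ms) / probes_sent
    if not rtts_ms:
        return None, None, loss   # fully non-responsive node
    return statistics.median(rtts_ms), max(rtts_ms) - min(rtts_ms), loss
```

With eight replies to ten probes, the packet-loss term evaluates to 20%, matching the example above; a node with no replies yields no RTT at all, which is why such nodes cannot be sorted into orbital bins.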
A neighbor node or a candidate node receives a pulse message from the first node at a reception time and sends a return message back to the first node at a transmittance time. The reception time and the transmittance time cancel each other out in the calculation of the RTT at the first node, which uses the transmittance time of the short pulse message at the first node and the reception time of the return message at the first node. However, the RTT measurement may be affected by clock rate differences between the first node and the neighbor node or the candidate node. In some embodiments, the RTT calculations between the first node and the neighbor nodes or the candidate nodes can compensate for the clock rate differences between different nodes. The first node can send pulse messages to a neighbor node or a candidate node at regular time intervals and receive return messages at regular time intervals. The return messages include transmittance times at the neighbor node or the candidate node. The clock rate of the neighbor node or the candidate node can be calculated using the transmittance times. In RTT calculations, the time gap between the reception time and the transmittance time at the neighbor node or the candidate node can be adjusted according to the difference between the clock rates at the first node and the neighbor or candidate node. In other words, the RTT measurements and calculations can be independent of the clock skews or clock rate discrepancies at the counterpart testing nodes. In the presently disclosed method, RTTs are used for monitoring connection performance between pairs of neighboring nodes in the peer-to-peer computer network. Jitter in connection with another node (a neighbor node or a candidate node) can be calculated using the RTTs computed from the pairs of short pulse messages and associated return pulses in a burst communicated between the first node and the other node.
For example, suppose a burst of 10 short pulses is sent from the first node to a receiving node, and the receiving node sends 10 return pulses back to the first node. The 10 pairs of short pulse messages and return pulses will result in 10 computed RTTs that characterize the data communication between the first node and the receiving node. An average or a median value of the 10 RTT values can be used as the representative RTT value between the two nodes. Jitter can be obtained from the variations in the ten RTT values, which can be the standard deviation, the difference between the maximum value and the minimum value, etc. The neighbor nodes and the candidate nodes are sorted into a plurality of orbital bins, each comprising nodes characterized by RTTs relative to the first node within a specific interval (step 935). As noted above, each orbital bin is defined by a range of RTTs such as [0 ms, 5 ms], [5 ms, 10 ms], etc. In one respect, nodes in different orbital bins can be considered to be at different distances from the first node in relation to data transport. The spread in "data transport distances" between the orbital bins assures an optimal reach of the first node's connections with its neighbor nodes. The nodes that have not been successfully re-affirmed with acceptable RTT values are not sorted into the orbital bins. A node from one of the orbital bins is automatically selected and assigned, based on RTTs, jitter, and packet loss, to be a neighbor node associated with the first node (step 940). Within an orbital bin, a node having a shorter RTT can be selected, which gives faster data transport within the RTT range of that orbital bin. Moreover, the node selection within each orbital bin can also take into account jitter, bandwidths, clock rate differences, and other performance parameters measured by the pulse messages and the return pulses at the first node.
A node will not be selected if its measured jitter, bandwidth, clock rate difference, or other performance parameters exceed a respective threshold. It should be noted that the neighbor nodes and the candidate nodes that are non-responsive to the pulse messages from the first node do not lead to updated RTT calculations and are not sorted into the orbital bins. These non-responsive nodes are thus discarded even if some of them were members of the peer-to-peer computer network. Furthermore, those nodes whose recently measured jitter exceeds a predetermined threshold can also be removed from the list of updated nodes in the peer-to-peer computer network if they have been on that list. In some embodiments, when two nodes in the same orbital bin have similar performance (in RTTs and jitter), the node that has been an updated node in the peer-to-peer computer network for the longer duration is selected. This criterion is based on the observation that nodes that have shown a longer period of reliable performance are more likely to provide reliable performance in the future. In general, nodes are self-managed and self-organized in the peer-to-peer computer network based on the performance of the data connections between the nodes. Thus, the nodes in the peer-to-peer computer network are required by DARP protocols to continually measure performance characteristics (e.g., latency, jitter, etc.) of their connections. Based on the most updated performance measurements, the peer-to-peer computer network dynamically refreshes its members: some good-performing nodes are added to the neighbor nodes, and some non-responsive or poorly performing nodes are removed from the neighbor nodes. The updated neighbor nodes for all nodes in the peer-to-peer computer network form the updated nodes for the peer-to-peer computer network. To this end, long pulse messages are regularly and automatically sent from the first node to the neighbor nodes and the candidate nodes.
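The sorting and selection rules of steps 935 and 940, together with the threshold exclusion and longer-membership tie-break described above, can be sketched as follows. The 5 ms bin width matches the example ranges, while the data layout and the threshold value are assumptions:

```python
from collections import defaultdict

BIN_WIDTH_MS = 5.0  # each orbital bin spans an RTT range such as [0, 5) ms

def sort_into_orbital_bins(measurements):
    """measurements: dict node_id -> (rtt_ms, jitter_ms, uptime_s).
    Non-responsive nodes (rtt_ms is None) are left out of the bins."""
    bins = defaultdict(list)
    for node_id, (rtt, jitter, uptime) in measurements.items():
        if rtt is None:
            continue  # no updated RTT: not sorted into any orbital bin
        bins[int(rtt // BIN_WIDTH_MS)].append((node_id, rtt, jitter, uptime))
    return bins

def select_neighbors(bins, jitter_threshold_ms=10.0):
    """Pick one node per bin: lowest RTT first, longest membership as
    tie-break; skip nodes whose jitter exceeds the threshold."""
    neighbors = []
    for bin_index in sorted(bins):
        ok = [n for n in bins[bin_index] if n[2] <= jitter_threshold_ms]
        if ok:
            ok.sort(key=lambda n: (n[1], -n[3]))  # shorter RTT, then longer uptime
            neighbors.append(ok[0][0])
    return neighbors
```

Selecting one node from each populated bin preserves the spread of "data transport distances" that the orbital-bin construction is designed to provide.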
The step 905 of automatically updating neighbor nodes connected to the first node in the peer-to-peer computer network using sparse long pulse messages in FIG. 9A can include one or more detailed steps in FIG. 9C. Long pulse messages are sent from the first node to neighbor nodes and candidate nodes associated with the first node at time intervals longer than those between the short pulse messages in the bursts (step 960). Round-trip times (RTTs) and jitter between the first node and the neighbor nodes or the candidate nodes are calculated based on each pair of one of the long pulse messages and its associated return pulse (step 965). As described above, the long pulse messages contain more information and are longer than the short pulse messages. Moreover, the long pulse messages are sparse, having longer intervals between them than the intervals between the first type of pulse messages in the bursts. For example, the long pulse messages can be sent every 60 seconds or 180 seconds. In one implementation, the intervals between the long pulse messages are at least 10 times, or at least 20 times, or at least 50 times the intervals between the short pulse messages. For example, a burst of 10 short pulse messages can be packed in a 5-second span with approximately 0.5-second intervals between successive short pulse messages. The time intervals between long pulse messages can be 60, 120, 180, or 300 seconds, which are longer than 5 seconds, 10 seconds, or 25 seconds, respectively corresponding to 10×, 20×, or 50× the intervals between the short pulse messages. In other words, the long pulse messages that are used for regular updates of the peer nodes in the peer-to-peer computer network carry a much smaller burden on network traffic. Moreover, bursts of short pulses can be used to more thoroughly measure node performance on an infrequent basis, such as daily, once every few days, or once a week. The frequency of using bursts of short pulse messages can be determined by the local network performance.
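The interval relationship above (long-pulse intervals at least 10x the intra-burst spacing) can be made concrete with a small scheduling helper whose defaults are the example values from the text (a burst of 10 short pulses at 0.5 s spacing, long pulses every 60 s); the function itself is an illustrative sketch:

```python
def probe_schedule(burst_size=10, short_interval_s=0.5, long_interval_s=60.0):
    """Return send offsets (seconds) for one burst of short pulses
    followed by the next few sparse long pulses, enforcing the rule that
    long-pulse intervals are at least 10x the short-pulse intervals."""
    if long_interval_s < 10 * short_interval_s:
        raise ValueError("long-pulse interval should be at least 10x the short interval")
    shorts = [i * short_interval_s for i in range(burst_size)]  # 0.0 .. 4.5 s
    burst_end = shorts[-1]
    longs = [burst_end + (k + 1) * long_interval_s for k in range(3)]
    return shorts, longs
```

With the defaults, ten probes fit in a 5-second window while the maintenance pulses that follow add only one message per minute, which is the traffic-saving trade-off the passage describes.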
In general, a more stable network locality requires less frequent measurements by both bursts of short pulse messages and long pulse messages. Furthermore, the long pulse messages can be sent with dynamically adjusted intervals as previously described in relation to FIG. 8. The intervals between the long pulse messages can be sparse, "seasonality" adjusted, and dependent on network locality and performance and on the roles of the nodes. Orbital bins associated with the first node have been established in steps 905 and 935. Based on the RTTs and jitter, a candidate node can be automatically added to the plurality of orbital bins associated with the first node if the candidate node has a short RTT and low jitter in connection with the first node (step 970). An existing neighbor node associated with the first node can be retired if the existing neighbor node has a long RTT and/or high jitter in connection with the first node in the recent measurements (step 970). A node from one of the orbital bins can be automatically selected and assigned, based on RTTs and jitter, to be a neighbor node associated with the first node (step 975). One notable advantage of the disclosed method is the vast scalability of the data routing method. Each node in the peer-to-peer network only needs to maintain a small number of neighbor nodes, which drastically reduces the burden of maintaining the peer network. Since all peer nodes in the network are connected in a cascading fashion, a node in the peer network can reach any other node in the same network. Thus, the decentralized data routing approach can perform data routing in a peer-to-peer network of hundreds of nodes as well as a billion nodes. Another important feature of the above disclosed system and method is its network security. The data messages and data packages sent between peer nodes can be signed cryptographically by the relay nodes using their private keys, similar to blockchain technologies.
The signatures can be verified using node identifications related to public keys. The above embodiments are only used to illustrate the technical solution of the present invention but not to limit it. Those skilled in the art can modify or equivalently replace the technical solution of the present invention without departing from the spirit and scope of the present invention. The scope of protection shall be subject to the claims.
DETAILED DISCLOSURE OF EMBODIMENTS FIG. 1 illustrates schematically a mesh communication network 120. The mesh communication network 120 is for example an electrical supply network of the AMM type. The mesh communication network 120 relies on powerline communications (PLC) or radio-frequency communications to enable a base node device (also called a "data concentrator") to collect, from smart electricity meters, energy consumption reading data from the electrical installations that said smart electricity meters are respectively responsible for monitoring. The data concentrator and the smart electricity meters are thus node devices of the mesh communication network 120. The mesh communication network 120 may comprise other node devices, for example installed at electrical transformers. The communication network 120 therefore has a mesh structure, as shown schematically on FIG. 1 by means of arrows, where node devices fulfil the role of relays for increasing the range of the communications in the mesh communication network 120, as detailed hereinafter. Thus any one smart electricity meter potentially has available a plurality of paths for reaching the data concentrator, and vice versa. The present invention is thus particularly adapted to the context of the G3-PLC (registered trade mark) technology and its extension G3-PLC Hybrid RF. The mesh communication network 120 thus comprises a plurality of node devices 130, 131, 132, 133, 134, 135, 136, 137, 138, 139. A network neighbourhood is associated with each node device of the mesh communication network 120. On FIG. 1, the node device 133 is associated with a network neighbourhood 110 encompassing the node devices 130, 134 and 137. In the mesh communication network 120, a signal or a message broadcast by a node device (such as the node device 133) is in general not visible at every point in the communication network.
Each node device sending signals or messages then has a network neighbourhood, that is to say a subset of said mesh communication network 120 wherein every node device can intelligibly receive said signals or messages coming directly from the node device that broadcast said signals or messages. The network neighbourhood corresponds to the range of the signals sent, according to predetermined transmission parameters (e.g. power, modulation and coding scheme, network topology, etc.) of the node device at the source of said signals and also potentially according to characteristics of the communication channel (attenuation, noise, impedance, etc.). The mesh communication network 120 relies on a routing protocol of the reactive type, such as the LOADng protocol ("Lightweight On-demand Ad hoc Distance-vector Routing Protocol-Next Generation"). Unlike the routing protocols of the proactive type, which rely on a global knowledge of the network topology, the routing protocols of the reactive type rely on on-demand route discoveries, each node device of the network then needing solely to have knowledge of its own network neighbourhood for routing data in the mesh communication network 120. To discover a suitable route in the mesh communication network 120 from a source node device (for example the node device 133) as far as a destination node device (for example the node device 132), it is known that the source node device broadcasts a route discovery request, called RREQ ("Route REQuest"). This route discovery request is received by each node device in the network neighbourhood of said source node device. Each node device in the network neighbourhood of said source node device relays said request by broadcast if said node device in question is not the destination node device.
By gradual broadcasting, a plurality of route discovery requests are typically received by the destination node device, each of these requests having followed a different path in the mesh communication network 120. Each node device that originates a message, such as for example a route discovery request, includes therein an identifier that is particular to it, as well as a sequence number, as defined in the LOADng protocol. This sequence number is a counter value particular to each node device of the mesh communication network 120. Each time a node device generates a new message, said node device increments its counter and includes the value of said counter in the message in question. Thus, when a node device receives a message, said node device analyses the identifier of the node device originating the message and the sequence number that are included in the message, and can determine whether the message received is actually a new message or a new copy of a message already received. Each node device can however decide not to relay a route discovery request, when one or more criteria are not met. In particular, before deciding to relay said request, the node device in question typically checks whether said request comprises information representing a route cost, from the source node device as far as the node device in question, that is better than the route cost represented by information contained in another route discovery request previously received by the node device in question. In other words, the node device in question relays said request by broadcasting it if said request relates to a path that has followed, from the source node device as far as the node device in question, a pathway of lower cost than any other request previously received by the node device in question (and therefore for the same route discovery). The cost of a route may be based on one or more metrics.
For example, the route cost is a number of hops experienced by the copy in question from the source node device. According to another example, the route cost is the result of a calculation that depends on the bandwidth of the links passed over, by the copy in question, from the source node device. According to yet another example, the route cost is proportional to the latency experienced by the copy in question from the source node device. Other metrics may be used to establish a route cost, i.e. a transit cost, from the source node device as far as the destination node device. When a node device decides to relay, by broadcast, a route discovery request, the node device in question updates the route cost information contained in said request, so as to take into account the fact that said request has passed through the node device in question. Thus, according to such a principle, a plurality of copies of the route discovery request typically arrive at the destination node device, each comprising information on the cost of the route that said copy followed to be propagated from the source node device as far as the destination node device. The pathway followed by the route discovery request associated with the best route cost is then selected to enable the source node device to transmit data to the destination node device. To activate the route in question, the destination node device transmits a route discovery reply called RREP ("Route REPly"). This route discovery reply is transmitted gradually following the reverse path of the route discovery request that was associated with the best route cost.
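The RREQ relaying rule described above — identify copies by (originator, sequence number) and rebroadcast only those whose accumulated route cost beats every copy seen so far — can be sketched as follows, using the hop-count metric; the message layout is an assumption, not the LOADng wire format:

```python
def should_relay(rreq, best_cost_seen, my_id):
    """rreq: dict with 'originator', 'seq', 'cost', 'destination' keys.
    best_cost_seen: dict mapping (originator, seq) -> best route cost
    received so far at this node device.
    Returns (relay?, updated rreq) per the LOADng-style relaying rule."""
    if rreq["destination"] == my_id:
        return False, rreq          # the destination answers with an RREP instead
    key = (rreq["originator"], rreq["seq"])
    if key in best_cost_seen and rreq["cost"] >= best_cost_seen[key]:
        return False, rreq          # a cheaper copy of this discovery was already relayed
    best_cost_seen[key] = rreq["cost"]
    relayed = dict(rreq)
    relayed["cost"] = rreq["cost"] + 1   # hop-count metric: one more hop through us
    return True, relayed
```

The (originator, sequence number) key plays the dual role the text assigns it: it distinguishes a genuinely new request from another copy of a request already seen, and it scopes the cost comparison to a single route discovery.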
Each node device receiving the route discovery reply updates an internal routing table, at the data link layer DLL of the OSI (standing for "Open Systems Interconnection") model, in order to indicate therein that any subsequent message transmitted in unicast mode from the source node device in question to the destination node device in question must be transmitted or relayed to such and such a node device of its network neighbourhood. In the link layer, also called the MAC ("Medium Access Control") layer, the routing tables are preferentially implemented in an adaptation sublayer responsible for implementing the routing protocol in the mesh communication network. For example, this adaptation sublayer is in accordance with the 6LoWPAN protocol (standing for "IPv6 over Low power Wireless Personal Area Networks"), which was initially developed to support IPv6 in the context of the IEEE 802.15.4 standard and which was extended to the G3-PLC and G3-PLC Hybrid RF (registered trade mark) technologies. It should be noted that the 6LoWPAN protocol is itself based on the routing protocol of the aforementioned LOADng reactive type. By means of the routing tables thus configured, unicast communications can be made by any pair of node devices of the mesh communication network 120. Intermediate node devices therefore serve as relays when the node devices of said pair are not in the network neighbourhood of each other, and the communications thus take place gradually, each node device relying on one of its own neighbours to relay messages as far as their respective destinations. To communicate between adjacent node devices (i.e. node devices that are in the network neighbourhood of each other), the messages are transmitted in the form of modulated frames. When a modulated frame is specifically addressed to an adjacent node device and is correctly demodulated by it, said adjacent node device retransmits an acknowledgement ACK to the node device that sent it said modulated frame.
The acknowledgement ACK is transmitted on the same frequency band as the modulated frame with which said acknowledgement ACK is associated. A plurality of frequency bands are defined for supporting the transmission of these modulated frames, an adapted modulation scheme being associated with each of these frequency bands. Each frame transmitted in the form of modulated signals begins with a preamble defined according to the modulation scheme according to which said signals were modulated. The preamble is adapted to make it possible to synchronise in reception on said frame, that is to say to be able to determine an actual instant of start of frame. To do this, the preamble typically comprises a plurality of successive copies of the same symbol. The actual content and the duration of the preamble are thus predefined and depend on the modulation scheme used. The preambles of a plurality of frames are identical when the same modulation scheme is applied, and different otherwise. The applicable modulation schemes (and corresponding demodulation schemes) are preferentially multi-carrier modulation (and respectively demodulation) schemes of the OFDM (Orthogonal Frequency Division Multiplex) type. In terms of frequency bands that can be used in the context of the use of the mesh communication network 120, mention can be made of: the CENELEC A frequency band, which goes approximately from 35 kHz to 91 kHz; the FCC frequency band, which goes approximately from 150 kHz to 480 kHz; the ARIB frequency band, which goes approximately from 150 kHz to 400 kHz; the CENELEC B frequency band, which goes approximately from 98 kHz to 122 kHz; and the RF channel of G3-PLC Hybrid RF.
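The powerline bands just listed can be captured as frequency ranges; a simple interval check then shows which bands overlap (an illustrative sketch; the names are ours, and the RF channel is omitted since it is not a powerline band):

```python
# Powerline frequency bands listed above, as (low, high) ranges in kHz.
BANDS_KHZ = {
    "CENELEC A": (35, 91),
    "FCC": (150, 480),
    "ARIB": (150, 400),
    "CENELEC B": (98, 122),
}

def overlap(a, b):
    # Two bands overlap when the higher of the low edges lies below the
    # lower of the high edges.
    (a_lo, a_hi), (b_lo, b_hi) = BANDS_KHZ[a], BANDS_KHZ[b]
    return max(a_lo, b_lo) < min(a_hi, b_hi)

assert overlap("FCC", "ARIB")            # hence not usable simultaneously
assert not overlap("CENELEC A", "FCC")   # disjoint bands can be combined
```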
It is then possible to use: a first thirty-six-carrier modulation scheme in the CENELEC A frequency band; a second seventy-two-carrier modulation scheme in the FCC frequency band; a third fifty-four-carrier modulation scheme in the ARIB frequency band; a fourth sixteen-carrier modulation scheme in the CENELEC B frequency band; and a fifth modulation scheme for the RF channel of G3-PLC Hybrid RF. It is clear from the above that a node device can simultaneously use a plurality of frequency bands for communicating with one or more of its neighbours, by applying an adapted transmission mechanism. However, the ARIB and FCC frequency bands cannot be used simultaneously by the same node device since they overlap each other. The powerline channels and the RF channel are very hostile transmission media. The characteristics and parameters of the channel vary according to the frequency, the location, the time and the type of equipment connected thereto. The low-frequency regions (from 10 kHz to 200 kHz) are particularly sensitive to interference. Apart from the background noise, the channel is subject to pulsed noises and to narrow-band interference. The OFDM technology used by G3-PLC uses advanced channel-coding techniques. This combination allows very robust communication when narrow-band interference, pulsed noises and frequency-selective attenuations are present. For this purpose, an FEC (the acronym for "forward error correction") encoder is in particular used. This FEC encoder is for example composed of a Reed-Solomon encoder and a convolutional encoder that make it possible to introduce redundancy at the bit level. They thus enable a destination node to recover bits lost because of background noise and pulsed noises. In the preferential context of the G3-PLC standard, several types of modulation are defined, including: BPSK, DBPSK, QPSK, DQPSK, 8-PSK, D8PSK, 16-QAM and a so-called robust (or ROBO) mode.
In normal mode, the FEC (forward error correction) encoder is composed of a Reed-Solomon encoder and a convolutional encoder. In robust mode, the FEC encoder is composed of a Reed-Solomon encoder and a convolutional encoder as in the normal mode. However, in the case of the robust mode, the convolutional encoder is followed by a repetition code. The repetition code generates 4 copies of each bit output from the convolutional encoder. Thus the system is more robust to degradations of the channel, at the cost of a reduction in bit rate by a factor of 4. The data thus obtained are next passed to the input of an interleaver. G3-PLC also defines a super-robust mode wherein the repetition code generates 6 copies of each bit output from the convolutional encoder, which further increases the robustness. This mode is used only for a part of the FCH (the acronym for "frame control header") of a data frame. The modulation in the robust and super-robust modes is a BPSK modulation. FIG. 2 illustrates schematically a method for selecting a communication route between a first node device and a second node device of a mesh electrical supply network using powerline and/or radio-frequency communications. The second node device, e.g. the node 132, can be reached from the first node device, e.g. the node 133, by at least a first communication route, e.g. the route passing through the nodes 130 and 131, and a second communication route, e.g. the route passing through the nodes 134, 135 and 136, different from said first communication route. In general terms, the second node device can be reached from the first node device by a plurality of N communication routes, N being a positive integer. Hereinafter, to simplify the notation, each possible route is identified by an index k, with k an integer varying from 0 to N - 1. The method starts at a step S200 where k is equal to zero.
In a step S210, the second node device obtains a route cost RCk for a communication route of index k from the plurality of N communication routes. The route cost RCk is equal to the sum of the link costs LCi,j between two successive node devices i and j, i.e. situated in the same network neighbourhood. For example, in the case of the first communication route, the route cost RC1 = LC133,130 + LC130,131 + LC131,132, and the cost RC2 of the second communication route is equal to LC133,134 + LC134,135 + LC135,136 + LC136,132. The cost of a link LCi,j between two successive node devices depends on the maximum value from a cost of the link LCi→j in the forward direction, i.e. from the sending node device to the receiving node device, and a link cost LCj→i in the backward direction, i.e. from the receiving node device to the sending node device. Thus, in a particular embodiment, the link cost LCi,j between a node device i and a node device j belonging to the network neighbourhood thereof is equal to a weighted sum between a maximum value from a link cost in a forward direction LCi→j and a link cost in a backward direction LCj→i, and a ratio between a number of active routes and a maximum number of active routes. For example, the PLC link cost LCi,j is calculated as follows:

LCi,j = max(LCi→j, LCj→i) + adpKrt * (NumberOfActiveRoutesPLC / MaximumNumberOfActiveRoutes) + adpKh

where LCi→j and LCj→i are the costs of the directional links (forward and backward directions, respectively) between the node device i and the node device j; max(a, b) is a function that returns the value a if a > b, and b otherwise; NumberOfActiveRoutesPLC is the number of active routes in the internal routing table of the node device j that use a PLC communication, e.g.
it is a case of the number of active routes for which the field MediaType defined in G3-PLC Hybrid is positioned at 0 in the respective entries of the routing table; MaximumNumberOfActiveRoutes is the maximum number of active routes in the internal routing table of the node device j; adpKh is a weighting factor representing the cost of a hop; and adpKrt is a weighting factor associated with a number of active routes in the routing table of the node device j. By way of example, adpKrt has the value 0 and adpKh has the value 4. The RF link cost for its part is calculated from the formulae of the RF extension of the G3 standard:

LCi,j = max(LCi→j, LCj→i) + adpKrtRF * (NumberOfActiveRoutesRF / MaximumNumberOfActiveRoutes) + adpKhRF

where adpKrtRF and adpKhRF are weighting factors; NumberOfActiveRoutesRF is the number of active routes in the internal routing table of the node device that use a radio-frequency communication, e.g. it is the number of active routes for which the MediaType field defined in G3-PLC Hybrid is positioned at 1 (RF) in the respective entries of the routing table. It should be noted that the value adpKh is added at each step S210. At the end, the second node device will be able to compare the potential routes, and to prefer one with fewer hops. According to a particular embodiment, a particular metric is defined for determining the costs of the directional links in order to adapt to node devices that have transmission capacities of the multi-band type. A node device has multi-band capacities in the case where it is configured to be able to use a plurality of different frequency bands simultaneously, e.g. CENELEC A and FCC, or FCC and the RF channel, instead of selecting a single band. In a particular embodiment, the frequency bands are more particularly separate. For example, the node device in question may fragment the message into various fragments according to the 6LoWPAN protocol.
The fragmentation method is more particularly described in section 5.3 of the RFC recommendation 4944 (published in September 2007). Each fragment is then sent independently of the other fragments on frequency bands that may be different. The associated frequency bands are for example selected from all the frequency bands authorised by G3-PLC and its extension G3-PLC Hybrid RF, i.e. CENELEC A, CENELEC B, ARIB, FCC and the RF channel. In a variant, the first and second associated frequency bands are selected from a subset of the frequency bands authorised by G3-PLC and G3-PLC Hybrid RF, the subset comprising at least two bands from all the bands authorised by G3-PLC and G3-PLC Hybrid RF. In another embodiment, a node device having multi-band capacities may transmit the same message simultaneously in all the frequency bands of the set of frequency bands adopted (by the sender and the receiver). This transmission mode is hereinafter referred to as the hyper-robust mode. In each frequency band, the robust mode of G3-PLC is then used. In this way, in the case of great frequency interference on one frequency band, the message may nonetheless get through on another frequency band. This is because the receiver needs to succeed in capturing the message on only one of the frequency bands on which it was sent. The hyper-robust mode is a particular mode newly defined for the case of a node device having multi-band capacities. In another embodiment, a node device having multi-band capacities can transmit a message on all the frequency bands, which then constitute a so-called extended frequency band. In all these embodiments, the multi-band capacity of a node device is characterised by the fact that the node is capable of using a plurality of frequency bands simultaneously instead of a single band, as is conventionally the case in the G3-PLC standard.
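The bidirectional PLC link cost formula given earlier (step S210) can be sketched as follows, assuming the example weights adpKrt = 0 and adpKh = 4 quoted in the text; the function and parameter names are ours:

```python
# Sketch of the bidirectional PLC link cost described above:
# LCi,j = max(LCi->j, LCj->i)
#         + adpKrt * NumberOfActiveRoutesPLC / MaximumNumberOfActiveRoutes
#         + adpKh

def link_cost(lc_fwd, lc_bwd, n_active_routes, max_active_routes,
              adp_krt=0, adp_kh=4):
    return (max(lc_fwd, lc_bwd)
            + adp_krt * n_active_routes / max_active_routes
            + adp_kh)

# With adpKrt = 0 the routing-table load term vanishes and each hop adds adpKh:
assert link_cost(lc_fwd=10, lc_bwd=14, n_active_routes=3,
                 max_active_routes=10) == 18
```

The same shape applies to the RF link cost, with the RF-specific weights adpKrtRF and adpKhRF substituted.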
A device having multi-band capacities can benefit from the characteristics of the various frequency bands in terms of bit rate, range and resistance to interference. The route cost as defined by G3-PLC in Appendix D thereof does not make it possible to take account of these multi-band capacities of a node device. This is because the G3-PLC communication standard allows the use of only one frequency band for a given network. According to a particular embodiment, the cost of the link LCi→j in a given direction, i.e. in the forward or backward direction, depends on the cost of the link, in said given direction, calculated for each frequency band LCi→j[m] of the set of frequency bands used by said two successive node devices i and j for communicating, said set comprising at least two different frequency bands. In a particular embodiment, the frequency bands of said set of frequency bands are more particularly separate. FIG. 3 illustrates schematically a method for calculating a cost LCi→j of a directional link between a first node device and a second node device of a mesh electrical supply network using powerline and/or radio-frequency communications. In this embodiment, the cost LCi→j of a directional link is calculated from a directional-link cost LCi→j[m] per frequency band ("bandplan" for the PLC, or RF channel), m being an index identifying the frequency band; m is an integer varying from 0 to NBP - 1, where NBP is an integer equal to the number of frequency bands, e.g. NBP = 5. For example, m = 0 corresponds to the CENELEC-A band, m = 1 corresponds to the FCC band, m = 2 corresponds to the CENELEC-B band, m = 3 corresponds to the ARIB band and m = 4 corresponds to the RF channel. The method begins at a step S300 with m = 0.
In a step S310, LCi→j[m] in the PLC case is calculated as follows:

LCi→j[m] = adpKr * MODKr + adpKm * MODKm + adpKc[m] * ((MaximumNumberOfTones - NumberOfActiveTones) / MaximumNumberOfTones) + adpKq * MAX(0, MIN(1, (adpHighLQIValue - LQI) / (adpHighLQIValue - adpLowLQIValue)))

where:
MODKr = 1 for the robust mode, 0 for the other modulations;
MODKm = 3 for the DBPSK or BPSK modulations (including the robust mode), 2 for the DQPSK or QPSK modulations, 1 for the D8PSK or 8-PSK modulations and 0 for the 16-QAM modulations;
adpKr, adpKm and adpKq are weighting factors the values of which are predefined; adpKr is a weighting factor associated with the robust mode; adpKm is a weighting factor associated with the modulation;
adpKc[m] is a weighting factor defined for each frequency band and is associated with the number of active subcarriers compared with the total number of subcarriers available. By way of illustrative example, adpKc[0] = 2 and adpKc[1] = 1, adpKc[0] being associated with the CENELEC-A band and adpKc[1] being associated with the FCC band. This is because the FCC band offers more subcarriers than CENELEC-A, and it is therefore logical to have an adpKc[1] lower than the adpKc[0] in order to take this into account and thus obtain a comparable result between the various bands;
LQI (the acronym for "Link Quality Indicator") is a value representing the quality of the link between the node devices i and j, the node j being the current node;
adpHighLQIValue is a value representing a threshold above which an LQI value is considered to represent a "reliable" link;
adpLowLQIValue is a value representing a threshold below which an LQI value is considered to represent an "unreliable" link;
adpKq is a weighting factor associated with the LQI;
MaximumNumberOfTones is the number of available tones/subcarriers, e.g. MaximumNumberOfTones is equal to 36 for CENELEC-A and 72 for FCC. A tone map is a list of subcarriers used for communicating in a given frequency band.
These subcarriers are chosen to be subject to the least interference possible in the light of the environment; and NumberOfActiveTones is the number of active tones/subcarriers. It should be noted that the tone map indicates a number of "groups of subcarriers" that are active (by corresponding bits at 1). The number of active subcarriers is obtained by multiplying this number of "groups of subcarriers" by the number of subcarriers per group, e.g. 3 in FCC and 6 in CENELEC-A. LCi→j[m] in the case of the G3-PLC Hybrid RF extension of the G3 standard is calculated as follows:

LCi→j[m] = adpKqRF * MAX(0, MIN(1, (adpHighLQIValueRF - LQIRF) / (adpHighLQIValueRF - adpLowLQIValueRF))) + adpKdcRF * (DutyCyclePenalty / 100)

where:
adpKqRF and adpKdcRF are weighting factors the values of which are predefined;
adpHighLQIValueRF is a value representing a threshold above which an LQI value is considered to represent a "reliable" radio-frequency link;
adpLowLQIValueRF is a value representing a threshold below which an LQI value is considered to represent an "unreliable" radio-frequency link;
LQIRF is a value representing a quality of the radio-frequency link in the forward direction between the node devices i and j;
DutyCyclePenalty is a value representing a degree of use of the radio frequency already reached with respect to an authorised maximum. It must be set to a configuration value denoted macDutyCycleUsage_RF for calculating a link cost in the backward direction, and to the duty cycle value of the neighbour for the forward direction. If the duty cycle information on the neighbour is not available, the value is set to 0. The duty cycle indicates the fraction of a period during which the neighbour is active.
It should be noted that the values of the various parameters adpX, with X = Kq, HighLQIValue, LowLQIValue, Kc[m], etc., can be adjusted according to experience in the field and transmitted through the application layer of the equipment. Some of these values may be equal to 0.
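The per-band PLC directional cost just given can be sketched as follows; all the numeric weight and threshold values used in the example call are illustrative assumptions, not values fixed by the text:

```python
# Sketch of the per-band PLC directional link cost LCi->j[m] described above.
# Parameter names mirror the text; the example weights and LQI thresholds
# below are illustrative only.

def lc_plc_band(mod_kr, mod_km, lqi, *, adp_kr, adp_km, adp_kc,
                adp_kq, high_lqi, low_lqi, max_tones, active_tones):
    # LQI term is clamped to [0, 1]: 0 above the "reliable" threshold,
    # 1 below the "unreliable" threshold, linear in between.
    lqi_term = max(0.0, min(1.0, (high_lqi - lqi) / (high_lqi - low_lqi)))
    return (adp_kr * mod_kr
            + adp_km * mod_km
            + adp_kc * (max_tones - active_tones) / max_tones
            + adp_kq * lqi_term)

# Robust-mode BPSK (MODKr = 1, MODKm = 3) on CENELEC-A with all 36 tones
# active and an LQI above the "reliable" threshold (so the LQI term is 0):
cost = lc_plc_band(mod_kr=1, mod_km=3, lqi=60, adp_kr=1, adp_km=1,
                   adp_kc=2, adp_kq=1, high_lqi=53, low_lqi=38,
                   max_tones=36, active_tones=36)
assert cost == 4.0  # 1*1 + 1*3 + 0 + 0
```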
In a step S320, m is incremented by 1 and compared with NBP - 1. If m is less than or equal to NBP - 1, then the method continues at the step S310, otherwise it continues at the step S330. Once LCi→j[m] is calculated for all the frequency bands, the smallest value Min_LCi→j is determined during a step S330. Min_LCi→j corresponds to m = m0, i.e. Min_LCi→j = LCi→j[m0]. In a step S340, the global directional cost LCi→j is calculated. According to a first embodiment, for each LCi→j[m] different from Min_LCi→j, its contribution to improving the global directional link cost (i.e. LCi→j) is calculated using the following formula: (Min_LCi→j * adpKmb) / (LCi→j[m] * 255). The global directional cost LCi→j is therefore calculated as follows:

LCi→j = adpKhr * MODKhr + Min_LCi→j / (1 + Σ (Min_LCi→j * adpKmb) / (LCi→j[m] * 255))   (Eq. 1)

where the sum Σ runs over m = 0 to NBP - 1 with m ≠ m0, and where:
MODKhr = 1 for the hyper-robust mode, 0 otherwise;
adpKhr is a weighting factor associated with the hyper-robust mode in calculating the link cost, e.g. adpKhr = 4;
adpKmb is a weighting factor for the route calculation in the multi-band case, e.g. adpKmb = 130.
To illustrate numerically the result of this first embodiment, let us take for example the case of an LCi→j[0] = 50 for the CENELEC-A band and LCi→j[1] = 100 for the FCC band (without hyper-robust mode). Taking a low weighting factor adpKmb = 55, the global LCi→j is then equal to 50/(1 + 50/100 * 55/255) = 45 (by rounding). Adding the FCC band, even with a high route cost, therefore makes it possible to improve the route cost compared with the CENELEC-A case alone, but moderately (taking account of the low bonus factor). According to a second embodiment, the following formula is used instead of the formula given by (Eq. 1) for calculating the global directional cost LCi→j:

LCi→j = adpKhr * MODKhr + 1 / (Σ 1 / LCi→j[m])   (Eq. 2)

where the sum Σ runs over m = 0 to NBP - 1. This formula is simpler than that of (Eq. 1) but does not allow weighting by the weighting factor adpKmb.
To illustrate the result of this second embodiment numerically, let us take for example the case of an LCi→j[0] = 50 for the CENELEC-A band and LCi→j[1] = 100 for the FCC band (without hyper-robust mode). The global LCi→j is then equal to 1/(1/50 + 1/100) = 33 (by rounding). Adding the FCC band, even with a high route cost, therefore improves the global route cost with respect to the CENELEC-A case alone. In a particular embodiment, the hyper-robust mode is not used in the equations (Eq. 1) and (Eq. 2) and adpKhr has a zero value. The method of FIG. 3 terminates at a step S350. The steps S300 to S350 are repeated in order to calculate the global directional cost LCi→j and thus to deduce therefrom the cost LCi,j. With reference once again to FIG. 2, in a step S220, k is incremented by 1 and compared with N - 1. If k is less than or equal to N - 1, then the method continues at the step S210, otherwise it continues to the step S230. In a step S230, the second node device selects, from said N communication routes, the communication route corresponding to the lowest route cost. The method of FIG. 2 terminates at the step S240. The initial metric as defined in Appendix B of G3-PLC takes into account only the proportion of active subcarriers. Thus the same result is obtained for CENELEC-A with 18 tones (a proportion of 50%) and FCC with 36 tones (also a proportion of 50%). The new metric proposed for determining a route cost has the advantage of representing, in a parameterisable manner, the saving in route cost due to a multi-band approach. As illustrated by the various numerical examples, adding a frequency band, even with a high route cost, improves the global route cost compared with the case of the use of a single frequency band. It makes it possible optionally to take account of a particular robust mode, i.e. the hyper-robust mode. It also covers the single-band case as defined in Appendix B of the current G3-PLC standard.
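The two combination formulas and the numerical examples above can be checked with a short sketch. The hyper-robust term is omitted (MODKhr = 0), and for simplicity the Eq. 1 sum skips every band whose cost equals the minimum, which matches the examples given here:

```python
# Sketch checking the multi-band combination formulas (Eq. 1) and (Eq. 2)
# against the numerical examples above (no hyper-robust mode).

def global_cost_eq1(band_costs, adp_kmb):
    mn = min(band_costs)
    # Simplification: skip every cost equal to the minimum (the text skips
    # only the index m0 of the minimum).
    bonus = sum(mn * adp_kmb / (c * 255) for c in band_costs if c != mn)
    return mn / (1 + bonus)

def global_cost_eq2(band_costs):
    # Harmonic combination, with no adpKmb weighting.
    return 1 / sum(1 / c for c in band_costs)

# CENELEC-A cost 50, FCC cost 100, adpKmb = 55:
assert round(global_cost_eq1([50, 100], adp_kmb=55)) == 45
# Same bands with the simpler (Eq. 2):
assert round(global_cost_eq2([50, 100])) == 33
```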
This is because, using identical values of adpKc[m] on all the bands, the metric gives exactly the same value. This metric can easily be used on the existing node devices by virtue of a simple software update. FIG. 4 illustrates schematically an example of hardware architecture of a node device 130 of the mesh communication network 120 according to one embodiment. It should be noted that FIG. 4 could also illustrate schematically an example of hardware architecture of a processing module included in the node device. According to the example of hardware architecture shown in FIG. 4, the node device 130 then comprises, connected by a communication bus 1400: a processor or CPU (Central Processing Unit) 1401; a random access memory RAM 1402; a read only memory ROM 1403; a storage unit such as a hard disk (or a storage medium reader, such as an SD ("Secure Digital") card reader) 1404; and at least one communication interface 1405 enabling the node device 130 to communicate with the node devices belonging to its network neighbourhood, e.g. the node devices 131 and 133. The processor 1401 is capable of executing instructions loaded into the RAM 1402 from the ROM 1403, from an external memory (not shown), from a storage medium (such as an SD card), or from a communication network. When the node device is powered up, the processor 1401 is capable of reading instructions from the RAM 1402 and executing them. These instructions form a computer program causing the implementation, by the processor 1401, of all or some of the methods described in relation to FIGS. 2 and 3. The methods described in relation to FIGS. 2 and 3 can be implemented in software form by executing a set of instructions by a programmable machine, for example a DSP (digital signal processor) or a microcontroller, or be implemented in hardware form by a machine or a dedicated component, for example an FPGA (field-programmable gate array) or an ASIC (application-specific integrated circuit).
In general, the node device 130 comprises electronic circuitry configured to implement the methods described in relation to FIGS. 2 and 3.
11863425

DESCRIPTION OF EMBODIMENTS

Hereinafter, an exemplary embodiment of the present invention will be described with reference to FIGS. 1 to 5. As mentioned above, when the trigger message is sent over NAS, NPL 3 describes that a trigger without NAS security protection should be discarded by the MTC device. The trigger source or a network node such as the MTC-IWF will not know about the discard and will repeatedly send the same trigger again, which may be discarded by the MTC device again. This can cause a few problems: 1) the trigger will not reach the MTC device; 2) the (power-sensitive) MTC device will consume and waste battery; 3) network traffic is wasted. In order to address these problems, as shown in FIG. 1, a system according to this exemplary embodiment includes a core network (3GPP network), one or more MTC devices 10 which connect to the core network through a RAN (Radio Access Network), and an SCS 30 and an SME 40, each of which is placed outside the core network and serves as a transmission source of a trigger message. Among them, each MTC device 10 is a UE for MTC communication with the core network via the Um/Uu/LTE-Uu interface. The UE can host one or multiple MTC Applications. The corresponding MTC Applications in the external network are hosted on one or multiple ASs (Application Servers). Further, the SCS 30 and the SME 40 connect to the core network to communicate with the MTC device 10. Furthermore, the core network includes an MTC-IWF 21, an HSS 22, and a GGSN/P-GW 23 in the HPLMN (Home Public Land Mobile Network), and includes an MME/SGSN/MSC 24 and an S-GW 25 in the VPLMN (Visited PLMN). In the core network, each of the MTC-IWF 21 and the GGSN/P-GW 23 serves as a network node which receives a trigger message from its transmission source, each of the MME/SGSN/MSC 24 and the S-GW 25 serves as a network element which forwards the trigger message to the MTC device 10, and the HSS 22 (or e.g. an HLR (Home Location Register)) serves as a server which provides various information to the network node.
Typically, in the case of a NAS message, the MTC-IWF 21 receives a trigger message from the SCS 30 via the Tsp interface, and then forwards the trigger message to the MME via the T5b interface. On the other hand, in the case of an SMS message, the MTC-IWF 21 receives a trigger message from the SME 40 via the T4 and Tsms interfaces (i.e. through the SMS-SC/GMSC/IWMSC) or from the SCS 30 via the Tsp interface, and then forwards the trigger message to the MME/SGSN/MSC 24 via the T5b/T5a/T5c interface. Thus, the trigger message can be routed by the MME/SGSN/MSC 24 to the MTC device 10. The HSS 22 stores MTC device capabilities and serving node information, which will be described later, and notifies them to the MTC-IWF 21 via the S6m interface. The GGSN/P-GW 23 receives a trigger message from the SCS 30 or directly from the AS via the Gi/SGi interface, and then forwards the trigger message to the SGSN or the S-GW 25 through the user plane, so that the trigger message can also be routed to the MTC device 10. Next, operation examples of this exemplary embodiment will be described in detail with reference to FIG. 2. In this exemplary embodiment, assume that the trigger source (i.e. the SCS 30 or the SME 40) is properly authenticated to the network (Step S1). Mutual authentication between the MTC device 10 and the network is also performed.

(1) Optimization of MTC Device Trigger Delivery

1) The MTC-IWF 21 downloads UE capabilities from the HSS 22 via the interface S6m (Step S2). This can be a new message or the same message with which the MTC-IWF 21 retrieves the UE's serving node information from the HSS 22. The UE capabilities can include, for example, information on which communication system (e.g. SAE (System Architecture Evolution)/LTE or 3G) the MTC device 10 supports. Preferably, as will be described in the following (2), the UE capabilities may include information as to whether or not the MTC device 10 supports IMS. On the other hand, the serving node information includes usage rates of the MME/SGSN/MSC 24. Additionally, routing information can be downloaded from the HSS 22 or the HLR.
Data of routing information and serving node information can be pushed or downloaded from the HSS/HLR and saved locally in the SMSC/SMS-GMSC. The downloading can happen when: (A) the MTC-IWF 21 receives the first trigger; or (B) the MTC device 10 is attached to the network and the HSS 22 pushes the information to the MTC-IWF 21.

2) The MTC-IWF 21 stores the UE capabilities and serving node information locally, for a given period (Step S3).

3) The HSS 22 or the MTC-IWF 21 creates a priority list of MTC device trigger delivery routes, with an expiry timer (Step S4). The priority could be simply a random selection, or decided by operator policy of network usage, or based on the serving node information and UE capabilities. Taking as an example the case where the serving node information includes the usage rates, the priority list includes records in which the MME/SGSN/MSC 24 are stored in association with their respective usage rates. Further, in the case where the list is created by the HSS 22, the MTC-IWF 21 downloads the list from the HSS 22. The downloading and/or creation are performed before the MTC-IWF 21 receives the trigger from the SCS 30. Note that the list should be removed if the MTC-IWF 21 is informed that the MTC device 10 is detached, or when the list has expired.

4) The MTC-IWF 21 receives the trigger from the SCS 30 (Step S5).

5) The MTC-IWF 21 performs authorization of the SCS 30, to see whether it can send the trigger message.

6) The MTC-IWF 21 checks the security context at a given network element, e.g. the MME (Steps S6 and S7), which can be done by: (A) pinging the given network element for information, or by analyzing the information received from the HSS; or (B) checking against the information provided by the HSS 22 when the MTC-IWF 21 downloaded the serving node information, or pushed from the HSS 22, e.g. when the UE changed its location.

7) If the MME responds that it has no valid security context for the UE, the MTC-IWF 21 will send the trigger message to the next serving node in the priority list, e.g. the SGSN (Steps S8 and S9). Then, the SGSN forwards the trigger message to the MTC device 10 (Step S10).
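The fallback of step 7 over the priority list can be sketched as follows (illustrative names only, not 3GPP-defined interfaces):

```python
# Illustrative sketch of step 7: the MTC-IWF walks the priority list of
# serving nodes, skips paths already marked invalid, and marks a path
# invalid when the node reports no valid security context for the UE.

def deliver_trigger(priority_list, has_security_context, invalid_paths):
    for node in priority_list:
        if node in invalid_paths:
            continue                 # never retry a path known to have failed
        if has_security_context(node):
            return node              # forward the trigger via this node
        invalid_paths.add(node)      # mark the failed path invalid
    return None                      # no control-plane path left

invalid = set()
chosen = deliver_trigger(["MME", "SGSN", "MSC"],
                         has_security_context=lambda n: n == "SGSN",
                         invalid_paths=invalid)
assert chosen == "SGSN"
assert invalid == {"MME"}   # the MME path was probed, failed, and marked
```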
The MTC-IWF 21 should ensure that it does not choose the same route again, by marking the failed path invalid. Thus, it is possible to prevent the trigger message from being redundantly re-forwarded through the failed path, so that the trigger message can more rapidly reach the MTC device 10. The route can become valid again if the MTC-IWF 21 receives information from the HSS 22 or the MME that a security context has been established. Thus, in this exemplary embodiment, it is possible to ensure that the trigger message can securely reach the MTC device 10, by deciding the network element which should transfer the trigger message based on the list. In the case where the MTC-IWF 21 creates the list, it is possible to select the valid path rapidly. This is because the MTC-IWF 21 operates as an entrance into the core network. Further, in the case where the list includes records in which the MME/SGSN/MSC 24 are stored in association with their respective usage rates, the MTC-IWF 21 can select the MME/SGSN/MSC 24 in ascending order of usage rate. Therefore, it is possible to reduce congestion of the core network.

8) The UE (MTC device 10) checks the validity of the message carrying the trigger (this follows the security requirements of the current 3GPP specifications) (Step S11).

9) If the message is not validated correctly, then the MTC device 10 discards the trigger message (Step S12) and sends a Reject message to the MTC-IWF 21 indicating the reject cause (e.g.
no proper security protection) (Step S13); otherwise, it accepts the trigger.

10) After receiving the Reject message, the MTC-IWF 21 can do as follows: (A) choose the next path which is not marked as invalid from the priority list, and then forward the trigger through the chosen path (Step S14); (B) when there is no control plane path available, the MTC-IWF 21 can forward the Reject message to the SCS 30 such that the SCS 30 can send the trigger through the user plane (Steps S15 and S16); or (C) request the MME to initiate AKA (Authentication and Key Agreement) and SMC (Security Mode Command) procedures to establish a security context such that it can forward the trigger message.

Thus, in this exemplary embodiment, it is also possible to prevent the trigger message from being redundantly re-forwarded by use of the Reject message. Therefore, it is possible to reduce congestion of the core network and battery consumption of the MTC device 10. For example, it can be ensured that an emergent trigger message or the like reaches the MTC device 10. Although the illustration is omitted, with respect to the user plane, the GGSN/P-GW 23 performs processing similar to that of the MTC-IWF 21. Specifically, when the GGSN/P-GW 23 receives from the MTC device 10 a Reject message with a cause indicating that there was no proper user plane confidentiality protection, it finds another path to deliver the trigger. For example, if a path via the SGSN is not protected, the GGSN/P-GW 23 chooses a protected path via the S-GW 25 to forward the trigger message.

(2) Consideration of SMS-Based Trigger for a Non-IMS-Support MTC Device

When the trigger message is sent as an SMS, MTC devices which do not support IMS should also be considered. For an SMS trigger message carried in a NAS message to an MTC device which does not support IMS, CSFB may be initiated such that the MME will forward the message to the MSC. This will cause unnecessary traffic and delay the trigger delivery.
In order to avoid them, the operation of this exemplary embodiment is performed as follows.
1) MTC-IWF 21 can download the MTC device capability of supporting IMS from HSS 22 as described in (1). When an SMS trigger is to be forwarded, MTC-IWF 21 should check the locally stored information to see whether the MTC device 10 supports IMS or not.
2) If the MTC device 10 does not support IMS, MTC-IWF 21 should forward the trigger directly to the MSC, not the MME.
In this way, the SMS trigger message is directly forwarded to the MSC, not through the MME. Therefore, it is possible to avoid causing unnecessary traffic from the MME to the MSC, and thus to prevent the SMS trigger message from being delayed due to the redundant routing through both the MME and the MSC. As shown in FIG. 3, the MTC-IWF 21 includes at least a part or all of a storage unit 211, a selection unit 212, a forwarding unit 213, a reception unit 214, a switching unit 215, a check unit 216, an exclusion unit 217, and a downloading unit 218. These units 211 to 218 are mutually connected with each other through a bus or the like. The storage unit 211 stores the priority list. The selection unit 212 selects one of the MME/SGSN/MSC 24 based on the priority list. The forwarding unit 213 forwards the trigger message to the MTC device 10 through the selected one of the MME/SGSN/MSC 24. The reception unit 214 receives the trigger message from the SCS 30 or the SME 40, and receives the Reject message from the MTC device 10 through the selected one of the MME/SGSN/MSC 24. The switching unit 215 causes the forwarding unit 213 to forward the trigger message through a different one of the MME/SGSN/MSC 24 when the Reject message is received by the reception unit 214. The check unit 216 checks whether or not the selected one of the MME/SGSN/MSC 24 can securely forward the trigger message to the MTC device 10.
The exclusion unit 217 instructs the forwarding unit 213 to exclude the selected one of the MME/SGSN/MSC 24 upon the subsequent forwarding, when the check unit 216 determines that the selected one of the MME/SGSN/MSC 24 cannot securely forward the trigger message. The downloading unit 218 can download from the HSS 22 the priority list to be stored in the storage unit 211. Further, the downloading unit 218 downloads the MTC device capability from the HSS 22. When the MTC device capability indicates that the MTC device 10 does not support IMS, the forwarding unit 213 forwards the trigger message directly to the MSC. These units 211 to 218 can be configured by, for example, transceivers which respectively conduct communication with the HSS 22, the MME/SGSN/MSC 24, the SCS 30 and the SME 40, and a controller which controls these transceivers to execute the processes shown at Steps S1 to S9 and S13 to S15 in FIG. 2, or processes equivalent thereto. The GGSN/P-GW 23 can also be configured as with the MTC-IWF 21, except for conducting communication with the SGSN, the S-GW 25, the SCS 30 and the AS through the user plane. Further, as shown in FIG. 4, the MTC device 10 includes at least a reception unit 101, a validity unit 102, and a transmission unit 103. These units 101 to 103 are mutually connected with each other through a bus or the like. The reception unit 101 receives the trigger message from the core network. The validity unit 102 validates the trigger message. The transmission unit 103 transmits the Reject message to the core network when the trigger message is not validated by the validity unit 102. These units 101 to 103 can be configured by, for example, a transceiver which wirelessly conducts communication with the core network through the RAN, and a controller which controls this transceiver to execute the processes shown at Steps S10 to S13 and S16 in FIG. 2, or processes equivalent thereto. Furthermore, as shown in FIG. 5, the SCS 30 includes at least a transmission unit 301, a reception unit 302, and a send unit 303.
These units 301 to 303 are mutually connected with each other through a bus or the like. The transmission unit 301 transmits the trigger message to the core network through the control plane (i.e., transmits the trigger message to the MTC-IWF 21 via the Tsp interface). The reception unit 302 receives the Reject message from the MTC-IWF 21. The send unit 303 sends the trigger message through the user plane (i.e., sends the trigger message to the GGSN/P-GW 23 via the Gi/SGi interface) when the Reject message is received by the reception unit 302. These units 301 to 303 can be configured by, for example, transceivers which respectively conduct communication with the MTC-IWF 21 and the GGSN/P-GW 23, and a controller which controls these transceivers to execute the processes shown at Steps S1, S5, S15 and S16 in FIG. 2, or processes equivalent thereto. The SME 40 can also be configured as with the SCS 30, except for transmitting the trigger message to the MTC-IWF 21 via the SMS-SC/GMSC/IWMSC. Note that the present invention is not limited to the above-mentioned exemplary embodiment, and it is obvious that various modifications can be made by those of ordinary skill in the art based on the recitation of the claims. For example, the MTC-IWF 21 or the GGSN/P-GW 23 may transfer the trigger message through a different network element when a response to the trigger message is not received within a predetermined period of time. Specifically, the reception unit 214 receives the response from the MTC device 10. If the response is not received by the reception unit 214 within the period of time, the switching unit 215 causes the forwarding unit 213 to forward the trigger message through a network element different from the selected network element. Note that the period of time can be measured by use of a timer, a counter or the like. Thus, it can also be ensured that the trigger message reaches the MTC device 10.
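The timer-based modification described above (switch to a different network element when no response arrives within a predetermined period) can be sketched as follows. This is an illustrative sketch only: `send` and `response_received` are hypothetical stand-ins for the real forwarding and response-reception operations, and the timer is a simple monotonic clock.

```python
import time

def forward_with_timeout(candidate_nodes, send, response_received, timeout_s=1.0):
    """Forward the trigger via each node in turn; if no response arrives
    within the predetermined period (measured here with a monotonic timer),
    switch to a different network element."""
    for node in candidate_nodes:
        send(node)
        deadline = time.monotonic() + timeout_s
        while time.monotonic() < deadline:
            if response_received(node):
                return node  # the trigger reached the device via this node
            time.sleep(0.01)  # poll instead of blocking on real signaling
    return None  # no candidate yielded a response in time
```

With this variant the device need not send a Reject at all; the sender's timer alone drives the switch to the next element.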
In this case, it may not be required for the MTC device 10 to send the Reject message, so that modifications to the MTC device 10 can be reduced compared with the above-mentioned exemplary embodiment. The whole or part of the exemplary embodiment disclosed above can be described as, but is not limited to, the following supplementary notes.
(Supplementary Note 1) MTC-IWF downloads (by requesting or being pushed) MTC device capabilities from HSS via interface S6m, including, for example, whether the MTC device supports IMS. This can be a new message or a new field in the message by which MTC-IWF retrieves MTC device serving node information.
(Supplementary Note 2) MTC device trigger delivery route priority list. This list is created based on the operator policy of network usage and/or on UE capability. The list can be created in HSS and then pushed to MTC-IWF, or created by MTC-IWF after it has downloaded the necessary information from HSS. The list can be stored locally in MTC-IWF.
(Supplementary Note 3) If an MME is the serving node, MTC-IWF checks with the MME to see if it has a valid NAS security context. When the MME does not have a valid security context, MTC-IWF should forward the trigger to other entities such as SGSN/MSC according to the delivery route priority.
(Supplementary Note 4) When the MTC device receives a trigger embedded in an unprotected NAS or user plane message, it sends a Trigger Reject message with a cause indication to the network node: MTC-IWF or GGSN/P-GW.
(Supplementary Note 5) MTC-IWF, which receives a reject message with a cause indicating that there was no proper NAS protection, finds another path to deliver the trigger. When all the control plane paths are unavailable, MTC-IWF can initiate the AKA and SMC procedures. It can also forward the Reject message to SCS, such that SCS can send the trigger message via the user plane.
(Supplementary Note 6) GGSN/P-GW, which receives a reject message with a cause indicating that there was no proper user plane confidentiality protection, finds another path to deliver the trigger.
2. Discussion
There are two issues discussed in this document. First, SA2 TS 23.682 considers roaming in the architecture. In this case, the visited network may not be trusted by the MTC device, and triggers forwarded from such a network should not be trusted or taken as valid either. Thus the MTC device should:
verify whether the MTC-IWF it communicates with is authorized;
be able to verify whether the trigger is from an authorized MTC-IWF. If it is from an invalid MTC-IWF, the MTC device should inform the MME such that the MME will suspend the communication with the MTC-IWF and may take further action.
Second, when the MTC device receives a trigger without NAS integrity protection, the MTC device (as described in TR 33.868) "could discard the trigger or alternatively look deeper into the trigger if end-to-end protection was applied". A few concerns arise:
The trigger cannot be received, and the MTC server or MTC user has no knowledge of the discard.
It wastes network traffic and the MTC device's battery if the MME sends a trigger which will not be received.
In order to solve the above-described issues:
The MME should not send the trigger without protection in the first place.
If such a trigger is received, the MTC device should send a Reject message to MME/MTC-IWF/SCS with a cause of rejection such that the network can act accordingly:
The MME can initiate the AKA procedure to establish a security context.
The MTC-IWF can send the trigger via another path (i.e., via another network node), for example, the SGSN. This can depend on operator policy and/or MTC device capabilities.
Based on the discussion above, we propose the following change to TR 33.868.
Solution 1: Triggering Via NAS Signaling
The main device triggering mechanisms currently being considered in SA2 TR 23.888 [10] are triggering via NAS signalling (e.g.
a new information element in an existing NAS message or a new NAS message) and triggering via SMS. The SMS trigger may possibly also be sent from the network to the MTC Device using NAS as a transport. In this case, current NAS security mechanisms can be used to solve the security issue. After NAS SMC, NAS security is activated. All NAS signaling messages should be integrity-protected according to TS 33.401 [13], and therefore current LTE security mechanisms ensure that the trigger indication is not tampered with. In this case the SMS trigger will also benefit from the integrity protection of NAS signalling in LTE. Source verification needs to be considered, which in this context is understood to mean that the MTC Device can verify that the source of the trigger is a valid MTC server. This could be achieved in the following way. The MTC Device trusts the 3GPP network sending the NAS integrity protected trigger. In this case the MTC Device could be configured with identities of trusted 3GPP networks. (Somewhat analogously to how trusted non-3GPP access networks can be configured in the UE in TS 33.402.) In this context a trusted 3GPP network would mean a network which has a secured interface from the MTC server to the 3GPP network, and which is trusted to ensure that only trigger indications received from authorized MTC servers will lead to triggering of MTC Devices "belonging" to that MTC server. The network may not be trusted, for example, when the MTC device is roaming in a visited network, or when there is a strict security requirement for MTC. The MTC device should verify whether the trigger is forwarded from a valid MTC-IWF. When the MTC Device then receives a NAS integrity protected trigger, it can, after verifying NAS integrity protection, verify the 3GPP network in the sense described above. If both can be verified, the trigger can be accepted. The MME should not send the trigger in a NAS message without integrity protection.
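The acceptance rule described above (accept only when both NAS integrity protection and the trust of the sending 3GPP network can be verified, otherwise reject with a cause) can be sketched as follows. The function name and cause strings are hypothetical, chosen only to illustrate the decision.

```python
def check_trigger(nas_integrity_verified, network_trusted):
    """Accept the trigger only when both NAS integrity protection and the
    trust of the sending 3GPP network are verified; otherwise reject with a
    cause so that the network can act accordingly (e.g., initiate AKA or
    choose another route)."""
    if nas_integrity_verified and network_trusted:
        return ("accept", None)
    if not nas_integrity_verified:
        return ("reject", "no_integrity_protection")
    return ("reject", "untrusted_network")
```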
If there is no NAS integrity protection of the trigger, or if the 3GPP network is not trusted, the MTC Device could discard the trigger and send a Reject message to the MME and MTC-IWF with a proper cause, or alternatively look deeper into the trigger if end-to-end protection was applied. When the MME receives a reject response from the MTC device with a cause indicating no integrity protection or an integrity check failure, the MME can:
Initiate the 3GPP AKA procedure towards the MTC device so that, when there is a security context shared between them, the MME can forward the trigger;
Or forward the reject message to the MTC-IWF, so that the MTC-IWF can choose another route to send the trigger.
This application is based upon and claims the benefit of priority from Japanese patent application No. 2012-147982, filed on Jun. 29, 2012, and Japanese patent application No. 2012-209393, filed on Sep. 24, 2012, the disclosures of which are incorporated herein in their entirety by reference.
REFERENCE SIGNS LIST
10 MTC DEVICE
21 MTC-IWF
22 HSS
23 GGSN/P-GW
24 MME/SGSN/MSC
25 S-GW
30 SCS
40 SME
101, 214, 302 RECEPTION UNIT
102 VALIDITY UNIT
103, 301 TRANSMISSION UNIT
211 STORAGE UNIT
212 SELECTION UNIT
213 FORWARDING UNIT
215 SWITCHING UNIT
216 CHECK UNIT
217 EXCLUSION UNIT
218 DOWNLOADING UNIT
303 SEND UNIT
11863426
DETAILED DESCRIPTION
The following detailed description of example implementations refers to the accompanying drawings. The same reference numbers in different drawings may identify the same or similar elements. Currently, when an endpoint device requests a service, a network device of the network determines a best path based on several parameters, including application service level agreement (SLA) metrics (e.g., quality of experience, latency, jitter, and/or similar requirements). However, while choosing the best path, the network device fails to consider where multiple destination server devices (e.g., hosting the service) are located. The service could be hosted on multiple server devices located in different geographical regions. The network device may choose the best possible path based on the application SLA metrics and/or based on a geographical location of a destination server device (e.g., a server device that is physically closest to the endpoint device). However, the approach based on the application SLA metrics fails to consider that the service is available at multiple destination server devices and via multiple paths. The approach based on the geographical location of the destination server device always selects the nearest possible server device based on the policy. Thus, current techniques for selecting a destination server device for a service consume computing resources (e.g., processing resources, memory resources, communication resources, and/or the like), networking resources, and/or the like, associated with causing a degraded service to be provided by a nonoptimal server device over a nonoptimal path, handling complaints associated with a user experience due to the degraded service, generating unnecessary congestion in a network, losing traffic associated with the service due to the nonoptimal path, and/or the like.
Some implementations described herein relate to a network device that determines a best destination over a best path using multifactor path selection. For example, a network device may receive a request for a service from an endpoint device located in a first region, and may determine whether destination addresses are identified for the service and the first region. The network device may determine whether the service and the first region are identified in a multifactor path selection (MFPS) lookup table, based on the destination addresses being identified for the service and the first region, and may receive performance metrics associated with multiple paths in the first region to the destination addresses, based on the service and the first region not being identified in the MFPS lookup table. The network device may generate a performance metrics matrix based on the performance metrics, and may identify a best destination and a best path, and a next best destination and a next best path, for the service in the first region based on the performance metrics matrix. The network device may provide data identifying the best destination, the best path, the next best destination, and the next best path, for the first region, in the MFPS lookup table, and may cause, for the endpoint device, a connection to the service to be established via the best destination and the best path for the first region. In this way, the network device determines a best destination over a best path using multifactor path selection. For example, the network device may determine the best destination over the best path based on SLA metrics associated with multiple destination server devices, multiple paths, and a service being provided. This may ensure that an endpoint device receives the service (e.g., an application) from the best destination server device of a plurality of server devices (e.g., hosting the application) associated with cloud service providers. 
Thus, the network device conserves computing resources, networking resources, and/or the like that would otherwise have been consumed by causing a degraded service to be provided by a nonoptimal server device over a nonoptimal path, handling complaints associated with a user experience due to the degraded service, generating unnecessary congestion in a network, losing traffic associated with the service due to the nonoptimal path, and/or the like. FIGS. 1A-1K are diagrams of an example 100 associated with determining a best destination over a best path using multifactor path selection. As shown in FIGS. 1A-1K, example 100 includes an endpoint device, a network with a plurality of network devices, and a plurality of server devices. The server devices may include a domain name system (DNS) server device, a first server device with a first destination address (e.g., D1), a second server device with a second destination address (e.g., D2), a third server device with a third destination address (e.g., D3), and a fourth server device with a fourth destination address (e.g., D4). Further details of the endpoint device, the network, the network devices, and the server devices are provided elsewhere herein. As shown in FIG. 1A, and by reference number 105, the network device may receive a request for a service from an endpoint device located in a first region (e.g., a geographical region). For example, a first user (e.g., User A) in the first region (e.g., Region 1) may wish to access a service that may be provided by the first through fourth server devices. The first user may utilize the endpoint device to generate the request for the service and may cause the endpoint device to provide the request for the service to the network. The network device (e.g., a web gateway) may receive the request for the service from the endpoint device.
As further shown in FIG. 1A, and by reference number 110, the network device may determine whether destination addresses are identified for the service and the first region. For example, the network device may determine whether destination addresses (e.g., of one or more of the first through fourth server devices) are identified for the service and the first region. The destination addresses for the service and the first region may be identified if the network device previously enabled access to the service for an endpoint device associated with the first region. In such situations, the network device may have previously received the destination addresses for the service and the first region from the DNS server device. The destination addresses for the service and the first region may not be identified if the network device has not enabled access to the service for an endpoint device associated with the first region. As further shown in FIG. 1A, and by reference number 115, the network device may request and receive the destination addresses when the destination addresses are not identified for the service and/or the first region. For example, when the network device determines that the destination addresses are not identified for the service and/or the first region, the network device may perform a DNS resolution to obtain the destination addresses. The DNS resolution may include the network device generating a request for the destination addresses (e.g., of one or more of the first through fourth server devices) associated with the service and the first region. The network device may provide the request for the destination addresses to the DNS server device, and the DNS server device may identify the destination addresses of the first through fourth server devices (e.g., D1, D2, D3, and D4) based on the request. The DNS server device may provide the destination addresses to the network device, and the network device may receive the destination addresses.
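The resolution flow above (perform a DNS lookup only when destination addresses are not already identified for the service and region) can be sketched as follows. This is an illustrative sketch only: the function names and the dictionary-based cache are hypothetical stand-ins for the network device's stored state and its request to the DNS server device.

```python
def get_destination_addresses(service, region, known, dns_resolve):
    """Return destination addresses for (service, region), performing a DNS
    resolution only when they are not already identified. `known` holds
    previously received addresses; `dns_resolve` stands in for the request
    to the DNS server device."""
    key = (service, region)
    if key not in known:
        known[key] = dns_resolve(service)  # e.g., ["D1", "D2", "D3", "D4"]
    return known[key]
```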
As shown in FIG. 1B, and by reference number 120, the network device may determine whether the service and the first region are identified in an MFPS lookup table. For example, the network device may maintain and store an MFPS lookup table that identifies paths (e.g., through the network) for the destination addresses associated with the service and the first region, and availabilities associated with the paths. If the network device previously enabled the service to be provided to an endpoint device in the first region, from the first through fourth server devices, the network device may have previously included the service and the first region in the MFPS lookup table. Thus, the network device may determine that the service and the first region are identified in the MFPS lookup table. If the network device has not enabled the service to be provided to an endpoint device in the first region, from the first through fourth server devices, the network device may not have included the service and the first region in the MFPS lookup table. Thus, the network device may determine that the service and the first region are not identified in the MFPS lookup table. As shown in FIG. 1C, and by reference number 125, the network device may request and receive performance metrics associated with multiple paths in the first region to the destination addresses based on the service and the first region not being identified in the MFPS lookup table. For example, the network device may determine that the service and the first region are not identified in the MFPS lookup table when the network device has not enabled the service to be provided to an endpoint device in the first region from the first through fourth server devices.
When the network device determines that the service and the first region are not identified in the MFPS lookup table, the network device may initiate SLA probes for the destination addresses over all available paths (e.g., through the network) associated with the destination addresses. An SLA probe is a network performance measurement and diagnostic tool that uses active monitoring via generation of traffic in a continuous, reliable, and predictable manner. An SLA probe may transmit traffic across the network to measure performance metrics associated with multiple destination addresses and multiple paths. The performance metrics may include availability metrics (e.g., percent availability of a path), jitter metrics, latency metrics, response times, packet loss metrics, and/or the like. The network device may utilize the SLA probes to request and receive, from the network, the performance metrics associated with the multiple paths in the first region to the destination addresses. In some implementations, the network device may initiate the SLA probes after a predetermined time period, when a request for a service is not received, every time a request for a service is received, and/or the like. As shown in FIG. 1D, and by reference number 130, the network device may generate a performance metrics matrix based on the performance metrics. For example, the network device may create the performance metrics matrix and may populate the performance metrics matrix with the performance metrics associated with the multiple paths in the first region to the destination addresses. In some implementations, the network device may utilize one or more of the performance metrics, associated with the multiple paths in the first region to the destination addresses, to populate the performance metrics matrix. For example, the network device may utilize availability metrics associated with the multiple paths in the first region to the destination addresses.
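Populating the performance metrics matrix from SLA-probe results can be sketched as follows. This is an illustrative sketch, not the claimed implementation: probe results are reduced to (path, destination, availability) tuples, using availability as the single metric, as in the example of FIG. 1D.

```python
def build_metrics_matrix(probe_results):
    """Collect SLA-probe availability measurements into a matrix keyed by
    (path, destination). `probe_results` is an iterable of
    (path, destination, availability_percent) tuples."""
    matrix = {}
    for path, destination, availability in probe_results:
        matrix[(path, destination)] = availability
    return matrix
```

With the FIG. 1D values, for example, the entry for ("Path 1", "D3") would hold 95 and the entry for ("Path 1", "D4") would hold 16.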
As further shown in FIG. 1D, the performance metrics matrix may include information indicating that a first path in the first region to the first destination address (e.g., D1) has a 30% availability, the first path in the first region to the second destination address (e.g., D2) has a 60% availability, the first path in the first region to the third destination address (e.g., D3) has a 95% availability, and the first path in the first region to the fourth destination address (e.g., D4) has a 16% availability. The performance metrics matrix may include information indicating that a second path in the first region to the first destination address (e.g., D1) has an 85% availability, the second path in the first region to the second destination address (e.g., D2) has a 60% availability, the second path in the first region to the third destination address (e.g., D3) has an 85% availability, and the second path in the first region to the fourth destination address (e.g., D4) has a 25% availability. The performance metrics matrix may include information indicating that a third path in the first region to the first destination address (e.g., D1) has a 90% availability, the third path in the first region to the second destination address (e.g., D2) has a 60% availability, the third path in the first region to the third destination address (e.g., D3) has a 10% availability, and the third path in the first region to the fourth destination address (e.g., D4) has a 50% availability. As shown in FIG. 1E, and by reference number 135, the network device may identify a best destination and a best path, and a next best destination and a next best path, for the service in the first region based on the performance metrics matrix.
For example, the network device may rank the availabilities in the performance metrics matrix, and may determine that the first path in the first region to the third destination address (e.g., D3) (e.g., a 95% availability, which is the greatest availability) provides the best path and the best destination, respectively, based on ranking the availabilities. The network device may determine that the third path in the first region to the first destination address (e.g., D1) (e.g., a 90% availability, which is the next greatest availability) provides the next best path and the next best destination, respectively, based on ranking the availabilities. As further shown in FIG. 1E, and by reference number 140, the network device may provide data identifying the best destination, the best path, the next best destination, and the next best path, for the first region, in the MFPS lookup table. For example, the network device may populate the MFPS lookup table with the data identifying the best destination, the best path, the next best destination, and the next best path for the first region. As shown in FIG. 1E, the network device may populate the MFPS lookup table with data identifying an index (e.g., 1) for the service, the service (e.g., X) requested by the endpoint device, the region (e.g., Region 1), the best destination (e.g., the third destination address, D3, of the third server device), the best path (e.g., the first path, Path 1), the next best destination (e.g., the first destination address, D1, of the first server device), and the next best path (e.g., the third path, Path 3). As shown in FIG. 1F, and by reference number 145, the network device may cause, for the endpoint device, a connection to the service to be established via the best destination and the best path for the first region. For example, the network device may cause the connection to the service to be established for the endpoint device.
The connection may be established via the best destination (e.g., the third destination address, D3, of the third server device) and the best path (e.g., the first path, Path 1) for the first region. The third server device may utilize the connection to provide the service to the endpoint device. As shown in FIG. 1G, and by reference number 150, the network device may receive another request for the service from another endpoint device located in the first region. For example, a second user (e.g., User B) in the first region (e.g., Region 1) may wish to access the service that may be provided by the first through fourth server devices. The second user may utilize the other endpoint device to generate the other request for the service and may cause the other endpoint device to provide the other request for the service to the network. The network device may receive the other request for the service from the other endpoint device. As further shown in FIG. 1G, and by reference number 155, the network device may identify the best destination and the best path for the service in the first region from the MFPS lookup table. For example, the network device may determine that the destination addresses (e.g., of one or more of the first through fourth server devices) are identified for the service and the first region. Thus, the network device need not request and receive the destination addresses for the service and the first region from the DNS server device. The network device may also determine that the service and the first region are identified in the MFPS lookup table. Thus, the network device need not request and receive the performance metrics associated with the multiple paths in the first region to the destination addresses, and need not generate the performance metrics matrix based on the performance metrics.
Rather, the network device may analyze the MFPS lookup table to identify the best destination and the best path for the service in the first region from the MFPS lookup table. For example, the network device may identify the first path in the first region to the third destination address (e.g., D3) (e.g., a 95% availability, which is the greatest availability) as the best path and the best destination for the service in the first region. As further shown in FIG. 1G, and by reference number 160, the network device may cause, for the other endpoint device, a connection to the service to be established via the best destination and the best path for the first region. For example, the network device may cause the connection to the service to be established for the other endpoint device. The connection may be established via the best destination (e.g., the third destination address, D3, of the third server device) and the best path (e.g., the first path, Path 1) for the first region. The third server device may utilize the connection to provide the service to the other endpoint device. As shown in FIG. 1H, and by reference number 165, the network device may receive another request for the service from another endpoint device located in a second region (e.g., another geographical region separate from the first region). For example, a third user (e.g., User C) in the second region (e.g., Region 2) may wish to access the service that may be provided by the first through fourth server devices. The third user may utilize the other endpoint device to generate the other request for the service and may cause the other endpoint device to provide the other request for the service to the network. The network device may receive the other request for the service from the other endpoint device.
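The MFPS lookup-table behavior described for reference numbers 150-160 (reuse a cached best destination and best path on a table hit; probe, rank, and store on a miss) can be sketched as follows. The function names and the dictionary-based table are hypothetical, chosen only to illustrate the hit/miss flow.

```python
def resolve_service(service, region, mfps_table, probe_and_rank):
    """Return (best_destination, best_path) for (service, region).
    On a lookup-table hit the cached entry is reused; on a miss the
    network is probed, the results ranked, and the entry stored."""
    key = (service, region)
    if key not in mfps_table:
        mfps_table[key] = probe_and_rank(service, region)
    return mfps_table[key]
```

For example, User B's request for service X in Region 1 would be a table hit and reuse (D3, Path 1) without new SLA probes, while User C's request from Region 2 would be a miss that triggers probing.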
As further shown in FIG. 1H, and by reference number 170, the network device may request and receive additional performance metrics associated with multiple paths in the second region to the destination addresses based on the second region not being identified in the MFPS lookup table. For example, the network device may determine that the destination addresses (e.g., of one or more of the first through fourth server devices) are identified for the service and the second region. Thus, the network device need not request and receive the destination addresses for the service and the second region from the DNS server device. However, the network device may determine that the second region is not identified in the MFPS lookup table. Thus, the network device may request and receive, from the network, the additional performance metrics associated with the multiple paths in the second region to the destination addresses. The network device may determine that the second region is not identified in the MFPS lookup table when the network device has not enabled the service to be provided to an endpoint device in the second region from the first through fourth server devices. When the network device determines that the second region is not identified in the MFPS lookup table, the network device may initiate the SLA probes for the destination addresses over all available paths (e.g., through the network) associated with the destination addresses. The additional performance metrics may include availability metrics (e.g., percent availability of a path), jitter metrics, latency metrics, response times, packet loss metrics, and/or the like. The network device may utilize the SLA probes to request and receive, from the network, the performance metrics associated with the multiple paths in the second region to the destination addresses.
As shown inFIG.1I, and by reference number175, the network device may modify the performance metrics matrix based on the additional performance metrics to generate a modified performance metrics matrix. For example, the network device may populate the performance metrics matrix with the additional performance metrics associated with the multiple paths in the second region to the destination addresses. In some implementations, the network device may utilize one or more of the additional performance metrics, associated with the multiple paths in the second region to the destination addresses, to modify the performance metrics matrix. For example, the network device may utilize availability metrics associated with the multiple paths in the second region to the destination addresses. As further shown inFIG.1I, the modified performance metrics matrix may include information indicating that the first path in the second region to the first destination address (e.g., D1) has a 50% availability, the first path in the second region to the second destination address (e.g., D2) has an 83% availability, the first path in the second region to the third destination address (e.g., D3) has a 15% availability, and the first path in the second region to the fourth destination address (e.g., D4) has a 75% availability. The performance metrics matrix may include information indicating that the second path in the second region to the first destination address (e.g., D1) has a 45% availability, the second path in the second region to the second destination address (e.g., D2) has a 95% availability, the second path in the second region to the third destination address (e.g., D3) has a 20% availability, and the second path in the second region to the fourth destination address (e.g., D4) has a 65% availability. 
The performance metrics matrix may include information indicating that the third path in the second region to the first destination address (e.g., D1) has a 15% availability, the third path in the second region to the second destination address (e.g., D2) has an 85% availability, the third path in the second region to the third destination address (e.g., D3) has a 50% availability, and the third path in the second region to the fourth destination address (e.g., D4) has a 91% availability. As shown inFIG.1J, and by reference number180, the network device may identify a best destination and a best path, and a next best destination and a next best path, for the service in the second region based on the modified performance metrics matrix. For example, the network device may rank the availabilities in the modified performance metrics matrix for the second region, and may determine that the second path in the second region to the second destination address (e.g., D2) (e.g., a 95% availability, which is the greatest availability) are the best path and the best destination, respectively, based on ranking the availabilities. The network device may determine that the third path in the second region to the fourth destination address (e.g., D4) (e.g., a 91% availability, which is the next greatest availability) are the next best path and the next best destination, respectively, based on ranking the availabilities. As further shown inFIG.1J, and by reference number185, the network device may provide data identifying the best destination, the best path, the next best destination, and the next best path, for the second region, in the MFPS lookup table. For example, the network device may populate the MFPS lookup table with the data identifying the best destination, the best path, the next best destination, and the next best path for the second region. 
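The ranking step described above can be sketched as follows. The nested dictionary mirrors the second-region availabilities listed in the modified performance metrics matrix; the structure itself is an illustrative assumption.

```python
# Second-region availabilities from the modified performance metrics matrix.
matrix = {
    "Path 1": {"D1": 50, "D2": 83, "D3": 15, "D4": 75},
    "Path 2": {"D1": 45, "D2": 95, "D3": 20, "D4": 65},
    "Path 3": {"D1": 15, "D2": 85, "D3": 50, "D4": 91},
}

def rank_selections(matrix):
    """Return [(path, destination, availability), ...] ranked best-first."""
    cells = [
        (path, dest, avail)
        for path, row in matrix.items()
        for dest, avail in row.items()
    ]
    return sorted(cells, key=lambda cell: cell[2], reverse=True)

ranked = rank_selections(matrix)
print(ranked[0])  # → ('Path 2', 'D2', 95)  best path and best destination
print(ranked[1])  # → ('Path 3', 'D4', 91)  next best path and destination
```

Ranking by availability alone follows the example in the figures; a multifactor ranking would combine the jitter, latency, response-time, and packet-loss metrics into the sort key as well.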
As shown in FIG. 1J, the network device may populate the MFPS lookup table with data identifying an index (e.g., 2) for the service, the service (e.g., X) requested by the other endpoint device, the region (e.g., Region 2), the best destination (e.g., the second destination address, D2, of the second server device), the best path (e.g., the second path, Path 2), the next best destination (e.g., the fourth destination address, D4, of the fourth server device), and the next best path (e.g., the third path, Path 3). As shown in FIG. 1K, and by reference number 190, the network device may cause, for the other endpoint device, a connection to the service to be established via the best destination and the best path for the second region. For example, the network device may cause the connection to the service to be established for the other endpoint device. The connection may be established via the best destination (e.g., the second destination address, D2, of the second server device) and the best path (e.g., the second path, Path 2) for the second region. The second server device may utilize the connection to provide the service to the other endpoint device. In this way, the network device determines a best destination over a best path using multifactor path selection. For example, the network device may determine the best destination over the best path based on SLA metrics associated with multiple destination server devices, multiple paths, and a service being provided. This may ensure that an endpoint device receives the service (e.g., an application) from the best destination server device of a plurality of server devices (e.g., hosting the application) associated with cloud service providers.
Thus, the network device conserves computing resources, networking resources, and/or the like that would otherwise have been consumed by causing a degraded service to be provided by a nonoptimal server device over a nonoptimal path, handling complaints associated with a user experience due to the degraded service, generating unnecessary congestion in a network, losing traffic associated with the service due to the nonoptimal path, and/or the like. As indicated above,FIGS.1A-1Kare provided as an example. Other examples may differ from what is described with regard toFIGS.1A-1K. The number and arrangement of devices shown inFIGS.1A-1Kare provided as an example. In practice, there may be additional devices, fewer devices, different devices, or differently arranged devices than those shown inFIGS.1A-1K. Furthermore, two or more devices shown inFIGS.1A-1Kmay be implemented within a single device, or a single device shown inFIGS.1A-1Kmay be implemented as multiple, distributed devices. Additionally, or alternatively, a set of devices (e.g., one or more devices) shown inFIGS.1A-1Kmay perform one or more functions described as being performed by another set of devices shown inFIGS.1A-1K. FIG.2is a diagram of an example environment200in which systems and/or methods described herein may be implemented. As shown inFIG.2, environment200may include an endpoint device210, a group of network devices220(shown as network device220-1through network device220-N), a server device230, and a network240. Devices of the environment200may interconnect via wired connections, wireless connections, or a combination of wired and wireless connections. The endpoint device210includes one or more devices capable of receiving, generating, storing, processing, and/or providing information, such as information described herein. 
For example, the endpoint device210may include a mobile phone (e.g., a smart phone or a radiotelephone), a set-top box, a laptop computer, a tablet computer, a desktop computer, a handheld computer, a gaming device, a wearable communication device (e.g., a smart watch, a pair of smart glasses, a heart rate monitor, a fitness tracker, smart clothing, smart jewelry, or a head mounted display), a network device (e.g., a router, a residential gateway, and/or the like), or a similar type of device. In some implementations, the endpoint device210may receive network traffic from and/or may provide network traffic to other endpoint devices210and/or the server device230, via the network240(e.g., by routing packets using the network devices220as intermediaries). The network device220includes one or more devices capable of receiving, processing, storing, routing, and/or providing traffic (e.g., a packet or other information or metadata) in a manner described herein. For example, the network device220may include a router, such as a label switching router (LSR), a label edge router (LER), an ingress router, an egress router, a provider router (e.g., a provider edge router or a provider core router), a virtual router, a route reflector, an area border router, or another type of router. Additionally, or alternatively, the network device220may include a gateway, a switch, a firewall, a hub, a bridge, a reverse proxy, a server (e.g., a proxy server, a cloud server, or a data center server), a load balancer, and/or a similar device. In some implementations, the network device220may be a physical device implemented within a housing, such as a chassis. In some implementations, the network device220may be a virtual device implemented by one or more computer devices of a cloud computing environment or a data center. In some implementations, a group of network devices220may be a group of data center nodes that are used to route traffic flow through the network240. 
The server device 230 includes one or more devices capable of receiving, generating, storing, processing, providing, and/or routing information, as described elsewhere herein. The server device 230 may include a communication device and/or a computing device. For example, the server device 230 may include a server, such as an application server, a client server, a web server, a database server, a host server, a proxy server, a virtual server (e.g., executing on computing hardware), or a server in a cloud computing system. In some implementations, the server device 230 includes computing hardware used in a cloud computing environment. The network 240 includes one or more wired and/or wireless networks. For example, the network 240 may include a packet switched network, a cellular network (e.g., a fifth generation (5G) network, a fourth generation (4G) network, such as a long-term evolution (LTE) network, a third generation (3G) network, or a code division multiple access (CDMA) network), a public land mobile network (PLMN), a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), a telephone network (e.g., the Public Switched Telephone Network (PSTN)), a private network, an ad hoc network, an intranet, the Internet, a fiber optic-based network, a cloud computing network, or the like, and/or a combination of these or other types of networks. The number and arrangement of devices and networks shown in FIG. 2 are provided as an example. In practice, there may be additional devices and/or networks, fewer devices and/or networks, different devices and/or networks, or differently arranged devices and/or networks than those shown in FIG. 2. Furthermore, two or more devices shown in FIG. 2 may be implemented within a single device, or a single device shown in FIG. 2 may be implemented as multiple, distributed devices.
Additionally, or alternatively, a set of devices (e.g., one or more devices) of the environment200may perform one or more functions described as being performed by another set of devices of the environment200. FIG.3is a diagram of example components of one or more devices ofFIG.2. The example components may be included in a device300, which may correspond to the endpoint device210, the network device220, and/or the server device230. In some implementations, the endpoint device210, the network device220, and/or the server device230may include one or more devices300and/or one or more components of the device300. As shown inFIG.3, the device300may include a bus310, a processor320, a memory330, an input component340, an output component350, and a communication interface360. The bus310includes one or more components that enable wired and/or wireless communication among the components of the device300. The bus310may couple together two or more components ofFIG.3, such as via operative coupling, communicative coupling, electronic coupling, and/or electric coupling. The processor320includes a central processing unit, a graphics processing unit, a microprocessor, a controller, a microcontroller, a digital signal processor, a field-programmable gate array, an application-specific integrated circuit, and/or another type of processing component. The processor320is implemented in hardware, firmware, or a combination of hardware and software. In some implementations, the processor320includes one or more processors capable of being programmed to perform one or more operations or processes described elsewhere herein. The memory330includes volatile and/or nonvolatile memory. For example, the memory330may include random access memory (RAM), read only memory (ROM), a hard disk drive, and/or another type of memory (e.g., a flash memory, a magnetic memory, and/or an optical memory). 
The memory330may include internal memory (e.g., RAM, ROM, or a hard disk drive) and/or removable memory (e.g., removable via a universal serial bus connection). The memory330may be a non-transitory computer-readable medium. The memory330stores information, instructions, and/or software (e.g., one or more software applications) related to the operation of the device300. In some implementations, the memory330includes one or more memories that are coupled to one or more processors (e.g., the processor320), such as via the bus310. The input component340enables the device300to receive input, such as user input and/or sensed input. For example, the input component340may include a touch screen, a keyboard, a keypad, a mouse, a button, a microphone, a switch, a sensor, a global positioning system sensor, an accelerometer, a gyroscope, and/or an actuator. The output component350enables the device300to provide output, such as via a display, a speaker, and/or a light-emitting diode. The communication interface360enables the device300to communicate with other devices via a wired connection and/or a wireless connection. For example, the communication interface360may include a receiver, a transmitter, a transceiver, a modem, a network interface card, and/or an antenna. The device300may perform one or more operations or processes described herein. For example, a non-transitory computer-readable medium (e.g., the memory330) may store a set of instructions (e.g., one or more instructions or code) for execution by the processor320. The processor320may execute the set of instructions to perform one or more operations or processes described herein. In some implementations, execution of the set of instructions, by one or more processors320, causes the one or more processors320and/or the device300to perform one or more operations or processes described herein. 
In some implementations, hardwired circuitry may be used instead of or in combination with the instructions to perform one or more operations or processes described herein. Additionally, or alternatively, the processor320may be configured to perform one or more operations or processes described herein. Thus, implementations described herein are not limited to any specific combination of hardware circuitry and software. The number and arrangement of components shown inFIG.3are provided as an example. The device300may include additional components, fewer components, different components, or differently arranged components than those shown inFIG.3. Additionally, or alternatively, a set of components (e.g., one or more components) of the device300may perform one or more functions described as being performed by another set of components of the device300. FIG.4is a diagram of example components of one or more devices ofFIG.2. The example components may be included in a device400. The device400may correspond to the network device220. In some implementations, the network device220may include one or more devices400and/or one or more components of the device400. As shown inFIG.4, the device400may include one or more input components410-1through410-B (B≥1) (hereinafter referred to collectively as input components410, and individually as input component410), a switching component420, one or more output components430-1through430-C (C≥1) (hereinafter referred to collectively as output components430, and individually as output component430), and a controller440. The input component410may be one or more points of attachment for physical links and may be one or more points of entry for incoming traffic, such as packets. The input component410may process incoming traffic, such as by performing data link layer encapsulation or decapsulation. In some implementations, the input component410may transmit and/or receive packets. 
In some implementations, the input component 410 may include an input line card that includes one or more packet processing components (e.g., in the form of integrated circuits), such as one or more interface cards (IFCs), packet forwarding components, line card controller components, input ports, processors, memories, and/or input queues. In some implementations, the device 400 may include one or more input components 410. The switching component 420 may interconnect the input components 410 with the output components 430. In some implementations, the switching component 420 may be implemented via one or more crossbars, via buses, and/or with shared memories. The shared memories may act as temporary buffers to store packets from the input components 410 before the packets are eventually scheduled for delivery to the output components 430. In some implementations, the switching component 420 may enable the input components 410, the output components 430, and/or the controller 440 to communicate with one another. The output component 430 may store packets and may schedule packets for transmission on output physical links. The output component 430 may support data link layer encapsulation or decapsulation, and/or a variety of higher-level protocols. In some implementations, the output component 430 may transmit packets and/or receive packets. In some implementations, the output component 430 may include an output line card that includes one or more packet processing components (e.g., in the form of integrated circuits), such as one or more IFCs, packet forwarding components, line card controller components, output ports, processors, memories, and/or output queues. In some implementations, the device 400 may include one or more output components 430. In some implementations, the input component 410 and the output component 430 may be implemented by the same set of components (e.g., an input/output component may be a combination of the input component 410 and the output component 430).
The controller440includes a processor in the form of, for example, a CPU, a GPU, an APU, a microprocessor, a microcontroller, a DSP, an FPGA, an ASIC, and/or another type of processor. The processor is implemented in hardware, firmware, or a combination of hardware and software. In some implementations, the controller440may include one or more processors that can be programmed to perform a function. In some implementations, the controller440may include a RAM, a ROM, and/or another type of dynamic or static storage device (e.g., a flash memory, a magnetic memory, an optical memory, etc.) that stores information and/or instructions for use by the controller440. In some implementations, the controller440may communicate with other devices, networks, and/or systems connected to the device400to exchange information regarding network topology. The controller440may create routing tables based on the network topology information, may create forwarding tables based on the routing tables, and may forward the forwarding tables to the input components410and/or output components430. The input components410and/or the output components430may use the forwarding tables to perform route lookups for incoming and/or outgoing packets. The controller440may perform one or more processes described herein. The controller440may perform these processes in response to executing software instructions stored by a non-transitory computer-readable medium. A computer-readable medium is defined herein as a non-transitory memory device. A memory device includes memory space within a single physical storage device or memory space spread across multiple physical storage devices. Software instructions may be read into a memory and/or storage component associated with the controller440from another computer-readable medium or from another device via a communication interface. 
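Route lookups of the kind the input and output components perform against the controller-built forwarding tables are commonly longest-prefix-match lookups. The sketch below illustrates that behavior with Python's standard ipaddress module; the prefixes and output-component names are invented for illustration and are not taken from the figures.

```python
import ipaddress

# A toy forwarding table mapping destination prefixes to output components.
forwarding_table = [
    (ipaddress.ip_network("10.0.0.0/8"), "output-1"),
    (ipaddress.ip_network("10.1.0.0/16"), "output-2"),
    (ipaddress.ip_network("0.0.0.0/0"), "output-default"),  # default route
]

def lookup(dst):
    """Return the output component for the longest matching prefix."""
    addr = ipaddress.ip_address(dst)
    matches = [(net, out) for net, out in forwarding_table if addr in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(lookup("10.1.2.3"))   # → output-2 (the /16 beats the /8)
print(lookup("192.0.2.1"))  # → output-default
```

In a hardware router this lookup is typically performed by the packet forwarding components on the line cards rather than in general-purpose software, but the matching rule is the same.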
When executed, software instructions stored in a memory and/or storage component associated with the controller440may cause the controller440to perform one or more processes described herein. Additionally, or alternatively, hardwired circuitry may be used in place of or in combination with software instructions to perform one or more processes described herein. Thus, implementations described herein are not limited to any specific combination of hardware circuitry and software. The number and arrangement of components shown inFIG.4are provided as an example. In practice, the device400may include additional components, fewer components, different components, or differently arranged components than those shown inFIG.4. Additionally, or alternatively, a set of components (e.g., one or more components) of the device400may perform one or more functions described as being performed by another set of components of the device400. FIG.5is a flowchart of an example process500for determining a best destination over a best path using multifactor path selection. In some implementations, one or more process blocks ofFIG.5may be performed by a network device (e.g., the network device220). In some implementations, one or more process blocks ofFIG.5may be performed by another device or a group of devices separate from or including the network device, such as an endpoint device (e.g., the endpoint device210) and/or a server device (e.g., the server device230). Additionally, or alternatively, one or more process blocks ofFIG.5may be performed by one or more components of the device300, such as the processor320, the memory330, the input component340, the output component350, and/or the communication interface360. Additionally, or alternatively, one or more process blocks ofFIG.5may be performed by one or more components of the device400, such as the input component410, the switching component420, the output component430, and/or the controller440. 
As shown inFIG.5, process500may include receiving a request for a service from an endpoint device located in a first region (block510). For example, the network device may receive a request for a service from an endpoint device located in a first region, as described above. In some implementations, the network device is a web gateway. As further shown inFIG.5, process500may include determining whether destination addresses are identified for the service and the first region (block520). For example, the network device may determine whether destination addresses are identified for the service and the first region, as described above. As further shown inFIG.5, process500may include determining whether the service and the first region are identified in an MFPS lookup table, based on the destination addresses being identified for the service and the first region (block530). For example, the network device may determine whether the service and the first region are identified in an MFPS lookup table, based on the destination addresses being identified for the service and the first region, as described above. As further shown inFIG.5, process500may include receiving performance metrics associated with multiple paths in the first region to the destination addresses, based on the service and the first region not being identified in the MFPS lookup table (block540). For example, the network device may receive performance metrics associated with multiple paths in the first region to the destination addresses, based on the service and the first region not being identified in the MFPS lookup table, as described above. 
In some implementations, receiving the performance metrics associated with the multiple paths in the first region to the destination addresses includes requesting the performance metrics from server devices associated with the destination addresses, and receiving the performance metrics from the server devices associated with the destination addresses based on requesting the performance metrics. As further shown inFIG.5, process500may include generating a performance metrics matrix based on the performance metrics (block550). For example, the network device may generate a performance metrics matrix based on the performance metrics, as described above. As further shown inFIG.5, process500may include identifying a best destination and a best path, and a next best destination and a next best path, for the service in the first region based on the performance metrics matrix (block560). For example, the network device may identify a best destination and a best path, and a next best destination and a next best path, for the service in the first region based on the performance metrics matrix, as described above. In some implementations, identifying the best destination and the best path for the service in the first region from the MFPS lookup table includes determining whether the service and the first region are identified in the MFPS lookup table, and identifying the best destination and the best path for the service in the first region from the MFPS lookup table, based on the service and the first region being identified in the MFPS lookup table. As further shown inFIG.5, process500may include providing data identifying the best destination, the best path, the next best destination, and the next best path, for the first region, in the MFPS lookup table (block570). 
For example, the network device may provide data identifying the best destination, the best path, the next best destination, and the next best path, for the first region, in the MFPS lookup table, as described above. As further shown inFIG.5, process500may include causing, for the endpoint device, a connection to the service to be established via the best destination and the best path for the first region (block580). For example, the network device may cause, for the endpoint device, a connection to the service to be established via the best destination and the best path for the first region, as described above. In some implementations, process500includes requesting, from a DNS server device, the destination addresses based on the destination addresses not being identified for the service or the first region, and receiving, from the DNS server device, the destination addresses based on requesting the destination addresses. In some implementations, process500includes receiving updated performance metrics associated with the multiple paths in the first region after a predetermined time period. In some implementations, process500includes receiving another request for the service from another endpoint device located in the first region; identifying the best destination and the best path for the service in the first region from the MFPS lookup table; and causing, for the other endpoint device, another connection to the service to be established via the best destination and the best path for the first region. 
In some implementations, process500includes receiving another request for the service from another endpoint device located in a second region that is separate from the first region; receiving additional performance metrics associated with multiple paths in the second region to the destination addresses based on the second region not being identified in the MFPS lookup table; modifying the performance metrics matrix based on the additional performance metrics to generate a modified performance metrics matrix; identifying a best destination and a best path, and a next best destination and a next best path, for the service in the second region, based on the modified performance metrics matrix; and providing data identifying the best destination, the best path, the next best destination, and the next best path, for the second region, in the MFPS lookup table. In some implementations, process500includes causing, for the other endpoint device, another connection to the service to be established via the best destination and the best path for the second region. In some implementations, receiving the additional performance metrics associated with the multiple paths in the second region to the destination addresses includes determining whether the service and the second region are identified in the MFPS lookup table, and receiving the additional performance metrics associated with the multiple paths in the second region to the destination addresses, based on the second region not being identified in the MFPS lookup table. In some implementations, process500includes determining that the best destination or the best path for the second region is unavailable, and causing, for the other endpoint device, the other connection to the service to be established via the next best destination and the next best path for the second region based on determining that the best destination or the best path for the second region is unavailable. 
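The failover behavior described above, falling back to the next best destination and next best path when the best selection is unavailable, can be sketched as follows. The entry layout and the availability check are illustrative assumptions; the entry values match the second-region example.

```python
def establish_connection(entry, is_available):
    """Pick the best selection from an MFPS lookup table entry, falling
    back to the next best destination and path when the best is unavailable."""
    best = (entry["best_destination"], entry["best_path"])
    if is_available(*best):
        return best
    return (entry["next_best_destination"], entry["next_best_path"])

# Second-region entry from the MFPS lookup table example.
entry = {
    "best_destination": "D2", "best_path": "Path 2",
    "next_best_destination": "D4", "next_best_path": "Path 3",
}

# Best selection is down: the connection is established via D4 over Path 3.
print(establish_connection(entry, lambda dest, path: False))  # → ('D4', 'Path 3')
```

Keeping the next best selection in the table lets the network device fail over immediately, without re-probing all available paths.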
In some implementations, process500includes receiving updated additional performance metrics associated with the multiple paths in the first region after a predetermined time period. In some implementations, process500includes determining that the best destination or the best path for the first region is unavailable, and causing, for the endpoint device, a connection to the service to be established via the next best destination and the next best path for the first region based on determining that the best destination or the best path for the first region is unavailable. AlthoughFIG.5shows example blocks of process500, in some implementations, process500may include additional blocks, fewer blocks, different blocks, or differently arranged blocks than those depicted inFIG.5. Additionally, or alternatively, two or more of the blocks of process500may be performed in parallel. The foregoing disclosure provides illustration and description but is not intended to be exhaustive or to limit the implementations to the precise form disclosed. Modifications may be made in light of the above disclosure or may be acquired from practice of the implementations. As used herein, the term “component” is intended to be broadly construed as hardware, firmware, or a combination of hardware and software. It will be apparent that systems and/or methods described herein may be implemented in different forms of hardware, firmware, and/or a combination of hardware and software. The actual specialized control hardware or software code used to implement these systems and/or methods is not limiting of the implementations. Thus, the operation and behavior of the systems and/or methods are described herein without reference to specific software code—it being understood that software and hardware can be used to implement the systems and/or methods based on the description herein. 
Although particular combinations of features are recited in the claims and/or disclosed in the specification, these combinations are not intended to limit the disclosure of various implementations. In fact, many of these features may be combined in ways not specifically recited in the claims and/or disclosed in the specification. Although each dependent claim listed below may directly depend on only one claim, the disclosure of various implementations includes each dependent claim in combination with every other claim in the claim set. No element, act, or instruction used herein should be construed as critical or essential unless explicitly described as such. Also, as used herein, the articles “a” and “an” are intended to include one or more items and may be used interchangeably with “one or more.” Further, as used herein, the article “the” is intended to include one or more items referenced in connection with the article “the” and may be used interchangeably with “the one or more.” Furthermore, as used herein, the term “set” is intended to include one or more items (e.g., related items, unrelated items, a combination of related and unrelated items, and/or the like), and may be used interchangeably with “one or more.” Where only one item is intended, the phrase “only one” or similar language is used. Also, as used herein, the terms “has,” “have,” “having,” or the like are intended to be open-ended terms. Further, the phrase “based on” is intended to mean “based, at least in part, on” unless explicitly stated otherwise. Also, as used herein, the term “or” is intended to be inclusive when used in a series and may be used interchangeably with “and/or,” unless explicitly stated otherwise (e.g., if used in combination with “either” or “only one of”). In the preceding specification, various example embodiments have been described with reference to the accompanying drawings. 
It will, however, be evident that various modifications and changes may be made thereto, and additional embodiments may be implemented, without departing from the broader scope of the invention as set forth in the claims that follow. The specification and drawings are accordingly to be regarded in an illustrative rather than restrictive sense.
11863427 | DESCRIPTION OF EXAMPLE EMBODIMENTS Various embodiments of the disclosure are discussed in detail below. While specific implementations are discussed, it should be understood that this is done for illustration purposes only. A person skilled in the relevant art will recognize that other components and configurations may be used without departing from the spirit and scope of the disclosure. Thus, the following description and drawings are illustrative and are not to be construed as limiting. Numerous specific details are described to provide a thorough understanding of the disclosure. However, in certain instances, well-known or conventional details are not described in order to avoid obscuring the description. References to one or an embodiment in the present disclosure can be references to the same embodiment or any embodiment; and, such references mean at least one of the embodiments. Reference to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the disclosure. The appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Moreover, various features are described which may be exhibited by some embodiments and not by others. The terms used in this specification generally have their ordinary meanings in the art, within the context of the disclosure, and in the specific context where each term is used. Alternative language and synonyms may be used for any one or more of the terms discussed herein, and no special significance should be placed upon whether or not a term is elaborated or discussed herein. In some cases, synonyms for certain terms are provided. A recital of one or more synonyms does not exclude the use of other synonyms. 
The use of examples anywhere in this specification including examples of any terms discussed herein is illustrative only, and is not intended to further limit the scope and meaning of the disclosure or of any example term. Likewise, the disclosure is not limited to various embodiments given in this specification. Without intent to limit the scope of the disclosure, examples of instruments, apparatus, methods and their related results according to the embodiments of the present disclosure are given below. Note that titles or subtitles may be used in the examples for convenience of a reader, which in no way should limit the scope of the disclosure. Unless otherwise defined, technical and scientific terms used herein have the meaning as commonly understood by one of ordinary skill in the art to which this disclosure pertains. In the case of conflict, the present document, including definitions, will control. Additional features and advantages of the disclosure will be set forth in the description which follows, and in part will be obvious from the description, or can be learned by practice of the herein disclosed principles. The features and advantages of the disclosure can be realized and obtained by means of the instruments and combinations particularly pointed out in the appended claims. These and other features of the disclosure will become more fully apparent from the following description and appended claims, or can be learned by the practice of the principles set forth herein.
Overview
Disclosed herein are systems, methods, and computer-readable storage media for providing multicast-based performance routing and policy control for SDWANs. A method can include connecting an SDWAN including a plurality of receivers and a plurality of multicast replicators. The plurality of multicast replicators form a plurality of multicast groups in a network environment. 
The method can also include determining a multicast application-route policy to determine a connection path between the plurality of receivers and the multicast replicators. Further, the method can include selecting a first multicast replicator of the plurality of multicast replicators based on the multicast application-route policy. Additionally, the method can include switching connection paths between the plurality of receivers and the multicast replicators based on the selected first multicast replicator to dynamically tune an overlay multicast tree of the network environment. The method can encompass situations when the multicast application-route policy is based on at least one of the plurality of multicast groups, geographic location, bandwidth indications, system load, and performance. The method can also encompass situations when the switching of the connection paths occurs dynamically across the plurality of multicast replicators based on real-time selections of multicast replicators of the plurality of multicast replicators. The method can further include selecting a second multicast replicator of the plurality of multicast replicators based on the multicast application-route policy and switching the connection paths between the plurality of receivers and the multicast replicators based on the selection of the second multicast replicator to dynamically tune the overlay multicast tree of the network environment. The second multicast replicator is dynamically selected according to the multicast application-route policy based on changing network conditions in the network environment associated with the first multicast replicator. The changing network conditions in the network environment associated with the first multicast replicator include performance of the first multicast replicator operating to provide network service access through the overlay multicast tree in the network environment. 
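The select-then-switch flow summarized above — pick a replicator that satisfies the application-route policy, then re-select when conditions for the current replicator degrade — can be sketched as follows. The policy fields and replicator attributes here are illustrative assumptions, not the patent's exact schema.

```python
# Minimal sketch of policy-driven replicator selection with dynamic
# re-selection. All field names are assumptions for illustration.

def satisfies_policy(replicator, policy):
    """A replicator qualifies if it serves the group and meets limits."""
    return (replicator["group"] in policy["groups"]
            and replicator["load"] <= policy["max_load"]
            and replicator["latency_ms"] <= policy["max_latency_ms"])

def select_replicator(replicators, policy):
    """Prefer the least-loaded replicator that satisfies the policy."""
    candidates = [r for r in replicators if satisfies_policy(r, policy)]
    return min(candidates, key=lambda r: r["load"], default=None)

policy = {"groups": {"232.1.1.1"}, "max_load": 0.8, "max_latency_ms": 30}
replicators = [
    {"name": "rep-1", "group": "232.1.1.1", "load": 0.9, "latency_ms": 10},
    {"name": "rep-2", "group": "232.1.1.1", "load": 0.4, "latency_ms": 25},
]

current = select_replicator(replicators, policy)  # rep-2 (rep-1 overloaded)

# Later, rep-2 degrades and rep-1 recovers; re-running the selection
# "switches" the connection path, tuning the overlay multicast tree.
replicators[1]["latency_ms"] = 50
replicators[0]["load"] = 0.5
current = select_replicator(replicators, policy)  # rep-1
```

Repeating the selection on a timer or on status-advertisement updates gives the real-time switching behavior described in the method.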
The method can additionally encompass when the first multicast replicator is configured to advertise replicator status information of the first multicast replicator to a plurality of multicast routers in the overlay multicast tree, and at least one of the plurality of multicast routers is configured to facilitate the selection of the first multicast replicator based on the multicast application-route policy according to the advertised replicator status information of the first multicast replicator and the switching of the connection paths between the plurality of receivers and the multicast replicators based on the selection of the first multicast replicator. The method can moreover encompass when the multicast application-route policy is specific to one or more multicast groups and is selected based on inclusion of the first multicast replicator in the one or more multicast groups. The method can finally encompass when the multicast application-route policy is specific to one or more transport networks associated with multicast traffic and the application-route policy is selected based on a transport network associated with specific multicast traffic passing between the plurality of receivers and the multicast replicators. A system can include one or more processors and at least one computer-readable storage medium storing instructions which, when executed by the one or more processors, cause the one or more processors to connect an SDWAN including a plurality of receivers and a plurality of multicast replicators. The plurality of multicast replicators form a plurality of multicast groups. The instructions can also cause the processor to determine a multicast application-route policy to determine a connection path between the plurality of receivers and the multicast replicators. Further, the instructions can cause the processor to select a first multicast replicator of the plurality of multicast replicators based on the multicast application-route policy. 
Additionally, the instructions can cause the processor to switch connection paths between the plurality of receivers and the multicast replicators based on the selected first multicast replicator. A non-transitory computer-readable storage medium having stored therein instructions which, when executed by a processor, can cause the processor to connect an SDWAN including a plurality of receivers and a plurality of multicast replicators. The plurality of multicast replicators form a plurality of multicast groups. The instructions can also cause the processor to determine a multicast application-route policy to determine a connection path between the plurality of receivers and the multicast replicators. Further, the instructions can cause the processor to select a first multicast replicator of the plurality of multicast replicators based on the multicast application-route policy. Additionally, the instructions can cause the processor to switch connection paths between the plurality of receivers and the multicast replicators based on the selected first multicast replicator.
DESCRIPTION
The disclosed technology addresses the need in the art for efficiently and effectively controlling multicast routing in SDWANs. The present technology involves systems, methods, and computer-readable media for providing multicast performance routing and policy control in SDWANs. Further, the present technology involves systems, methods, and computer-readable media for controlling multicast replicators in SDWANs based on load and performance conditions. FIG. 1 illustrates an example of a network architecture 100 for implementing aspects of the present technology. An example of an implementation of the network architecture 100 is the Cisco® SDWAN architecture. However, one of ordinary skill in the art will understand that, for the network architecture 100 and any other system discussed in the present disclosure, there can be additional or fewer components in similar or alternative configurations. 
The illustrations and examples provided in the present disclosure are for conciseness and clarity. Other embodiments may include different numbers and/or types of elements, but one of ordinary skill in the art will appreciate that such variations do not depart from the scope of the present disclosure. In this example, the network architecture 100 can comprise an orchestration plane 102, a management plane 120, a control plane 130, and a data plane 140. The orchestration plane 102 can assist in the automatic on-boarding of edge network devices 142 (e.g., switches, routers, etc.) in an overlay network. The orchestration plane 102 can include one or more physical or virtual network orchestrator appliances 104. The network orchestrator appliance(s) 104 can perform the initial authentication of the edge network devices 142 and orchestrate connectivity between devices of the control plane 130 and the data plane 140. In some embodiments, the network orchestrator appliance(s) 104 can also enable communication of devices located behind Network Address Translation (NAT). In some embodiments, physical or virtual Cisco® SD-WAN vBond appliances can operate as the network orchestrator appliance(s) 104. The management plane 120 can be responsible for central configuration and monitoring of a network. The management plane 120 can include one or more physical or virtual network management appliances 122. In some embodiments, the network management appliance(s) 122 can provide centralized management of the network via a graphical user interface to enable a user to monitor, configure, and maintain the edge network devices 142 and links (e.g., Internet transport network 160, MPLS network 162, 4G/LTE network 164) in an underlay and overlay network. The network management appliance(s) 122 can support multi-tenancy and enable centralized management of logically isolated networks associated with different entities (e.g., enterprises, divisions within enterprises, groups within divisions, etc.). 
Alternatively or in addition, the network management appliance(s) 122 can be a dedicated network management system for a single entity. In some embodiments, physical or virtual Cisco® SD-WAN vManage appliances can operate as the network management appliance(s) 122. The management plane 120 can include an analytics engine 124 to provide analytics for the network. The control plane 130 can build and maintain a network topology and make decisions on where traffic flows. The control plane 130 can include one or more physical or virtual network controller appliance(s) 132. The network controller appliance(s) 132 can establish secure connections to each network device 142 and distribute route and policy information via a control plane protocol (e.g., Overlay Management Protocol (OMP) (discussed in further detail below), Open Shortest Path First (OSPF), Intermediate System to Intermediate System (IS-IS), Border Gateway Protocol (BGP), Protocol-Independent Multicast (PIM), Internet Group Management Protocol (IGMP), Internet Control Message Protocol (ICMP), Address Resolution Protocol (ARP), Bidirectional Forwarding Detection (BFD), Link Aggregation Control Protocol (LACP), etc.). In some embodiments, the network controller appliance(s) 132 can operate as route reflectors. The network controller appliance(s) 132 can also orchestrate secure connectivity in the data plane 140 between and among the edge network devices 142. For example, in some embodiments, the network controller appliance(s) 132 can distribute crypto key information among the network device(s) 142. This can allow the network to support a secure network protocol or application (e.g., Internet Protocol Security (IPSec), Transport Layer Security (TLS), Secure Shell (SSH), etc.) without Internet Key Exchange (IKE) and enable scalability of the network. In some embodiments, physical or virtual Cisco® SD-WAN vSmart controllers can operate as the network controller appliance(s) 132. 
The data plane 140 can be responsible for forwarding packets based on decisions from the control plane 130. The data plane 140 can include the edge network devices 142, which can be physical or virtual network devices. The edge network devices 142 can operate at the edges of various network environments of an organization, such as in one or more data centers or colocation centers 150, campus networks 152, branch office networks 154, home office networks 156, and so forth, or in the cloud (e.g., Infrastructure as a Service (IaaS), Platform as a Service (PaaS), SaaS, and other cloud service provider networks). The edge network devices 142 can provide secure data plane connectivity among sites over one or more WAN transports, such as via one or more Internet transport networks 160 (e.g., Digital Subscriber Line (DSL), cable, etc.), MPLS networks 162 (or other private packet-switched networks (e.g., Metro Ethernet, Frame Relay, Asynchronous Transfer Mode (ATM), etc.)), mobile networks 164 (e.g., 3G, 4G/LTE, 5G, etc.), or other WAN technology (e.g., Synchronous Optical Networking (SONET), Synchronous Digital Hierarchy (SDH), Dense Wavelength Division Multiplexing (DWDM), or other fiber-optic technology; leased lines (e.g., T1/E1, T3/E3, etc.); Public Switched Telephone Network (PSTN), Integrated Services Digital Network (ISDN), or other private circuit-switched network; very small aperture terminal (VSAT) or other satellite network; etc.). The edge network devices 142 can be responsible for traffic forwarding, security, encryption, quality of service (QoS), and routing (e.g., BGP, OSPF, etc.), among other tasks. In some embodiments, physical or virtual Cisco® SD-WAN vEdge routers can operate as the edge network devices 142. FIG. 2 illustrates an example of a network topology 200 showing various aspects of the network architecture 100. 
The network topology 200 can include a management network 202, a pair of network sites 204A and 204B (collectively, 204) (e.g., the data center(s) 150, the campus network(s) 152, the branch office network(s) 154, the home office network(s) 156, cloud service provider network(s), etc.), and a pair of Internet transport networks 160A and 160B (collectively, 160). The management network 202 can include one or more network orchestrator appliances 104, one or more network management appliances 122, and one or more network controller appliances 132. Although the management network 202 is shown as a single network in this example, one of ordinary skill in the art will understand that each element of the management network 202 can be distributed across any number of networks and/or be co-located with the sites 204. In this example, each element of the management network 202 can be reached through either transport network 160A or 160B. Each site can include one or more endpoints 206 connected to one or more site network devices 208. The endpoints 206 can include general purpose computing devices (e.g., servers, workstations, desktop computers, etc.), mobile computing devices (e.g., laptops, tablets, mobile phones, etc.), wearable devices (e.g., watches, glasses or other head-mounted displays (HMDs), ear devices, etc.), and so forth. 
The endpoints 206 can also include Internet of Things (IoT) devices or equipment, such as agricultural equipment (e.g., livestock tracking and management systems, watering devices, unmanned aerial vehicles (UAVs), etc.); connected cars and other vehicles; smart home sensors and devices (e.g., alarm systems, security cameras, lighting, appliances, media players, HVAC equipment, utility meters, windows, automatic doors, doorbells, locks, etc.); office equipment (e.g., desktop phones, copiers, fax machines, etc.); healthcare devices (e.g., pacemakers, biometric sensors, medical equipment, etc.); industrial equipment (e.g., robots, factory machinery, construction equipment, industrial sensors, etc.); retail equipment (e.g., vending machines, point of sale (POS) devices, Radio Frequency Identification (RFID) tags, etc.); smart city devices (e.g., street lamps, parking meters, waste management sensors, etc.); transportation and logistical equipment (e.g., turnstiles, rental car trackers, navigational devices, inventory monitors, etc.); and so forth. The site network devices 208 can include physical or virtual switches, routers, and other network devices. Although the site 204A is shown including a pair of site network devices and the site 204B is shown including a single site network device in this example, the site network devices 208 can comprise any number of network devices in any network topology, including multi-tier (e.g., core, distribution, and access tiers), spine-and-leaf, mesh, tree, bus, hub and spoke, and so forth. For example, in some embodiments, one or more data center networks may implement the Cisco® Application Centric Infrastructure (ACI) architecture and/or one or more campus networks may implement the Cisco® Software Defined Access (SD-Access or SDA) architecture. The site network devices 208 can connect the endpoints 206 to one or more edge network devices 142, and the edge network devices 142 can be used to directly connect to the transport networks 160. 
In some embodiments, “color” can be used to identify an individual WAN transport network, and different WAN transport networks may be assigned different colors (e.g., mpls, private1, biz-internet, metro-ethernet, lte, etc.). In this example, the network topology 200 can utilize a color called “biz-internet” for the Internet transport network 160A and a color called “public-internet” for the Internet transport network 160B. In some embodiments, each edge network device 142 can form a Datagram Transport Layer Security (DTLS) or TLS control connection to the network controller appliance(s) 132 and connect to any network controller appliance 132 over each transport network 160. In some embodiments, the edge network devices 142 can also securely connect to edge network devices in other sites via IPSec tunnels. In some embodiments, the BFD protocol may be used within each of these tunnels to detect loss, latency, jitter, and path failures. On the edge network devices 142, color can be used to help identify or distinguish an individual WAN transport tunnel (e.g., the same color may not be used twice on a single edge network device). Colors by themselves can also have significance. For example, the colors metro-ethernet, mpls, and private1, private2, private3, private4, private5, and private6 may be considered private colors, which can be used for private networks or in places where there is no NAT addressing of the transport IP endpoints (e.g., because there may be no NAT between two endpoints of the same color). When the edge network devices 142 use a private color, they may attempt to build IPSec tunnels to other edge network devices using native, private, underlay IP addresses. The public colors can include 3g, biz-internet, blue, bronze, custom1, custom2, custom3, default, gold, green, lte, public-internet, red, and silver. The public colors may be used by the edge network devices 142 to build tunnels to post-NAT IP addresses (if there is NAT involved). 
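The private/public color rules above can be sketched as a small address-selection helper: private colors build tunnels to native (pre-NAT) underlay addresses, while public colors build tunnels to post-NAT addresses. The color sets mirror the lists in the text; the TLOC dictionary shape is an illustrative assumption.

```python
# Sketch of color-based tunnel address selection. The color sets follow
# the lists in the text; the TLOC record fields are assumptions.

PRIVATE_COLORS = {"metro-ethernet", "mpls", "private1", "private2",
                  "private3", "private4", "private5", "private6"}
PUBLIC_COLORS = {"3g", "biz-internet", "blue", "bronze", "custom1",
                 "custom2", "custom3", "default", "gold", "green", "lte",
                 "public-internet", "red", "silver"}

def tunnel_address(tloc):
    """Pick which TLOC address to use when building an IPSec tunnel."""
    if tloc["color"] in PRIVATE_COLORS:
        return tloc["private_ip"]   # native underlay address, no NAT assumed
    return tloc["public_ip"]        # post-NAT address for public colors

tloc = {"color": "mpls", "private_ip": "10.0.0.1", "public_ip": "203.0.113.7"}
print(tunnel_address(tloc))  # 10.0.0.1
```

The carrier setting discussed next would override this choice when two private colors must traverse NAT.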
If the edge network devices 142 use private colors and need NAT to communicate to other private colors, the carrier setting in the configuration can dictate whether the edge network devices 142 use private or public IP addresses. Using this setting, two private colors can establish a session when one or both are using NAT. FIG. 3 illustrates an example of a diagram 300 showing the operation of an OMP, which may be used in some embodiments to manage an overlay of a network (e.g., the network architecture 100). In this example, OMP messages 302A and 302B (collectively, 302) may be transmitted back and forth between the network controller appliance 132 and the edge network devices 142A and 142B, respectively, where control plane information, such as route prefixes, next-hop routes, crypto keys, policy information, and so forth, can be exchanged over respective secure DTLS or TLS connections 304A and 304B. The network controller appliance 132 can operate similarly to a route reflector. For example, the network controller appliance 132 can receive routes from the edge network devices 142, process and apply any policies to them, and advertise routes to other edge network devices 142 in the overlay. If there is no policy defined, the edge network devices 142 may behave in a manner similar to a full mesh topology, where each edge network device 142 can connect directly to another edge network device 142 at another site and receive full routing information from each site. OMP can advertise three types of routes:
- OMP routes, which can correspond to prefixes that are learned from the local site, or service side, of the edge network device 142. The prefixes can be originated as static or connected routes, or from within, for example, the OSPF or BGP protocols, and redistributed into OMP so they can be carried across the overlay. 
OMP routes can advertise attributes such as transport location (TLOC) information (which can be similar to a BGP next-hop IP address) and other attributes such as origin, originator, preference, site identifier, tag, and virtual private network (VPN). An OMP route may be installed in the forwarding table if the TLOC to which it points is active.
- TLOC routes, which can correspond to logical tunnel termination points on the edge network devices 142 that connect into the transport networks 160. In some embodiments, a TLOC route can be uniquely identified and represented by a three-tuple, including an IP address, link color, and encapsulation (e.g., Generic Routing Encapsulation (GRE), IPSec, etc.). In addition to system IP address, color, and encapsulation, TLOC routes can also carry attributes such as TLOC private and public IP addresses, carrier, preference, site identifier, tag, and weight. In some embodiments, a TLOC may be in an active state on a particular edge network device 142 when an active BFD session is associated with that TLOC.
- Service routes, which can represent services (e.g., firewall, distributed denial of service (DDoS) mitigator, load balancer, intrusion prevention system (IPS), intrusion detection system (IDS), WAN optimizer, etc.) that may be connected to the local sites of the edge network devices 142 and accessible to other sites for use with service insertion. In addition, these routes can also include VPNs; the VPN labels can be sent in an update type to tell the network controller appliance 132 what VPNs are serviced at a remote site.
In the example of FIG. 3, OMP is shown running over the DTLS/TLS tunnels 304 established between the edge network devices 142 and the network controller appliance 132. In addition, the diagram 300 shows an IPSec tunnel 306A established between TLOC 308A and 308C over the WAN transport network 160A and an IPSec tunnel 306B established between TLOC 308B and TLOC 308D over the WAN transport network 160B. 
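The TLOC three-tuple described above can be modeled as a simple immutable route key; two TLOCs that differ in any element of the tuple are distinct tunnel termination points. The field names below are illustrative assumptions, not an official schema.

```python
# Sketch of the TLOC three-tuple (system IP, color, encapsulation) as an
# immutable, hashable key. Field names are illustrative assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class TlocRoute:
    system_ip: str   # IP address element of the three-tuple
    color: str       # link color, e.g. "biz-internet" or "mpls"
    encap: str       # encapsulation, e.g. "ipsec" or "gre"

# Same system IP, different colors: two distinct termination points.
a = TlocRoute("10.1.0.1", "biz-internet", "ipsec")
b = TlocRoute("10.1.0.1", "mpls", "ipsec")
assert a != b

# Being hashable, TLOCs can key per-tunnel state such as BFD sessions.
bfd_up = {a: True, b: False}
```

A frozen dataclass keeps the tuple usable as a dictionary key, matching how per-TLOC state (active/inactive via BFD) is tracked in the text.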
Once the IPSec tunnels 306A and 306B are established, BFD can be enabled across each of them. FIG. 4 illustrates an example of a diagram 400 showing the operation of VPNs, which may be used in some embodiments to provide segmentation for a network (e.g., the network architecture 100). VPNs can be isolated from one another and can have their own forwarding tables. An interface or sub-interface can be explicitly configured under a single VPN and may not be part of more than one VPN. Labels may be used in OMP route attributes and in the packet encapsulation, which can identify the VPN to which a packet belongs. The VPN number can be a four-byte integer with a value from 0 to 65530. In some embodiments, the network orchestrator appliance(s) 104, network management appliance(s) 122, network controller appliance(s) 132, and/or edge network device(s) 142 can each include a transport VPN 402 (e.g., VPN number 0) and a management VPN 404 (e.g., VPN number 512). The transport VPN 402 can include one or more physical or virtual network interfaces (e.g., network interfaces 410A and 410B) that respectively connect to WAN transport networks (e.g., the MPLS network 162 and the Internet transport network 160). Secure DTLS/TLS connections to the network controller appliance(s) 132 or between the network controller appliance(s) 132 and the network orchestrator appliance(s) 104 can be initiated from the transport VPN 402. In addition, static or default routes or a dynamic routing protocol can be configured inside the transport VPN 402 to get appropriate next-hop information so that the control plane 130 may be established and IPSec tunnels 306 (not shown) can connect to remote sites. The management VPN 404 can carry out-of-band management traffic to and from the network orchestrator appliance(s) 104, network management appliance(s) 122, network controller appliance(s) 132, and/or edge network device(s) 142 over a network interface 410C. In some embodiments, the management VPN 404 may not be carried across the overlay network. 
In addition to the transport VPN 402 and the management VPN 404, the network orchestrator appliance(s) 104, network management appliance(s) 122, network controller appliance(s) 132, or edge network device(s) 142 can also include one or more service-side VPNs 406. The service-side VPN 406 can include one or more physical or virtual network interfaces (e.g., network interfaces 410D and 410E) that connect to one or more local-site networks 412 and carry user data traffic. The service-side VPN(s) 406 can be enabled for features such as OSPF or BGP, Virtual Router Redundancy Protocol (VRRP), QoS, traffic shaping, policing, and so forth. In some embodiments, user traffic can be directed over IPSec tunnels to other sites by redistributing OMP routes received from the network controller appliance(s) 132 at the site 412 into the service-side VPN routing protocol. In turn, routes from the local site 412 can be advertised to other sites by advertising the service VPN routes into the OMP routing protocol, which can be sent to the network controller appliance(s) 132 and redistributed to other edge network devices 142 in the network. Although the network interfaces 410A-E (collectively, 410) are shown to be physical interfaces in this example, one of ordinary skill in the art will appreciate that the interfaces 410 in the transport and service VPNs can also be sub-interfaces instead. As discussed previously, a multicast VPN solution has been developed that includes a BGP-based Next Generation MVPN that supports different types of tunneling technology. Specifically, SD-WAN overlay multicast can leverage an overlay management protocol (OMP) as a control plane protocol for signaling a message exchange with multicast protocols at customer VPNs. The tunneling for overlay multicast (e.g., “Ingress Replication”) can use IPSec tunnels as unicast traffic. 
While SD-WAN unicast forwarding supports performance service level agreement (SLA)-based policy routing based on flow classification such as Prefix/Port, DSCP, or App-id/Group, similar multicast performance routing and policy control features remain absent from current SD-WAN solutions. There therefore exist needs for systems, methods, and computer-readable media for providing the features of SD-WAN unicast forwarding in multicast routing in SD-WANs. Furthermore, given the different working mechanisms between unicast and multicast forwarding, it is a huge challenge to implement traffic engineering and quality of service (QoS)-based multicast networks. For example, a multicast replicator is likely to be overloaded and congested if not properly load-balanced. This will not only impact the overall networking performance with mixed unicast and multicast traffic over the same overlay tunnel across the WAN, but will also result in network outages for SD-WAN networks due to the exponential expansion of bandwidth of the multicast traffic replication. Multicast replicators are typically responsible for traffic replication for SD-WAN multicast traffic and usually become bottlenecked and congested. There therefore exist needs for systems, methods, and computer-readable media for controlling multicast replicators to reduce bottlenecks and congestion in multicast routing in SD-WANs. The present disclosure includes systems, methods, and computer-readable media for solving these problems/discrepancies. FIGS. 5A and 5B illustrate an example of a topology for selection of multicast replicators. FIG. 5A illustrates static methods for selecting multicast replicators in a typical deployment without the benefit of the present technology. FIG. 5B illustrates the present disclosure's methods for selecting multicast replicators in a typical deployment, showing the changes in methods and improvements in deployment achieved through the disclosed technology from FIG. 5A. 
Multicast virtual private network (VPN) solutions can leverage a variety of different protocols, such as Border Gateway Protocol (BGP), which support various kinds of tunneling technologies, or Overlay Management Protocol (OMP), which supports IP Security (IPsec) tunneling. While current unicast forwarding solutions can support routing based on flow classifications including differentiated services code points (DSCP), application ID, group ID, etc., there does not exist such a solution for multicast situations. Furthermore, dynamic changes to multicast forwarding to account for load and performance are lacking, despite the necessity of avoiding overload, congestion, and load imbalance. There is a glaring need in the art for multicast forwarding solutions that allow for dynamic forwarding adjustments based on policy and performance. The present disclosure aims to address this need. A user 500 using a device 505 can send a network packet to a router 510 on a software-defined network. User 500 can be an employee, contractor, patron, or private individual, while device 505 can be a laptop, tablet, smartphone, or other networking device. Router 510 can receive the packet and pass it through rendezvous 515, which can route packets through data centers 520 and 525. Data centers 520 and 525 each contain multicast replicators 522, 523, 526, and 527. In traditional multicast deployments, multicast replicators 522, 523, 526, and 527 can advertise auto-discovery routes, including attributes like preference, threshold, and location (GPS), etc. Routers 546, 551, and 556 in branch sites 545, 550, and 555, respectively, can receive Internet Group Management Protocol (IGMP) or Protocol-Independent Multicast (PIM) joins from receivers 547, 552, and 557, respectively. Routers 546, 551, and 556 can use simple algorithms to choose the best available multicast replicator 522, 523, 526, or 527, using attributes like preference, threshold, or location. After selection, routers 546, 551, and 556 can join a multicast tree using OMP. 
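The simple attribute-based choice described above — filter replicators whose join count has reached their advertised threshold, then prefer the highest preference — can be sketched as follows. The advertisement fields are illustrative assumptions based on the attributes named in the text.

```python
# Sketch of a branch router's traditional replicator choice from
# advertised auto-discovery attributes. Field names are assumptions.

def choose_replicator(ads):
    """Pick the highest-preference replicator that still has headroom."""
    usable = [ad for ad in ads if ad["joins"] < ad["threshold"]]
    if not usable:
        return None
    return max(usable, key=lambda ad: ad["preference"])

ads = [
    {"name": "rep-dc1-a", "preference": 100, "threshold": 500, "joins": 499},
    {"name": "rep-dc1-b", "preference": 200, "threshold": 500, "joins": 500},  # full
    {"name": "rep-dc2-a", "preference": 50, "threshold": 500, "joins": 10},
]
print(choose_replicator(ads)["name"])  # rep-dc1-a
```

Note that this static scheme ignores policy and live load, which is exactly the gap the disclosed technology addresses in the paragraphs that follow.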
Controller 540 can coordinate the control plane among replicators 522, 523, 527, and 528 and routers 546, 551, and 556. Data plane packets can pass through either multi-protocol label switching network 530 or internet network 535. This process and the resultant multicast tree, using these traditional methods, are shown in FIG. 5A. In some embodiments, multicast replicators 522, 523, 526, or 527 can offer higher performance and throughput with maximum (*,G) and (S,G) joins and tunnel outgoing interfaces. These existing standards do not account for policies that may be pursuant to user 500, device 505, receivers 547, 552, or 557, or other network packet factors. Furthermore, dynamic multicast deployment that accounts for changing policies or performance considerations is missing entirely. The present disclosure addresses these deficiencies in order to create a better deployment scheme, as shown in FIG. 5B. Controller 540 can receive policies from a network administrator and publish these policies to all multicast replicators 522, 523, 527, and 528 as well as multicast routers 546, 551, and 556. These policies can apply based on the source of the traffic, the context of user 500, the context of device 505, or other factors. Different policies can have different service-level agreement (SLA) requirements and path preferences. SLA requirements can be classified and can also be based on a specific receiver 547, 552, or 557. 
An example SLA requirement policy can look like the following, in which SLA requirements can be triggered by a multicast receiver router as part of the OMP PIM join message:

policy
 sla-class video_sla_class
  loss 1
  latency 25
  jitter 50
 !
 sla-class audio_sla_class
  loss 1
  latency 25
  jitter 50
 !
 app-route-policy test_app_route_policy
  vpn-list vpn_list
  sequence 100
   match
    source-ip 10.1.1.1/32
    destination-ip 232.1.1.1/32
   !
   action
    sla-class video_sla_class strict preferred-color biz-internet public-internet
   !
  !
  sequence 200
   match
    source-ip 10.2.2.2/32
    destination-ip 232.2.2.2/32
   !
   action
    sla-class strict audio_sla_class preferred-color public-internet
   !
  !
 !
!

When a receiver 547, 552, or 557 connects to a router in branch site 545, 550, or 555, the IGMP/PIM messages indicate information about the receiver's policy status, such as user context or device context. When replicators 522, 523, 527, or 528 publish a multicast auto-discovery route, they can publish information relevant to load management, such as preference, threshold, leafs, location, load utilization, bandwidth indicators, or other relevant factors. Then, a receiver 547, 552, or 557, paired with a router in branch site 545, 550, or 555, can choose a deployment from the available replicators 522, 523, 527, and 528 based on policy considerations as well as load management considerations. In some embodiments, receivers 547, 552, or 557 can be running on a customer VPN. Multicast deployments can have different characteristics and consumption models when compared to unicast, and a multicast policy can be used to match a particular source, group, or receiver pair and SLA requirements. In some embodiments, multicast replication can download and use transport tunnels between multicast replicator and receiver routers. In FIG. 5B, these changes are implemented, and the resultant changes in the data plane are visible. Policy information can be sent from controller 540 and received by replicators 522, 523, 527, and 528 and receivers 547, 552, and 557. 
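The matching step of such a policy can be sketched as follows: a multicast (source, group) pair is compared against the policy sequences in order, and the first match yields the SLA class and preferred colors. The rule structure mirrors the example configuration above, but the field names and dictionary representation are illustrative assumptions, not a product API.

```python
import ipaddress

# Policy sequences modeled on the example app-route-policy above.
POLICY = [
    {"seq": 100, "source": "10.1.1.1/32", "group": "232.1.1.1/32",
     "sla_class": "video_sla_class",
     "preferred_colors": ["biz-internet", "public-internet"]},
    {"seq": 200, "source": "10.2.2.2/32", "group": "232.2.2.2/32",
     "sla_class": "audio_sla_class",
     "preferred_colors": ["public-internet"]},
]

def match_policy(source, group):
    """Return the first policy sequence matching the (S,G) pair, or None."""
    for rule in POLICY:
        if (ipaddress.ip_address(source) in ipaddress.ip_network(rule["source"])
                and ipaddress.ip_address(group) in ipaddress.ip_network(rule["group"])):
            return rule
    return None

print(match_policy("10.2.2.2", "232.2.2.2")["sla_class"])  # audio_sla_class
```

In the described system this evaluation would be triggered when the receiver router issues its OMP PIM join, so the selected SLA class can travel with the join message.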
Receiver 547 receives data from replicator 522 via network 530, and receiver 557 receives data from replicator 527 via network 530. Receiver 552 initially receives data from replicator 522 via network 530. However, as replicator 522 is then serving both receiver 547 and receiver 552, it can become overloaded and yield poor QoS. Replicator 522 can then publish its load information. Multicast bandwidth can grow explosively based on the number of egress replications or receivers and can become congested on egress WAN interfaces. In response, receiver 552 can choose to dynamically switch its service deployment from replicator 522 to replicator 523 via network 530, resulting in improved quality of service due to the dynamic switching capabilities of the present technology. To make the switch, branch 550 can send an OMP join message via router 551 to replicator 523. Replicator 523 can forward this OMP join via router 521 to rendezvous point 515, which can route the multicast deployment to replicator 523. When receiver 552 receives its first network packet from replicator 523, it can send an OMP prune message to replicator 522 to stop the old multicast forwarding tree. In some embodiments, after a multicast forwarding tree and route are established, instead of using a single hash lookup for a "system-IP" with multiple SD-WAN tunnels going to a remote node of "system-IP" for label-switched multicast (LSM) replication encapsulation, a candidate next-hop or tunnel can be downloaded and used for app-route path selection. This can provide significant benefits for bandwidth utilization across multiple transports based on weight and bandwidth capacity. Furthermore, some transports can be excluded from use for multicast flows from a security and cost perspective. In another example, a multicast app-route policy can evaluate the SLA metric in real time and switch from one path to another if there is an SLA violation without involving a multicast control plane from a multicast replicator to a branch site receiver. 
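The make-before-break switch described above can be sketched as a small event handler: the receiver joins the new replicator first and prunes the old one only after the first packet arrives on the new tree, so service is never interrupted. The message names follow the text; the callback-based event model is an illustrative assumption.

```python
# Sketch of the join-then-prune replicator switch. `send` is any callable
# that delivers a control-plane message (here it just records the message).
def switch_replicator(old, new, send):
    """Start the switch by joining the new replicator; return a handler to
    invoke when the first data packet arrives, which prunes the old tree."""
    send(("omp-join", new))           # build the new tree first
    def on_first_packet(replicator):
        if replicator == new:
            send(("omp-prune", old))  # tear down the old tree, no service gap
    return on_first_packet

log = []
on_pkt = switch_replicator(522, 523, log.append)
on_pkt(523)  # first packet seen from the new replicator
print(log)   # [('omp-join', 523), ('omp-prune', 522)]
```

The ordering is the point of the sketch: the prune is deferred until data is flowing on the new path, matching the behavior attributed to receiver 552.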
In another embodiment, multicast receivers can evaluate SLA metrics. If a multicast replicator is congested and overloaded with available WAN paths for multicast flows, an app-route probe can detect the performance downgrade and send an SLA violation to an SD-WAN control plane. Thereafter, an OMP agent can process the events and evaluate all of the available multicast replicators within the domain. A new multicast replicator can be selected based on a number of selection criteria such as preference, replication threshold, load, and SLA metrics. A multicast receiver router can send an OMP multicast join message to a new replicator and set up an optimized multicast tree and forwarding path. Once the multicast tree setup is successful, the multicast receiver can switch from the old replicator to the new replicator with an OMP multicast prune message sent to the old replicator. FIG. 6 illustrates an example method in accordance with various aspects of the present disclosure. The method illustrated can be carried out by the system illustrated in FIG. 5B. However, nothing in FIG. 6 should be considered limiting of the system illustrated in FIG. 5B, and likewise, the system illustrated in FIG. 5B should not be interpreted to limit the method of FIG. 6. Any limitation of the depicted system or method will be recited in the appended claims. The method begins when replicators 522, 523, 527, and 528 and receivers 547, 552, and 557 receive (600) a connection map for the network shown in FIG. 5B. This map allows receivers and replicators to know which data centers 520 and 525 are present and, for receivers 547, 552, and 557, which replicators 522, 523, 527, and 528 are available. Controller 540 sends at least one multicast route policy for the network to replicators 522, 523, 527, and 528 and receivers 547, 552, and 557, which receive (610) the policy or set of policies. These policies may be based on multicast groups, geographic location, bandwidth indications, system load, or performance. 
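The re-selection step after an SLA violation can be sketched by scoring each candidate on the criteria the text names: preference, replication threshold, load, and SLA metrics. The filtering rule (SLA must be met and load must be below threshold) and the ordering (preference first, then lightest load) are assumptions for illustration, not the patent's exact formula.

```python
# Sketch of replicator re-selection after an app-route probe reports an
# SLA violation. Metric and SLA field names mirror the sla-class example.
def sla_ok(metrics, sla):
    return (metrics["loss"] <= sla["loss"]
            and metrics["latency"] <= sla["latency"]
            and metrics["jitter"] <= sla["jitter"])

def reselect(replicators, sla):
    """Among replicators whose measured path metrics meet the SLA and whose
    load is below threshold, prefer high preference, then low load."""
    viable = [r for r in replicators
              if sla_ok(r["metrics"], sla) and r["load"] < r["threshold"]]
    if not viable:
        return None
    return min(viable, key=lambda r: (-r["preference"], r["load"]))

sla = {"loss": 1, "latency": 25, "jitter": 50}
reps = [
    {"id": 522, "preference": 100, "threshold": 90, "load": 95,   # overloaded
     "metrics": {"loss": 0, "latency": 10, "jitter": 5}},
    {"id": 523, "preference": 100, "threshold": 90, "load": 20,
     "metrics": {"loss": 0, "latency": 12, "jitter": 8}},
]
print(reselect(reps, sla)["id"])  # 523
```

The OMP agent described above would run this kind of evaluation across the domain before issuing the join toward the winning replicator.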
Rendezvous point 515 receives (620) a network packet from user 500 via device 505 and router 510. Upon receiving the network packet, rendezvous point 515 may have information available about the SD-WAN, device 505, and user 500, among other possible information. Rendezvous point 515 may reside within the SD-WAN or outside the SD-WAN. It uses this information to forward (630) the network packet to one of replicators 522, 523, 527, and 528. As shown in FIG. 5B, replicators 522, 523, and 527 are selected from the set of replicators to create a multicast route towards receivers 547, 552, and 557. Replicators 522, 523, and 527 forward (640) the network packets through the multicast tree to receivers 547, 552, and 557. Each receiver 547, 552, or 557 chooses its own replicator from the available multicast auto-discovery route options, based on considerations of policy, performance, system balance, or load management. After a time, changes to the multicast tree can mean that switching replicators could result in better system performance. Replicator 522 becomes overloaded handling the traffic for receivers 547 and 552. When this information is published, receiver 552 can determine (650) to switch replicators, in this case to replicator 523. Rendezvous point 515 receives (660) this request from receiver 552 to recast the multicast tree through replicator 523. Rendezvous point 515 switches (670) the multicast route to pass through replicator 523 to receiver 552. To do this, it leaves open the old multicast route through replicator 522 while beginning to forward data through replicator 523. Once receiver 552 receives its first network packet from replicator 523, it can notify rendezvous point 515 to end the old route. Thus receiver 552 does not experience any gap in service during the switching process. FIG. 7 illustrates a system in accordance with various embodiments. Persons of ordinary skill in the art will also readily appreciate that other systems are possible. 
FIG. 7 illustrates an example of a bus computing system 700 wherein the components of the system are in electrical communication with each other using a bus 705. The computing system 700 can include a processing unit (CPU or processor) 710 and a system bus 705 that may couple various system components including the system memory 715, such as read only memory (ROM) 720 and random access memory (RAM) 725, to the processor 710. The computing system 700 can include a cache 712 of high-speed memory connected directly with, in close proximity to, or integrated as part of the processor 710. The computing system 700 can copy data from the memory 715, ROM 720, RAM 725, and/or storage device 730 to the cache 712 for quick access by the processor 710. In this way, the cache 712 can provide a performance boost that avoids processor delays while waiting for data. These and other modules can control the processor 710 to perform various actions. Other system memory 715 may be available for use as well. The memory 715 can include multiple different types of memory with different performance characteristics. The processor 710 can include any general purpose processor and a hardware module or software module, such as module 1 732, module 2 734, and module 3 736 stored in the storage device 730, configured to control the processor 710, as well as a special-purpose processor where software instructions are incorporated into the actual processor design. The processor 710 may essentially be a completely self-contained computing system, containing multiple cores or processors, a bus, memory controller, cache, etc. A multi-core processor may be symmetric or asymmetric. To enable user interaction with the computing system 700, an input device 745 can represent any number of input mechanisms, such as a microphone for speech, a touch-sensitive screen for gesture or graphical input, a keyboard, a mouse, motion input, speech, and so forth. An output device 735 can also be one or more of a number of output mechanisms known to those of skill in the art. 
In some instances, multimodal systems can enable a user to provide multiple types of input to communicate with the computing system 700. The communications interface 740 can govern and manage the user input and system output. There may be no restriction on operating on any particular hardware arrangement, and therefore the basic features here may easily be substituted for improved hardware or firmware arrangements as they are developed. The storage device 730 can be a non-volatile memory and can be a hard disk or other type of computer readable media which can store data that are accessible by a computer, such as magnetic cassettes, flash memory cards, solid state memory devices, digital versatile disks, cartridges, random access memories, read only memories, and hybrids thereof. As discussed above, the storage device 730 can include the software modules 732, 734, 736 for controlling the processor 710. Other hardware or software modules are contemplated. The storage device 730 can be connected to the system bus 705. In some embodiments, a hardware module that performs a particular function can include a software component stored in a computer-readable medium in connection with the necessary hardware components, such as the processor 710, bus 705, output device 735, and so forth, to carry out the function. For clarity of explanation, in some instances the various embodiments may be presented as including individual functional blocks comprising devices, device components, steps or routines in a method embodied in software, or combinations of hardware and software. In some embodiments the computer-readable storage devices, mediums, and memories can include a cable or wireless signal containing a bit stream and the like. However, when mentioned, non-transitory computer-readable storage media expressly exclude media such as energy, carrier signals, electromagnetic waves, and signals per se. 
Methods according to the above-described examples can be implemented using computer-executable instructions that are stored or otherwise available from computer readable media. Such instructions can comprise, for example, instructions and data which cause or otherwise configure a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. Portions of computer resources used can be accessible over a network. The computer executable instructions may be, for example, binaries, intermediate format instructions such as assembly language, firmware, or source code. Examples of computer-readable media that may be used to store instructions, information used, and/or information created during methods according to described examples include magnetic or optical disks, flash memory, USB devices provided with non-volatile memory, networked storage devices, and so on. Devices implementing methods according to these disclosures can comprise hardware, firmware and/or software, and can take any of a variety of form factors. Some examples of such form factors include general purpose computing devices such as servers, rack mount devices, desktop computers, laptop computers, and so on, or general purpose mobile computing devices, such as tablet computers, smart phones, personal digital assistants, wearable devices, and so on. Functionality described herein also can be embodied in peripherals or add-in cards. Such functionality can also be implemented on a circuit board among different chips or different processes executing in a single device, by way of further example. The instructions, media for conveying such instructions, computing resources for executing them, and other structures for supporting such computing resources are means for providing the functions described in these disclosures. 
Although a variety of examples and other information were used to explain aspects within the scope of the appended claims, no limitation of the claims should be implied based on particular features or arrangements in such examples, as one of ordinary skill would be able to use these examples to derive a wide variety of implementations. Further, although some subject matter may have been described in language specific to examples of structural features and/or method steps, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to these described features or acts. For example, such functionality can be distributed differently or performed in components other than those identified herein. Rather, the described features and steps are disclosed as examples of components of systems and methods within the scope of the appended claims.
11863428

DETAILED DESCRIPTION

FIG. 1 illustrates a computing environment 100 to manage communication paths between branches according to an implementation. Computing environment 100 includes branches 105-107, wherein branches 105-107 include edge gateways 110-112 that provide connectivity for corresponding compute elements 120-122. Computing environment 100 further includes internet 150, private network 151, gateways 130-131, hub edge gateways 140-141, and controller 160. In some examples, computing environment 100 may represent a Wide Area Network (WAN) or software defined WAN (SD-WAN). In operation, an organization may employ multiple computing sites with computing resources to provide different functions in different geographic regions. Here, compute elements 120-122 are deployed by an organization and are coupled to internet 150 and private network 151 using edge gateways 110-112. Compute elements 120-122 may be representative of virtual machines, containers, host computing systems, networking elements (switches, routers, and the like), or some other compute element. Edge gateways 110-112 are representative of network elements that can provide static routing, dynamic routing, virtual private networking, load balancing, firewall operations, Dynamic Host Configuration Protocol (DHCP), network address translation (NAT), or Internet Protocol Security (IPSec) communications. As an example, gateway 130 may provide a connection to a cloud datacenter available to compute elements 120-122, wherein the datacenter may provide data and applications to the compute elements. In accessing the datacenter, communications from a compute element may be forwarded via the corresponding edge gateway to the data center using gateway 130. In some implementations, computing elements across the different branches 105-107 may be required to communicate to provide various different functions. These functions may include video conferencing, file sharing, data processing, or some other function or service. 
As the different edge gateways for the corresponding branches may not be configured to directly communicate, the communications between the various edges may be required to traverse at least one gateway of gateways 130-131 or hub edge gateways 140-141, wherein the gateways and hub edge gateways may comprise next hops to deliver packets between the edge gateways. For example, one or more compute elements from branch 105 may be required to communicate with one or more compute elements from branch 107. To provide the communications, edge gateway 110 may route packets from the compute element to hub edge gateway 140 via private network 151. Once received, the hub edge gateway may forward the packet to edge gateway 112 of branch 107 using private network 151. Private network 151 may represent a Multiprotocol Label Switching (MPLS) private network, a virtual private network (VPN), or some other private network that can be implemented between edge gateways and hub edge gateways. In at least one example, the hub edge gateways may be used to aggregate traffic from multiple edge gateways and, in some examples, provide access to additional services and/or data for the edge gateways. In implementing the communication paths between the different branches, controller 160 is provided. Controller 160 may monitor network characteristics to determine how packets should be routed between the various branches. The network characteristics may include latency, data throughput, jitter, packet loss, or some other information related to the communication of data between the sites. Based on the network characteristics, controller 160 may modify or change the routing between the branches. Thus, while applications and compute elements for branch 105 may use a first route via hub edge gateway 140 to communicate with branch 107, controller 160 may change the route to use hub edge gateway 141 to provide the required communications. 
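The controller behavior just described can be sketched as a simple decision function: keep the current intermediary while it meets a criterion, and otherwise re-route through the best-measured alternative. The gateway names, the single latency metric, and the lowest-latency fallback are illustrative assumptions; the text allows many other criteria and selection strategies.

```python
# Sketch of the controller's route decision: stay on the current
# intermediary while it meets the latency ceiling, else switch to the
# candidate with the lowest measured latency.
def choose_route(current, candidates, measurements, max_latency_ms):
    """Return the intermediary gateway the route should use."""
    if measurements[current] <= max_latency_ms:
        return current
    return min(candidates, key=lambda gw: measurements[gw])

measurements = {"gateway-130": 80, "gateway-131": 25, "hub-140": 40}
route = choose_route("gateway-130", list(measurements), measurements, 50)
print(route)  # gateway-131
```

In the running example, this corresponds to moving branch 105's traffic from hub edge gateway 140 to hub edge gateway 141 when the measured characteristics degrade.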
In some implementations, in addition to or in place of using the aforementioned information related to communication of data between the sites, controller 160 may further use information about the flow rate or the number of communication flows as a function of time handled by the edge gateway, gateway, or hub edge gateway. As the number of flows increases for a particular gateway or hub edge gateway, controller 160 may select another routing path for the communications to limit the communication flows traversing the gateway or hub edge gateway. FIG. 2 illustrates an operation 200 of a controller to manage communication paths between branches according to an implementation. The processes of operation 200 are referenced parenthetically in the paragraphs that follow with reference to systems and elements of FIG. 1. In operation, controller 160 applies (201) a first route configuration for a first edge gateway to communicate with a second edge gateway. For example, controller 160 may provide routing and communication configuration information to edge gateways 110-111 to permit communications between compute elements at branches 105-106. Additionally, in some instances, controller 160 may provide route configuration information to one or more of gateways 130-131 or hub edge gateways 140-141 to provide an intermediary for the communications. For example, to permit the communication between branches 105-106, edge gateways 110-111 may be configured to communicate using gateway 130. After applying the first route configuration, controller 160 may monitor (202) network characteristics associated with routes from the first edge gateway to the second edge gateway. In some examples, edge gateways, hub edge gateways, and gateways may provide network characteristic information to controller 160. This information may be provided periodically, at the request of controller 160, during downtimes or low network usage times, or at some other interval. 
The information provided may include latency measurements, throughput measurements, jitter or packet loss reports, or some other network information associated with the quality of service provided at various network points. While monitoring the network characteristics, controller 160 further determines (203) that the first route configuration fails to satisfy at least one criterion based on the network characteristics. In some examples, the criterion may comprise minimum or maximum values associated with latency, throughput, bandwidth, or some other value. In some implementations, the minimum or maximum values may be associated with individual applications and may be defined by an administrator associated with the computing environment. As an example, a file sharing application may have different minimum requirements or quality of service than a video conferencing application, wherein the video conferencing application may place a higher priority on latency than bandwidth. As a result, if latency for a connection between a first edge gateway and a second edge gateway rose above a maximum latency value, controller 160 may determine that the criterion is no longer satisfied and that a new configuration is required. In response to determining that the first route configuration fails to satisfy the at least one criterion, controller 160 may determine (204) a second route configuration based on the network characteristics and apply the second route configuration for the first edge gateway to communicate with the second edge gateway. In some implementations, the second route configuration may use one or more different gateways or hub edge gateways to support the communications between the edge gateways. 
The different gateways and hub edge gateways may be identified based on gateways or hub edge gateways that are capable of providing the required minimum quality of service for the connection, may be selected based on gateways or hub edge gateways that are capable of providing the best quality of service for the connection, or may be selected in any other manner. As an illustrative example, when the first route configuration for edge gateway 110 to communicate with edge gateway 111 fails to satisfy one or more criteria associated with the network characteristics, controller 160 may be required to identify a second route configuration. Thus, if the connection initially used gateway 130 for the communications, controller 160 may select a different communication path via gateway 131 or hub edge gateways 140-141 to provide the required connection. In some implementations, the configuration modification may be based on a single application. For example, while a first application and a second application may be routed through the same gateway of gateways 130-131 as part of a first route configuration, when one or more criteria are satisfied to change to a second route configuration, the first application may be routed to a different gateway when the failed one or more criteria are associated only with the first application. FIG. 3 illustrates a timing diagram 300 to change communication path configurations for branch edges according to an implementation. Timing diagram 300 includes edge gateways 120-121, gateways 130-131, and controller 160 from computing environment 100 of FIG. 1. As described herein, to provide connectivity between different branches of a computing environment, edge gateways at each of the branches may be required to traverse gateways or hub edge gateways of the computing environment. In determining how the connections are routed between the branches and the edge gateways, controller 160 is provided and can dynamically modify the routes for the communications. 
Here, edge gateway 120 is configured to communicate with edge gateway 121 using gateway 130, wherein gateway 130 is representative of a router that provides access for packets into and/or out of a local network. Gateway 130 may be representative of a connection to a router for an organization's central datacenter or a cloud datacenter in some examples. The communications between edge gateways 120-121 may comprise file sharing application packets, video or voice communication packets, or some other packets. In configuring the first route configuration, controller 160 may provide routing and permissions information to edge gateways 120-121 and gateway 130. As the first configuration communications are exchanged, controller 160 further monitors network characteristics associated with the routes for edge gateways 120-121. In some implementations, in monitoring the routes, controller 160 may obtain network characteristic information or statistics from various networking elements in the computing environment, wherein the networking elements may comprise the edge gateways, the gateways, any hub edge gateways, or some other networking element. The networking elements may provide the information periodically, after a request from the controller, during a networking downtime, or at some other interval. When the network characteristics are obtained, controller 160 may further determine when criteria are satisfied that indicate that the first route configuration requires a modification. In determining that the criteria are satisfied (or no longer satisfied), controller 160 may compare the network characteristics to values that can be defined by an administrator for the computing environment. These values may include a minimum available bandwidth, a maximum latency, a maximum error rate, or some other value. For example, for a video conferencing application, a maximum latency value may be provided to controller 160 that can be used to compare against current network conditions or characteristics. 
When the latency for the first configuration fails to remain below the maximum value, controller 160 may determine that the route configuration requires an update. In response to determining that an update is required, controller 160 may identify a new configuration for the communications between edge gateway 120 and edge gateway 121. In some implementations, controller 160 may use the information from other edge gateways, gateways, hub edge gateways, or any other networking element to identify the new configuration. In some examples, controller 160 may identify any intermediary gateway or hub edge that can provide a minimum quality of service for the connection between the edge gateways. Once the subset of gateways and/or hub edges is identified, controller 160 may select a new intermediary for the communication randomly from the subset, may select the new intermediary based on the intermediary most likely to provide the highest quality of service, or may select the intermediary in any other manner. As an example, when edge gateways 120-121 require a new configuration, controller 160 may select a new intermediary gateway or hub edge gateway based on the network characteristics provided by the gateways and edge gateways. For example, based on the networking characteristics obtained from the computing environment, controller 160 may estimate the best intermediary for the communication based on the networking characteristics and select the best intermediary using the estimation. The estimation may be based on current connections using the gateway or hub edge gateway, the current applications that traverse the gateway or hub edge gateway, physical location information associated with the gateway or hub edge gateway, or some other factor for the estimation. 
Although described in the previous example as selecting a new intermediary gateway or hub edge gateway based on network characteristics identified for the gateways and hub edge gateways, it should be understood that the new path may be randomly selected from the available intermediaries without consideration of the networking performance. In this manner, if the new intermediary fails to satisfy the communication criteria for the edge gateways, the process may be repeated as necessary until a suitable intermediary gateway or hub edge gateway is identified. Here, controller 160 determines that gateway 131 should be used as the communication intermediary for edge gateways 120-121. As a result, controller 160 may apply the configuration by providing addressing, permissions, and other required rules, such that the communications between edge gateway 120 and edge gateway 121 traverse gateway 131 rather than gateway 130. Once configured, second communications are provided between edge gateways 120-121 using the second configuration. Although demonstrated as updating the configuration at edge gateways 120-121, it should be understood that a policy configuration may be provided to gateways 130-131 to provide the required operations. In some examples, the configuration modification may be used to update the connections for all applications associated with a particular edge gateway. In other examples, the configuration modification may be unique to the needs of a specific application or group of applications. For example, if the first route configuration fails to provide adequate latency for a video communications application, controller 160 may make a modification to the networking configuration associated with that application. In some examples, this may permit the edge gateways to identify the application (or packet type) associated with a particular packet and forward the packet based on the configuration provided by controller 160. 
In some examples, in addition to selecting the hub edge gateway or gateway for the communications of the applications, controller 160 may further differentiate between the use of internet 150 or private network 151. For example, as hub edge gateways 140-141 are coupled to both internet 150 and private network 151, controller 160 may monitor network characteristics associated with each of the networks and select one of the networks for use by the edge gateways. Thus, controller 160 may select the gateway or hub edge gateway to support the communications and may further determine a network for the communications based on the network characteristics obtained for the hub edge gateways, the gateways, and the networks supported by the network elements and gateways. Although demonstrated in the previous examples as causing a configuration modification when the network characteristics fail to satisfy minimum quality of service criteria, it should be understood that the modification to the configuration may occur in response to other events or the satisfaction of other criteria. For example, another event may occur when a second intermediary gateway or hub edge gateway provides a quality of service higher than the current intermediary, wherein the quality of service may be based on data throughput, bandwidth, latency, or some other factor, including combinations thereof. Once the quality of service differs by a requisite or threshold amount, the controller may change the route configuration to use the second intermediary element. FIG. 4 illustrates a data structure 400 to manage communication requirements associated with applications according to an implementation. Data structure 400 includes application identifier column 410 and requirement columns 420-423. Application identifier column 410 includes applications 411-414, which are representative of applications deployed in a computing environment. 
Although demonstrated as individual applications, it should be understood that similar operations may be applied to groups of applications. Requirement columns420-423further include requirements430-438that each correspond to a network requirement for an application. These requirements may include bandwidth requirements, throughput requirements, latency requirements, or some other networking requirement. In some implementations, a controller for a computing network may maintain data structure400, such that requirements430-438may be compared to network characteristics or statistical values obtained from networking elements in the computing environment. For example, a first edge gateway may communicate with a second edge gateway using a first route configuration. While using the first route configuration, network characteristics (statistical information) about the route configuration may be provided to the controller. Once received, the controller may compare the network characteristics to entries in data structure400to determine whether a modification is required to the first route configuration. Here, the statistical minimum requirements are associated with individual applications. When the network characteristics are obtained that are associated with a first route configuration, the controller may determine applications that are associated with the first route configuration. Once the applications are identified, the computing system may determine whether the network characteristics provide the requisite quality of service for the applications. For example, if application412were communicating between a first edge gateway and a second edge gateway, then the controller may compare a network statistic from the network characteristics to requirement434. If the network statistic satisfied requirement434, then the controller may maintain the current configuration. 
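The lookup-and-compare step described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the application identifiers, metric names, and threshold values are assumptions standing in for the contents of data structure400.

```python
# Hypothetical stand-in for data structure 400: per-application minimum
# network requirements. Keys and values are illustrative assumptions.
APP_REQUIREMENTS = {
    "app_411": {"min_bandwidth_mbps": 10, "max_latency_ms": 150},
    "app_412": {"min_bandwidth_mbps": 25, "max_latency_ms": 60},
    "app_413": {"max_latency_ms": 300},
    "app_414": {"min_bandwidth_mbps": 5, "max_latency_ms": 200},
}

def route_satisfies(app_id, characteristics):
    """Return True if measured route characteristics meet every
    requirement recorded for the given application."""
    reqs = APP_REQUIREMENTS.get(app_id, {})
    if ("min_bandwidth_mbps" in reqs
            and characteristics["bandwidth_mbps"] < reqs["min_bandwidth_mbps"]):
        return False
    if ("max_latency_ms" in reqs
            and characteristics["latency_ms"] > reqs["max_latency_ms"]):
        return False
    return True

# A route reporting 30 Mbps at 40 ms satisfies app_412's requirements;
# the same bandwidth at 90 ms does not.
print(route_satisfies("app_412", {"bandwidth_mbps": 30, "latency_ms": 40}))
print(route_satisfies("app_412", {"bandwidth_mbps": 30, "latency_ms": 90}))
```

A controller maintaining such a table would run a check like this each time fresh network characteristics arrive for an active route, and trigger a reconfiguration on a False result.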
If the network statistic did not satisfy requirement434, then the controller may determine that a modification is required to the route configuration. In some examples, the modification to the configuration may be made for all applications associated with a connection between edge gateways. Thus, if all applications at a first edge gateway used a first gateway to communicate with a second edge gateway, then the controller may change the route configuration from communicating via the first gateway to communicating via one or more other gateways or hub edge gateways. In other examples, rather than modifying the route or path associated with each of the applications, the controller may modify the route for one or more applications affected by a network condition. Returning to the example of requirement434, when requirement434is no longer satisfied for application412in a route between two edge gateways, then the controller may modify a routing configuration associated with application412and the two edge gateways. This modification may include selecting a new intermediary gateway to provide the connection for the application, wherein the selection may comprise a random selection, a selection based on the network characteristics, or a selection based on some other similar factor.

FIG.5illustrates an operational scenario500of generating a new communication path configuration according to an implementation. Operational scenario500includes first configuration characteristics505, minimum requirements510, other configuration characteristics506, and second configuration530. Operational scenario500further includes operations520-521that are representative of operations that may be provided by a controller540for a computing environment that employs edge gateways, gateways, and, in some examples, hub edge gateways. The computing environment may comprise an SD-WAN for an organization in some examples. 
As depicted, a controller may obtain first configuration characteristics505associated with a first route configuration between two edge gateways. As the configuration characteristics are obtained, the controller may perform criteria operation520by comparing first configuration characteristics505to minimum requirements510to determine whether the characteristics satisfy or fail to satisfy criteria associated with the application(s). The minimum requirements may correspond to connection requirements for all applications or may correspond to requirements associated with each individual application. For example, the minimum requirements associated with a file sharing application may be different than the requirements associated with a video conferencing application. As a result, while a current route may provide the minimum quality of service associated with the file sharing route, the same route configuration may fail to provide requisite quality of service associated with the video conferencing application. Once it is determined that the current configuration fails to satisfy the minimum criteria for the application, the controller will further provide new configuration operation521. New configuration operation521may identify other configuration characteristics506, which are representative of network statistics associated with other routes from the first edge gateway to the second edge gateway. This information may include current available bandwidth for the other networking components in the network, any current latency values determined in association with the networking components, any throughput information associated with the other networking components, or some other information. From the information, new configuration operation521may select a new configuration for the edge gateways, such that the minimum requirements will be satisfied for the edge gateways and the corresponding applications. 
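The filter-then-select step performed by new configuration operation521might be sketched as below. The gateway names, metric fields, and thresholds are illustrative assumptions; the two strategies (best available quality of service versus a random pick for load distribution) mirror the options the description allows.

```python
import random

def select_new_intermediary(candidates, reqs, strategy="best"):
    """Filter candidate intermediary gateways/hub edge gateways to
    those meeting the minimum requirements, then pick one either by
    best quality of service (here: lowest latency) or at random."""
    eligible = [
        c for c in candidates
        if c["latency_ms"] <= reqs["max_latency_ms"]
        and c["bandwidth_mbps"] >= reqs["min_bandwidth_mbps"]
    ]
    if not eligible:
        return None  # no suitable intermediary; repeat the search
    if strategy == "random":
        return random.choice(eligible)
    return min(eligible, key=lambda c: c["latency_ms"])

candidates = [
    {"name": "gateway_130", "latency_ms": 95, "bandwidth_mbps": 40},
    {"name": "gateway_131", "latency_ms": 30, "bandwidth_mbps": 80},
    {"name": "hub_140",     "latency_ms": 55, "bandwidth_mbps": 60},
]
reqs = {"max_latency_ms": 60, "min_bandwidth_mbps": 50}
print(select_new_intermediary(candidates, reqs)["name"])  # gateway_131
```

Returning None when no candidate qualifies corresponds to the repeat-as-necessary behavior described earlier: the controller would retry the selection until a suitable intermediary is identified.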
In some implementations, the new route configuration may be selected based on the route that can provide the best quality of service to the applications based on the network characteristics. In other implementations, the new route configuration may be determined by first identifying a set of intermediary components or gateways that can provide the required quality of service for the applications, and subsequently selecting one of the components or gateways from the set. The gateway or component may be selected randomly, based on which of the gateways or components can provide the highest quality of service, or based on some other load distribution across the intermediary gateways or components. Here, the controller generates second configuration530and can apply the configuration in the computing environment or SD-WAN. Second configuration530may include new routes for a single application (i.e., the application affected by the degradation of the network characteristics) or may include new routes for any application communicating from the first edge gateway to the second edge gateway. In some implementations, rather than selecting the second route configuration using network characteristics associated with alternative routes, the controller may select the second route configuration using any of the available intermediary components. This new intermediary may be selected randomly, based on a current usage rate associated with the intermediary component, or based on some other load balancing factor. If the newly identified configuration fails to satisfy the minimum requirements associated with the one or more applications that execute behind the edge gateway, then the controller may be used to identify another new configuration to support the operations of the one or more applications.

FIG.6illustrates a controller computing system600to manage communication path configurations for edge branches according to an implementation. 
Computing system600is representative of any computing system or systems with which the various operational architectures, processes, scenarios, and sequences disclosed herein for a controller may be implemented. Computing system600is an example of controller160, although other examples may exist. Computing system600includes storage system645, processing system650, and communication interface660. Processing system650is operatively linked to communication interface660and storage system645. Communication interface660may be communicatively linked to storage system645in some implementations. Computing system600may further include other components such as a battery and enclosure that are not shown for clarity. Communication interface660comprises components that communicate over communication links, such as network cards, ports, radio frequency (RF), processing circuitry and software, or some other communication devices. Communication interface660may be configured to communicate over metallic, wireless, or optical links. Communication interface660may be configured to use Time Division Multiplex (TDM), Internet Protocol (IP), Ethernet, optical networking, wireless protocols, communication signaling, or some other communication format, including combinations thereof. Communication interface660may be configured to communicate with one or more edge gateways, routers, hub edge gateways, or other similar components in a computing network or SD-WAN. Processing system650comprises a microprocessor and other circuitry that retrieves and executes operating software from storage system645. Storage system645may include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer readable instructions, data structures, program modules, or other data. Storage system645may be implemented as a single storage device but may also be implemented across multiple storage devices or sub-systems. 
Storage system645may comprise additional elements, such as a controller to read operating software from the storage systems. Examples of storage media include random access memory, read only memory, magnetic disks, optical disks, and flash memory, as well as any combination or variation thereof, or any other type of storage media. In some implementations, the storage media may be a non-transitory storage media. In some instances, at least a portion of the storage media may be transitory. It should be understood that in no case is the storage media a propagated signal. Processing system650is typically mounted on a circuit board that may also hold the storage system. The operating software of storage system645comprises computer programs, firmware, or some other form of machine-readable program instructions. The operating software of storage system645comprises statistics monitor632, criteria monitor633, and modify operation634, although any number of software modules may perform the operations described herein. Storage system645further stores application requirements630, which are representative of a data structure for network requirements associated with applications operating in the computing environment. The operating software on storage system645may further include an operating system, utilities, drivers, network interfaces, applications, or some other type of software. When read and executed by processing system650the operating software on storage system645directs computing system600to operate as described herein. In at least one implementation, edge gateways may be configured at branches or computing sites for an organization to provide connectivity to other branches or central data centers associated with the organization. 
To provide connectivity between the different branches, an organization may employ controller computing system600to apply route configurations to the edge gateways, permitting the edge gateways to communicate using hub edge gateways and/or gateways that belong to the same SD-WAN network. When a route configuration is deployed for a first edge gateway to communicate with a second edge gateway, statistics monitor632directs processing system650to monitor network characteristics associated with the routes from the first edge gateway to the second edge gateway. These network characteristics may comprise latency, throughput, jitter, packet loss, or some other network characteristic. The characteristic information may be obtained from the edge gateways, from gateways, from hub edge gateways, or from some other element associated with the computing network. As the network characteristics are obtained, criteria monitor633directs processing system650to determine whether the first route configuration satisfies criteria associated with the connection between the first edge gateway and the second edge gateway. The criteria may be unique to each application or may correspond to a group of applications. In at least one example, criteria monitor633may determine what applications are executing behind the communicating edge gateway and identify requirements from application requirements630that correspond to the applications. If the criteria are satisfied, then the current configuration may remain for the first edge gateway and the second edge gateway. However, if the criteria are not satisfied, then modify operation634may direct processing system650to identify a new configuration for the communications between the edge gateways. For example, while the first configuration may use a first intermediary gateway for the communications, a second configuration may use a second intermediary gateway. 
In some implementations, when determining the new configuration, modify operation634may use the network characteristics obtained from the network to identify a new intermediary capable of providing a requisite quality of service for the connection between the edge gateways. For example, based on the statistics, modify operation634may identify one or more intermediate gateways or hub edge gateways available to provide the requisite quality of service for the applications. Once identified, modify operation634may select at least one of the intermediate gateways or hub edge gateways for the new configuration. The selection may be based on identifying the best available quality of service, a random selection, or some other distribution mechanism. Although demonstrated as using minimum requirements to trigger the modification to the route configuration, it should be understood that other events or criteria may be used to trigger the modification. In at least one example, computing system600may monitor the quality of service associated with different routes between the first edge gateway and the second edge gateway. When a new configuration can provide a quality of service that satisfies performance criteria, the new configuration may be selected and applied for the first edge gateway and the second edge gateway. In other implementations, computing system600may monitor degradation associated with the network characteristics and determine when a first route configuration has degraded to a threshold value. For example, computing system600may monitor latency as a function of time for a connection from the first edge gateway to the second edge gateway. When the latency has increased by a threshold amount, the computing system may determine that a modification event has occurred and initiate operations to implement a new route configuration. 
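The latency-degradation trigger just described might look roughly like the following. The baseline choice (the earliest sample in the window) and the threshold value are illustrative assumptions, not details from the disclosure.

```python
def degradation_event(latency_samples, threshold_increase_ms):
    """Detect a modification event when latency on the current route
    has risen by at least the threshold, measured against a baseline
    taken from the earliest sample in the window."""
    if len(latency_samples) < 2:
        return False  # not enough history to judge degradation
    baseline = latency_samples[0]
    current = latency_samples[-1]
    return (current - baseline) >= threshold_increase_ms

# Latency rose from 20 ms to 60 ms (a 40 ms increase): event fires.
print(degradation_event([20, 22, 25, 60], 30))  # True
# Latency rose only 15 ms: no event.
print(degradation_event([20, 22, 25, 35], 30))  # False
```

A production controller would likely smooth the samples (for example, with a moving average) before comparing, so that a single noisy probe does not trigger a reconfiguration.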
While these are two examples of using criteria to trigger the modification to a route configuration, other criteria may be used, including quantities of connections using the intermediary gateway, processing resource usage by the intermediary gateway, or some other criteria that can load balance connections and/or provide dynamic quality of service modifications to the route configurations. Although described as separate from the edge gateways in the examples of FIGS.1-6, it should be understood that at least a portion of the operations described herein may be performed locally at the edge gateways. In particular, a first edge gateway communicating with a second edge gateway may monitor the network characteristics associated with the communications. From monitoring the characteristics, the first edge gateway may determine when the characteristics fail to satisfy criteria to maintain the connection and may initiate operations to implement a change in the configuration.

The included descriptions and figures depict specific implementations to teach those skilled in the art how to make and use the best mode. For the purpose of teaching inventive principles, some conventional aspects have been simplified or omitted. Those skilled in the art will appreciate variations from these implementations that fall within the scope of the invention. Those skilled in the art will also appreciate that the features described above can be combined in various ways to form multiple implementations. As a result, the invention is not limited to the specific implementations described above, but only by the claims and their equivalents.
11863429 | DESCRIPTION OF EMBODIMENTS Some embodiments of the present disclosure relate to improvements to the selection of network paths. For example, when receiving a data flow subject to an SLA with multiple potential paths for reaching the destination, the network device decides which of the paths is to be used to route the data flow to the destination. An SLA may include an agreed upon threshold level for one or more network performance metrics, such as bandwidth, availability, jitter, latency, loss, and/or others. Rather than only considering a most recent snapshot of network path performance, in some embodiments of the present disclosure, the network device may consider the historical performance of the various paths relative to the SLA when deciding which path the data flow is to be routed along. For example, the network device may consider a number of times that each of the potential paths dropped below the SLA for a given time period, and select the path with the best performance. In some embodiments, a machine learning algorithm may be used to compare the historical performance of the various paths. Such a machine learning algorithm may include multiple factors (e.g., jitter, latency, loss, cost, carrier reputation, etc.) considered and continually refined over time. By using the historical performance and/or other factors, a selection may be made to find a path that is more likely to satisfy the SLA of a data flow, rather than simply selecting a path based on the most recent snapshot. Embodiments of the present disclosure may provide improvements to computer networks and to the operation of computers themselves. For example, the performance of applications utilizing SLAs may be improved because a path more likely to satisfy the performance of the SLA may be used for the application, allowing for increased response times and increased application performance. 
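A multi-factor comparison of the kind such a learning algorithm might refine over time can be sketched as a weighted score over per-path metrics. The weights, metric names, and path labels below are illustrative assumptions; a real system would tune the weights from observed outcomes.

```python
def path_score(metrics, weights):
    """Weighted score over several factors (latency, jitter, loss,
    cost); lower is better. Weights are illustrative stand-ins for
    values a learning algorithm could refine over time."""
    return sum(weights[k] * metrics[k] for k in weights)

weights = {"latency_ms": 1.0, "jitter_ms": 2.0, "loss_pct": 50.0, "cost": 5.0}
paths = {
    "mpls":     {"latency_ms": 40, "jitter_ms": 2, "loss_pct": 0.1, "cost": 8},
    "internet": {"latency_ms": 55, "jitter_ms": 6, "loss_pct": 0.5, "cost": 1},
}

# mpls scores 40 + 4 + 5 + 40 = 89; internet scores 55 + 12 + 25 + 5 = 97.
best = min(paths, key=lambda p: path_score(paths[p], weights))
print(best)  # mpls
```

Note how the cost weight lets an inexpensive but lossier path win when the performance gap is small enough, which is the trade-off the multi-factor comparison is meant to capture.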
Additionally, network traffic may flow with increased performance by selecting paths that are more likely to satisfy SLAs of data flows. For example, using embodiments of the present disclosure may be more likely to place network traffic on more reliable paths, causing less retransmission of data. By providing for fewer retransmissions, valuable network resources such as bandwidth may be preserved, and improved response times may be provided. Additionally, because of the reduced number of retransmissions, the amount of traffic flowing through the internal network domain may be reduced, providing superior performance for the internal network domain. Another advantage of the present disclosure is that cost savings may be included as one factor in selecting a path, by considering costs associated with a particular path in the path selection process. Embodiments of the present disclosure are explained with reference to the accompanying drawings.

FIG.1illustrates an example system100of network components implementing a software-defined network, in accordance with one or more embodiments of the present disclosure. In some embodiments, the network path selection may be implemented in a software-defined network such as that illustrated by the system100. The system100may include an internal network domain105and one or more external network domains. The system100may include one or more edge network devices110(such as the edge network devices110a-110d), a control device120, a communication network130, and external network devices140and141(such as the external network devices140a-140dand141a-141d). The system100may implement a software-defined network. A software-defined network may include a network that is managed by software rather than controlled by hardware. 
As such, a software-defined network may support multiple types of connections, such as the Internet, Multi-Protocol Label Switching (MPLS) connections, and/or cellular connections (such as Long Term Evolution (LTE), LTE Advanced, Worldwide Interoperability for Microwave Access (WiMAX), 4G, and/or others). Additionally, a software-defined network may support load balancing or load sharing between the various connections. Further, because of the distributed nature of a network, a software defined network may support virtual private networks (VPNs), firewalls, and other security services. In a software-defined network, for example, a control plane may be functionally separated from the physical topology. In some embodiments, a software-defined network may separate the control plane of the network (to be managed via software) from a data plane of the network (operating on the hardware of the network). As used herein, the term control plane may refer to communications and connections used in the control and administration of a network itself, rather than the transmission of data through the network, which may occur at the data plane. As used herein, the term data plane may refer to communications and connections used in the transmission and reception of data through the network. For example, the control plane may include administrative traffic directed to a network device within a network, while the data plane may include traffic that passes through network devices within the network. In some embodiments, a software-defined network may be implemented as a software-defined wide area network (SD-WAN), local area network (LAN), metropolitan area network (MAN), among others. While one or more embodiments of the network path selection may be described in the context of an SD-WAN, such embodiments may also be implemented in any network. 
In some embodiments, the control device120may be configured to manage the control plane of an internal network domain105by directing one or more aspects of the operation of the edge network devices110. For example, the control device120may generate and/or distribute policies to one or more of the edge network devices110. A policy may include a rule or set of rules bearing on the handling of network traffic, such as routing, priority, media, etc. In some embodiments, the policies may include SLAs for various data flows. For example, data flows associated with a video application may have an SLA that the data flow be routed along a path with latency below a first threshold, and data flows associated with a voice transmission application may have an SLA that the data flow be routed along a path with loss below a first threshold and jitter below a second threshold. The internal network domain105may operate as a secured and controlled domain with specific functionality and/or protocols. In some embodiments, the edge network devices110may operate based on one or more policies created and/or propagated by the control device120. In these and other embodiments, the edge network devices110may route data traffic within the internal network domain105based on the policies created and/or propagated by the control device120. In some embodiments, an edge network device (e.g., the edge network device110a) may receive a data flow to be routed to another edge network device (e.g., the edge network device110d). The edge network device110amay determine that the data flow is subject to an SLA and that there are multiple potential paths for the edge network device110ato route the traffic to the edge network device110d. The edge network device110amay consider the historical performance of the various paths in determining which path is to be used for the data flow. Such a path selection determination is described in greater detail in FIGS.2-4. 
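A history-aware selection of this kind might be sketched as counting, for each candidate path, the samples that violated the SLA over a recent window and preferring the path with the fewest violations. The path names, latency histories, and SLA bound below are illustrative assumptions, not values from the disclosure.

```python
def select_path_by_history(paths, sla_max_latency_ms):
    """Choose the path with the fewest historical SLA violations
    (latency samples exceeding the SLA bound), rather than only the
    most recent snapshot. `paths` maps path name -> latency history."""
    def violations(history):
        return sum(1 for sample in history if sample > sla_max_latency_ms)
    return min(paths, key=lambda name: violations(paths[name]))

history = {
    "path_a": [45, 52, 130, 48, 140],  # two violations of a 100 ms SLA
    "path_b": [70, 75, 80, 72, 95],    # zero violations
}

# path_a has the better latest sample (48 vs 95 at index -2, 140 vs 95
# last), but path_b has the cleaner history, so path_b is selected.
print(select_path_by_history(history, 100))  # path_b
```

This captures the contrast drawn in the text: a snapshot-based selector looking only at the most recent probe could pick the flakier path, while the violation count favors the path more likely to keep satisfying the SLA.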
In some embodiments, the control device120may form a control plane connection with each of the edge network devices110. The control plane connection may facilitate the exchange of management data between the edge network devices110and the control device120for management and control of the internal network domain105. The control plane connection may operate as a tunnel through the communication network130, such as a Datagram Transport Layer Security (DTLS) tunnel. In some embodiments, data transmitted over the control plane connection may facilitate the control device120determining topology of the communication network130. For example, the control device120may communicate with the edge network devices110to determine what physical connections exist between and among the edge network devices110in the communication network130. Additionally or alternatively, data transmitted over the control plane connection may facilitate the control device120determining optimal or desired paths across the communication network130between and among the edge network devices110. Such communications may facilitate path selection. Additionally or alternatively, the control device120may communicate route information to the edge network devices110over the control plane connection. In these and other embodiments, the control plane connection may include a permanent connection between the control device120and the edge network devices110such that if the connection between the control device120and a given edge network device110is broken, the edge network device110may be unable or otherwise disallowed from communicating over the internal network domain105. In some embodiments, the control device120may maintain a central route table that stores route information within the internal network domain105. For example, the control device120may communicate with various edge network devices110to determine the physical connections available to the edge network devices110through the communication network130. 
In some embodiments, the edge network devices110may include one or more physical connections to each other. In these and other embodiments, the control device120may generate and/or update one or more policies in conjunction with the central route table to determine paths through the internal network domain105, and may communicate those paths to the edge network devices110. In at least one embodiment, the control device120may provide policies and other categorical rules related to data flows to the edge network devices110rather than being involved with every individual flow through the internal network domain105. In these and other embodiments, the edge network devices110may not have stored the topology and/or route paths of the entire system100. Each of the edge network devices110may not need to query each other individually to determine reachability. Instead, the control device120may provide such information to the edge network devices110. Additionally or alternatively, a subset of the reachability and/or infrastructure information may be provided to the edge network devices110, for example, based on one or more policies of the control device120. In network path selection decisions, if the network traffic is a data flow subject to an SLA, the edge network device110performing the path selection decision may consider the historical performance of the various potential paths over the connections through the internal network domain105. In addition to generating policies to guide the edge network devices110in making path selection decisions, the control device120may generate other policies that are to be followed by the edge network devices110. In some embodiments, the control device120may generate policies to cause certain network traffic flows within the internal network domain105to be routed over certain types of connections (e.g., LTE, MPLS) and/or through certain edge network devices110. 
For example, the control device120may check the central route table and determine that a direct connection exists between the edge network device110aand the edge network device110c. Rather than allowing data to be routed directly between the edge network device110aand the edge network device110c, the control device120may generate a policy to instead cause the data to be routed through the edge network device110d. For example, the data may be routed through the edge network device110dfor various reasons, such as because the edge network device110dmay include a firewall, data filter, security feature, data loss prevention (DLP) feature, export control, or government compliance feature, among others. As another example, the control device120may generate a policy to cause one or more of the edge network devices110to route traffic through an edge network device110associated with a data center, for example, because the data center includes a firewall, data filter, etc. Using such an approach, the flow of traffic within the internal network domain105may be readily controlled and guided based on policies and traffic routes propagated by the control device120to the edge network devices110. The edge network devices110may operate at a boundary of the internal network domain105. The edge network devices110may include one or more physical and/or logical connections that may operate within the internal network domain105. Such connections may be illustrated as part of the communication network130. Additionally or alternatively, the edge network devices110may include one or more physical and/or logical connections operating outside of the internal network domain105. For example, the edge network devices110may be connected to the external network device(s)140and/or141. In some embodiments, the edge network devices110may operate to route traffic from associated external network devices140and141into the internal network domain105. 
Additionally or alternatively, the edge network devices110may operate to route traffic from the internal network domain105to the associated external network devices140and141. In some embodiments, the edge network devices110may communicate with associated external network devices140and141using typical communication protocols, such as Open Shortest Path First (OSPF), Border Gateway Protocol (BGP), Virtual Router Redundancy Protocol (VRRP), and Bi-directional Forwarding Detection (BFD), among others. Additionally or alternatively, the edge network devices110may support other network functionalities such as Virtual Local Area Network (VLAN) tagging, Quality of Service (QoS) monitoring, Internet Protocol (IP) forwarding, Internet Protocol Security (IPsec), Access Control Lists (ACL), among others. For example, with QoS monitoring, the edge network devices110may provide for one or more network performance metrics that may be monitored, such as jitter, bandwidth, error rate, bit rate, throughput, and/or others. In some embodiments, the edge network devices110may monitor the network performance metrics by periodically transmitting a message to measure the one or more network performance metrics. Such messages may take any format, such as an internet control message protocol (ICMP) echo probe, a jitter probe, a transmission control protocol (TCP) probe, a user datagram protocol (UDP) echo probe, etc. In these and other embodiments, the monitoring messages may be sent at any frequency, such as every thirty seconds, every sixty seconds, every two minutes, every five minutes, every ten minutes, etc. Additionally or alternatively, the monitoring probes may be sent in response to one or more events. In some embodiments, such messages may be sent at a decreased frequency when no traffic is flowing and/or at an increased frequency when traffic is flowing along a path. 
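The adaptive probe frequency mentioned above amounts to a small policy function. The interval values here are illustrative assumptions (the text gives thirty seconds through ten minutes as example frequencies); they are not prescribed by the disclosure.

```python
def probe_interval_seconds(traffic_flowing, busy=30, idle=120):
    """Return the interval between QoS probes on a path: probe more
    often while traffic is flowing, less often while the path is
    idle. Interval values are illustrative, not from the source."""
    return busy if traffic_flowing else idle

print(probe_interval_seconds(True))   # 30
print(probe_interval_seconds(False))  # 120
```

Probing idle paths less often conserves bandwidth, while the shorter busy interval keeps the performance picture fresh for paths actively carrying SLA-bound flows.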
In some embodiments, the edge network devices110may locally maintain one or more local route tables. In some embodiments, the edge network devices110may adjust or modify the local route tables based on one or more policies sent from the control device120. For example, one or more entries may be removed, discarded, or otherwise not added to the local route tables by the edge network devices110based on the one or more policies. In some embodiments, the edge network devices110may include logic to update, modify, and/or generate the local route tables based on traffic handled by the edge network devices110. The one or more local route tables may be automatically populated by the edge network devices110based on direct interface routes, static routes, and/or dynamic routes learned using one or more network protocols such as BGP and/or OSPF. In some embodiments, routing decisions for data outside of the internal network domain105may be performed by a particular edge network device110without specific direction, input, or control from the control device120. For example, the particular edge network device110may select a path based on the one or more policies that the particular edge network device110has received from the control device120, with reference to the local route table of the particular edge network device110, and/or based on historical performance of the paths. In some embodiments, one or more of the edge network devices110and/or the control device120may be implemented as one or more virtual machines operating on one or more physical computing devices. Additionally or alternatively, the edge network devices110and/or the control device120may each include an individual stand-alone computing device. Modifications, additions, or omissions may be made toFIG.1without departing from the scope of the present disclosure. 
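The local-route-table adjustment described above — entries removed or not added based on policies from the control device — can be sketched as a simple filter. Representing a policy as a set of disallowed prefixes is an assumption for illustration; real policies may match on many other attributes.

```python
# Sketch: an edge device drops local-route entries that a policy
# pushed from the control device disallows.

def filter_routes(local_routes, forbidden_prefixes):
    """Return the local route table minus policy-disallowed entries."""
    return {prefix: next_hop for prefix, next_hop in local_routes.items()
            if prefix not in forbidden_prefixes}

local = {"10.0.0.0/8": "if0", "192.168.1.0/24": "if1"}
kept = filter_routes(local, {"192.168.1.0/24"})
print(sorted(kept))   # ['10.0.0.0/8']
```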
For example, while illustrated as including four edge network devices110and one control device120, the system100may include any number of edge network devices110and control devices120, such as thousands or tens of thousands of edge network devices110and more than five control devices120. As another example, while illustrated as a single communication network130, the communication network130may include multiple types of communication connections. FIG.2illustrates an example system200with multiple paths between network devices210, in accordance with one or more embodiments of the present disclosure. The network devices210(such as a first network device210aand a second network device210b) may be configured to route data flows through one or more networks230(such as a first network230aand a second network230b). There may be multiple paths220between the first and second network devices210aand210b(such as a first path220aand second path220bthrough the first network230a, and a third path220cand a fourth path220dthrough the second network230b). The network devices210a-bmay include any device or system configured to receive a data flow to be routed through one or more of the networks230a-b, and route the data flow through one or more of the networks230a-b. For example, the network devices210a-bmay be implemented as an edge network device110ofFIG.1. In some embodiments, the network devices210a-bmay receive the data flow from one or more electronic devices (not illustrated). In some embodiments, the network devices210a-bmay monitor the network performance of the paths220a-dthrough the networks230a-b. For example, the first network device210amay periodically send probes or other messages through the networks230a-bto measure the network performance of various metrics (such as QoS metrics) for the various paths. Additionally or alternatively, the network devices210a-bmay store data regarding the performance of the paths220a-d. 
Such stored network performance may be referred to as historical performance data. The historical performance data may be maintained locally, and/or may be communicated to a central device (such as the control device120ofFIG.1). After receiving a data flow at the first network device210adirected to the second network device210b, the first network device210amay determine which of the paths220a-dto use to route the data flow to the second network device210b. For example, if the data flow is subject to an SLA, the first network device210amay determine whether any of the paths220a-dcomply with the SLA. For example, the first network device210amay observe, look-up, request, or otherwise obtain the most recent historical performance data associated with the SLA for the various paths220a-d. If only one path satisfied the network performance metrics associated with the SLA, the first network device210amay route the data flow along that path. However, if multiple paths satisfied the SLA network performance metrics, the historical performance of those paths (or all possible paths) may be considered. Additionally or alternatively, if none of the paths satisfied the SLA performance metrics, the historical performance of all of the paths220a-dmay be considered. Any of a variety of aspects of the historical performance of the paths220a-dmay be utilized to determine which path is to carry the data flow.FIG.3may illustrate various examples of historical performance data and may be used to articulate examples of such aspects of the historical performance. FIG.3illustrates example charts300aand300bof historical performance over time, and may be used to describe various aspects of historical performance and how they may affect path selection. The charts300aand300bmay include historical performance data310aand310b, and SLA thresholds320aand320b. 
For example, the chart300aillustrates that the historical performance data310ahas four regions completely above the SLA threshold320a, and the most recent historical performance data point was above the threshold320a. In some embodiments, compliance with an SLA may include being above the thresholds320aand/or320b, below the thresholds320aand/or320b, and/or between the thresholds320aand/or320band another threshold (not illustrated). The examples of the various aspects of historical performance will be described in terms of exceeding the threshold as being the agreed performance relative to the SLA. One example aspect of historical performance may include a number of instances during a given duration of time that the historical performance data dropped below the threshold320. Such a determination may count the act of going from above the threshold320to below the threshold320, although other mechanisms for such a determination may be used. In the chart300a, the historical performance data310adropped below the threshold320afour times. In the chart300b, the historical performance data310bdropped below the threshold320bthree times. Thus, in these and other embodiments, the path represented by the chart300bmay be more desirable than the path represented by the chart300abecause the historical performance data of that path dropped below the threshold320less frequently. Another example aspect of historical performance may include a duration of time that the historical performance data310was below the threshold for a given span of time. For example, in the chart300a, the historical performance data310ais below the threshold for short durations. In contrast, in the chart300b, the historical performance data310bis below the threshold for extended durations. Thus, in these and other embodiments, the path represented by the chart300amay be more desirable than the path represented by the chart300bbecause that path spent less time below the threshold320. 
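The two aspects described above — how many times a series of sampled performance data drops below the SLA threshold, and how long it stays below — can be computed directly from the samples. Uniformly spaced samples are assumed for the duration estimate; the function names are illustrative.

```python
# Count threshold crossings and approximate time spent below the
# threshold for a series of performance samples (e.g., throughput).

def drops_below(samples, threshold):
    """Count transitions from at/above the threshold to below it."""
    count = 0
    for prev, cur in zip(samples, samples[1:]):
        if prev >= threshold and cur < threshold:
            count += 1
    return count

def time_below(samples, threshold, sample_period=1.0):
    """Approximate total time spent below the threshold."""
    return sample_period * sum(1 for s in samples if s < threshold)

data = [5, 6, 2, 7, 1, 1, 8]     # samples; SLA threshold = 4
print(drops_below(data, 4))       # 2 drops (6 -> 2 and 7 -> 1)
print(time_below(data, 4))        # 3 samples below threshold -> 3.0
```

As in the charts, a path with fewer drops may still spend more total time below the threshold, so the two measures can rank paths differently.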
Returning toFIG.2, another example aspect of historical performance may include considerations of the carrier of the path. For example, if the first path220aand the second path220bwere provided by a first carrier that provided the first network230a, and the third path220cand the fourth path220dwere provided by a second carrier that provided the second network230b, the performance and/or reputation of the first and second carriers may be used in path selection. For example, the historical performance data of both the first path220aand the second path220b(and/or any other connections in the first network230a) may be combined into carrier historical performance data for the first carrier. The same may be done for the third and fourth paths220cand220din the second network230bfor the second carrier. Additionally, other reputation, performance, opinion, etc. data of the first carrier may be included in the path selection. The charts300aand300bmay be illustrative of an example of the carrier historical performance data for the first and second carriers. Another example aspect that may be included in the path selection may be the cost associated with using a certain path. For example, if the first network230ais more expensive than the second network230bto carry data, the path selection decision may favor the less expensive paths through the second network230b. In some embodiments, determining a path based on the historical performance may include the use of analytics such as a machine learning algorithm or other analytics in determining the path. In some embodiments, the analytics may yield a given score for a path based on the analyzed historical performance and may represent an aggregate historical performance for a path. For example, the first network device210amay look at the score when performing path selection for the data flow rather than performing a historical data analysis each time a data flow is received. 
In some embodiments, the score may continually be refined over time. In some embodiments, the analytics to determine the aggregate historical performance may include a machine learning algorithm. One example of a machine learning algorithm consistent with the present disclosure may include a random forest algorithm where the variables in the algorithm may include one or more of the aspects of the historical performance, such as how many times the historical performance data dropped below the threshold, how long the historical performance data dropped below the threshold, and/or the reputation of the carrier of the path. In these and other embodiments, multiple aspects of the historical performance may be included in generating the aggregate historical performance, or in performing the path selection. For example, for a random forest algorithm, multiple decision trees to characterize performance of a given path as a score may be generated. The decision trees may include a set of variables being considered (e.g., duration of time below threshold, number of times crossing threshold, cost, carrier reputation, etc.) and an expected score for the given combination of variables. The decision trees may be generated based on random groupings of such known combinations of variables and their corresponding scores. The multiple decision trees may be used to analyze historical performance data of paths. For example, historical performance data for a path to be analyzed may be passed through the decision trees to generate a score for each of the decision trees. A common or average score between the decision trees may be used to provide a score for the path. In some embodiments, the historical performance data may be analyzed using the decision trees when a data flow is received, when a probe measuring network performance returns, periodically, or on any other bases. 
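The random-forest-style scoring described above can be sketched with a few hand-written "trees," each mapping the historical-performance variables to a score, averaged into a forest score. These stub trees, the feature names, and the score scale are assumptions standing in for trees learned from known variable/score combinations.

```python
# Hedged sketch of forest-style path scoring: each "tree" scores a
# path from its historical-performance features; the path's aggregate
# score is the average across trees.

def tree_a(features):
    # Penalize frequent threshold crossings.
    return 90 if features["crossings"] <= 2 else 60

def tree_b(features):
    # Penalize long total time below the SLA threshold.
    return 85 if features["time_below"] < 10.0 else 50

def tree_c(features):
    # Factor in carrier reputation (assumed 0..1 scale).
    return 100 * features["carrier_reputation"]

def forest_score(features, trees=(tree_a, tree_b, tree_c)):
    return sum(t(features) for t in trees) / len(trees)

path = {"crossings": 1, "time_below": 4.0, "carrier_reputation": 0.9}
print(forest_score(path))   # (90 + 85 + 90) / 3
```

Storing this score per path lets a device compare precomputed scores at selection time instead of reanalyzing raw history for every data flow, as the passage notes.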
In some embodiments, the score may be stored such that when a network device performs a path selection decision, the score may be already generated such that the network device obtains the score and compares it to the scores of other potential paths in performing the path selection. While a random forest algorithm is described, any other machine learning, analytics, or other analysis may be performed to compare the historical performance of the paths to select a path for a data flow subject to an SLA. In some embodiments, the aggregate historical performance may include a weighting factor for one or more data points of the historical performance. For example, the more recent historical performance data points may be weighted more heavily than more distant in the past data points. In these and other embodiments, the weighting factor may include a half-life or other decay function such that certain data points become less and less impactful, and/or eventually have no impact on the aggregate historical performance. In some embodiments, a cutoff point may be used in deciding which data points of the historical performance are used in determining the aggregate historical performance. For example, such a cutoff point may focus the aggregate historical performance on a certain number of recent data points of the historical performance, or a certain duration of time of data points that may be used to contribute to the aggregate historical performance. In some embodiments, the aggregate historical performance may be based on near term historical performance (e.g., within a certain time period such as within the last week, last two weeks, or last month), long term historical performance (e.g., older than within a certain time period, such as older than a week, older than two weeks, or more than a month old), or a combination of both. Modifications, additions, or omissions may be made toFIGS.2and/or3without departing from the scope of the present disclosure. 
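The weighting and cutoff behavior described above can be sketched as a weighted mean with an exponential half-life decay, where samples older than a cutoff contribute nothing. The half-life and cutoff values are assumptions for illustration.

```python
import math

# Weighted aggregate of historical samples: recent samples count more
# (half-life decay), and samples past the cutoff are ignored entirely.

def aggregate(samples, ages, half_life=7.0, cutoff=30.0):
    """Weighted mean of (sample, age-in-days) pairs."""
    num = den = 0.0
    for value, age in zip(samples, ages):
        if age > cutoff:
            continue                                  # beyond cutoff: no impact
        weight = math.exp(-math.log(2) * age / half_life)
        num += weight * value
        den += weight
    return num / den if den else 0.0

# A fresh good sample (weight 1) dominates a 28-day-old poor one
# (weight 2**-4 = 0.0625 with a 7-day half-life).
print(aggregate([90.0, 40.0], ages=[0.0, 28.0]))
```

Splitting the call into recent and older age ranges likewise models the near-term versus long-term aggregates mentioned above.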
For example, while illustrated as including two network devices210a-b, the system200may include any number of network devices. As another example, as illustrated as four paths220a-d, any number of paths over any number of networks may be included. FIG.4illustrates a flowchart of an example method400of selecting a path over which to route a data flow, in accordance with one or more embodiments of the present disclosure. Although illustrated as discrete blocks, various blocks may be divided into additional blocks, combined into fewer blocks, or eliminated, depending on the particular implementation. At block405, a message may be periodically sent along one or more paths to a destination to determine network performance metrics for the paths. For example, a network device (such as the first network device210aofFIG.2) may periodically send a probe to determine jitter, latency, loss, etc. of various paths (such as the paths220a-d) through the network (such as the networks230a-bofFIG.2) to other network devices (such as the second network device210bofFIG.2). At block410, the network performance metrics for the paths may be monitored. For example, the network device may observe or otherwise perform one or more calculations or analyses on the messages of the block405to determine and/or monitor the network performance metrics of the paths. At block415, historical performance data of the network performance metrics for the paths may be stored. For example, the network device may locally store the historical performance data of the various paths. Additionally or alternatively, the network device may communicate the performance data to a centralized device such as the control device120ofFIG.1. At block420, a data flow directed to the destination may be received. The data flow may be subject to a network performance agreement, such as an SLA. For example, the network device may receive a data flow where a policy or other rule designates the data flow as being subject to an SLA. 
At block425, a determination may be made as to whether the paths satisfy the network performance agreement. For example, the network device may observe a most recent point of the historical performance data for the paths to determine the number, if any, of paths that satisfy the SLA. If only one path satisfies the agreement (block430), the method400may proceed to the block445. If multiple paths satisfy the agreement (block435), the method400may proceed to the block450. If no paths satisfy the agreement (block440), the method400may proceed to the block450. At block445, after determining that one path satisfies the network performance agreement at the block425, the data flow may be routed along the one path that satisfies the network performance agreement. After routing the data flow along the path that satisfies the network performance agreement, the method400may return to the block405. At block450, after determining that either multiple paths satisfy the network performance agreement (block435), or that no paths satisfy the network performance agreement (block440), aggregate historical performance for the paths may be determined. In some embodiments, determining the aggregate historical performance for the paths may include using analytics or a machine learning algorithm to combine historical performance data for the various paths into the aggregate historical performance. The analytics may be based on any number of aspects of the historical performance data, such as a number of times that the performance dropped below a threshold level for the metric of the SLA, or a duration of time that the performance dropped below the threshold level for the metric of the SLA. Additionally or alternatively, one or more features of the carrier or provider of the network may be included in determining the aggregate historical performance. In some embodiments, the aggregate historical performance may be represented by a score. 
At block455, the aggregate historical performances for the various paths may be compared. For example, if the aggregate historical performances are represented by scores, the network device may compare the scores of the various paths. At block460, the data flow may be routed along the path with the best aggregate historical performance. Additionally or alternatively, the data flow may be routed along one or more paths with aggregate historical performances above a threshold. Thus, the path more likely to satisfy the SLA may be used because of the consideration of historical performance of the paths. In some embodiments, if multiple paths have the same aggregate historical performance, the network device may identify multiple paths as the path to use and may route the data flow using a multi-path routing protocol, such as equal-cost multi-path (ECMP) routing. After the block460, the method400may return to the block405. One skilled in the art will appreciate that, for these processes, operations, and methods, the functions and/or operations performed may be implemented in differing order. Furthermore, the outlined functions and operations are only provided as examples, and some of the functions and operations may be optional, combined into fewer functions and operations, or expanded into additional functions and operations without detracting from the essence of the disclosed embodiments. FIG.5illustrates an example computing system500, according to at least one embodiment described in the present disclosure. The system500may include any suitable system, apparatus, or device configured to select a path over which to route a data flow, or facilitate such path selection. The computing system500may include a processor510, a memory520, a data storage530, and a communication unit540, which all may be communicatively coupled. 
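The branching logic of blocks 425 through 460 can be sketched as a single selection function. The path records, the score field, and the convention that a "latest" metric at or above the SLA threshold satisfies the agreement (as for a throughput-like metric) are assumptions of this sketch.

```python
# Sketch of method 400's selection: exactly one compliant path is used
# directly (block 445); with several or none, the best aggregate
# historical score wins (blocks 450-460).

def select_path(paths, sla_threshold):
    """paths: list of dicts with 'name', 'latest' metric, and 'score'."""
    ok = [p for p in paths if p["latest"] >= sla_threshold]
    if len(ok) == 1:
        return ok[0]["name"]                 # block 445: one compliant path
    candidates = ok if ok else paths         # blocks 435 / 440
    return max(candidates, key=lambda p: p["score"])["name"]   # block 460

paths = [
    {"name": "220a", "latest": 5.0, "score": 70.0},
    {"name": "220b", "latest": 9.0, "score": 80.0},
    {"name": "220c", "latest": 8.0, "score": 95.0},
]
print(select_path(paths, sla_threshold=7.0))    # two comply; best score wins
print(select_path(paths, sla_threshold=10.0))   # none comply; best score wins
```

A tie on the top score could instead return all tied paths for ECMP-style multi-path routing, as the passage suggests.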
In some embodiments, any of the network devices (e.g., the edge network devices110ofFIG.1or the network devices210ofFIG.2), or other computing devices of the present disclosure may be implemented as the computing system500. Additionally or alternatively, one or more of the network devices or other computing devices may be implemented as virtualized machines operating on a physical computing system such as the computing system500. Generally, the processor510may include any suitable special-purpose or general-purpose computer, computing entity, or processing device including various computer hardware or software modules and may be configured to execute instructions stored on any applicable computer-readable storage media. For example, the processor510may include a microprocessor, a microcontroller, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a Field-Programmable Gate Array (FPGA), or any other digital or analog circuitry configured to interpret and/or to execute program instructions and/or to process data. Although illustrated as a single processor inFIG.5, it is understood that the processor510may include any number of processors distributed across any number of network or physical locations that are configured to perform individually or collectively any number of operations described in the present disclosure. In some embodiments, the processor510may interpret and/or execute program instructions and/or process data stored in the memory520, the data storage530, or the memory520and the data storage530. In some embodiments, the processor510may fetch program instructions from the data storage530and load the program instructions into the memory520. After the program instructions are loaded into the memory520, the processor510may execute the program instructions, such as instructions to perform the method400ofFIG.4. 
For example, the processor510may determine that a data flow is associated with an SLA and may select a path for the data flow based on the historical performance of the path. The memory520and the data storage530may include computer-readable storage media or one or more computer-readable storage mediums for carrying or having computer-executable instructions or data structures stored thereon. Such computer-readable storage media may be any available media that may be accessed by a general-purpose or special-purpose computer, such as the processor510. In some embodiments, the computing system500may or may not include either of the memory520and the data storage530. By way of example, such computer-readable storage media may include non-transitory computer-readable storage media including Random Access Memory (RAM), Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Compact Disc Read-Only Memory (CD-ROM) or other optical disk storage, magnetic disk storage or other magnetic storage devices, flash memory devices (e.g., solid state memory devices), or any other storage medium which may be used to carry or store desired program code in the form of computer-executable instructions or data structures and which may be accessed by a general-purpose or special-purpose computer. Combinations of the above may also be included within the scope of computer-readable storage media. Computer-executable instructions may include, for example, instructions and data configured to cause the processor510to perform a certain operation or group of operations. The communication unit540may include any component, device, system, or combination thereof that is configured to transmit or receive information over a network, such as an MPLS connection, the Internet, a cellular network (e.g., an LTE network), etc. 
In some embodiments, the communication unit540may communicate with other devices at other locations, the same location, or even other components within the same system. For example, the communication unit540may include a modem, a network card (wireless or wired), an optical communication device, an infrared communication device, a wireless communication device (such as an antenna), a chipset (such as a Bluetooth device, an 802.6 device (e.g., Metropolitan Area Network (MAN)), a WiFi device, a WiMax device, cellular communication facilities, or others), and/or the like, or any combinations thereof. The communication unit540may permit data to be exchanged with a network and/or any other devices or systems described in the present disclosure. For example, the communication unit540may allow the system500to communicate with other systems, such as network devices, control devices, and/or other networks. Modifications, additions, or omissions may be made to the system500without departing from the scope of the present disclosure. For example, the data storage530may include multiple different storage mediums located in multiple locations and accessed by the processor510through a network. As indicated above, the embodiments described in the present disclosure may include the use of a special purpose or general purpose computer (e.g., the processor510ofFIG.5) including various computer hardware or software modules, as discussed in greater detail below. Further, as indicated above, embodiments described in the present disclosure may be implemented using computer-readable media (e.g., the memory520or data storage530ofFIG.5) for carrying or having computer-executable instructions or data structures stored thereon. 
As used in the present disclosure, the terms “module” or “component” may refer to specific hardware implementations configured to perform the actions of the module or component and/or software objects or software routines that may be stored on and/or executed by general purpose hardware (e.g., computer-readable media, processing devices, or some other hardware) of the computing system. In some embodiments, the different components, modules, engines, and services described in the present disclosure may be implemented as objects or processes that execute on the computing system (e.g., as separate threads). While some of the systems and methods described in the present disclosure are generally described as being implemented in software (stored on and/or executed by general purpose hardware), specific hardware implementations or a combination of software and specific hardware implementations are also possible and contemplated. In this description, a “computing entity” may be any computing system as previously defined in the present disclosure, or any module or combination of modules running on a computing system. In accordance with common practice, the various features illustrated in the drawings may not be drawn to scale. The illustrations presented in the present disclosure are not meant to be actual views of any particular apparatus (e.g., device, system, etc.) or method, but are merely idealized representations that are employed to describe various embodiments of the disclosure. Accordingly, the dimensions of the various features may be arbitrarily expanded or reduced for clarity. In addition, some of the drawings may be simplified for clarity. Thus, the drawings may not depict all of the components of a given apparatus (e.g., device) or all operations of a particular method. 
Terms used in the present disclosure and especially in the appended claims (e.g., bodies of the appended claims) are generally intended as “open” terms (e.g., the term “including” should be interpreted as “including, but not limited to,” the term “having” should be interpreted as “having at least,” the term “includes” should be interpreted as “includes, but is not limited to,” among others). Additionally, if a specific number of an introduced claim recitation is intended, such an intent will be explicitly recited in the claim, and in the absence of such recitation no such intent is present. For example, as an aid to understanding, the following appended claims may contain usage of the introductory phrases “at least one” and “one or more” to introduce claim recitations. In addition, even if a specific number of an introduced claim recitation is explicitly recited, those skilled in the art will recognize that such recitation should be interpreted to mean at least the recited number (e.g., the bare recitation of “two recitations,” without other modifiers, means at least two recitations, or two or more recitations). Furthermore, in those instances where a convention analogous to “at least one of A, B, and C, etc.” or “one or more of A, B, and C, etc.” is used, in general such a construction is intended to include A alone, B alone, C alone, A and B together, A and C together, B and C together, or A, B, and C together, etc. Further, any disjunctive word or phrase presenting two or more alternative terms, whether in the description, claims, or drawings, should be understood to contemplate the possibilities of including one of the terms, either of the terms, or both terms. 
For example, the phrase “A or B” should be understood to include the possibilities of “A” or “B” or “A and B.” However, the use of such phrases should not be construed to imply that the introduction of a claim recitation by the indefinite articles “a” or “an” limits any particular claim containing such introduced claim recitation to embodiments containing only one such recitation, even when the same claim includes the introductory phrases “one or more” or “at least one” and indefinite articles such as “a” or “an” (e.g., “a” and/or “an” should be interpreted to mean “at least one” or “one or more”); the same holds true for the use of definite articles used to introduce claim recitations. Additionally, the use of the terms “first,” “second,” “third,” etc., are not necessarily used herein to connote a specific order or number of elements. Generally, the terms “first,” “second,” “third,” etc., are used to distinguish between different elements as generic identifiers. Absent a showing that the terms “first,” “second,” “third,” etc., connote a specific order, these terms should not be understood to connote a specific order. Furthermore, absent a showing that the terms “first,” “second,” “third,” etc., connote a specific number of elements, these terms should not be understood to connote a specific number of elements. For example, a first widget may be described as having a first side and a second widget may be described as having a second side. The use of the term “second side” with respect to the second widget may be to distinguish such side of the second widget from the “first side” of the first widget and not to connote that the second widget has two sides. 
All examples and conditional language recited in the present disclosure are intended for pedagogical objects to aid the reader in understanding the invention and the concepts contributed by the inventor to furthering the art, and are to be construed as being without limitation to such specifically recited examples and conditions. Although embodiments of the present disclosure have been described in detail, it should be understood that the various changes, substitutions, and alterations could be made hereto without departing from the spirit and scope of the present disclosure. | 45,761 |
11863430 | DETAILED DESCRIPTION The subject disclosure describes, among other things, illustrative embodiments for dynamic Shared Risk Link Group (SRLG) compression. Other embodiments are described in the subject disclosure. One or more aspects of the subject disclosure provide a compressed SRLG list and minimize changes to this list from one time to another (e.g., from one day to the next). By reducing the number of SRLGs without sacrificing the set of common failure states they represent, various embodiments can significantly decrease the time required for many failure analyses and computations. In addition, significant reduction in the total number of unique SRLGs that need to be maintained can also reduce the storage requirements at routers and/or other computational devices. One or more aspects of the subject disclosure provide for calculation of a compressed SRLG list that maintains the integrity of the worst failure scenarios and compress the size of the original SRLG list. This compressed SRLG list can then be pushed to the network routers (e.g., in real-time) for use in FRR and/or other analyses. In other embodiments, one or more centralized algorithms can also (or instead) use the compressed SRLG list to calculate failure scenarios more efficiently. In other embodiments, as network change(s) necessitate change(s) in the compressed SRLG list, an algorithm can be used that maximally reuses the previous compressed SRLG list. 
One or more aspects of the subject disclosure provide a device comprising: a processing system including a processor; and a memory that stores executable instructions that, when executed by the processing system, facilitate performance of operations, the operations comprising: generating a set of Shared Risk Link Groups (SRLGs) associated with a set of link bundles, wherein the set of SRLGs comprises for each Shared Risk Link Group (SRLG) in the set of SRLGs an indication for each failed link bundle in a particular SRLG a respective bandwidth failure fraction, greater than 0 and less than or equal to 1, and wherein for at least one of the failed link bundles the failure is less than a complete failure; generating a set of dominance relationships among the SRLGs in the set of SRLGs, wherein each dominance relationship comprises for two SRLGs an indication that one of the two SRLGs dominates another of the two SRLGs, and wherein the one of the two SRLGs dominating the another of the two SRLGs comprises: (a) the another of the two SRLGs having each associated failed link bundle also being failed by the one of the two SRLGs; and (b) each associated failed link bundle of the another of the SRLGs having a respective bandwidth failure fraction that is less than or equal to a respective bandwidth failure fraction of the corresponding failed link bundle of the one of the two SRLGs; and generating, based at least in part upon the set of SRLGs and the set of dominance relationships, a packed set of SRLGs, wherein the packed set of SRLGs comprises a subset of the set of SRLGs. 
One or more aspects of the subject disclosure provide a non-transitory machine-readable medium comprising executable instructions that, when executed by a processing system including a processor, facilitate performance of operations, the operations comprising: obtaining first information associated with a plurality of Shared Risk Link Groups (SRLGs); obtaining second information associated with a plurality of link bundles, each link bundle of the plurality of link bundles being associated with one or more SRLGs of the plurality of SRLGs; creating a list comprising each SRLG of the plurality of SRLGs, wherein the list identifies a first number of SRLGs, and wherein the list indicates for each SRLG in the list a respective bandwidth loss of each associated link bundle, wherein at least one of the bandwidth losses is less than a complete loss of bandwidth; creating a plurality of dominance relationships, wherein each dominance relationship comprises for two SRLGs an indication that one of the two SRLGs dominates another of the two SRLGs, and wherein the one of the two SRLGs dominating the another of the two SRLGs comprises: (a) the another of the two SRLGs having each associated failed link bundle also being failed by the one of the two SRLGs; and (b) each associated failed link bundle of the another of the SRLGs having a respective bandwidth loss that is less than or equal to a respective bandwidth loss of the corresponding failed link bundle of the one of the two SRLGs; selecting a subset of the set of SRLGs, wherein the subset is selected based at least in part upon the list of SRLGs and the plurality of dominance relationships; and creating, based at least in part upon the subset, a compressed set of SRLGs that comprises a second number of SRLGs, wherein the second number is less than the first number. 
One or more aspects of the subject disclosure provide a method comprising: creating, by a processing system including a processor, a first list comprising a plurality of Shared Risk Link Groups (SRLGs), wherein the first list identifies for each Shared Risk Link Group (SRLG) in the list one or more associated link bundles of a plurality of link bundles, wherein the first list indicates for each link bundle a respective bandwidth loss, and wherein at least one of the bandwidth losses is such that the respective link bundle maintains at least some available bandwidth; creating, by the processing system, a plurality of dominance relationships, wherein each dominance relationship comprises for two SRLGs an indication that one of the two SRLGs dominates another of the two SRLGs, and wherein the one of the two SRLGs dominating the another of the two SRLGs comprises: (a) the another of the two SRLGs having each associated failed link bundle also being failed by the one of the two SRLGs; and (b) each associated failed link bundle of the another of the two SRLGs having a respective bandwidth loss that is less than or equal to a respective bandwidth loss of the corresponding failed link bundle of the one of the two SRLGs; creating, by the processing system, based at least in part upon the first list and the dominance relationships, a second list of SRLGs, wherein the second list of SRLGs comprises a subset of the first list of SRLGs; and facilitating, by the processing system, use of the second list of SRLGs to modify flow of traffic on a communication network. Referring now to FIG. 1, a block diagram is shown illustrating an example, non-limiting embodiment of a system 100 in accordance with various aspects described herein. 
For example, system 100 can facilitate in whole or in part dynamic compression applied to a list of Shared Risk Link Groups (as well as transmission of such a compressed list of Shared Risk Link Groups to one or more network routers (e.g., in real-time) for use in FRR and/or other analyses). In particular, a communications network 125 is presented for providing broadband access 110 to a plurality of data terminals 114 via access terminal 112, wireless access 120 to a plurality of mobile devices 124 and vehicle 126 via base station or access point 122, voice access 130 to a plurality of telephony devices 134 via switching device 132, and/or media access 140 to a plurality of audio/video display devices 144 via media terminal 142. In addition, communication network 125 is coupled to one or more content sources 175 of audio, video, graphics, text and/or other media. While broadband access 110, wireless access 120, voice access 130 and media access 140 are shown separately, one or more of these forms of access can be combined to provide multiple access services to a single client device (e.g., mobile devices 124 can receive media content via media terminal 142, data terminal 114 can be provided voice access via switching device 132, and so on). The communications network 125 includes a plurality of network elements (NE) 150, 152, 154, 156, etc. for facilitating the broadband access 110, wireless access 120, voice access 130, media access 140 and/or the distribution of content from content sources 175. The communications network 125 can include a circuit switched or packet switched network, a voice over Internet protocol (VoIP) network, Internet protocol (IP) network, a cable network, a passive or active optical network, a 4G, 5G, or higher generation wireless access network, WIMAX network, UltraWideband network, personal area network or other wireless access network, a broadcast satellite network and/or other communications network. 
In various embodiments, the access terminal 112 can include a digital subscriber line access multiplexer (DSLAM), cable modem termination system (CMTS), optical line terminal (OLT) and/or other access terminal. The data terminals 114 can include personal computers, laptop computers, netbook computers, tablets or other computing devices along with digital subscriber line (DSL) modems, data over coax service interface specification (DOCSIS) modems or other cable modems, a wireless modem such as a 4G, 5G, or higher generation modem, an optical modem and/or other access devices. In various embodiments, the base station or access point 122 can include a 4G, 5G, or higher generation base station, an access point that operates via an 802.11 standard such as 802.11n, 802.11ac or other wireless access terminal. The mobile devices 124 can include mobile phones, e-readers, tablets, phablets, wireless modems, and/or other mobile computing devices. In various embodiments, the switching device 132 can include a private branch exchange or central office switch, a media services gateway, VoIP gateway or other gateway device and/or other switching device. The telephony devices 134 can include traditional telephones (with or without a terminal adapter), VoIP telephones and/or other telephony devices. In various embodiments, the media terminal 142 can include a cable head-end or other TV head-end, a satellite receiver, gateway or other media terminal 142. The display devices 144 can include televisions with or without a set top box, personal computers and/or other display devices. In various embodiments, the content sources 175 include broadcast television and radio sources, video on demand platforms and streaming video and audio services platforms, one or more content data networks, data servers, web servers and other content servers, and/or other sources of media. 
In various embodiments, the communications network 125 can include wired, optical and/or wireless links, and the network elements 150, 152, 154, 156, etc. can include service switching points, signal transfer points, service control points, network gateways, media distribution hubs, servers, firewalls, routers, edge devices, switches and other network nodes for routing and controlling communications traffic over wired, optical and wireless links as part of the Internet and other public networks as well as one or more private networks, for managing subscriber access, for billing and network management and for supporting other network functions. Referring now to FIG. 2A, this is a block diagram illustrating an example, non-limiting embodiment of a network (which can function fully or partially within the communication network of FIG. 1) in accordance with various aspects described herein. More particularly, in this example:
- There are 6 optical nodes: A, B, C, D, E, F.
- The optical nodes are connected by 7 optical links: S1, S2, S3, S4, S5, S6, S7; these are the 7 SRLGs.
- There are 4 IP routers: R1, R2, R3, R4 (collocated with optical nodes A, B, E and F, respectively).
- There are 2 link bundles (L1 and L2) connecting the IP routers.
- L1 connects R1 and R3, has a total capacity of 500 Gbps, and has two members:
  - Member 1 is 400 Gbps, uses the A-D-E path and so has SRLGs S3 and S6.
  - Member 2 is 100 Gbps, uses the A-B-E path and so has SRLGs S1 and S4.
- L2 connects R2 and R4, has a total capacity of 500 Gbps, and has two members:
  - Member 1 is 400 Gbps, uses the B-C-F path and so has SRLGs S2 and S5.
  - Member 2 is 100 Gbps, uses the B-E-F path and so has SRLGs S4 and S7.
Still referring to FIG. 2A, a discussion of the impact of SRLG failures and the dominant SRLGs will now be made. 
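The per-SRLG failure impacts discussed next follow mechanically from the bundle membership above. As a minimal sketch (the data layout and function name are illustrative, not taken from the disclosure), each bundle member carries its capacity and the SRLGs along its optical path, and summing the failed member capacity per SRLG yields each bandwidth failure fraction:

```python
# Each link bundle: list of (member_capacity_gbps, srlgs_on_member_path).
bundles = {
    "L1": [(400, {"S3", "S6"}), (100, {"S1", "S4"})],  # R1-R3, 500 Gbps total
    "L2": [(400, {"S2", "S5"}), (100, {"S4", "S7"})],  # R2-R4, 500 Gbps total
}

def failure_fractions(bundles):
    """Map each SRLG to {bundle: fraction of bundle bandwidth it fails}."""
    impact = {}
    for bundle, members in bundles.items():
        total = sum(cap for cap, _ in members)
        for cap, srlgs in members:
            for s in srlgs:
                impact.setdefault(s, {})
                impact[s][bundle] = impact[s].get(bundle, 0) + cap / total
    return impact

impact = failure_fractions(bundles)
print(impact["S4"])  # {'L1': 0.2, 'L2': 0.2}: S4 fails 20% of both bundles
```

This reproduces the impact list that follows, e.g., S3 failing 80% of L1 and S4 failing 20% of each of L1 and L2.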
More particularly, with regard to the impact of SRLG failures in this example: S1 takes down 100 Gbps (20%) of L1; S2 takes down 400 Gbps (80%) of L2; S3 takes down 400 Gbps (80%) of L1; S4 takes down 100 Gbps (20%) of L1 and 100 Gbps (20%) of L2; S5 takes down 400 Gbps (80%) of L2; S6 takes down 400 Gbps (80%) of L1; S7 takes down 100 Gbps (20%) of L2. Further, with regard to an example packed (or compressed) list of SRLGs, the packed SRLGs are S2, S3 and S4 (explained as follows): S1 and S7 are not included since they are dominated by S4 (both of their failures are a subset of the failure of S4); S2 and S5 are equivalent but only S2 is included since it is earlier in the order; S3 and S6 are equivalent but only S3 is included since it is earlier in the order. Further still, a variation can be as follows: suppose at a later point of time we add a third link bundle L3 from R1 to R2 with a single 100 Gbps member going from A to B using the SRLG S1. In this scenario, S1 will also need to be in the packed SRLG list since it will now fail 20% of L1 and 100% of L3 and so would no longer be a subset of the failure of S4. Referring now to FIG. 2B, this is a block diagram illustrating an example, non-limiting embodiment of SRLG lists 282, 284 (which can function fully or partially within the communication network of FIG. 1 and/or the network of FIG. 2A) in accordance with various aspects described herein. As seen in the example of this figure, a starting SRLG list 282 can include SRLG-1, SRLG-2, SRLG-3, SRLG-4 . . . SRLG-N (wherein "N" is an integer having a maximum value of, for example, thousands). Each of SRLG-1, SRLG-2, SRLG-3, SRLG-4 . . . SRLG-N has associated therewith a respective plurality of link bundles. Further, as a result of a compression or packing process as described herein, SRLG list 284 is created. This list 284 includes a subset of the SRLGs from list 282 (wherein "K" is an integer less than "N"). In various embodiments, K and/or N can be input and/or capped by a user. 
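The dominance test and the resulting packed list for this example can be reproduced with a short script. The representation below, an SRLG as a mapping from failed link bundle to failure fraction, is an illustrative assumption rather than a data structure specified by the disclosure:

```python
# Failure signatures from the FIG. 2A example: SRLG -> {bundle: fraction failed}.
impact = {
    "S1": {"L1": 0.2},
    "S2": {"L2": 0.8},
    "S3": {"L1": 0.8},
    "S4": {"L1": 0.2, "L2": 0.2},
    "S5": {"L2": 0.8},
    "S6": {"L1": 0.8},
    "S7": {"L2": 0.2},
}

def dominates(a, b):
    """True if SRLG a fails every bundle that b fails, at least as severely."""
    return all(bundle in a and a[bundle] >= frac for bundle, frac in b.items())

# Greedy pass in list order: keep an SRLG only if no kept SRLG dominates it,
# and evict any kept SRLG that the newcomer dominates (equals keep the earlier one).
packed = []
for name, sig in impact.items():
    if any(dominates(impact[p], sig) for p in packed):
        continue  # dominated by an existing member: drop
    packed = [p for p in packed if not dominates(sig, impact[p])]
    packed.append(name)

print(packed)  # ['S2', 'S3', 'S4'] — matches the packed list above
```

S5, S6 and S7 drop out because S2, S3 and S4 (respectively) dominate them, and S1 is dominated by both S3 and S4.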
In various examples, a typical value of N can be thousands whereas a typical value of K can be a few tens. Referring now to FIG. 2C, various steps of a method 2000 according to an embodiment are shown. As seen in this FIG. 2C, step 2002 comprises generating a set of Shared Risk Link Groups (SRLGs) associated with a set of link bundles, wherein the set of SRLGs comprises for each Shared Risk Link Group (SRLG) in the set of SRLGs an indication for each failed link bundle in a particular SRLG a respective bandwidth failure fraction, greater than 0 and less than or equal to 1, and wherein for at least one of the failed link bundles the failure is less than a complete failure. Next, step 2004 comprises generating a set of dominance relationships among the SRLGs in the set of SRLGs, wherein each dominance relationship comprises for two SRLGs an indication that one of the two SRLGs dominates another of the two SRLGs, and wherein the one of the two SRLGs dominating the another of the two SRLGs comprises: (a) the another of the two SRLGs having each associated failed link bundle also being failed by the one of the two SRLGs; and (b) each associated failed link bundle of the another of the two SRLGs having a respective bandwidth failure fraction that is less than or equal to a respective bandwidth failure fraction of the corresponding failed link bundle of the one of the two SRLGs. Next, step 2006 comprises generating, based at least in part upon the set of SRLGs and the set of dominance relationships, a packed set of SRLGs, wherein the packed set of SRLGs comprises a subset of the set of SRLGs. While for purposes of simplicity of explanation, the respective processes are shown and described as a series of blocks in FIG. 2C, it is to be understood and appreciated that the claimed subject matter is not limited by the order of the blocks, as some blocks may occur in different orders and/or concurrently with other blocks from what is depicted and described herein. 
Moreover, not all illustrated blocks may be required to implement the methods described herein. Referring now to FIG. 2D, various steps of a method 2100 according to an embodiment are shown. As seen in this FIG. 2D, step 2102 comprises obtaining first information associated with a plurality of Shared Risk Link Groups (SRLGs). Next, step 2104 comprises obtaining second information associated with a plurality of link bundles, each link bundle of the plurality of link bundles being associated with one or more SRLGs of the plurality of SRLGs. Next, step 2106 comprises creating a list comprising each SRLG of the plurality of SRLGs, wherein the list identifies a first number of SRLGs, and wherein the list indicates for each SRLG in the list a respective bandwidth loss of each associated link bundle, wherein at least one of the bandwidth losses is less than a complete loss of bandwidth. Next, step 2108 comprises creating a plurality of dominance relationships, wherein each dominance relationship comprises for two SRLGs an indication that one of the two SRLGs dominates another of the two SRLGs, and wherein the one of the two SRLGs dominating the another of the two SRLGs comprises: (a) the another of the two SRLGs having each associated failed link bundle also being failed by the one of the two SRLGs; and (b) each associated failed link bundle of the another of the two SRLGs having a respective bandwidth loss that is less than or equal to a respective bandwidth loss of the corresponding failed link bundle of the one of the two SRLGs. Next, step 2110 comprises selecting a subset of the set of SRLGs, wherein the subset is selected based at least in part upon the list of SRLGs and the plurality of dominance relationships. Next, step 2112 comprises creating, based at least in part upon the subset, a compressed set of SRLGs that comprises a second number of SRLGs, wherein the second number is less than the first number. 
While for purposes of simplicity of explanation, the respective processes are shown and described as a series of blocks in FIG. 2D, it is to be understood and appreciated that the claimed subject matter is not limited by the order of the blocks, as some blocks may occur in different orders and/or concurrently with other blocks from what is depicted and described herein. Moreover, not all illustrated blocks may be required to implement the methods described herein. Referring now to FIG. 2E, various steps of a method 2200 according to an embodiment are shown. As seen in this FIG. 2E, step 2202 comprises creating, by a processing system including a processor, a first list comprising a plurality of Shared Risk Link Groups (SRLGs), wherein the first list identifies for each Shared Risk Link Group (SRLG) in the list one or more associated link bundles of a plurality of link bundles, wherein the first list indicates for each link bundle a respective bandwidth loss, and wherein at least one of the bandwidth losses is such that the respective link bundle maintains at least some available bandwidth. Next, step 2204 comprises creating, by the processing system, a plurality of dominance relationships, wherein each dominance relationship comprises for two SRLGs an indication that one of the two SRLGs dominates another of the two SRLGs, and wherein the one of the two SRLGs dominating the another of the two SRLGs comprises: (a) the other of the SRLGs having each associated failed link bundle also being failed by the one of the two SRLGs; and (b) each associated failed link bundle of the other of the SRLGs having a respective bandwidth loss that is less than or equal to a respective bandwidth loss of the corresponding failed link bundle of the one of the two SRLGs. 
Next, step 2206 comprises creating, by the processing system, based at least in part upon the first list and the dominance relationships, a second list of SRLGs, wherein the second list of SRLGs comprises a subset of the first list of SRLGs. Next, step 2208 comprises facilitating, by the processing system, use of the second list of SRLGs to modify flow of traffic on a communication network. While for purposes of simplicity of explanation, the respective processes are shown and described as a series of blocks in FIG. 2E, it is to be understood and appreciated that the claimed subject matter is not limited by the order of the blocks, as some blocks may occur in different orders and/or concurrently with other blocks from what is depicted and described herein. Moreover, not all illustrated blocks may be required to implement the methods described herein. Reference will now be made to packing SRLG lists with partial failures of link bundles according to an embodiment. For the purposes of this discussion, suppose there are N SRLGs and the i-th SRLG is denoted by S_i, where i ranges from 1 to N (it is assumed for this discussion that every SRLG in the list fails at least one link bundle fully or partially).
- Suppose S_i fails M_i link bundles.
- Let l_ij represent the j-th link bundle failed by S_i.
- Let p_ij represent the fraction (in terms of bandwidth) of link bundle l_ij failed by S_i, where 0 < p_ij ≤ 1.
The above represents the set of all SRLGs and what fraction of which specific link bundle is failed by each of the SRLGs. Next, we establish the dominance relationship among the SRLGs. An SRLG S_i dominates another SRLG S_j (and conversely S_j is dominated by S_i) if and only if each link bundle failed by S_j is also failed by S_i and the fraction of the link bundle failed by S_j is less than or equal to the fraction of the link bundle failed by S_i. In the above scenario, the dominance becomes strict dominance if at least one of the "less than or equal" conditions is satisfied as a "strictly less than" condition or the 
dominating SRLG fails at least one link bundle that is not failed by the dominated SRLG. It is possible that two SRLGs S_i and S_j have identical failure signatures; in that case they dominate each other. From the original list of N SRLGs, we create a shorter packed list of K SRLGs (let us denote the i-th SRLG of the shorter packed list by P_i) such that no SRLG in the shorter packed list is dominated by another SRLG in the shorter packed list and every SRLG in the original list is dominated by at least one SRLG in the shorter packed list. One way of creating the shorter packed list is this:
- Start the shorter packed list with SRLG S_1. So, set K=1 and set P_1=S_1.
- Next consider S_2 for possible inclusion in the shorter packed list.
  - If S_2 is dominated by P_1, then do nothing and there is no change to the shorter packed list.
  - Else, if P_1 is dominated by S_2, then keep K=1 but set P_1=S_2 (so, basically, remove S_1 from the shorter packed list and replace it with S_2).
  - Else, if S_2 is not dominated by P_1 and P_1 is not dominated by S_2, then increase K to 2 and set P_2=S_2.
- Continue the above process for each of the other SRLGs in the original list as explained below. Suppose currently there are K SRLGs in the shorter packed list and the SRLG S_i of the original list is being considered for possible inclusion in the shorter packed list. Compare S_i to each of the members of the current shorter packed list.
  - If S_i is dominated by any member, then S_i drops out (that is, it is not added to the shorter packed list).
  - Else, if S_i dominates any member for the first time, then S_i replaces that member. 
  - If S_i has found its place in the list this way, then S_i continues to check against every other member, and if S_i dominates that other member then that member is removed (from the shorter packed list).
  - Else, if S_i is not dominated by any existing member of the packed list and S_i does not dominate any existing member of the packed list, then K is increased by 1 and we set P_K=S_i.
In one specific example, if there are two SRLGs that have identical failure signatures, then the one that comes later in the original list will be dropped (that is, not added to the shorter packed list). What happens if a packed list of size K_old existed (from a previous time) before we start the above process? In one example, do the following: re-order the list of N new SRLGs in such a way that the first K_old members are the same as the members of the original packed list (if any member of the older packed list is no longer a member of the new list of N SRLGs, then that member is disregarded). The above will ensure that there is a maximal chance of retaining members of the original packed list. A new SRLG will be added to the list only if it is not dominated by any of the original members of the packed list. An original member of the packed list will be dropped only if it is strictly dominated by a different SRLG in the new environment. As described herein, various embodiments provide for compression of a list of SRLGs such that only the dominant ones (from an original list) are included in the compressed list. In one example, the compressed list can be generated dynamically. In another example, the compressed list can be sent (e.g., dynamically) to one or more routers in a network. In various examples, the compressed list can accurately provide necessary risk information, can enable efficient setup of bypass tunnels, and/or can improve failure analysis applications. 
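The packing procedure described above, including the re-ordering step that maximally reuses a previous packed list, can be sketched as follows. Function names and the signature representation (an SRLG as a mapping from failed link bundle to bandwidth failure fraction p_ij) are illustrative assumptions, not an implementation given by the disclosure:

```python
def dominates(a, b):
    """SRLG a dominates b: a fails every bundle b fails, at least as severely."""
    return all(bundle in a and a[bundle] >= frac for bundle, frac in b.items())

def pack(srlgs, previous_packed=None):
    """Compress an ordered list of (name, signature) pairs.

    signature: {link_bundle: fraction_of_bandwidth_failed}, 0 < fraction <= 1.
    previous_packed: names from an earlier packed list; moving them to the
    front maximally reuses the earlier result (the dynamic variation above).
    """
    order = dict(srlgs)
    if previous_packed:
        front = [n for n in previous_packed if n in order]
        rest = [n for n in order if n not in set(front)]
        order = {n: order[n] for n in front + rest}
    packed = []
    for name, sig in order.items():
        if any(dominates(order[p], sig) for p in packed):
            continue  # dominated by an existing member: drops out
        # Evicting every dominated member is equivalent to "replace the first
        # dominated member, then remove the others" in the prose above.
        packed = [p for p in packed if not dominates(sig, order[p])]
        packed.append(name)
    return packed
```

With the seven-SRLG example of FIG. 2A, pack returns the packed list S2, S3, S4; after the L3 variation, passing that result as previous_packed retains S2, S3 and S4 at the front and appends S1, mirroring the maximal-reuse behavior.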
As described herein, various embodiments can enable savings in setup time (e.g., setup time for a desired router configuration), can reduce (or minimize) errors, and/or can avoid potentially expensive paths. As described herein, various embodiments can:
- be applied to optical SRLGs for FRR and/or failure analysis;
- provide a mechanism that accounts for partial failure(s) vs. failure of a whole span;
- provide a mechanism that accounts for a fraction of bandwidth;
- provide a mechanism that covers worst case failures;
- provide a mechanism that is dynamic;
- provide a mechanism that takes into account (e.g., periodically) network changes over time;
- provide a mechanism that uses one or more secondary factors to decide which SRLG from a list to include in a shortened (or compressed) version of the list;
- be applied to a configuration in which link bundles are not completely independent;
- be applied to a network that is growing and in which more link bundles and/or SRLGs are dynamically added;
- be applied to a network that is shrinking and in which link bundles and/or SRLGs are dynamically deleted;
- be applied in a dynamic manner depending upon network conditions (e.g., depending on network conditions, dynamically add link bundles/SRLGs or dynamically delete link bundles/SRLGs).
Referring now to FIG. 3, a block diagram 300 is shown illustrating an example, non-limiting embodiment of a virtualized communication network in accordance with various aspects described herein. 
In particular, a virtualized communication network is presented that can be used to implement some or all of the subsystems and functions of system 100 of FIG. 1, some or all of the subsystems and functions of system 200 of FIG. 2A, some or all of the features of the lists of FIG. 2B, some or all of method 2000 of FIG. 2C, some or all of method 2100 of FIG. 2D and/or some or all of method 2200 of FIG. 2E. For example, virtualized communication network 300 can facilitate in whole or in part dynamic compression applied to a list of Shared Risk Link Groups (as well as transmission of such a compressed list of Shared Risk Link Groups to one or more network routers (e.g., in real-time) for use in FRR and/or other analyses). In particular, a cloud networking architecture is shown that leverages cloud technologies and supports rapid innovation and scalability via a transport layer 350, a virtualized network function cloud 325 and/or one or more cloud computing environments 375. In various embodiments, this cloud networking architecture is an open architecture that leverages application programming interfaces (APIs); reduces complexity from services and operations; supports more nimble business models; and rapidly and seamlessly scales to meet evolving customer requirements including traffic growth, diversity of traffic types, and diversity of performance and reliability expectations. In contrast to traditional network elements, which are typically integrated to perform a single function, the virtualized communication network employs virtual network elements (VNEs) 330, 332, 334, etc. that perform some or all of the functions of network elements 150, 152, 154, 156, etc. For example, the network architecture can provide a substrate of networking capability, often called Network Function Virtualization Infrastructure (NFVI) or simply infrastructure, that is capable of being directed with software and Software Defined Networking (SDN) protocols to perform a broad variety of network functions and services. 
This infrastructure can include several types of substrates. The most typical type of substrate is servers that support Network Function Virtualization (NFV), followed by packet forwarding capabilities based on generic computing resources, with specialized network technologies brought to bear when general purpose processors or general purpose integrated circuit devices offered by merchants (referred to herein as merchant silicon) are not appropriate. In this case, communication services can be implemented as cloud-centric workloads. As an example, a traditional network element 150 (shown in FIG. 1), such as an edge router, can be implemented via a VNE 330 composed of NFV software modules, merchant silicon, and associated controllers. The software can be written so that increasing workload consumes incremental resources from a common resource pool, and moreover so that it is elastic: the resources are only consumed when needed. In a similar fashion, other network elements such as other routers, switches, edge caches, and middle-boxes are instantiated from the common resource pool. Such sharing of infrastructure across a broad set of uses makes planning and growing infrastructure easier to manage. In an embodiment, the transport layer 350 includes fiber, cable, wired and/or wireless transport elements, network elements and interfaces to provide broadband access 110, wireless access 120, voice access 130, media access 140 and/or access to content sources 175 for distribution of content to any or all of the access technologies. In particular, in some cases a network element needs to be positioned at a specific place, and this allows for less sharing of common infrastructure. Other times, the network elements have specific physical layer adapters that cannot be abstracted or virtualized, and might require special DSP code and analog front-ends (AFEs) that do not lend themselves to implementation as VNEs 330, 332 or 334. These network elements can be included in transport layer 350. 
The virtualized network function cloud 325 interfaces with the transport layer 350 to provide the VNEs 330, 332, 334, etc. to provide specific NFVs. In particular, the virtualized network function cloud 325 leverages cloud operations, applications, and architectures to support networking workloads. The virtualized network elements 330, 332 and 334 can employ network function software that provides either a one-for-one mapping of traditional network element function or alternately some combination of network functions designed for cloud computing. For example, VNEs 330, 332 and 334 can include route reflectors, domain name system (DNS) servers, and dynamic host configuration protocol (DHCP) servers, system architecture evolution (SAE) and/or mobility management entity (MME) gateways, broadband network gateways, IP edge routers for IP-VPN, Ethernet and other services, load balancers, distributors and other network elements. Because these elements don't typically need to forward large amounts of traffic, their workload can be distributed across a number of servers, each of which adds a portion of the capability, and which overall creates an elastic function with higher availability than its former monolithic version. These virtual network elements 330, 332, 334, etc. can be instantiated and managed using an orchestration approach similar to those used in cloud compute services. The cloud computing environments 375 can interface with the virtualized network function cloud 325 via APIs that expose functional capabilities of the VNEs 330, 332, 334, etc. to provide the flexible and expanded capabilities to the virtualized network function cloud 325. In particular, network workloads may have applications distributed across the virtualized network function cloud 325 and cloud computing environment 375 and in the commercial cloud, or might simply orchestrate workloads supported entirely in NFV infrastructure from these third party locations. 
Turning now to FIG. 4, there is illustrated a block diagram of a computing environment in accordance with various aspects described herein. In order to provide additional context for various embodiments of the embodiments described herein, FIG. 4 and the following discussion are intended to provide a brief, general description of a suitable computing environment 400 in which the various embodiments of the subject disclosure can be implemented. In particular, computing environment 400 can be used in the implementation of network elements 150, 152, 154, 156, access terminal 112, base station or access point 122, switching device 132, media terminal 142, and/or VNEs 330, 332, 334, etc. Each of these devices can be implemented via computer-executable instructions that can run on one or more computers, and/or in combination with other program modules and/or as a combination of hardware and software. For example, computing environment 400 can facilitate in whole or in part dynamic compression applied to a list of Shared Risk Link Groups (as well as transmission of such a compressed list of Shared Risk Link Groups to one or more network routers (e.g., in real-time) for use in FRR and/or other analyses). Generally, program modules comprise routines, programs, components, data structures, etc., that perform particular tasks or implement particular abstract data types. Moreover, those skilled in the art will appreciate that the methods can be practiced with other computer system configurations, comprising single-processor or multiprocessor computer systems, minicomputers, mainframe computers, as well as personal computers, hand-held computing devices, microprocessor-based or programmable consumer electronics, and the like, each of which can be operatively coupled to one or more associated devices. 
As used herein, a processing circuit includes one or more processors as well as other application specific circuits such as an application specific integrated circuit, digital logic circuit, state machine, programmable gate array or other circuit that processes input signals or data and that produces output signals or data in response thereto. It should be noted that any functions and features described herein in association with the operation of a processor could likewise be performed by a processing circuit. The illustrated embodiments of the embodiments herein can be also practiced in distributed computing environments where certain tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules can be located in both local and remote memory storage devices. Computing devices typically comprise a variety of media, which can comprise computer-readable storage media and/or communications media, which two terms are used herein differently from one another as follows. Computer-readable storage media can be any available storage media that can be accessed by the computer and comprises both volatile and nonvolatile media, removable and non-removable media. By way of example, and not limitation, computer-readable storage media can be implemented in connection with any method or technology for storage of information such as computer-readable instructions, program modules, structured data or unstructured data. 
Computer-readable storage media can comprise, but are not limited to, random access memory (RAM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), flash memory or other memory technology, compact disk read only memory (CD-ROM), digital versatile disk (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices or other tangible and/or non-transitory media which can be used to store desired information. In this regard, the terms “tangible” or “non-transitory” herein as applied to storage, memory or computer-readable media, are to be understood to exclude only propagating transitory signals per se as modifiers and do not relinquish rights to all standard storage, memory or computer-readable media that are not only propagating transitory signals per se. Computer-readable storage media can be accessed by one or more local or remote computing devices, e.g., via access requests, queries or other data retrieval protocols, for a variety of operations with respect to the information stored by the medium. Communications media typically embody computer-readable instructions, data structures, program modules or other structured or unstructured data in a data signal such as a modulated data signal, e.g., a carrier wave or other transport mechanism, and comprise any information delivery or transport media. The term “modulated data signal” or signals refers to a signal that has one or more of its characteristics set or changed in such a manner as to encode information in one or more signals. By way of example, and not limitation, communication media comprise wired media, such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media. With reference again to FIG. 4, the example environment can comprise a computer 402, the computer 402 comprising a processing unit 404, a system memory 406 and a system bus 408. 
The system bus 408 couples system components including, but not limited to, the system memory 406 to the processing unit 404. The processing unit 404 can be any of various commercially available processors. Dual microprocessors and other multiprocessor architectures can also be employed as the processing unit 404. The system bus 408 can be any of several types of bus structure that can further interconnect to a memory bus (with or without a memory controller), a peripheral bus, and a local bus using any of a variety of commercially available bus architectures. The system memory 406 comprises ROM 410 and RAM 412. A basic input/output system (BIOS) can be stored in a non-volatile memory such as ROM, erasable programmable read only memory (EPROM), EEPROM, which BIOS contains the basic routines that help to transfer information between elements within the computer 402, such as during startup. The RAM 412 can also comprise a high-speed RAM such as static RAM for caching data. The computer 402 further comprises an internal hard disk drive (HDD) 414 (e.g., EIDE, SATA), which internal HDD 414 can also be configured for external use in a suitable chassis (not shown), a magnetic floppy disk drive (FDD) 416 (e.g., to read from or write to a removable diskette 418) and an optical disk drive 420 (e.g., reading a CD-ROM disk 422 or, to read from or write to other high capacity optical media such as the DVD). The HDD 414, magnetic FDD 416 and optical disk drive 420 can be connected to the system bus 408 by a hard disk drive interface 424, a magnetic disk drive interface 426 and an optical drive interface 428, respectively. The hard disk drive interface 424 for external drive implementations comprises at least one or both of Universal Serial Bus (USB) and Institute of Electrical and Electronics Engineers (IEEE) 1394 interface technologies. Other external drive connection technologies are within contemplation of the embodiments described herein. 
The drives and their associated computer-readable storage media provide nonvolatile storage of data, data structures, computer-executable instructions, and so forth. For the computer 402, the drives and storage media accommodate the storage of any data in a suitable digital format. Although the description of computer-readable storage media above refers to a hard disk drive (HDD), a removable magnetic diskette, and a removable optical media such as a CD or DVD, it should be appreciated by those skilled in the art that other types of storage media which are readable by a computer, such as zip drives, magnetic cassettes, flash memory cards, cartridges, and the like, can also be used in the example operating environment, and further, that any such storage media can contain computer-executable instructions for performing the methods described herein. A number of program modules can be stored in the drives and RAM 412, comprising an operating system 430, one or more application programs 432, other program modules 434 and program data 436. All or portions of the operating system, applications, modules, and/or data can also be cached in the RAM 412. The systems and methods described herein can be implemented utilizing various commercially available operating systems or combinations of operating systems. A user can enter commands and information into the computer 402 through one or more wired/wireless input devices, e.g., a keyboard 438 and a pointing device, such as a mouse 440. Other input devices (not shown) can comprise a microphone, an infrared (IR) remote control, a joystick, a game pad, a stylus pen, touch screen or the like. These and other input devices are often connected to the processing unit 404 through an input device interface 442 that can be coupled to the system bus 408, but can be connected by other interfaces, such as a parallel port, an IEEE 1394 serial port, a game port, a universal serial bus (USB) port, an IR interface, etc. 
A monitor 444 or other type of display device can be also connected to the system bus 408 via an interface, such as a video adapter 446. It will also be appreciated that in alternative embodiments, a monitor 444 can also be any display device (e.g., another computer having a display, a smart phone, a tablet computer, etc.) for receiving display information associated with computer 402 via any communication means, including via the Internet and cloud-based networks. In addition to the monitor 444, a computer typically comprises other peripheral output devices (not shown), such as speakers, printers, etc. The computer 402 can operate in a networked environment using logical connections via wired and/or wireless communications to one or more remote computers, such as a remote computer(s) 448. The remote computer(s) 448 can be a workstation, a server computer, a router, a personal computer, portable computer, microprocessor-based entertainment appliance, a peer device or other common network node, and typically comprises many or all of the elements described relative to the computer 402, although, for purposes of brevity, only a remote memory/storage device 450 is illustrated. The logical connections depicted comprise wired/wireless connectivity to a local area network (LAN) 452 and/or larger networks, e.g., a wide area network (WAN) 454. Such LAN and WAN networking environments are commonplace in offices and companies, and facilitate enterprise-wide computer networks, such as intranets, all of which can connect to a global communications network, e.g., the Internet. When used in a LAN networking environment, the computer 402 can be connected to the LAN 452 through a wired and/or wireless communication network interface or adapter 456. The adapter 456 can facilitate wired or wireless communication to the LAN 452, which can also comprise a wireless AP disposed thereon for communicating with the adapter 456. 
When used in a WAN networking environment, the computer 402 can comprise a modem 458, can be connected to a communications server on the WAN 454, or can have other means for establishing communications over the WAN 454, such as by way of the Internet. The modem 458, which can be internal or external and a wired or wireless device, can be connected to the system bus 408 via the input device interface 442. In a networked environment, program modules depicted relative to the computer 402, or portions thereof, can be stored in the remote memory/storage device 450. It will be appreciated that the network connections shown are examples and other means of establishing a communications link between the computers can be used. The computer 402 can be operable to communicate with any wireless devices or entities operatively disposed in wireless communication, e.g., a printer, scanner, desktop and/or portable computer, portable data assistant, communications satellite, any piece of equipment or location associated with a wirelessly detectable tag (e.g., a kiosk, news stand, restroom), and telephone. This can comprise Wireless Fidelity (Wi-Fi) and BLUETOOTH® wireless technologies. Thus, the communication can be a predefined structure as with a conventional network or simply an ad hoc communication between at least two devices. Wi-Fi can allow connection to the Internet from a couch at home, a bed in a hotel room or a conference room at work, without wires. Wi-Fi is a wireless technology similar to that used in a cell phone that enables such devices, e.g., computers, to send and receive data indoors and out; anywhere within the range of a base station. Wi-Fi networks use radio technologies called IEEE 802.11 (a, b, g, n, ac, ag, etc.) to provide secure, reliable, fast wireless connectivity. A Wi-Fi network can be used to connect computers to each other, to the Internet, and to wired networks (which can use IEEE 802.3 or Ethernet). 
Wi-Fi networks operate in the unlicensed 2.4 and 5 GHz radio bands, for example, or with products that contain both bands (dual band), so the networks can provide real-world performance similar to the basic 10BaseT wired Ethernet networks used in many offices. Turning now to FIG. 5, an embodiment 500 of a mobile network platform 510 is shown that is an example of network elements 150, 152, 154, 156, and/or VNEs 330, 332, 334, etc. For example, platform 510 can facilitate in whole or in part dynamic compression applied to a list of Shared Risk Link Groups (as well as transmission of such compressed list of Shared Risk Link Groups to one or more network routers (e.g., in real-time) for use in FRR and/or other analyses). In one or more embodiments, the mobile network platform 510 can generate and receive signals transmitted and received by base stations or access points such as base station or access point 122. Generally, mobile network platform 510 can comprise components, e.g., nodes, gateways, interfaces, servers, or disparate platforms, that facilitate both packet-switched (PS) (e.g., internet protocol (IP), frame relay, asynchronous transfer mode (ATM)) and circuit-switched (CS) traffic (e.g., voice and data), as well as control generation for networked wireless telecommunication. As a non-limiting example, mobile network platform 510 can be included in telecommunications carrier networks, and can be considered carrier-side components as discussed elsewhere herein. Mobile network platform 510 comprises CS gateway node(s) 512 which can interface CS traffic received from legacy networks like telephony network(s) 540 (e.g., public switched telephone network (PSTN), or public land mobile network (PLMN)) or a signaling system #7 (SS7) network 560. CS gateway node(s) 512 can authorize and authenticate traffic (e.g., voice) arising from such networks. 
Additionally, CS gateway node(s) 512 can access mobility, or roaming, data generated through SS7 network 560; for instance, mobility data stored in a visited location register (VLR), which can reside in memory 530. Moreover, CS gateway node(s) 512 interfaces CS-based traffic and signaling with PS gateway node(s) 518. As an example, in a 3GPP UMTS network, CS gateway node(s) 512 can be realized at least in part in gateway GPRS support node(s) (GGSN). It should be appreciated that functionality and specific operation of CS gateway node(s) 512, PS gateway node(s) 518, and serving node(s) 516 is provided and dictated by radio technology(ies) utilized by mobile network platform 510 for telecommunication over a radio access network 520 with other devices, such as a radiotelephone 575. In addition to receiving and processing CS-switched traffic and signaling, PS gateway node(s) 518 can authorize and authenticate PS-based data sessions with served mobile devices. Data sessions can comprise traffic, or content(s), exchanged with networks external to the mobile network platform 510, like wide area network(s) (WANs) 550, enterprise network(s) 570, and service network(s) 580; these networks, which can be embodied in local area network(s) (LANs), can also be interfaced with mobile network platform 510 through PS gateway node(s) 518. It is to be noted that WANs 550 and enterprise network(s) 570 can embody, at least in part, a service network(s) like IP multimedia subsystem (IMS). Based on radio technology layer(s) available in technology resource(s) or radio access network 520, PS gateway node(s) 518 can generate packet data protocol contexts when a data session is established; other data structures that facilitate routing of packetized data also can be generated. To that end, in an aspect, PS gateway node(s) 518 can comprise a tunnel interface (e.g., tunnel termination gateway (TTG) in 3GPP UMTS network(s) (not shown)) which can facilitate packetized communication with disparate wireless network(s), such as Wi-Fi networks. 
In embodiment 500, mobile network platform 510 also comprises serving node(s) 516 that, based upon available radio technology layer(s) within technology resource(s) in the radio access network 520, convey the various packetized flows of data streams received through PS gateway node(s) 518. It is to be noted that for technology resource(s) that rely primarily on CS communication, server node(s) can deliver traffic without reliance on PS gateway node(s) 518; for example, server node(s) can embody at least in part a mobile switching center. As an example, in a 3GPP UMTS network, serving node(s) 516 can be embodied in serving GPRS support node(s) (SGSN). For radio technologies that exploit packetized communication, server(s) 514 in mobile network platform 510 can execute numerous applications that can generate multiple disparate packetized data streams or flows, and manage (e.g., schedule, queue, format . . . ) such flows. Such application(s) can comprise add-on features to standard services (for example, provisioning, billing, customer support . . . ) provided by mobile network platform 510. Data streams (e.g., content(s) that are part of a voice call or data session) can be conveyed to PS gateway node(s) 518 for authorization/authentication and initiation of a data session, and to serving node(s) 516 for communication thereafter. In addition to application server(s), server(s) 514 can comprise utility server(s); a utility server can comprise a provisioning server, an operations and maintenance server, a security server that can implement at least in part a certificate authority and firewalls as well as other security mechanisms, and the like. In an aspect, security server(s) secure communication served through mobile network platform 510 to ensure the network's operation and data integrity in addition to authorization and authentication procedures that CS gateway node(s) 512 and PS gateway node(s) 518 can enact. 
Moreover, provisioning server(s) can provision services from external network(s) like networks operated by a disparate service provider; for instance, WAN 550 or Global Positioning System (GPS) network(s) (not shown). Provisioning server(s) can also provision coverage through networks associated to mobile network platform 510 (e.g., deployed and operated by the same service provider), such as the distributed antenna networks shown in FIG. 1 that enhance wireless service coverage. It is to be noted that server(s) 514 can comprise one or more processors configured to confer at least in part the functionality of mobile network platform 510. To that end, the one or more processors can execute code instructions stored in memory 530, for example. It should be appreciated that server(s) 514 can comprise a content manager, which operates in substantially the same manner as described hereinbefore. In example embodiment 500, memory 530 can store information related to operation of mobile network platform 510. Other operational information can comprise provisioning information of mobile devices served through mobile network platform 510, subscriber databases; application intelligence, pricing schemes, e.g., promotional rates, flat-rate programs, couponing campaigns; technical specification(s) consistent with telecommunication protocols for operation of disparate radio, or wireless, technology layers; and so forth. Memory 530 can also store information from at least one of telephony network(s) 540, WAN 550, SS7 network 560, or enterprise network(s) 570. In an aspect, memory 530 can be, for example, accessed as part of a data store component or as a remotely connected memory store. In order to provide a context for the various aspects of the disclosed subject matter, FIG. 5 and the following discussion are intended to provide a brief, general description of a suitable environment in which the various aspects of the disclosed subject matter can be implemented. 
While the subject matter has been described above in the general context of computer-executable instructions of a computer program that runs on a computer and/or computers, those skilled in the art will recognize that the disclosed subject matter also can be implemented in combination with other program modules. Generally, program modules comprise routines, programs, components, data structures, etc. that perform particular tasks and/or implement particular abstract data types. Turning now to FIG. 6, an illustrative embodiment of a communication device 600 is shown. The communication device 600 can serve as an illustrative embodiment of devices such as data terminals 114, mobile devices 124, vehicle 126, display devices 144 or other client devices for communication via communications network 125. For example, communication device 600 can facilitate in whole or in part dynamic compression applied to a list of Shared Risk Link Groups (as well as transmission of such compressed list of Shared Risk Link Groups to one or more network routers (e.g., in real-time) for use in FRR and/or other analyses). The communication device 600 can comprise a wireline and/or wireless transceiver 602 (herein transceiver 602), a user interface (UI) 604, a power supply 614, a location receiver 616, a motion sensor 618, an orientation sensor 620, and a controller 606 for managing operations thereof. The transceiver 602 can support short-range or long-range wireless access technologies such as Bluetooth®, ZigBee®, WiFi, DECT, or cellular communication technologies, just to mention a few (Bluetooth® and ZigBee® are trademarks registered by the Bluetooth® Special Interest Group and the ZigBee® Alliance, respectively). Cellular technologies can include, for example, CDMA-1X, UMTS/HSDPA, GSM/GPRS, TDMA/EDGE, EV/DO, WiMAX, SDR, LTE, as well as other next generation wireless communication technologies as they arise. 
The transceiver 602 can also be adapted to support circuit-switched wireline access technologies (such as PSTN), packet-switched wireline access technologies (such as TCP/IP, VoIP, etc.), and combinations thereof. The UI 604 can include a depressible or touch-sensitive keypad 608 with a navigation mechanism such as a roller ball, a joystick, a mouse, or a navigation disk for manipulating operations of the communication device 600. The keypad 608 can be an integral part of a housing assembly of the communication device 600 or an independent device operably coupled thereto by a tethered wireline interface (such as a USB cable) or a wireless interface supporting for example Bluetooth®. The keypad 608 can represent a numeric keypad commonly used by phones, and/or a QWERTY keypad with alphanumeric keys. The UI 604 can further include a display 610 such as monochrome or color LCD (Liquid Crystal Display), OLED (Organic Light Emitting Diode) or other suitable display technology for conveying images to an end user of the communication device 600. In an embodiment where the display 610 is touch-sensitive, a portion or all of the keypad 608 can be presented by way of the display 610 with navigation features. The display 610 can use touch screen technology to also serve as a user interface for detecting user input. As a touch screen display, the communication device 600 can be adapted to present a user interface having graphical user interface (GUI) elements that can be selected by a user with a touch of a finger. The display 610 can be equipped with capacitive, resistive or other forms of sensing technology to detect how much surface area of a user's finger has been placed on a portion of the touch screen display. This sensing information can be used to control the manipulation of the GUI elements or other functions of the user interface. 
The display 610 can be an integral part of the housing assembly of the communication device 600 or an independent device communicatively coupled thereto by a tethered wireline interface (such as a cable) or a wireless interface. The UI 604 can also include an audio system 612 that utilizes audio technology for conveying low volume audio (such as audio heard in proximity of a human ear) and high volume audio (such as speakerphone for hands free operation). The audio system 612 can further include a microphone for receiving audible signals of an end user. The audio system 612 can also be used for voice recognition applications. The UI 604 can further include an image sensor 613 such as a charge-coupled device (CCD) camera for capturing still or moving images. The power supply 614 can utilize common power management technologies such as replaceable and rechargeable batteries, supply regulation technologies, and/or charging system technologies for supplying energy to the components of the communication device 600 to facilitate long-range or short-range portable communications. Alternatively, or in combination, the charging system can utilize external power sources such as DC power supplied over a physical interface such as a USB port or other suitable tethering technologies. The location receiver 616 can utilize location technology such as a global positioning system (GPS) receiver capable of assisted GPS for identifying a location of the communication device 600 based on signals generated by a constellation of GPS satellites, which can be used for facilitating location services such as navigation. The motion sensor 618 can utilize motion sensing technology such as an accelerometer, a gyroscope, or other suitable motion sensing technology to detect motion of the communication device 600 in three-dimensional space. 
The orientation sensor 620 can utilize orientation sensing technology such as a magnetometer to detect the orientation of the communication device 600 (north, south, west, and east, as well as combined orientations in degrees, minutes, or other suitable orientation metrics). The communication device 600 can use the transceiver 602 to also determine a proximity to a cellular, WiFi, Bluetooth®, or other wireless access points by sensing techniques such as utilizing a received signal strength indicator (RSSI) and/or signal time of arrival (TOA) or time of flight (TOF) measurements. The controller 606 can utilize computing technologies such as a microprocessor, a digital signal processor (DSP), programmable gate arrays, application specific integrated circuits, and/or a video processor with associated storage memory such as Flash, ROM, RAM, SRAM, DRAM or other storage technologies for executing computer instructions, controlling, and processing data supplied by the aforementioned components of the communication device 600. Other components not shown in FIG. 6 can be used in one or more embodiments of the subject disclosure. For instance, the communication device 600 can include a slot for adding or removing an identity module such as a Subscriber Identity Module (SIM) card or Universal Integrated Circuit Card (UICC). SIM or UICC cards can be used for identifying subscriber services, executing programs, storing subscriber data, and so on. The terms “first,” “second,” “third,” and so forth, as used in the claims, unless otherwise clear by context, are for clarity only and do not otherwise indicate or imply any order in time. For instance, “a first determination,” “a second determination,” and “a third determination” do not indicate or imply that the first determination is to be made before the second determination, or vice versa, etc. 
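The RSSI- and time-of-flight-based proximity sensing described above can be illustrated with two standard conversions: the log-distance path-loss model for RSSI, and a speed-of-light conversion for TOF. The reference transmit power (-40 dBm measured at 1 m) and the path-loss exponent (2, i.e., free space) used below are assumed example values for illustration only, not parameters of the subject disclosure.

```python
import math

def rssi_to_distance(rssi_dbm, tx_power_dbm=-40.0, path_loss_exp=2.0):
    """Estimate distance in meters from a measured RSSI using the
    log-distance path-loss model: RSSI = tx_power - 10*n*log10(d),
    where tx_power is the RSSI expected at 1 m and n is the
    path-loss exponent (2 in free space, higher indoors)."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * path_loss_exp))

def tof_to_distance(tof_seconds):
    """Convert a time-of-flight measurement to distance in meters;
    the radio signal propagates at the speed of light."""
    return 299_792_458.0 * tof_seconds

print(round(rssi_to_distance(-40.0), 2))  # at the 1 m reference: 1.0
print(round(rssi_to_distance(-60.0), 2))  # 20 dB weaker -> 10.0
print(round(tof_to_distance(1e-7), 1))    # 100 ns round number -> 30.0
```

In practice a device would smooth RSSI over many samples before applying such a model, since individual readings fluctuate with fading and obstructions.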
In the subject specification, terms such as “store,” “storage,” “data store,” “data storage,” “database,” and substantially any other information storage component relevant to operation and functionality of a component, refer to “memory components,” or entities embodied in a “memory” or components comprising the memory. It will be appreciated that the memory components described herein can be either volatile memory or nonvolatile memory, or can comprise both volatile and nonvolatile memory; by way of illustration, and not limitation, memory components can comprise volatile memory, non-volatile memory, disk storage, and memory storage. Further, nonvolatile memory can be included in read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable ROM (EEPROM), or flash memory. Volatile memory can comprise random access memory (RAM), which acts as external cache memory. By way of illustration and not limitation, RAM is available in many forms such as synchronous RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), and direct Rambus RAM (DRRAM). Additionally, the disclosed memory components of systems or methods herein are intended to comprise, without being limited to comprising, these and any other suitable types of memory. Moreover, it will be noted that the disclosed subject matter can be practiced with other computer system configurations, comprising single-processor or multiprocessor computer systems, mini-computing devices, mainframe computers, as well as personal computers, hand-held computing devices (e.g., PDA, phone, smartphone, watch, tablet computers, netbook computers, etc.), microprocessor-based or programmable consumer or industrial electronics, and the like. 
The illustrated aspects can also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network; however, some if not all aspects of the subject disclosure can be practiced on stand-alone computers. In a distributed computing environment, program modules can be located in both local and remote memory storage devices. In one or more embodiments, information regarding use of services can be generated including services being accessed, media consumption history, user preferences, and so forth. This information can be obtained by various methods including user input, detecting types of communications (e.g., video content vs. audio content), analysis of content streams, sampling, and so forth. The generating, obtaining and/or monitoring of this information can be responsive to an authorization provided by the user. In one or more embodiments, an analysis of data can be subject to authorization from user(s) associated with the data, such as an opt-in, an opt-out, acknowledgement requirements, notifications, selective authorization based on types of data, and so forth. Some of the embodiments described herein can also employ artificial intelligence (AI) to facilitate automating one or more features described herein. The embodiments (e.g., in connection with automatically applying dynamic compression to a list of Shared Risk Link Groups, as well as automatic transmission of such compressed list of Shared Risk Link Groups to one or more network routers (e.g., in real-time) for use in FRR and/or other analyses) can employ various AI-based schemes for carrying out various embodiments thereof. Moreover, a classifier can be employed to determine a ranking or priority of each SRLG, each SRLG list, and/or each router. A classifier is a function that maps an input attribute vector, x=(x1, x2, x3, x4, . . . , xn), to a confidence that the input belongs to a class, that is, f(x)=confidence (class). Such classification can employ a probabilistic and/or statistical-based analysis (e.g., factoring into the analysis utilities and costs) to determine or infer an action that a user desires to be automatically performed. A support vector machine (SVM) is an example of a classifier that can be employed. The SVM operates by finding a hypersurface in the space of possible inputs, where the hypersurface attempts to split the triggering criteria from the non-triggering events. Intuitively, this makes the classification correct for testing data that is near, but not identical to, training data. Other directed and undirected model classification approaches, comprising, e.g., naïve Bayes, Bayesian networks, decision trees, neural networks, fuzzy logic models, and probabilistic classification models providing different patterns of independence, can be employed. Classification as used herein also is inclusive of statistical regression that is utilized to develop models of priority. As will be readily appreciated, one or more of the embodiments can employ classifiers that are explicitly trained (e.g., via generic training data) as well as implicitly trained (e.g., via observing UE behavior, operator preferences, historical information, receiving extrinsic information). For example, SVMs can be configured via a learning or training phase within a classifier constructor and feature selection module. Thus, the classifier(s) can be used to automatically learn and perform a number of functions, including but not limited to automatically applying dynamic compression to a list of Shared Risk Link Groups (as well as automatic transmission of such compressed list of Shared Risk Link Groups to one or more network routers (e.g., in real-time) for use in FRR and/or other analyses), etc. 
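As a concrete illustration of the mapping f(x)=confidence(class), the sketch below trains a simple linear classifier in pure Python and converts the signed distance from the separating hyperplane into a confidence score via a logistic function. It uses perceptron updates rather than an SVM's margin-maximizing training, and the toy feature vectors and labels ("triggering" versus "non-triggering" events) are invented for illustration; nothing here is taken from the subject disclosure.

```python
import math

def train_perceptron(samples, labels, epochs=50, lr=0.1):
    """Fit a linear decision boundary w.x + b = 0 to the data.
    samples: list of feature tuples; labels: +1 or -1."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            margin = sum(wi * xi for wi, xi in zip(w, x)) + b
            if y * margin <= 0:  # misclassified: nudge the hyperplane
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def confidence(w, b, x):
    """Map the signed distance from the hyperplane to (0, 1):
    f(x) = confidence that x belongs to the +1 (triggering) class."""
    margin = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-margin))

# Toy, linearly separable events in a 2-D feature space.
X = [(0.0, 0.2), (0.3, 0.1), (0.9, 0.8), (1.0, 0.9)]
y = [-1, -1, 1, 1]
w, b = train_perceptron(X, y)
print(confidence(w, b, (0.95, 0.9)) > 0.5)  # near the +1 cluster: True
print(confidence(w, b, (0.05, 0.1)) < 0.5)  # near the -1 cluster: True
```

A margin-based trainer such as an SVM would choose the hyperplane that maximizes the gap between the two classes, which tends to generalize better to test data near, but not identical to, the training data.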
As used in some contexts in this application, in some embodiments, the terms “component,” “system” and the like are intended to refer to, or comprise, a computer-related entity or an entity related to an operational apparatus with one or more specific functionalities, wherein the entity can be either hardware, a combination of hardware and software, software, or software in execution. As an example, a component may be, but is not limited to being, a process running on a processor, a processor, an object, an executable, a thread of execution, computer-executable instructions, a program, and/or a computer. By way of illustration and not limitation, both an application running on a server and the server can be a component. One or more components may reside within a process and/or thread of execution and a component may be localized on one computer and/or distributed between two or more computers. In addition, these components can execute from various computer readable media having various data structures stored thereon. The components may communicate via local and/or remote processes such as in accordance with a signal having one or more data packets (e.g., data from one component interacting with another component in a local system, distributed system, and/or across a network such as the Internet with other systems via the signal). As another example, a component can be an apparatus with specific functionality provided by mechanical parts operated by electric or electronic circuitry, which is operated by a software or firmware application executed by a processor, wherein the processor can be internal or external to the apparatus and executes at least a part of the software or firmware application. 
As yet another example, a component can be an apparatus that provides specific functionality through electronic components without mechanical parts; the electronic components can comprise a processor therein to execute software or firmware that confers at least in part the functionality of the electronic components. While various components have been illustrated as separate components, it will be appreciated that multiple components can be implemented as a single component, or a single component can be implemented as multiple components, without departing from example embodiments. Further, the various embodiments can be implemented as a method, apparatus or article of manufacture using standard programming and/or engineering techniques to produce software, firmware, hardware or any combination thereof to control a computer to implement the disclosed subject matter. The term "article of manufacture" as used herein is intended to encompass a computer program accessible from any computer-readable device or computer-readable storage/communications media. For example, computer readable storage media can include, but are not limited to, magnetic storage devices (e.g., hard disk, floppy disk, magnetic strips), optical disks (e.g., compact disk (CD), digital versatile disk (DVD)), smart cards, and flash memory devices (e.g., card, stick, key drive). Of course, those skilled in the art will recognize many modifications can be made to this configuration without departing from the scope or spirit of the various embodiments. In addition, the words "example" and "exemplary" are used herein to mean serving as an instance or illustration. Any embodiment or design described herein as "example" or "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, use of the word example or exemplary is intended to present concepts in a concrete fashion.
As used in this application, the term "or" is intended to mean an inclusive "or" rather than an exclusive "or". That is, unless specified otherwise or clear from context, "X employs A or B" is intended to mean any of the natural inclusive permutations. That is, if X employs A; X employs B; or X employs both A and B, then "X employs A or B" is satisfied under any of the foregoing instances. In addition, the articles "a" and "an" as used in this application and the appended claims should generally be construed to mean "one or more" unless specified otherwise or clear from context to be directed to a singular form. Moreover, terms such as "user equipment," "mobile station," "mobile," "subscriber station," "access terminal," "terminal," "handset," "mobile device" (and/or terms representing similar terminology) can refer to a wireless device utilized by a subscriber or user of a wireless communication service to receive or convey data, control, voice, video, sound, gaming or substantially any data-stream or signaling-stream. The foregoing terms are utilized interchangeably herein and with reference to the related drawings. Furthermore, the terms "user," "subscriber," "customer," "consumer" and the like are employed interchangeably throughout, unless context warrants particular distinctions among the terms. It should be appreciated that such terms can refer to human entities or automated components supported through artificial intelligence (e.g., a capacity to make inference based, at least, on complex mathematical formalisms), which can provide simulated vision, sound recognition and so forth.
As employed herein, the term "processor" can refer to substantially any computing processing unit or device comprising, but not limited to comprising, single-core processors; single-processors with software multithread execution capability; multi-core processors; multi-core processors with software multithread execution capability; multi-core processors with hardware multithread technology; parallel platforms; and parallel platforms with distributed shared memory. Additionally, a processor can refer to an integrated circuit, an application specific integrated circuit (ASIC), a digital signal processor (DSP), a field programmable gate array (FPGA), a programmable logic controller (PLC), a complex programmable logic device (CPLD), a discrete gate or transistor logic, discrete hardware components or any combination thereof designed to perform the functions described herein. Processors can exploit nano-scale architectures such as, but not limited to, molecular and quantum-dot based transistors, switches and gates, in order to optimize space usage or enhance performance of user equipment. A processor can also be implemented as a combination of computing processing units. As used herein, terms such as "data store," "data storage," "database," and substantially any other information storage component relevant to operation and functionality of a component, refer to "memory components," or entities embodied in a "memory" or components comprising the memory. It will be appreciated that the memory components or computer-readable storage media, described herein can be either volatile memory or nonvolatile memory or can include both volatile and nonvolatile memory. What has been described above includes mere examples of various embodiments.
It is, of course, not possible to describe every conceivable combination of components or methodologies for purposes of describing these examples, but one of ordinary skill in the art can recognize that many further combinations and permutations of the present embodiments are possible. Accordingly, the embodiments disclosed and/or claimed herein are intended to embrace all such alterations, modifications and variations that fall within the spirit and scope of the appended claims. Furthermore, to the extent that the term “includes” is used in either the detailed description or the claims, such term is intended to be inclusive in a manner similar to the term “comprising” as “comprising” is interpreted when employed as a transitional word in a claim. In addition, a flow diagram may include a “start” and/or “continue” indication. The “start” and “continue” indications reflect that the steps presented can optionally be incorporated in or otherwise used in conjunction with other routines. In this context, “start” indicates the beginning of the first step presented and may be preceded by other activities not specifically shown. Further, the “continue” indication reflects that the steps presented may be performed multiple times and/or may be succeeded by other activities not specifically shown. Further, while a flow diagram indicates a particular ordering of steps, other orderings are likewise possible provided that the principles of causality are maintained. As may also be used herein, the term(s) “operably coupled to”, “coupled to”, and/or “coupling” includes direct coupling between items and/or indirect coupling between items via one or more intervening items. Such items and intervening items include, but are not limited to, junctions, communication paths, components, circuit elements, circuits, functional blocks, and/or devices. 
As an example of indirect coupling, a signal conveyed from a first item to a second item may be modified by one or more intervening items by modifying the form, nature or format of information in a signal, while one or more elements of the information in the signal are nevertheless conveyed in a manner that can be recognized by the second item. In a further example of indirect coupling, an action in a first item can cause a reaction on the second item, as a result of actions and/or reactions in one or more intervening items. Although specific embodiments have been illustrated and described herein, it should be appreciated that any arrangement which achieves the same or similar purpose may be substituted for the embodiments described or shown by the subject disclosure. The subject disclosure is intended to cover any and all adaptations or variations of various embodiments. Combinations of the above embodiments, and other embodiments not specifically described herein, can be used in the subject disclosure. For instance, one or more features from one or more embodiments can be combined with one or more features of one or more other embodiments. In one or more embodiments, features that are positively recited can also be negatively recited and excluded from the embodiment with or without replacement by another structural and/or functional feature. The steps or functions described with respect to the embodiments of the subject disclosure can be performed in any order. The steps or functions described with respect to the embodiments of the subject disclosure can be performed alone or in combination with other steps or functions of the subject disclosure, as well as from other embodiments or from other steps that have not been described in the subject disclosure. Further, more than or less than all of the features described with respect to an embodiment can also be utilized.
In the figures, like reference numerals refer to the same figure elements. DETAILED DESCRIPTION Various modifications to the disclosed embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of the present disclosure. Thus, the present invention is not limited to the embodiments shown. Overview The present disclosure describes systems and methods that facilitate fine-grain flow control (FGFC) in a network interface controller (NIC). The NIC allows a host to communicate with a data-driven network. The network can accommodate dynamic data traffic with fast, effective congestion control by maintaining state information of individual packet streams. More specifically, packets injected into the network of switches can be categorized into streams, which can be mapped to their layer-2, layer-3, or other protocol-specific header information. Each stream can be marked by a distinctive identifier that is local to an input port of a switch, and provided with a stream-specific input buffer so that each stream can be individually flow-controlled. In addition, packets of a respective stream can be acknowledged upon reaching the egress point of the network, and the acknowledgment packets can be sent back to the ingress point of the stream along the same data path in the reverse direction. As a result, each switch can obtain state information of active packet streams it is forwarding and can perform highly responsive, stream-specific flow control. Such flow control can allow the network to operate at higher capacity while providing versatile traffic-engineering capabilities. The embodiments described herein solve the problem of flow-level congestion management by (i) identifying a congestion-causing flow in the NIC, and (ii) throttling the forwarding rate for packets belonging to the flow at the NIC.
Network congestion in a network, such as a switch fabric, may exhaust packet buffers of the switches in the network. With existing technologies, a switch facing congestion can instruct an upstream switch to pause or slow the packet injection rate for a specific class of traffic. However, this class-level congestion control approach may impact all data flows of the class. For example, traffic from a number of applications can belong to the same class of traffic. Consequently, packets that are not causing the congestion can be adversely affected by such a congestion control policy. To solve this problem, the congested switch can convey flow-specific congestion notifications to a link partner, which can be a NIC on a host device. The congestion notification can generate a “back pressure” on a sequence of packets that belongs to the congestion-causing flow (e.g., an Internet Protocol (IP) level flow or an application-level flow) instead of throttling traffic from all applications and services of a traffic class. By identifying flow-level congestion, the switch can allow the NIC to facilitate fine-grain flow control (FGFC). In some embodiments, upon detecting congestion, a switch can identify a sequence of packets that have caused that congestion. Such a sequence of packets can be referred to as a flow. The switch can then provide this information to the link partner, such as a NIC, by sending a “turn off” control frame, which can be referred to as an XOFF frame. Upon receiving the XOFF frame, the NIC can refrain from sending packets for that flow and buffer the packets in the NIC. The NIC then relies on the switch to manage the flow. Based on the congestion associated with the flow, the switch may send control frames, which can be referred to as credit frames, to the NIC. Upon receiving the credit frames, the NIC can forward more packets belonging to the flow to the switch based on the respective amount indicated by the credit frames. 
This allows the NIC to limit the number of packets for the flow while facilitating regular forwarding for other flows. If the congestion is mitigated, the switch can send a “turn on” control frame, which can be referred to as an XON frame. Upon receiving the XON frame, the NIC releases the flow from FGFC and initiates regular forwarding for the packets belonging to the flow. One embodiment of the present invention provides a NIC. The NIC can be equipped with a network interface, an FGFC logic block, and a traffic management logic block. During operation, the network interface can determine that a control frame from a switch is associated with FGFC. The network interface can then identify a data flow indicated in the control frame for applying the FGFC. The FGFC logic block can insert information from the control frame into an entry of a data structure stored in the NIC. The traffic management logic block can identify the entry in the data structure based on one or more fields of a packet belonging to the flow. Subsequently, the traffic management logic block can determine whether the packet is allowed to be forwarded based on the information in the entry. In a variation on this embodiment, the network interface can determine whether to process the control frame at the network interface based on a type of the control frame. In a further variation, the network interface can provide information from one or more fields of the control frame to the traffic management logic block based on the type of the control frame. In a variation on this embodiment, the network interface can generate an event for the flow based on a duration value and a credit value from the information in the control frame. The event can be an internal control message that can indicate whether to initiate or terminate the FGFC for the flow. 
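The XOFF/credit/XON exchange described above can be viewed as a small per-flow state machine at the NIC: an XOFF frame pauses the flow and buffers its packets, credit frames release buffered data up to the granted amount, and an XON frame restores regular forwarding. The following Python sketch is illustrative only; the class and method names (`FgfcFlow`, `on_xoff`, `on_credit`, `on_xon`) are hypothetical, and packet lengths stand in for buffered packets.

```python
class FgfcFlow:
    """Illustrative per-flow FGFC state kept at the NIC (names are hypothetical)."""

    def __init__(self):
        self.paused = False   # set by an XOFF frame, cleared by an XON frame
        self.credit = 0       # bytes the switch has granted while the flow is paused
        self.backlog = []     # packet lengths buffered at the NIC for this flow

    def enqueue(self, packet_len):
        """Packet handed to the NIC for this flow; it is sent or buffered."""
        if not self.paused:
            return "forwarded"
        self.backlog.append(packet_len)
        return "buffered"

    def on_xoff(self):
        """XOFF frame: stop sending packets for this flow and buffer them."""
        self.paused = True

    def on_credit(self, grant):
        """Credit frame: release buffered packets up to the granted amount."""
        self.credit += grant
        sent = []
        while self.backlog and self.backlog[0] <= self.credit:
            pkt = self.backlog.pop(0)
            self.credit -= pkt
            sent.append(pkt)
        return sent

    def on_xon(self):
        """XON frame: leave FGFC and resume regular forwarding."""
        self.paused = False
        sent, self.backlog = self.backlog, []
        self.credit = 0
        return sent
```

In this sketch, only the congestion-causing flow is throttled: other `FgfcFlow` instances remain unpaused and keep forwarding normally, mirroring the flow-level (rather than class-level) behavior described above.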
In a variation on this embodiment, the FGFC logic block can insert the information into the entry by: (i) determining a duration value for applying the FGFC to the flow based on the information in the control frame, and (ii) updating a duration counter in the entry based on the duration value. In a variation on this embodiment, the FGFC logic block can insert the information into the entry by: (i) determining credit information, which indicates an amount of data of the flow that can be forwarded, from the information in the control frame, and (ii) updating a credit value in the entry based on the credit information. In a further variation, the traffic management logic block can allocate the packet to a message chopping unit (MCU) of a plurality of MCUs. The traffic management logic block can then arbitrate among the plurality of MCUs to select an MCU for forwarding the packet based on the credit value in the entry. In a variation on this embodiment, the FGFC logic block can insert the information into the entry by: (i) determining whether one or more fields match an existing entry in the data structure, (ii) determining a new entry in the data structure if no match is found, and (iii) inserting information from the one or more fields into the new entry. In a further variation, the FGFC logic block can determine whether the data structure has availability for a new entry. If the data structure does not have availability, the FGFC logic block can discard the control frame. In a variation on this embodiment, the entry can include one or more of: an identifier of the flow, which can be the index of the entry; a validity flag indicating whether the entry is valid; a duration counter indicating a duration value for applying FGFC to the flow; a credit value indicating an amount of data of the flow that can be forwarded; and an event queue identifier.
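The credit-based arbitration among MCUs described above can be approximated as a round-robin pass over the MCU modules that skips any module whose flow has exhausted its granted credit. This is a hedged sketch, not the hardware design: the function name, the `(mcu_id, flow_id, packet_len)` tuples, and the credit bookkeeping are assumptions introduced for illustration.

```python
def arbitrate(mcus, credits, start=0):
    """Round-robin pick among MCU modules whose flow still has forwarding credit.

    mcus: list of (mcu_id, flow_id, packet_len) tuples, one per MCU module.
    credits: dict mapping flow_id -> remaining credit in bytes; a flow with no
    entry is not under FGFC and is treated as having unlimited credit.
    Returns the selected mcu_id, or None if no MCU may forward. Illustrative only.
    """
    n = len(mcus)
    for i in range(n):
        mcu_id, flow, length = mcus[(start + i) % n]
        available = credits.get(flow, float("inf"))  # uncontrolled flows: no limit
        if available >= length:
            credits[flow] = available - length       # charge the flow's credit
            return mcu_id
    return None
```

A flow under FGFC whose credit is smaller than its pending packet is simply skipped, so other MCUs (and other flows) continue to make progress.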
In a variation on this embodiment, the FGFC logic block can be associated with the network interface or the traffic management logic block. In this disclosure, the description in conjunction with FIG. 1 is associated with the network architecture, and the description in conjunction with FIG. 2A and onward provides more details on the architecture and operations associated with a NIC that supports FGFC. In this disclosure, packet streams can also be referred to as "packet flows," or simply "flows." The data path traversed by a flow, together with its configuration information maintained by switches, can be referred to as a "flow channel." Furthermore, the terms "buffer" and "queue" are used interchangeably in this disclosure. FIG. 1 shows an exemplary network. In this example, a network 100 of switches, which can also be referred to as a "switch fabric," can include switches 102, 104, 106, 108, and 110. Each switch can have a unique address or ID within switch fabric 100. Various types of devices and networks can be coupled to a switch fabric. For example, a storage array 112 can be coupled to switch fabric 100 via switch 110; an InfiniBand (IB) based HPC network 114 can be coupled to switch fabric 100 via switch 108; a number of end hosts, such as host 116, can be coupled to switch fabric 100 via switch 104; and an IP/Ethernet network 118 can be coupled to switch fabric 100 via switch 102. In general, a switch can have edge ports and fabric ports. An edge port can couple to a device that is external to the fabric. A fabric port can couple to another switch within the fabric via a fabric link. Typically, traffic can be injected into switch fabric 100 via an ingress port of an edge switch, and leave switch fabric 100 via an egress port of another (or the same) edge switch. An ingress link can couple a NIC of an edge device (for example, an HPC end host) to an ingress edge port of an edge switch.
Switch fabric 100 can then transport the traffic to an egress edge switch, which in turn can deliver the traffic to a destination edge device via another NIC. Exemplary NIC Architecture FIG. 2A shows an exemplary NIC chip with a plurality of NICs. With reference to the example in FIG. 1, a NIC chip 200 can be a custom application-specific integrated circuit (ASIC) designed for host 116 to work with switch fabric 100. In this example, chip 200 can provide two independent NICs 202 and 204. A respective NIC of chip 200 can be equipped with a host interface (HI) (e.g., an interface for connecting to the host processor) and one High-speed Network Interface (HNI) for communicating with a link coupled to switch fabric 100 of FIG. 1. For example, NIC 202 can include an HI 210 and an HNI 220, and NIC 204 can include an HI 211 and an HNI 221. In some embodiments, HI 210 can be a peripheral component interconnect (PCI) or a peripheral component interconnect express (PCIe) interface. HI 210 can be coupled to a host via a host connection 201, which can include N (e.g., N can be 16 in some chips) PCIe Gen 4 lanes capable of operating at signaling rates up to 25 Gbps per lane. HNI 220 can facilitate a high-speed network connection 203, which can communicate with a link in switch fabric 100 of FIG. 1. HNI 220 can operate at aggregate rates of either 100 Gbps or 200 Gbps using M (e.g., M can be 4 in some chips) full-duplex serial lanes. Each of the M lanes can operate at 25 Gbps or 50 Gbps based on non-return-to-zero (NRZ) modulation or pulse amplitude modulation 4 (PAM4), respectively. HNI 220 can support the Institute of Electrical and Electronics Engineers (IEEE) 802.3 Ethernet-based protocols as well as an enhanced frame format that provides support for higher rates of small messages. NIC 202 can support one or more of: point-to-point message passing based on Message Passing Interface (MPI), remote memory access (RMA) operations, offloading and progression of bulk data collective operations, and Ethernet packet processing.
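The lane arithmetic above can be checked directly: with M = 4 full-duplex serial lanes, NRZ signaling at 25 Gbps per lane yields the 100 Gbps aggregate, and PAM4 at 50 Gbps per lane yields 200 Gbps. The helper below is purely illustrative and assumes only the per-lane figures stated in the text.

```python
def aggregate_rate_gbps(lanes, modulation):
    """Aggregate HNI rate from per-lane signaling, per the figures in the text.

    NRZ lanes run at 25 Gbps and PAM4 lanes at 50 Gbps, so M = 4 lanes
    yields the 100 or 200 Gbps aggregates mentioned above.
    """
    per_lane = {"NRZ": 25, "PAM4": 50}[modulation]
    return lanes * per_lane
```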
When the host issues an MPI message, NIC 202 can match the corresponding message type. Furthermore, NIC 202 can implement both eager protocol and rendezvous protocol for MPI, thereby offloading the corresponding operations from the host. Furthermore, the RMA operations supported by NIC 202 can include PUT, GET, and Atomic Memory Operations (AMO). NIC 202 can provide reliable transport. For example, if NIC 202 is a source NIC, NIC 202 can provide a retry mechanism for idempotent operations. Furthermore, a connection-based error detection and retry mechanism can be used for ordered operations that may manipulate a target state. The hardware of NIC 202 can maintain the state necessary for the retry mechanism. In this way, NIC 202 can remove the burden from the host (e.g., the software). The policy that dictates the retry mechanism can be specified by the host via the driver software, thereby ensuring flexibility in NIC 202. Furthermore, NIC 202 can facilitate triggered operations, a general-purpose mechanism for offloading, and progression of dependent sequences of operations, such as bulk data collectives. NIC 202 can support an application programming interface (API) (e.g., libfabric API) that facilitates fabric communication services provided by switch fabric 100 of FIG. 1 to applications running on host 116. NIC 202 can also support a low-level network programming interface, such as Portals API. In addition, NIC 202 can provide efficient Ethernet packet processing, which can include efficient transmission if NIC 202 is a sender, flow steering if NIC 202 is a target, and checksum computation. Moreover, NIC 202 can support virtualization (e.g., using containers or virtual machines). FIG. 2B shows an exemplary architecture of a NIC. In NIC 202, the port macro of HNI 220 can facilitate low-level Ethernet operations, such as physical coding sublayer (PCS) and media access control (MAC). In addition, NIC 202 can provide support for link layer retry (LLR).
Incoming packets can be parsed by parser 228 and stored in buffer 229. Buffer 229 can be a PFC buffer provisioned to buffer a threshold amount (e.g., one microsecond) of delay bandwidth. HNI 220 can also include control transmission unit 224 and control reception unit 226 for managing outgoing and incoming packets, respectively. NIC 202 can include a Command Queue (CQ) unit 230. CQ unit 230 can be responsible for fetching and issuing host side commands. CQ unit 230 can include command queues 232 and schedulers 234. Command queues 232 can include two independent sets of queues for initiator commands (PUT, GET, etc.) and target commands (Append, Search, etc.), respectively. Command queues 232 can be implemented as circular buffers maintained in the memory of NIC 202. Applications running on the host can write to command queues 232 directly. Schedulers 234 can include two separate schedulers for initiator commands and target commands, respectively. The initiator commands are sorted into flow queues 236 based on a hash function. One of flow queues 236 can be allocated to a unique flow. Furthermore, CQ unit 230 can further include a triggered operations module (or logic block) 238, which is responsible for queuing and dispatching triggered commands. Outbound transfer engine (OXE) 240 can pull commands from flow queues 236 in order to process them for dispatch. OXE 240 can include an address translation request unit (ATRU) 244 that can send address translation requests to address translation unit (ATU) 212. ATU 212 can provide virtual to physical address translation on behalf of different engines, such as OXE 240, inbound transfer engine (IXE) 250, and event engine (EE) 216. ATU 212 can maintain a large translation cache 214. ATU 212 can either perform translation itself or may use host-based address translation services (ATS). OXE 240 can also include message chopping unit (MCU) 246, which can fragment a large message into packets of sizes corresponding to a maximum transmission unit (MTU).
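The MCU's fragmentation step described above can be sketched in a few lines: a message of a given length is chopped into MTU-sized packet payloads plus a final remainder. The function name and the byte-count interface are assumptions for illustration, not the hardware interface.

```python
def chop_message(msg_len, mtu):
    """Split a message of msg_len bytes into MTU-sized packet payload lengths,
    as the message chopping unit (MCU) does. Simplified sketch: real packets
    would also carry headers, which are ignored here."""
    if msg_len <= 0:
        return []
    full, rem = divmod(msg_len, mtu)
    return [mtu] * full + ([rem] if rem else [])
```

For example, a 9000-byte message with a 4096-byte MTU yields two full packets and an 808-byte tail.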
MCU 246 can include a plurality of MCU modules. When an MCU module becomes available, the MCU module can obtain the next command from an assigned flow queue. The received data can be written into data buffer 242. The MCU module can then send the packet header, the corresponding traffic class, and the packet size to traffic shaper 248. Shaper 248 can determine which requests presented by MCU 246 can proceed to the network. Subsequently, the selected packet can be sent to packet and connection tracking (PCT) 270. PCT 270 can store the packet in a queue 274. PCT 270 can also maintain state information for outbound commands and update the state information as responses are returned. PCT 270 can also maintain packet state information (e.g., allowing responses to be matched to requests), message state information (e.g., tracking the progress of multi-packet messages), initiator completion state information, and retry state information (e.g., maintaining the information required to retry a command if a request or response is lost). If a response is not returned within a threshold time, the corresponding command can be stored in retry buffer 272. PCT 270 can facilitate connection management for initiator and target commands based on source tables 276 and target tables 278, respectively. For example, PCT 270 can update its source tables 276 to track the necessary state for reliable delivery of the packet and message completion notification. PCT 270 can forward outgoing packets to HNI 220, which stores the packets in outbound queue 222. NIC 202 can also include an IXE 250, which provides packet processing if NIC 202 is a target or a destination. IXE 250 can obtain the incoming packets from HNI 220. Parser 256 can parse the incoming packets and pass the corresponding packet information to a List Processing Engine (LPE) 264 or a Message State Table (MST) 266 for matching. LPE 264 can match incoming messages to buffers. LPE 264 can determine the buffer and start address to be used by each message.
LPE 264 can also manage a pool of list entries 262 used to represent buffers and unexpected messages. MST 266 can store matching results and the information required to generate target side completion events. MST 266 can be used by unrestricted operations, including multi-packet PUT commands, and single-packet and multi-packet GET commands. Subsequently, parser 256 can store the packets in packet buffer 254. IXE 250 can obtain the results of the matching for conflict checking. DMA write and AMO module 252 can then issue updates to the memory generated by write and AMO operations. If a packet includes a command that generates target side memory read operations (e.g., a GET response), the packet can be passed to OXE 240. NIC 202 can also include an EE 216, which can receive requests to generate event notifications from other modules or units in NIC 202. An event notification can specify that either a full event or a counting event is generated. EE 216 can manage event queues, located within host processor memory, to which it writes full events. EE 216 can forward counting events to CQ unit 230. Congestion Management in NIC FIG. 2C shows an exemplary FGFC selection process in a NIC. NIC 202 can use the control frames to control the flow of packets at a fine level. During operation, upon receiving an FGFC control frame 280, NIC 202 can determine a type of frame based on one or more header fields of frame 280. Frame 280 can be an Ethernet frame with a number of header fields, such as a destination MAC (DMAC) address, a source MAC (SMAC) address, an organizationally unique identifier (OUI) extended Ethertype, a protocol identifier (PID), an FGFC frame identifier (FID), an FGFC type, a pause period value (e.g., expressed as Ethernet pause quanta), an FGFC credit value, an FGFC identifier, an IPv4 source IP (SIP) address, and an IPv6 SIP address. The FGFC identifier can include one or more of: a virtual network identifier (VNI), a VLAN ID, an IPv4 flow label, and an IPv6 flow label.
The FGFC FID can include a predetermined value associated with a respective FGFC frame. The PID can be expressed based on an OUI, which can indicate that the link partners are from supported vendors and may support the same protocol. Instead of specifying a traffic class for flow control, NIC 202 can identify a flow based on the VNI, which can be based on a source IP address and a hash over a number of fields of a packet, such as a protocol type, source and destination IP addresses, and source and destination ports. VNIs can be added by NIC 202 if NIC 202 is a source NIC, and can be removed by NIC 202 if NIC 202 is a destination NIC. VNIs can be checked by the ingress and egress switches of a switch fabric. NIC 202 can facilitate Ethernet-based or API-based FGFC. For example, if the link partner of NIC 202 supports the Portals API, NIC 202 can provide API-based FGFC for the link partner. On the other hand, if the link partner supports Ethernet-based communication, NIC 202 can provide Ethernet-based FGFC. Upon receiving frame 280, HNI 220 can inspect a number of fields of frame 280, such as the DMAC address, the Ethertype, the PID, and the FID, to determine that frame 280 is an FGFC frame. In some embodiments, HNI 220 can maintain a set of control and status registers (CSRs) to store the expected pieces of information and match the fields with the corresponding CSRs. For example, the DMAC address field should match a CSR that can store a MAC address of NIC 202. If HNI 220 determines that frame 280 is an FGFC frame, HNI 220 inspects the FGFC type field of frame 280. The FGFC type can identify whether the FGFC frame is based on an API, such as the Portals API, or the Ethernet, IPv4, or IPv6 protocol. HNI 220 can maintain a CSR for each of these types. If the FGFC type of frame 280 matches none of the types, HNI 220 can issue an error message and drop frame 280.
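The frame-validation sequence above (match the DMAC, PID, and FID against CSRs, then branch on the FGFC type, dropping unknown types) can be summarized in a short sketch. The dictionary field names and return values below are assumptions introduced for illustration; they do not reflect the actual register layout or frame encoding.

```python
def classify_fgfc(frame, csrs):
    """Decide how the HNI handles a candidate FGFC frame, following the checks above.

    frame and csrs are plain dicts standing in for header fields and the NIC's
    control and status registers; all field names here are illustrative.
    Returns "api", "ethernet", or "drop".
    """
    # An FGFC frame must match the NIC's programmed DMAC, protocol ID, and frame ID.
    for field in ("dmac", "pid", "fid"):
        if frame.get(field) != csrs.get(field):
            return "drop"
    fgfc_type = frame.get("fgfc_type")
    if fgfc_type == "portals":                     # API-based FGFC
        return "api"
    if fgfc_type in ("ethernet", "ipv4", "ipv6"):  # Ethernet-based FGFC
        return "ethernet"
    return "drop"  # FGFC type matches none of the known types: error and drop
```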
If the FGFC type indicates API-based FGFC, HNI 220 can provide the pause period, the FGFC credit value, and the lower portion of the identifier (e.g., the lower 16 bits) of frame 280 to OXE 240 for further processing. On the other hand, if the FGFC type indicates Ethernet, IPv4, or IPv6, HNI 220 can determine that frame 280 is an Ethernet-based FGFC frame. In some embodiments, HNI 220 can then process frame 280 in HNI 220. NIC 202 may also process frame 280 at any other element of NIC 202. For example, OXE 240 or CQ unit 230 in FIG. 2B may process an FGFC control frame. Furthermore, an MCU module may generate packets and stall a corresponding command queue. FIG. 3A shows an exemplary FGFC control process in a NIC. Since HNI 220 is the interface that forms a link with the switch fabric, NIC 202 receives an FGFC control frame 300 at HNI 220. If HNI 220 determines that frame 300 is an Ethernet-based FGFC frame, HNI 220 can process frame 300 using a set of address CSRs 310, an FGFC cache 320, and output queue 222. CSRs 310 can include a set of CSRs (e.g., 4 CSRs) for each of the IPv4 and IPv6 addresses of NIC 202. HNI 220 can match the IPv4 or IPv6 source address of frame 300 with the values stored in the corresponding CSRs. Each of the addresses can be associated with an event queue (EQ) identifier identifying a corresponding EQ, as described in conjunction with FIG. 2B. Furthermore, CSRs 310 can include a programmable CSR for an EQ identifier. If the fields of frame 300 do not match the values stored in the corresponding CSRs, HNI 220 can discard the frame. FGFC cache 320 can have a plurality of entries, each of which can store information associated with a flow. For example, FGFC cache 320 can include a cache entry 322, which can include information associated with a flow, such as a valid field (e.g., a flag), a type field, a tag for the source IP address, an identifier field, an EQ identifier field, and a pause counter. The valid field can indicate whether entry 322 is valid. The type field can indicate an FGFC type for entry 322.
The source IP address tag can indicate a type for a source IP address for entry 322. For example, the tag can incorporate an integer value from 0 to 3, each indicating a type of IP address. A value of 0 can indicate a layer-2 frame. The identifier field can store a 32-bit identifier from frame 300 associated with the tag. The EQ identifier field can store the EQ identifier obtained from the matched address. Furthermore, the pause counter can be decremented periodically based on the Ethernet pause standard. The pause counter can be loaded from an FGFC frame and decremented over time based on the pause quanta. If HNI 220 can successfully match an address of frame 300 with an address stored in CSRs 310, HNI 220 can determine whether cache 320 is enabled. If cache 320 is disabled, each frame matching an address in CSRs 310 can generate an event (e.g., to be managed by EE 216 in FIG. 2B). The event can be an internal control message for communication among the elements of NIC 202. On the other hand, if cache 320 is enabled, the type, the tag for a source IP address, and an identifier in frame 300 are checked against the information in a respective entry in cache 320. If the fields of frame 300 match a valid entry and frame 300 has a pause period of zero, HNI 220 can set that entry in cache 320 as invalid (e.g., by modifying the valid field). HNI 220 can then forward an event (e.g., to EE 216 in FIG. 2B). The event can indicate XON for the EQ identifier of the entry and include the credit value specified in frame 300. On the other hand, if the fields of frame 300 match a valid entry and frame 300 has a non-zero pause period value, HNI 220 can update the pause counter based on the pause period value in frame 300. HNI 220 can then forward an XOFF event that can include the non-zero credit value specified in frame 300. However, if the credit value is zero, HNI 220 can update cache 320 without forwarding an event.
If the fields of frame 300 do not match a valid entry, HNI 220 can determine whether frame 300 includes a non-zero pause period value and whether cache 320 has availability for a new entry (e.g., whether a cache line is available). If cache 320 has availability and frame 300 includes a non-zero pause period value, HNI 220 can generate an entry in cache 320 with the pause counter set to the pause period value in frame 300. HNI 220 can also forward an XOFF event that can include the credit value specified in frame 300. On the other hand, if cache 320 does not have availability and frame 300 includes a non-zero pause period value, HNI 220 can discard frame 300 without creating an event. If frame 300 includes a zero pause period value, HNI 220 can forward an XON event that can include the credit value specified in frame 300. If an entry in cache 320 has a pause counter value below the pause quanta, HNI 220 can set a flag for the entry indicating that HNI 220 should create an XON event. HNI 220 can apply a round-robin arbitration process to select the entry. Subsequently, HNI 220 can invalidate the entry and forward an event. The event can indicate an XON status for the EQ identifier of the entry. However, if a subsequent FGFC frame arrives before the entry is selected via the arbitration, HNI 220 can update the pause counter in the entry and remove the request for arbitration for the entry. The EQ identifier from the entry can be used to locate the target event queue. In some embodiments, HNI 220 can perform the arbitration based on the clock of NIC 202 when there is no incoming Ethernet-based FGFC frame that matches an address and there is availability in queue 222. Queue 222 allows HNI 220 to process a small number of FGFC frames if EE 216 is backed up. Events forwarded from a prior state can be inserted into queue 222. If queue 222 is full, the generated event can be discarded.
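The cache-hit and cache-miss handling described in the preceding two paragraphs can be summarized in a short sketch. The dictionary-based cache, the event tuples, the capacity, and carrying the EQ identifier in the frame (in hardware it comes from the matched address CSR) are simplifying assumptions:

```python
# Hedged sketch of Ethernet-based FGFC cache processing: a hit with zero
# pause invalidates the entry and yields XON; a hit with non-zero pause
# refreshes the pause counter and yields XOFF only for non-zero credit;
# a miss creates an entry (if a line is free) or discards the frame.

def process_ethernet_fgfc(frame, cache, capacity=64):
    """Update `cache` for one FGFC frame; return an event tuple or None."""
    key = (frame["type"], frame["src_tag"], frame["ident"])
    entry = cache.get(key)
    if entry is not None:
        if frame["pause"] == 0:
            del cache[key]                # zero pause on a hit: resume flow
            return ("XON", entry["eq_id"], frame["credits"])
        entry["pause_counter"] = frame["pause"]
        if frame["credits"] != 0:
            return ("XOFF", entry["eq_id"], frame["credits"])
        return None                       # zero credit: update cache only
    # Miss: create an entry only for a non-zero pause, if a line is free.
    if frame["pause"] != 0:
        if len(cache) < capacity:
            cache[key] = {"pause_counter": frame["pause"],
                          "eq_id": frame["eq_id"]}
            return ("XOFF", frame["eq_id"], frame["credits"])
        return None                       # cache full: discard, no event
    return ("XON", frame["eq_id"], frame["credits"])
```

A first frame with a non-zero pause period thus installs an entry and pauses the flow (XOFF), while a later frame with a zero pause period removes the entry and resumes it (XON).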
A respective entry of queue 222, such as entry 324, can include a return code, a type field, a tag for a source IP address, an identifier field, credit information, an XOFF indicator, an EQ identifier, and an event type. The return code can be set to a constant, which indicates a valid return. The type field can indicate whether frame 300 corresponds to Ethernet, IPv4, or IPv6. The tag for the source IP can indicate a type of IP address of the source address of frame 300. The respective values for the identifier and credit fields can be obtained from corresponding fields in frame 300. The XOFF indicator can indicate whether an XOFF event should be generated. The EQ identifier field can store the EQ identifier obtained from the matched address. Moreover, the event type field can be set to Ethernet. The respective values for the type, tag, identifier, and EQ identifier fields can be obtained from cache 320 if a cache timeout occurs for an XON event. Furthermore, the value of the credits field can be set to zero for the cache timeout event. On the other hand, if the FGFC type indicates API-based FGFC, HNI 220 can provide information 350 associated with frame 300 to OXE 240 for further processing. Information 350 can include the pause period value, the FGFC credit value, and the lower portion of the identifier (e.g., the lower 16 bits) of frame 300. OXE 240 can then store information 350 in an FGFC table 330. NIC 202 can throttle packets belonging to a flow subjected to FGFC using table 330. Table 330 can include a plurality of entries. A respective entry of table 330, such as entry 332, can include a VNI field, a valid field (e.g., a flag), a credit field, and a pause counter. These fields can include 16 bits, 1 bit, 24 bits, and 32 bits, respectively. OXE 240 can match the VNI field with an incoming FGFC packet and determine, from MCU 246, an MCU module that is allowed to send more packets. The valid field can indicate whether a VNI is valid.
The credit field can store the sum of the credit values received in the FGFC frames, such as frame 300. In some embodiments, each credit allows an MCU module to forward one byte. If the value of the credit field becomes negative, table 330 can have a shortage of credit to send a packet. The credit field can be associated with a maximum value (i.e., a maximum value to which the credit can be incremented). The pause counter can correspond to Ethernet Pause. The upper 16 bits can be loaded from frame 300. The lower 16 bits can represent a fraction that can be decremented over time based on the pause quanta. Upon classifying frame 300 as an API-based FGFC frame, HNI 220 can pass frame 300 to OXE 240 for processing if table 330 is enabled. If frame 300 matches a valid entry for the VNI in frame 300 and frame 300 has a pause period value of zero, OXE 240 can mark the entry as invalid. Otherwise, if frame 300 matches a valid entry for the VNI in frame 300 and frame 300 has a non-zero pause period value, OXE 240 can increment the credit value in the entry based on the credit indicated in frame 300 and update the pause counter based on the pause value of frame 300. If frame 300 does not match a valid entry and table 330 has availability (e.g., a line in table 330 is available), OXE 240 can create an entry in table 330 by inserting the VNI, the credit value, and the pause value from frame 300 into the entry. The initial credit can be reduced by a credit adjustment constant. In some embodiments, the default value for this constant can be determined as (MTU + maximum header size + FCS). Here, FCS indicates a frame check sequence. If frame 300 does not match a valid entry and table 330 does not have availability, OXE 240 can drop frame 300. FIG. 3B shows an exemplary data packet forwarding process associated with FGFC in a NIC. During operation, the host device of NIC 202 can send a data packet 360 belonging to a flow subject to FGFC via host interface 210. Within NIC 202, packet 360 can be forwarded to OXE 240.
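The table update for API-based FGFC frames, including the initial-credit adjustment of (MTU + maximum header size + FCS), might be sketched as follows. The dictionary-based table, the capacity, and the default sizes are assumptions for illustration:

```python
# Hedged sketch of the VNI-keyed FGFC table update for API-based frames.
# A hit with zero pause invalidates the entry; a hit with non-zero pause
# accumulates credit; a miss creates an entry with the initial credit
# reduced by the credit adjustment constant (MTU + max header + FCS).

def credit_adjustment(mtu=1500, max_header=64, fcs=4):
    """Default initial-credit adjustment: MTU + maximum header size + FCS.
    The MTU and header sizes here are assumed values, not fixed by the text."""
    return mtu + max_header + fcs

def process_api_fgfc(frame, table, capacity=64):
    """Update the FGFC `table` for one API-based FGFC frame."""
    vni = frame["vni"]
    entry = table.get(vni)
    if entry is not None:
        if frame["pause"] == 0:
            del table[vni]                # zero pause: invalidate the entry
            return
        entry["credits"] += frame["credits"]
        entry["pause_counter"] = frame["pause"]
        return
    if len(table) < capacity:
        table[vni] = {
            "credits": frame["credits"] - credit_adjustment(),
            "pause_counter": frame["pause"],
        }
    # else: no line is available in the table, so the frame is dropped
```

Subtracting the adjustment up front means the flow cannot immediately send a full-size frame on the very first grant, which matches the conservative initial-credit behavior described above.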
When packet 360 arrives at OXE 240, packet 360 can be allocated to one of a number of MCU modules 302, 304, and 306 in MCU 246. OXE 240 can use an arbitration module 340 to select an MCU module for forwarding a packet. In other words, the arbitration (e.g., based on a round-robin technique or a priority value) provided by arbitration module 340 can schedule packet forwarding from an MCU module. Suppose that packet 360 has been allocated to MCU module 306. If arbitration module 340 selects MCU module 306, MCU module 306 can check whether the VNI in packet 360 matches an entry in table 330. If no entry matches packet 360, OXE 240 can allow packet 360 to proceed and place it in output buffer 242. If an entry exists and the credit in the entry is not negative, OXE 240 can allow packet 360 to proceed and deduct an amount of credit from the credit field of the matching entry. The amount of credit can be determined as: ((byte_len + extra_bytes + 2^round_pos − 1) & ~(2^round_pos − 1)). However, if an entry exists and the credit is negative, OXE 240 can set an FGFC flag for MCU module 306 and discard packet 360 (e.g., by disqualifying the selection of MCU module 306 in the arbitration process). Because MCU module 306's FGFC flag is set, arbitration module 340 can remove MCU module 306 from arbitration. OXE 240 can save the index of the corresponding entry (i.e., the entry that matched packet 360) of table 330. OXE 240 can then monitor the entry based on the index. If the entry becomes invalidated or the credit value in the entry is incremented to a non-negative value, OXE 240 can clear the FGFC flag of MCU module 306. When the FGFC flag is cleared, arbitration module 340 can include MCU module 306 in the arbitration process. Furthermore, when FGFC is applied to an MCU module, in addition to selecting the MCU module based on the credit during the arbitration process, that MCU module can be in an "in order" mode. Consequently, that MCU module may forward packets based on their order while that MCU module is subject to FGFC.
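The credit deduction above rounds the charged size up to a multiple of 2^round_pos, which the bitwise expression computes directly. A minimal sketch, in which the values of extra_bytes and round_pos are placeholder assumptions (the disclosure does not fix them):

```python
# Sketch of the credit charge and the forward/discard decision. The
# bitwise form (x + 2**n - 1) & ~(2**n - 1) rounds x up to a multiple
# of 2**n. extra_bytes and round_pos are illustrative constants.

def charged_bytes(byte_len, extra_bytes, round_pos):
    """Credit charged for one packet: (byte_len + extra_bytes) rounded up
    to the next multiple of 2**round_pos."""
    step = 1 << round_pos
    return (byte_len + extra_bytes + step - 1) & ~(step - 1)

def try_forward(entry, byte_len, extra_bytes=24, round_pos=5):
    """Deduct credit if available; otherwise leave the MCU module paused."""
    if entry["credits"] < 0:
        return False       # FGFC flag set: module removed from arbitration
    entry["credits"] -= charged_bytes(byte_len, extra_bytes, round_pos)
    return True
```

Note that the credit is checked before the deduction, so a flow can drive its credit balance negative with one last packet and is then held until new FGFC credit arrives.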
FIG. 4A shows a flow chart of an Ethernet-based FGFC process in a NIC. During operation, an HNI of the NIC can obtain an Ethernet-based FGFC frame (operation 402). The HNI can then check whether the frame matches an entry in the FGFC cache and has a zero pause value (operation 404). If the frame matches an entry in the FGFC cache and has a zero pause value, the HNI can mark the entry as invalid and forward an XON event with the credit from the frame (operation 414). Otherwise, the HNI can check whether the frame matches an entry in the FGFC cache and has a non-zero pause value (operation 406). If the frame matches an entry in the FGFC cache and has a non-zero pause value, the HNI can process the frame based on the credit value. The HNI can update the pause counter in the entry based on the non-zero pause value from the frame and forward an XOFF event with the non-zero credit from the frame if the frame has a non-zero credit value (operation 416). On the other hand, the HNI can update the pause counter in the entry based on the non-zero pause value from the frame without forwarding the XOFF event if the frame has a zero credit value (operation 416). If the frame does not match an entry in the FGFC cache (operations 404 and 406), the HNI can check whether the cache has availability (operation 408). If the cache has availability, the HNI can create an entry with a pause counter based on the non-zero pause value from the frame and forward an XOFF event with the credit from the frame (operation 418). If the cache does not have availability, the HNI can check whether the frame has a non-zero pause value (operation 410). If the frame has a non-zero pause value, the HNI can defer the frame (operation 412) (e.g., can wait for more credits to arrive). On the other hand, if the frame has a zero pause value, the HNI can forward an XON event with the credit from the frame (operation 420).
FIG. 4B shows a flow chart of an API-based FGFC process in a NIC. During operation, an OXE of the NIC can obtain an API-based FGFC frame (operation 432). The OXE can then check whether the frame matches an entry in the FGFC table and has a zero pause value (operation 434). If the frame matches an entry in the FGFC table and has a zero pause value, the OXE can mark the entry as invalid (operation 442). Otherwise, the OXE can check whether the frame matches an entry in the FGFC table and has a non-zero pause value (operation 436). If the frame matches an entry in the FGFC table and has a non-zero pause value, the OXE can update the pause counter in the entry based on the non-zero pause value from the frame and increment the credit value in the entry with the non-zero credit from the frame (operation 444). If the frame does not match an entry in the FGFC table (operations 434 and 436), the OXE can check whether the table has availability (operation 438). If the table has availability, the OXE can create an entry in the FGFC table with a pause counter and a credit value, and subtract a default credit value (operation 446). The pause counter can be based on the non-zero pause value and the credit value can be based on the credit from the frame. If the table does not have availability, the OXE can discard the frame (operation 440). FIG. 4C shows a flow chart of an exemplary packet processing for facilitating FGFC in a NIC. During operation, an OXE of the NIC can allocate a packet associated with FGFC to a corresponding MCU module (operation 452) and select the MCU module based on arbitration (operation 454). The OXE can then check whether the packet matches an entry in the FGFC table (operation 456). If the packet does not match an entry in the FGFC table, the OXE can allow the packet to proceed (operation 464). Otherwise, the OXE can check whether the credit in the entry is not negative (operation 458).
If the credit is not negative in the entry, the OXE can allow the packet to proceed and deduct an amount of credit from the credit of the entry (operation 466). On the other hand, if the credit is negative in the entry, the OXE can discard the packet and set an FGFC flag for the MCU module (operation 460). The OXE can then remove the MCU module from arbitration and monitor the matched entry (operation 462).

Exemplary Computer System

FIG. 5 shows an exemplary computer system equipped with a NIC with FGFC support. Computer system 550 includes a processor 552, a memory device 554, and a storage device 556. Memory device 554 can include a volatile memory device (e.g., a dual in-line memory module (DIMM)). Furthermore, computer system 550 can be coupled to a keyboard 562, a pointing device 564, and a display device 566. Storage device 556 can store an operating system 570. An application 572 can operate on operating system 570. Computer system 550 can be equipped with a host interface coupling a NIC 520 that facilitates efficient data request management. NIC 520 can provide one or more HNIs, such as HNI 540, to computer system 550. NIC 520 can be coupled to a switch 502 via HNI 540. Upon receiving an FGFC control frame from switch 502, HNI 540 can determine whether the frame is an Ethernet-based frame or an API-based frame. If the frame is an Ethernet-based frame, HNI 540 can compare the source IP addresses with the local addresses stored in registers 532. Upon detecting a match, HNI 540 can process the frame based on the entries in FGFC cache 534 and the content of the frame. HNI 540 can also include a queue 536 that can store events that cannot be accommodated in an event engine of NIC 520. If the frame is an API-based frame, HNI 540 can provide header information to an OXE logic block 530 of NIC 520 and send the frame to OXE logic block 530. OXE logic block 530 can store the information in an entry in an FGFC table 536.
OXE logic block 530 can then process the frame based on the entries in FGFC table 536 and the content of the frame. Upon receiving a packet belonging to a flow subject to FGFC from computer system 550 via an HI of NIC 520, OXE logic block 530 can allocate the packet to an MCU logic block 532. An arbitration logic block 534 can select MCU logic block 532 based on an arbitration policy. If MCU logic block 532 is selected, OXE logic block 530 can process the packet based on a matching entry in FGFC table 536 and the content of the packet. In summary, the present disclosure describes a NIC that facilitates fine-grain flow control (FGFC). The NIC can be equipped with a network interface, an FGFC logic block, and a traffic management logic block. During operation, the network interface can determine that a control frame from a remote switch is for applying FGFC. The network interface can then identify a data flow indicated in the control frame for applying the FGFC. The FGFC logic block can insert information from the control frame into an entry of a data structure stored in the NIC. The traffic management logic block can identify the entry in the data structure based on one or more fields of a packet belonging to the flow. Subsequently, the traffic management logic block can determine whether the packet is allowed to be forwarded based on the information in the entry. The methods and processes described above can be performed by hardware logic blocks, modules, logic blocks, or apparatus. The hardware logic blocks, modules, or apparatus can include, but are not limited to, application-specific integrated circuit (ASIC) chips, field-programmable gate arrays (FPGAs), dedicated or shared processors that execute a piece of code at a particular time, and other programmable-logic devices now known or later developed. When the hardware logic blocks, modules, or apparatus are activated, they perform the methods and processes included within them.
The methods and processes described herein can also be embodied as code or data, which can be stored in a storage device or computer-readable storage medium. When a processor reads and executes the stored code or data, the processor can perform these methods and processes. The foregoing descriptions of embodiments of the present invention have been presented for purposes of illustration and description only. They are not intended to be exhaustive or to limit the present invention to the forms disclosed. Accordingly, many modifications and variations will be apparent to practitioners skilled in the art. Additionally, the above disclosure is not intended to limit the present invention. The scope of the present invention is defined by the appended claims.
11863432

DESCRIPTION OF EXAMPLE EMBODIMENTS

Overview

According to one or more embodiments of the disclosure, a device identifies a potential change in user experience of an online application. The device selects, based on the potential change in user experience, a set of one or more users of the online application. The device obtains, from the set of one or more users of the online application, feedback regarding their experience with the online application. The device uses the feedback obtained from the set of one or more users of the online application to make a routing decision in a network regarding traffic of the online application.

Description

A computer network is a geographically distributed collection of nodes interconnected by communication links and segments for transporting data between end nodes, such as personal computers and workstations, or other devices, such as sensors, etc. Many types of networks are available, with the types ranging from local area networks (LANs) to wide area networks (WANs). LANs typically connect the nodes over dedicated private communications links located in the same general physical location, such as a building or campus. WANs, on the other hand, typically connect geographically dispersed nodes over long-distance communications links, such as common carrier telephone lines, optical lightpaths, synchronous optical networks (SONET), or synchronous digital hierarchy (SDH) links, or Powerline Communications (PLC) such as IEEE 61334, IEEE P1901.2, and others. The Internet is an example of a WAN that connects disparate networks throughout the world, providing global communication between nodes on various networks. The nodes typically communicate over the network by exchanging discrete frames or packets of data according to predefined protocols, such as the Transmission Control Protocol/Internet Protocol (TCP/IP). In this context, a protocol consists of a set of rules defining how the nodes interact with each other.
Computer networks may be further interconnected by an intermediate network node, such as a router, to extend the effective “size” of each network. Smart object networks, such as sensor networks, in particular, are a specific type of network having spatially distributed autonomous devices such as sensors, actuators, etc., that cooperatively monitor physical or environmental conditions at different locations, such as, e.g., energy/power consumption, resource consumption (e.g., water/gas/etc. for advanced metering infrastructure or “AMI” applications), temperature, pressure, vibration, sound, radiation, motion, pollutants, etc. Other types of smart objects include actuators, e.g., responsible for turning on/off an engine or performing any other actions. Sensor networks, a type of smart object network, are typically shared-media networks, such as wireless or PLC networks. That is, in addition to one or more sensors, each sensor device (node) in a sensor network may generally be equipped with a radio transceiver or other communication port such as PLC, a microcontroller, and an energy source, such as a battery. Often, smart object networks are considered field area networks (FANs), neighborhood area networks (NANs), personal area networks (PANs), etc. Generally, size and cost constraints on smart object nodes (e.g., sensors) result in corresponding constraints on resources such as energy, memory, computational speed and bandwidth. FIG. 1A is a schematic block diagram of an example computer network 100 illustratively comprising nodes/devices, such as a plurality of routers/devices interconnected by links or networks, as shown. For example, customer edge (CE) routers 110 may be interconnected with provider edge (PE) routers 120 (e.g., PE-1, PE-2, and PE-3) in order to communicate across a core network, such as an illustrative network backbone 130.
For example, routers 110, 120 may be interconnected by the public Internet, a multiprotocol label switching (MPLS) virtual private network (VPN), or the like. Data packets 140 (e.g., traffic/messages) may be exchanged among the nodes/devices of the computer network 100 over links using predefined network communication protocols such as the Transmission Control Protocol/Internet Protocol (TCP/IP), User Datagram Protocol (UDP), Asynchronous Transfer Mode (ATM) protocol, Frame Relay protocol, or any other suitable protocol. Those skilled in the art will understand that any number of nodes, devices, links, etc. may be used in the computer network, and that the view shown herein is for simplicity. In some implementations, a router or a set of routers may be connected to a private network (e.g., dedicated leased lines, an optical network, etc.) or a virtual private network (VPN), such as an MPLS VPN thanks to a carrier network, via one or more links exhibiting very different network and service level agreement characteristics. For the sake of illustration, a given customer site may fall under any of the following categories:

1.) Site Type A: a site connected to the network (e.g., via a private or VPN link) using a single CE router and a single link, with potentially a backup link (e.g., a 3G/4G/5G/LTE backup connection). For example, a particular CE router 110 shown in network 100 may support a given customer site, potentially also with a backup link, such as a wireless connection.

2.) Site Type B: a site connected to the network by the CE router via two primary links (e.g., from different Service Providers), with potentially a backup link (e.g., a 3G/4G/5G/LTE connection). A site of type B may itself be of different types:

2a.) Site Type B1: a site connected to the network using two MPLS VPN links (e.g., from different Service Providers), with potentially a backup link (e.g., a 3G/4G/5G/LTE connection).

2b.) Site Type B2: a site connected to the network using one MPLS VPN link and one link connected to the public Internet, with potentially a backup link (e.g., a 3G/4G/5G/LTE connection). For example, a particular customer site may be connected to network 100 via PE-3 and via a separate Internet connection, potentially also with a wireless backup link.

2c.) Site Type B3: a site connected to the network using two links connected to the public Internet, with potentially a backup link (e.g., a 3G/4G/5G/LTE connection). Notably, MPLS VPN links are usually tied to a committed service level agreement, whereas Internet links may either have no service level agreement at all or a loose service level agreement (e.g., a “Gold Package” Internet service connection that guarantees a certain level of performance to a customer site).

3.) Site Type C: a site of type B (e.g., types B1, B2 or B3) but with more than one CE router (e.g., a first CE router connected to one link while a second CE router is connected to the other link), and potentially a backup link (e.g., a wireless 3G/4G/5G/LTE backup link). For example, a particular customer site may include a first CE router 110 connected to PE-2 and a second CE router 110 connected to PE-3.

FIG. 1B illustrates an example of network 100 in greater detail, according to various embodiments. As shown, network backbone 130 may provide connectivity between devices located in different geographical areas and/or different types of local networks. For example, network 100 may comprise local/branch networks 160, 162 that include devices/nodes 10-16 and devices/nodes 18-20, respectively, as well as a data center/cloud environment 150 that includes servers 152-154. Notably, local networks 160-162 and data center/cloud environment 150 may be located in different geographic locations.
Servers 152-154 may include, in various embodiments, a network management server (NMS), a dynamic host configuration protocol (DHCP) server, a constrained application protocol (CoAP) server, an outage management system (OMS), an application policy infrastructure controller (APIC), an application server, etc. As would be appreciated, network 100 may include any number of local networks, data centers, cloud environments, devices/nodes, servers, etc. In some embodiments, the techniques herein may be applied to other network topologies and configurations. For example, the techniques herein may be applied to peering points with high-speed links, data centers, etc. According to various embodiments, a software-defined WAN (SD-WAN) may be used in network 100 to connect local network 160, local network 162, and data center/cloud environment 150. In general, an SD-WAN uses a software-defined networking (SDN)-based approach to instantiate tunnels on top of the physical network and control routing decisions, accordingly. For example, as noted above, one tunnel may connect router CE-2 at the edge of local network 160 to router CE-1 at the edge of data center/cloud environment 150 over an MPLS or Internet-based service provider network in backbone 130. Similarly, a second tunnel may also connect these routers over a 4G/5G/LTE cellular service provider network. SD-WAN techniques allow the WAN functions to be virtualized, essentially forming a virtual connection between local network 160 and data center/cloud environment 150 on top of the various underlying connections. Another feature of SD-WAN is centralized management by a supervisory service that can monitor and adjust the various connections, as needed.
FIG. 2 is a schematic block diagram of an example node/device 200 (e.g., an apparatus) that may be used with one or more embodiments described herein, e.g., as any of the computing devices shown in FIGS. 1A-1B, particularly the PE routers 120, CE routers 110, nodes/devices 10-20, servers 152-154 (e.g., a network controller/supervisory service located in a data center, etc.), any other computing device that supports the operations of network 100 (e.g., switches, etc.), or any of the other devices referenced below. The device 200 may also be any other suitable type of device depending upon the type of network architecture in place, such as IoT nodes, etc. Device 200 comprises one or more network interfaces 210, one or more processors 220, and a memory 240 interconnected by a system bus 250, and is powered by a power supply 260. The network interfaces 210 include the mechanical, electrical, and signaling circuitry for communicating data over physical links coupled to the network 100. The network interfaces may be configured to transmit and/or receive data using a variety of different communication protocols. Notably, a physical network interface 210 may also be used to implement one or more virtual network interfaces, such as for virtual private network (VPN) access, known to those skilled in the art. The memory 240 comprises a plurality of storage locations that are addressable by the processor(s) 220 and the network interfaces 210 for storing software programs and data structures associated with the embodiments described herein. The processor 220 may comprise necessary elements or logic adapted to execute the software programs and manipulate the data structures 245.
An operating system 242 (e.g., the Internetworking Operating System, or IOS®, of Cisco Systems, Inc., another operating system, etc.), portions of which are typically resident in memory 240 and executed by the processor(s), functionally organizes the node by, inter alia, invoking network operations in support of software processors and/or services executing on the device. These software processors and/or services may comprise a routing process 248 and/or a user feedback gathering process 249, as described herein, any of which may alternatively be located within individual network interfaces. It will be apparent to those skilled in the art that other processor and memory types, including various computer-readable media, may be used to store and execute program instructions pertaining to the techniques described herein. Also, while the description illustrates various processes, it is expressly contemplated that various processes may be embodied as modules configured to operate in accordance with the techniques herein (e.g., according to the functionality of a similar process). Further, while processes may be shown and/or described separately, those skilled in the art will appreciate that processes may be routines or modules within other processes. In general, routing process 248 contains computer executable instructions executed by the processor 220 to perform routing functions in conjunction with one or more routing protocols. These functions may, on capable devices, be configured to manage a routing/forwarding table (a data structure 245) containing, e.g., data used to make routing/forwarding decisions. In various cases, connectivity may be discovered and known, prior to computing routes to any destination in the network, e.g., via link state routing such as Open Shortest Path First (OSPF), Intermediate-System-to-Intermediate-System (ISIS), or Optimized Link State Routing (OLSR).
For instance, paths may be computed using a shortest path first (SPF) or constrained shortest path first (CSPF) approach. Conversely, neighbors may first be discovered (e.g., a priori knowledge of the network topology is not known) and, in response to a needed route to a destination, send a route request into the network to determine which neighboring node may be used to reach the desired destination. Example protocols that take this approach include Ad-hoc On-demand Distance Vector (AODV), Dynamic Source Routing (DSR), DYnamic MANET On-demand Routing (DYMO), etc. Notably, on devices not capable or configured to store routing entries, routing process 248 may consist solely of providing mechanisms necessary for source routing techniques. That is, for source routing, other devices in the network can tell the less capable devices exactly where to send the packets, and the less capable devices simply forward the packets as directed. In various embodiments, as detailed further below, routing process 248 and/or user feedback gathering process 249 may include computer executable instructions that, when executed by processor(s) 220, cause device 200 to perform the techniques described herein. To do so, in some embodiments, routing process 248 and/or user feedback gathering process 249 may utilize artificial intelligence/machine learning. In general, artificial intelligence/machine learning is concerned with the design and the development of techniques that take as input empirical data (such as network statistics and performance indicators), and recognize complex patterns in these data. One very common pattern among these techniques is the use of an underlying model M, whose parameters are optimized for minimizing the cost function associated with M, given the input data. For instance, in the context of classification, the model M may be a straight line that separates the data into two classes (e.g., labels) such that M=a*x+b*y+c, and the cost function would be the number of misclassified points.
The learning process then operates by adjusting the parameters a,b,c such that the number of misclassified points is minimal. After this optimization phase (or learning phase), the model M can be used very easily to classify new data points. Often, M is a statistical model, and the cost function is inversely proportional to the likelihood of M, given the input data. In various embodiments, routing process248and/or user feedback gathering process249may employ one or more supervised, unsupervised, or semi-supervised machine learning models. Generally, supervised learning entails the use of a training set of data, as noted above, that is used to train the model to apply labels to the input data. For example, the training data may include sample data that has been labeled as indicative of acceptable user experience or poor user experience. On the other end of the spectrum are unsupervised techniques that do not require a training set of labels. Notably, while a supervised learning model may look for previously seen patterns that have been labeled as such, an unsupervised model may instead look to whether there are sudden changes or patterns in the behavior of the metrics. Semi-supervised learning models take a middle ground approach that uses a greatly reduced set of labeled training data. 
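The learning phase for the linear classification example above (M=a*x+b*y+c, with the cost function counting misclassified points) can be sketched as follows. This is purely illustrative: the perceptron-style update rule, learning rate, and all names are assumptions, not part of any embodiment.

```python
# Illustrative sketch: fit a linear separator M = a*x + b*y + c by
# iteratively reducing the number of misclassified points.

def misclassified(points, labels, a, b, c):
    """Cost function: number of points on the wrong side of the line."""
    return sum(1 for (x, y), lbl in zip(points, labels)
               if (1 if a * x + b * y + c > 0 else -1) != lbl)

def train(points, labels, epochs=100, lr=0.1):
    """Learning phase: adjust a, b, c until the cost is minimal (here, zero
    for separable data), using perceptron-style parameter updates."""
    a = b = c = 0.0
    for _ in range(epochs):
        for (x, y), lbl in zip(points, labels):
            if (1 if a * x + b * y + c > 0 else -1) != lbl:
                # Nudge parameters toward classifying this point correctly.
                a += lr * lbl * x
                b += lr * lbl * y
                c += lr * lbl
        if misclassified(points, labels, a, b, c) == 0:
            break  # optimization (learning) phase complete
    return a, b, c
```

After this optimization phase, new data points are classified simply by evaluating the sign of a*x+b*y+c.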
Example machine learning techniques that routing process248and/or user feedback gathering process249can employ may include, but are not limited to, nearest neighbor (NN) techniques (e.g., k-NN models, replicator NN models, etc.), statistical techniques (e.g., Bayesian networks, etc.), clustering techniques (e.g., k-means, mean-shift, etc.), neural networks (e.g., reservoir networks, artificial neural networks, etc.), support vector machines (SVMs), generative adversarial networks (GANs), long short-term memory (LSTM), logistic or other regression, Markov models or chains, principal component analysis (PCA) (e.g., for linear models), singular value decomposition (SVD), multi-layer perceptron (MLP) artificial neural networks (ANNs) (e.g., for non-linear models), replicating reservoir networks (e.g., for non-linear models, typically for timeseries), random forest classification, or the like. As noted above, in software defined WANs (SD-WANs), traffic between individual sites is sent over tunnels. The tunnels are configured to use different switching fabrics, such as MPLS, Internet, 4G or 5G, etc. Often, the different switching fabrics provide different QoS at varied costs. For example, an MPLS fabric typically provides high QoS when compared to the Internet, but is also more expensive than traditional Internet. Some applications requiring high QoS (e.g., video conferencing, voice calls, etc.) are traditionally sent over the more costly fabrics (e.g., MPLS), while applications not needing strong guarantees are sent over cheaper fabrics, such as the Internet. Traditionally, network policies map individual applications to Service Level Agreements (SLAs), which define the satisfactory performance metric(s) for an application, such as loss, latency, or jitter. Similarly, a tunnel is also mapped to the type of SLA that it satisfies, based on the switching fabric that it uses. During runtime, the SD-WAN edge router then maps the application traffic to an appropriate tunnel.
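The SLA mapping just described can be illustrated with a minimal sketch; the SLA classes, threshold values, application names, and tunnel names below are all hypothetical.

```python
# Illustrative sketch of mapping applications and tunnels to SLA classes,
# and of the edge router selecting an appropriate tunnel at runtime.
# All names and thresholds are hypothetical.

SLA_TEMPLATES = {
    # satisfactory performance metrics per SLA class
    "gold":   {"loss_pct": 1.0, "latency_ms": 150, "jitter_ms": 30},
    "bronze": {"loss_pct": 5.0, "latency_ms": 400, "jitter_ms": 100},
}

# Network policy: each application is mapped to an SLA class.
APP_SLA = {"video_conferencing": "gold", "file_backup": "bronze"}

# Each tunnel is mapped to the SLA class it satisfies, per its fabric.
TUNNEL_SLA = {"mpls_tunnel_1": "gold", "internet_tunnel_1": "bronze"}

def select_tunnel(app: str) -> str:
    """Map the application's traffic to a tunnel satisfying its SLA class."""
    required = APP_SLA[app]
    for tunnel, sla in TUNNEL_SLA.items():
        if sla == required:
            return tunnel
    raise LookupError(f"no tunnel satisfies SLA class {required!r}")
```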
Currently, the mapping of SLAs between applications and tunnels is performed manually by an expert, based on their experiences and/or reports on the prior performances of the applications and tunnels. The emergence of infrastructure as a service (IaaS) and software-as-a-service (SaaS) is having a dramatic impact on the overall Internet due to the extreme virtualization of services and shift of traffic load in many large enterprises. Consequently, a branch office or a campus can trigger massive loads on the network. FIGS.3A-3Billustrate example network deployments300,310, respectively. As shown, a router110located at the edge of a remote site302may provide connectivity between a local area network (LAN) of the remote site302and one or more cloud-based, SaaS providers308. For example, in the case of an SD-WAN, router110may provide connectivity to SaaS provider(s)308via tunnels across any number of networks306. This allows clients located in the LAN of remote site302to access cloud applications (e.g., Office 365™, Dropbox™, etc.) served by SaaS provider(s)308. As would be appreciated, SD-WANs allow for the use of a variety of different pathways between an edge device and an SaaS provider. For example, as shown in example network deployment300inFIG.3A, router110may utilize two Direct Internet Access (DIA) connections to connect with SaaS provider(s)308. More specifically, a first interface of router110(e.g., a network interface210, described previously), Int1, may establish a first communication path (e.g., a tunnel) with SaaS provider(s)308via a first Internet Service Provider (ISP)306a, denoted ISP1inFIG.3A. Likewise, a second interface of router110, Int2, may establish a backhaul path with SaaS provider(s)308via a second ISP306b, denoted ISP2inFIG.3A.
FIG.3Billustrates another example network deployment310in which Int1of router110at the edge of remote site302establishes a first path to SaaS provider(s)308via ISP1and Int2establishes a second path to SaaS provider(s)308via a second ISP306b. In contrast to the example inFIG.3A, Int3of router110may establish a third path to SaaS provider(s)308via a private corporate network306c(e.g., an MPLS network) to a private data center or regional hub304which, in turn, provides connectivity to SaaS provider(s)308via another network, such as a third ISP306d. Regardless of the specific connectivity configuration for the network, a variety of access technologies may be used (e.g., ADSL, 4G, 5G, etc.) in all cases, as well as various networking technologies (e.g., public Internet, MPLS (with or without strict SLA), etc.) to connect the LAN of remote site302to SaaS provider(s)308. Other deployment scenarios are also possible, such as using Colo, accessing SaaS provider(s)308via Zscaler or Umbrella services, and the like. FIG.4Aillustrates an example SDN implementation400, according to various embodiments. As shown, there may be a LAN core402at a particular location, such as remote site302shown previously inFIGS.3A-3B. Connected to LAN core402may be one or more routers that form an SD-WAN service point406which provides connectivity between LAN core402and SD-WAN fabric404. For instance, SD-WAN service point406may comprise routers110a-110b. Overseeing the operations of routers110a-110bin SD-WAN service point406and SD-WAN fabric404may be an SDN controller408. In general, SDN controller408may comprise one or more devices (e.g., a device200) configured to provide a supervisory service, typically hosted in the cloud, to SD-WAN service point406and SD-WAN fabric404.
For instance, SDN controller408may be responsible for monitoring the operations thereof, promulgating policies (e.g., security policies, etc.), installing or adjusting IPsec routes/tunnels between LAN core402and remote destinations such as regional hub304and/or SaaS provider(s)308inFIGS.3A-3B, and the like. A primary networking goal may be to design and optimize the network to satisfy the requirements of the applications that it supports. So far, though, the two worlds of “applications” and “networking” have been fairly siloed. More specifically, the network is usually designed in order to provide the best SLA in terms of performance and reliability, often supporting a variety of Class of Service (CoS), but unfortunately without a deep understanding of the actual application requirements. On the application side, the networking requirements are often poorly understood even for very common applications such as voice and video for which a variety of metrics have been developed over the past two decades, with the hope of accurately representing the Quality of Experience (QoE) from the standpoint of the users of the application (i.e., the user experience). More and more applications are moving to the cloud and many do so by leveraging an SaaS model. Consequently, the number of applications that have become network-centric has grown approximately exponentially with the rise of SaaS applications, such as Office 365, ServiceNow, SAP, voice, and video, to mention a few. All of these applications rely heavily on private networks and the Internet, bringing their own level of dynamicity with adaptive and fast changing workloads.
On the network side, SD-WAN provides a high degree of flexibility allowing for efficient configuration management using SDN controllers with the ability to benefit from a plethora of transport access (e.g., MPLS, Internet supporting multiple CoS, LTE, satellite links, etc.), multiple classes of service and policies to reach private and public networks via multi-cloud SaaS. Furthermore, the level of dynamicity observed in today's network has never been so high. Millions of paths across thousands of Service Providers (SPs) and a number of SaaS applications have shown that the overall QoS(s) of the network in terms of delay, packet loss, jitter, etc. drastically vary with the region, SP, access type, as well as over time with high granularity. The immediate consequence is that the environment is highly dynamic due to:
New in-house applications being deployed;
New SaaS applications being deployed everywhere in the network, hosted by a number of different cloud providers;
Internet, MPLS, and LTE transports providing highly varying performance characteristics, across time and regions;
SaaS applications themselves being highly dynamic: it is common to see new servers deployed in the network. DNS resolution allows the network to be informed of a new server deployed in the network, leading to a new destination and a potential shift of traffic toward that destination without even being noticed.
According to various embodiments, application aware routing usually refers to the ability to route traffic so as to satisfy the requirements of the application, as opposed to exclusively relying on the (constrained) shortest path to reach a destination IP address. Various attempts have been made to extend the notion of routing, CSPF, link state routing protocols (ISIS, OSPF, etc.) using various metrics (e.g., Multi-topology Routing) where each metric would reflect a different path attribute (e.g., delay, loss, latency, etc.), but each time with a static metric.
At best, current approaches rely on SLA templates specifying the application requirements in order for a given path (e.g., a tunnel) to be “eligible” to carry traffic for the application. In turn, application SLAs are checked using regular probing. Other solutions compute a metric reflecting a particular network characteristic (e.g., delay, throughput, etc.) and then select the supposed ‘best path,’ according to the metric. The term ‘SLA failure’ refers to a situation in which the SLA for a given application, often expressed as a function of delay, loss, or jitter, is not satisfied by the current network path for the traffic of a given application. This leads to poor QoE from the standpoint of the users of the application. Modern SaaS solutions like Viptela, CloudonRamp SaaS, and the like, allow for the computation of per application QoE by sending HyperText Transfer Protocol (HTTP) probes along various paths from a branch office and then route the application's traffic along a path having the best QoE for the application. At first sight, such an approach may solve many problems. Unfortunately, though, there are several shortcomings to this approach:
The SLA for the application is ‘guessed,’ using static thresholds.
Routing is still entirely reactive: decisions are made using probes that reflect the status of a path at a given time, in contrast with the notion of an informed decision.
SLA failures are very common in the Internet and a good proportion of them could be avoided (e.g., using an alternate path), if predicted in advance.
In various embodiments, the techniques herein allow for a predictive application aware routing engine to be deployed, such as in the cloud, to control routing decisions in a network. For instance, the predictive application aware routing engine may be implemented as part of an SDN controller (e.g., SDN controller408) or other supervisory service, or may operate in conjunction therewith.
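The SLA check performed via regular probing, as described above, amounts to comparing probed path metrics against the thresholds of the application's SLA template. A minimal sketch, with hypothetical metric names and bounds:

```python
# Illustrative sketch of checking an application SLA against probing
# results. An 'SLA failure' occurs when any probed metric exceeds the
# bound in the SLA template. Names and thresholds are hypothetical.

def sla_satisfied(probe: dict, sla: dict) -> bool:
    """Return True if the probed path currently satisfies the SLA."""
    return (probe["delay_ms"] <= sla["delay_ms"]
            and probe["loss_pct"] <= sla["loss_pct"]
            and probe["jitter_ms"] <= sla["jitter_ms"])
```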
For instance,FIG.4Billustrates an example410in which SDN controller408includes a predictive application aware routing engine412(e.g., through execution of routing process248). Further embodiments provide for predictive application aware routing engine412to be hosted on a router110or at any other location in the network. During execution, predictive application aware routing engine412makes use of a high volume of network and application telemetry (e.g., from routers110a-110b, SD-WAN fabric404, etc.) so as to compute statistical and/or machine learning models to control the network with the objective of optimizing the application experience and reducing potential down times. To that end, predictive application aware routing engine412may compute a variety of models to understand application requirements, and predictably route traffic over private networks and/or the Internet, thus optimizing the application experience while drastically reducing SLA failures and downtimes. In other words, predictive application aware routing engine412may first predict SLA violations in the network that could affect the QoE of an application (e.g., due to spikes of packet loss or delay, sudden decreases in bandwidth, etc.). In other words, predictive application aware routing engine412may use SLA violations as a proxy for actual QoE information (e.g., ratings by users of an online application regarding their perception of the application), unless such QoE information is available from the provider of the online application. In turn, predictive application aware routing engine412may then implement a corrective measure, such as rerouting the traffic of the application, prior to the predicted SLA violation. For instance, in the case of video applications, it now becomes possible to maximize throughput at any given time, which is of utmost importance to maximize the QoE of the video application. 
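The predictive step described above, predicting an SLA violation and rerouting the application's traffic before it occurs, can be sketched as follows. The trend-based predictor and the reroute threshold are assumptions for illustration only.

```python
# Illustrative sketch of proactive rerouting: estimate the probability of
# an SLA violation on the current path and reroute in advance when that
# probability is high. The predictor and thresholds are hypothetical.

def predict_violation_prob(delay_samples_ms, sla_delay_ms):
    """Crude predictor: fraction of recent delay samples past the SLA bound."""
    if not delay_samples_ms:
        return 0.0
    return sum(1 for d in delay_samples_ms if d > sla_delay_ms) / len(delay_samples_ms)

def choose_path(current_path, alternate_path, delay_samples_ms,
                sla_delay_ms=150.0, reroute_threshold=0.5):
    """Reroute the application's traffic prior to the predicted violation."""
    if predict_violation_prob(delay_samples_ms, sla_delay_ms) >= reroute_threshold:
        return alternate_path
    return current_path
```

In a real deployment, the predictor would be one of the statistical or machine learning models computed from network and application telemetry, rather than a simple sample count.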
Optimized throughput can then be used as a service triggering the routing decision for specific applications requiring the highest throughput, in one embodiment. In general, routing configuration changes are also referred to herein as routing “patches,” which are typically temporary in nature (e.g., active for a specified period of time) and may also be application-specific (e.g., for traffic of one or more specified applications). As would be appreciated, modern SaaS applications are typically delivered globally via public cloud infrastructure using cloud-native services. Even though public cloud providers may have a high number of points of presence (PoPs) and use those to deliver the application globally, testing has shown that user quality of experience (QoE) may vary greatly based on the location of the user. This is because all public cloud providers are delivering services which are region-based and applications are running in specific region(s) and location(s). Indeed, even though it might seem that an online application is global (e.g., because of its use of globally-available CloudFront POPs, etc.), in reality it might run in a single region/location and user experience might vary greatly based on the location. To determine the QoE for a particular online/SaaS application, various approaches are possible, such as:
Obtaining user feedback directly from the application.
Applying traffic analytics, such as by analyzing Netflow records that include extra metrics like Application Response Time (ART).
Sending synthetic path probes to measure networking metrics to each SaaS application from each location. These probes are ‘synthetic’ in that they seek to mimic the actual characteristics of the traffic of the application under scrutiny.
Using hand-crafted heuristics based on domain expertise and other quantities (e.g., the concealment time).
In various embodiments, predictive application aware routing engine412may make use of any or all of the above approaches.
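Combining several of the QoE-estimation approaches above into a single score might be sketched as follows; the weights, normalizations, and input names are hypothetical assumptions, not part of any embodiment.

```python
# Illustrative sketch of blending direct user feedback, traffic analytics
# (e.g., Application Response Time), and synthetic probe metrics into one
# QoE estimate in [0, 1]. Normalization constants are hypothetical.

def qoe_estimate(user_score=None, art_ms=None, probe_loss_pct=None):
    """Average whichever QoE signals are available for the application."""
    parts = []
    if user_score is not None:          # direct feedback on a 1-5 scale
        parts.append((user_score - 1) / 4.0)
    if art_ms is not None:              # Application Response Time
        parts.append(max(0.0, 1.0 - art_ms / 1000.0))
    if probe_loss_pct is not None:      # synthetic probe packet loss
        parts.append(max(0.0, 1.0 - probe_loss_pct / 20.0))
    if not parts:
        raise ValueError("no QoE inputs available")
    return sum(parts) / len(parts)
```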
For instance, predictive application aware routing engine412may make use of an application programming interface (API) for a particular online application, allowing it to obtain application experience/QoE metrics directly from the application. Such metrics may be combined with probing results and/or path telemetry. This is in sharp contrast to network-centric approaches that do not necessarily reflect the true user experience. As would be appreciated, direct user feedback regarding their application experience provides the truest measure of the QoE of the application and the only real form of ground truth. However, there are various reasons that user feedback is typically not collected, such as the following:
User feedback may be biased and influenced by subjective factors, such as the expectations of a user based on their previous experience.
Gathering user feedback in a simplistic manner, such as asking users to score their satisfaction with the application on a scale of 1-5, may not be enough to make meaningful inferences about the network. Indeed, multiple questions may need to be asked, or certain context captured, to obtain a complete view of the performance of the network and the application.
Vendors are very afraid of “annoying” users by asking for feedback.
Even in cases in which an application does ask its users for feedback, this is typically done at predefined times, such as at the very end of a videoconferencing call. This time, though, is often inconvenient for users, who might already be rushing to their next meeting or simply needing a break. In addition, doing so completely decouples the feedback from when the disruption(s) actually occurred.
Opportunistic User Feedback Gathering for Application-Aware Routing
The techniques herein introduce mechanisms to request user feedback regarding an online application at the right time, to be able to accurately find the root cause of a potential issue.
In some aspects, the techniques herein may also control when, how, and to whom user feedback requests are sent by the system. Illustratively, the techniques described herein may be performed by hardware, software, and/or firmware, such as in accordance with user feedback gathering process249, which may include computer executable instructions executed by the processor220(or independent processor of interfaces210) to perform functions relating to the techniques described herein, such as in conjunction with routing process248. Specifically, according to various embodiments, a device identifies a potential change in user experience of an online application. The device selects, based on the potential change in user experience, a set of one or more users of the online application. The device obtains, from the set of one or more users of the online application, feedback regarding their experience with the online application. The device uses the feedback obtained from the set of one or more users of the online application to make a routing decision in a network regarding traffic of the online application. Operationally,FIG.5illustrates an example architecture500for opportunistic user feedback gathering, according to various embodiments. At the core of architecture500is user feedback gathering process249, which may be executed by a controller for a network, a server, or another device in communication therewith. For instance, user feedback gathering process249may be executed by a controller for a network (e.g., SDN controller408inFIGS.4A-4B), a particular networking device in the network (e.g., a router, etc.), another device or service in communication therewith, or the like. In some embodiments, user feedback gathering process249may be used to implement a predictive application aware routing engine, such as predictive application aware routing engine412.
As shown, user feedback gathering process249may include any or all of the following components: a feedback triggering engine502, a user selector504, and/or a feedback requestor506. As would be appreciated, the functionalities of these components may be combined or omitted, as desired. In addition, these components may be implemented on a singular device or in a distributed manner, in which case the combination of executing devices can be viewed as their own singular device for purposes of executing user feedback gathering process249. In various embodiments, feedback triggering engine502may be responsible for determining when feedback should be solicited regarding the user experience of a particular online application, in an opportunistic manner. To do so, feedback triggering engine502may interact with any or all of the following:
The online application itself or a monitoring agent associated with the application, to obtain performance metrics regarding the application and/or any QoE metrics captured by the application. Example metrics that feedback triggering engine502may obtain in this manner may include mean opinion score (MOS) data, user feedback, application-specific parameters (e.g., a frame rate, a concealment time, etc.), or the like. As would be appreciated, user feedback may have different formats, such as a categorical label (e.g., Excellent, Good, Average, Poor) or a scalar value (from X to Y).
Networking entities in the network, to obtain performance metrics regarding the network (and its paths) via which traffic for the application is sent. For instance, feedback triggering engine502may obtain an indication from a network controller as to a sudden degradation in one or more path metrics (e.g., delay, loss, jitter, throughput, etc.) along a path (e.g., via the Internet, SD-WAN, etc.). Such information may include raw telemetry data (e.g., Netflow records, probing results, etc.) or a summary derived therefrom (e.g., an SLA violation notification).
Other information that feedback triggering engine502may also obtain in this manner may further include change of route notifications (e.g., BGP updates). Based on the information obtained by feedback triggering engine502, it may decide to trigger the collection of feedback by one or more users of the online application as to their experience(s) with the application. In various embodiments, feedback triggering engine502may do so in response to identifying a potential change in the user experience. For instance, such a potential change in the user experience may be indicated by a change in the performance metrics of the network or a particular path. In other cases, feedback triggering engine502may identify the potential change based on a change in the operation of the application, such as when a videoconferencing application automatically decreases the framerate mid-call. In various embodiments, user selector504is responsible for selecting one or more users from which feedback is to be solicited regarding their experience with the online application. For instance, in response to a signal from feedback triggering engine502that a potential change has occurred in the QoE of the application, user selector504may identify which users are potentially affected and determine which of them should be prompted to provide feedback. Example factors that user selector504may use in its selection may include, but are not limited to, any or all of the following:
The location of the user and their endpoint device. For instance, user selector504may select a user to query for feedback based on their endpoint device being along a network path experiencing performance degradation, being in the middle of a session for which operation of the application has changed, or the like.
In a further embodiment, user selector504may also select a user to query for feedback based on their proximity to another user whose endpoint device meets any of these criteria.
Information indicative of the mood of the user. Such information may include, for example, the cadence or frequency at which the user has been asked to provide feedback in the past, whether the user actually supplied the requested feedback, the current activities of the user (e.g., their web actions, clicks, mouse hover-over actions, etc.), an indication by the user as to their amenability to provide feedback (e.g., via an emoticon that symbolizes “do not ask me again,” “ask me again later,” etc.), or the like.
In addition to selecting which user(s) to query for feedback regarding their experiences with the application, user selector504may also control one or more parameters of the feedback request sent to those user(s), in various embodiments. One such parameter may control, for instance, which question or questions are asked of a given user as part of the feedback request. In some embodiments, user selector504may select a question based in part on the type of application experience degradation that is suspected (e.g., degraded voice quality, degraded video quality, slow response time, etc.) and/or the suspected root cause of the degradation. Further parameters of the feedback request selected by user selector504may also control the amount of time that a given request is presented to a user (e.g., a popup that lasts x-number of seconds), the type of feedback being requested (e.g., categorical vs. scalar), and the like. Once user selector504has determined who to query for feedback and how, feedback requestor506may cause feedback requests to be sent to the endpoint devices operated by those one or more users, in various embodiments.
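The triggering and user-selection logic described above can be sketched together as follows. The thresholds, field names, and selection rules are assumptions for illustration, not part of any embodiment.

```python
# Illustrative sketch of opportunistic feedback triggering and user
# selection. All names and thresholds are hypothetical.

def potential_qoe_change(prev, curr,
                         loss_jump_pct=10.0, framerate_drop=0.5):
    """Triggering: detect a sudden path-metric degradation or a change in
    the operation of the application (e.g., a mid-call framerate drop)."""
    if curr["loss_pct"] - prev["loss_pct"] >= loss_jump_pct:
        return True
    if curr["framerate"] <= prev["framerate"] * framerate_drop:
        return True
    return False

def select_users(users, affected_path, max_asks=2):
    """Selection: query only users on the affected path who have not opted
    out and have not been asked too often (mood/cadence factors)."""
    return [u["id"] for u in users
            if u["path"] == affected_path
            and not u.get("opted_out", False)
            and u.get("asks_today", 0) < max_asks]
```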
According to various embodiments, feedback requests may be presented to the selected user(s) via any or all of the following mechanisms:
Directly within the application, if so supported. To do so, feedback requestor506may signal to the application that feedback request(s) should be presented to the indicated user(s) and the parameters for the request(s).
Through a mobility client installed on the endpoint(s) of the selected user(s). In general, such a mobility client may be used by an enterprise network to extend its network perimeter to remote endpoint devices. For instance, a mobility client may be responsible for establishing a virtual private network (VPN) connection, performing certain security checks, or the like, at the endpoint device. Prompting a user for application feedback via a mobility client has the advantage of the client already running with high privileges and being able to seek feedback for any application currently being used. Such a request may take the form, for instance, of a pop-up window or part of an existing user interface. In addition, interfacing with a mobility client also allows user feedback gathering process249the potential to collect details about the endpoint device (e.g., its Wi-Fi details, VPN information, CPU utilization, etc.) and/or network details from the standpoint of the endpoint device.
Via a browser API. Such a mechanism may allow feedback requestor506to request user feedback regarding their application experience (e.g., through the use of a plugin installed within a web browser). For instance, ThousandEyes Endpoint represents one example browser plugin that could be extended for this purpose. In some instances, this approach also allows for the collection of data indicative of the mood of the user, such as their activities (e.g., web actions, clicks, mouse hover-overs, etc.).
This information could be used with a behavioral profile for the user, to infer the mood of the user, either as an indicator of their satisfaction with the application or their amenability to being asked to provide explicit feedback regarding the application. Of course, the browser plugin could also be used to request such explicit feedback, such as via a pop-up window or the like. Feedback requests may also be presented via instrumentation of the application. As would be appreciated, online applications are increasingly leveraging monitoring solutions that rely on injecting certain code into their application for purposes of real-user monitoring (RUM), application performance monitoring, security monitoring, and the like. For instance, AppDynamics operates by injecting JavaScript code into the application (e.g., for execution by the browser itself), for purposes of monitoring the application. Such a mechanism could also be extended to prompt a user to provide application experience feedback. This approach has the advantages of being application independent and could work with any number of different applications. It also could be centrally configured and managed. FIG.6illustrates an example user interface600to gather user feedback regarding an online application, according to various embodiments. As shown, user interface600may take the form of a popup presented to a user via a mobility client, such as Cisco AnyConnect. Here, since the mobility client is executed concurrently with any number of online applications, user interface600may ask the user to provide feedback regarding their experience/satisfaction with multiple applications, such as Webex and Office365 (O365), as shown. Referring again toFIG.5, feedback requestor506may also be responsible for receiving and/or aggregating the user feedback that results from its requests. In turn, user feedback gathering process249may make the feedback available to any number of data consumers.
For instance, in some embodiments, user feedback gathering process249may provide the obtained user feedback to routing process248for purposes of making routing decisions for network traffic associated with the application. In a predictive routing implementation, for example, routing process248may use the experience feedback to predict when the network conditions are likely to result in degraded application experience and make routing decisions, accordingly (e.g., by rerouting the application traffic in advance of the predicted degradation). In further instances, user feedback gathering process249may make the application feedback available for review by an administrator or other interested party, such as via a SaaS application portal, a user interface, or the like. For instance, user feedback gathering process249may indicate the user feedback and the corresponding symptoms of the degraded experience for review (e.g., response time for the application is too high, video quality is poor, but voice quality is good, etc.). In some embodiments, the operations of feedback triggering engine502, user selector504, and/or feedback requestor506may be controlled by rules that are either predefined or set by an administrator. Such rules may control under which conditions user feedback is to be obtained, how it is to be obtained, and the like. By way of example of the operation of user feedback gathering process249, assume that the SharePoint application is being used by a user to edit an online document without any significant issues. At some point, the user is sharing the same document with four other participants who start to edit it, simultaneously. Based on the information available to it, feedback triggering engine502may be aware of the details of the additional users (location, other details) and suspect that the application QoE might be degraded. In turn, user selector504may opt to request feedback from the users in the least intrusive manner possible. 
For instance, the SharePoint app could display a thumb up/down query in its top-right corner for the next 10 seconds (and potentially fading away, slowly). Since a narrow time window was chosen, this makes it much more feasible to correlate the feedback with the exact event causing the potential QoE degradation. The user is also likely to appreciate being asked for feedback at the time at which the QoE starts to deteriorate, as it shows the system was intelligent enough to ask and confirm that the application experience was actually degraded. By way of another example, assume that WebEx is being used to host a video conference between two users located in the Europe, the Middle East, and Africa (EMEA) region. Everything is working as expected until a new user joins the conference from the U.S.A. and starts to share their screen. This could be due, for instance, to the new user having to use WebEx resources in the EMEA region. In turn, the WebEx application codec may respond by decreasing video/screen sharing resolution, but exhibit a high concealment time. Based on this, user feedback gathering process249may elect to ask the user for feedback, such as by asking the new user to rate their experience as good or bad (e.g., “thumbs up” or “thumbs down”) via an option that appears on a voice icon for 10 seconds. Optionally, the type of requested feedback could also depend on the probable root cause for the suspected QoE degradation.
For example, feedback triggering engine 502 may have received a notification of degraded QoE (e.g., poor voice), or a path metric degradation (e.g., detection of a sudden packet loss above 40% for three minutes), in which case user feedback gathering process 249 may narrowly ask the user "What is your user experience?" and "Are you experiencing poor voice quality?" The quality of the questions/feedback request (e.g., the symptom) may help to enhance the user's overall perception of the application, especially if it is indicated that the application is not to blame for the degradation. For example, if the feedback request asks "are you experiencing poor voice quality (probably due to a service provider issue)," the user may be more tolerant of the poor QoE. Note that the use of rules also allows for the reduction of the required user feedback (e.g., if the QoE is good, there is no need to request user feedback), except in cases where pure exploration is desired. FIG. 7 illustrates an example simplified procedure 700 (e.g., a method) for opportunistic user feedback gathering, in accordance with one or more embodiments described herein. For example, a non-generic, specifically configured device (e.g., device 200), such as a networking device (e.g., a router, etc.), a server, a network controller, or other device in communication therewith, may perform procedure 700 by executing stored instructions (e.g., user feedback gathering process 249). The procedure 700 may start at step 705 and continue to step 710, where, as described in greater detail above, the device may identify a potential change in user experience of an online application. In some embodiments, the device may do so by identifying a change in performance of a path in the network via which the traffic of the online application is conveyed (e.g., a change in its packet loss, jitter, latency, throughput, etc.).
In other embodiments, the device may do so by receiving an indication that a video resolution of the online application has decreased or a concealment time of the online application has increased. At step 715, as detailed above, the device may select, based on the potential change in user experience, a set of one or more users of the online application. In some embodiments, the device also selects the set of one or more users of the online application based in part on their location. In further embodiments, the device selects the set of one or more users of the online application based in part on their prior responses to requests for feedback regarding their experience with the online application. In some embodiments, the device may also select a parameter of a feedback request sent to one or more user interfaces associated with the set of one or more users of the online application. In some embodiments, the parameter controls which question is asked by the feedback request. In further embodiments, the parameter controls a duration of time during which the feedback request is presented by the one or more user interfaces. At step 720, the device may obtain, from the set of one or more users of the online application, feedback regarding their experience with the online application, as described in greater detail above. In various embodiments, the feedback is obtained via one of: a JavaScript-injected popup, a browser application programming interface (API), or a mobility client. At step 725, as detailed above, the device may use the feedback obtained from the set of one or more users of the online application to make a routing decision in a network regarding traffic of the online application. In some embodiments, the device may do so by using the feedback to predict a decrease in the user experience of the online application based on one or more performance metrics from the network. Procedure 700 then ends at step 730.
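The steps of procedure 700 can be sketched as follows. This is an illustrative mock-up only: the function names, thresholds (e.g., 40% packet loss), and the majority-vote routing decision are assumptions for the example, not the patent's actual implementation.

```python
def identify_potential_change(path_metrics, app_telemetry):
    """Step 710: flag a potential change in user experience from path
    metrics (loss/jitter/latency) or application signals."""
    return (path_metrics.get("packet_loss_pct", 0) > 40
            or app_telemetry.get("video_resolution_dropped", False)
            or app_telemetry.get("concealment_time_increased", False))

def select_users(users, max_users=2):
    """Step 715: pick a small set of users, here preferring those who
    responded to prior feedback requests (selection policy is assumed)."""
    ranked = sorted(users, key=lambda u: u.get("prior_responses", 0), reverse=True)
    return ranked[:max_users]

def gather_feedback(selected, ask):
    """Step 720: obtain feedback (e.g., via a popup, browser API, or
    mobility client, abstracted here as a callable)."""
    return [(u["id"], ask(u)) for u in selected]

def make_routing_decision(feedback):
    """Step 725: reroute the application traffic if feedback is mostly negative."""
    negative = sum(1 for _, ok in feedback if not ok)
    return "reroute" if negative > len(feedback) / 2 else "keep-path"

users = [{"id": "alice", "prior_responses": 3}, {"id": "bob", "prior_responses": 0}]
if identify_potential_change({"packet_loss_pct": 45}, {}):
    fb = gather_feedback(select_users(users, 1), ask=lambda u: False)  # thumbs-down
    decision = make_routing_decision(fb)
```

The decision logic here is deliberately simple; an actual predictive routing engine would combine the feedback with network performance metrics, as the text describes.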
It should be noted that while certain steps within procedure 700 may be optional as described above, the steps shown in FIG. 7 are merely examples for illustration, and certain other steps may be included or excluded as desired. Further, while a particular order of the steps is shown, this ordering is merely illustrative, and any suitable arrangement of the steps may be utilized without departing from the scope of the embodiments herein. While there have been shown and described illustrative embodiments that provide for opportunistic user feedback gathering for application-aware routing, it is to be understood that various other adaptations and modifications may be made within the spirit and scope of the embodiments herein. For example, while certain embodiments are described herein with respect to using certain models for purposes of predicting application experience metrics, SLA violations, or other disruptions in a network, the models are not limited as such and may be used for other types of predictions, in other embodiments. In addition, while certain protocols are shown, other suitable protocols may be used, accordingly. The foregoing description has been directed to specific embodiments. It will be apparent, however, that other variations and modifications may be made to the described embodiments, with the attainment of some or all of their advantages. For instance, it is expressly contemplated that the components and/or elements described herein can be implemented as software being stored on a tangible (non-transitory) computer-readable medium (e.g., disks/CDs/RAM/EEPROM/etc.) having program instructions executing on a computer, hardware, firmware, or a combination thereof. Accordingly, this description is to be taken only by way of example and not to otherwise limit the scope of the embodiments herein.
Therefore, it is the object of the appended claims to cover all such variations and modifications as come within the true spirit and scope of the embodiments herein.
11863433

DESCRIPTION OF EXAMPLE EMBODIMENTS

Additional features and advantages of the disclosure will be set forth in the description which follows, and in part will be obvious from the description, or can be learned by practice of the herein disclosed principles. The features and advantages of the disclosure can be realized and obtained by means of the instruments and combinations particularly pointed out in the appended claims. These and other features of the disclosure will become more fully apparent from the following description and appended claims, or can be learned by the practice of the principles set forth herein.

Overview

Systems, methods and computer-readable storage media are disclosed for scalable and targeted collection of in-situ Operation, Administration, and Maintenance (iOAM) data in a programmable way in a Segment Routing context. In some examples, a method can involve encoding an iOAM instruction as a local argument in the function field of one or more Segment Identifiers (SIDs) selected from a listing of Segment Identifiers (SID list) specified in the segment routing header of a packet. The one or more SIDs in the SID list of the segment routing header, which feature an iOAM argument bit in their respective function fields, can correspond to one or more Segment Routing nodes selected for iOAM data collection. In some examples, this may be achieved by setting an iOAM bit in the function argument field of one or more Segment Identifiers in the Segment Identifier list. The method can further involve sending the packet to the one or more segment routing nodes based on the segment routing header, receiving a packet containing the iOAM data from the one or more Segment Routing nodes selected for iOAM data collection, and processing the iOAM data from the one or more Segment Routing nodes selected for iOAM data collection.
According to some examples, the iOAM data from the one or more targeted Segment Routing nodes can be inserted into one or more Type, Length, Value (TLV) fields of the segment routing header of the packet. An egress Segment Routing node can extract the Segment Routing header, which includes the collected iOAM data from the selected Segment Routing nodes, and send the information to a controller entity for further processing, analysis and/or monitoring. The egress segment routing node may forward the user data packet (e.g., a remaining portion of the Segment Routing encapsulated packet) towards its intended destination. Alternatively, the one or more Segment Routing nodes selected for iOAM data collection may insert the generated iOAM data into a duplicate copy of the Segment Routing header. The duplicate copy with the iOAM information included therein is sent to a controller entity using a collector mechanism. The targeted/tapped Segment Routing nodes may then forward the Segment Routing packet with the header-embedded iOAM probes onto the next hop along the Segment Routing path specified in the SID list. In some examples, a Segment Routing ingress router that encapsulates the incoming packet with the segment routing header may be used to encode iOAM probe(s) in the function field (or the local SID) of one or more Segment Identifiers selected from the entries in the SID list of the segment routing header. In other examples, selecting target segment routing nodes for iOAM data collection may be performed by a Segment Routing Policy Headend router serving as a controller entity for both selective iOAM probing and iOAM data collection from probed Segment Routing nodes. The encapsulating Segment Routing ingress router and/or the controller entity may programmably change, for example in a round-robin fashion, the one or more Segment Routing nodes selected from the SID list of the segment routing header for iOAM probing.
Example Embodiments

Disclosed are systems, methods, and non-transitory computer-readable storage media for scalable, programmable in-situ OAM implementation in a Segment Routing context. Various embodiments of the disclosure are discussed in detail below. While specific implementations are discussed, it should be understood that this is done for illustration purposes only. A person skilled in the relevant art will recognize that other components and configurations may be used without departing from the spirit and scope of the disclosure. Operations, administration and management (OAM) refers to a set of processes, activities, tools and standards involved with operating, administering, managing and maintaining telecommunication systems and computer networks/hardware. It is often involved with fault management and performance monitoring and may include measurements such as frame loss ratio, frame delay, frame delay variation, and continuity checks to assist with Service Level Agreement (SLA) compliance and capacity planning. OAM functionality generally involves a set of network management tools and functions that provide network fault indication, fault localization, performance information, and data and diagnostic functions. These operations may involve automatic monitoring of the environment, detecting and determining faults and alerting administrators, collecting performance statistics, accounting data for the purpose of billing, capacity planning based on usage data, and maintaining system reliability. As such, OAM functionality enables effective fault detection, verification, isolation and notification in carrier networks. In-situ OAM (iOAM) can provide real-time telemetry for individual data packets and flows. iOAM can include operational and telemetry data and metadata embedded within live user traffic (packets that originate and terminate at the application layer). In iOAM, operational information may be recorded in the packet as it traverses a path between two points in the network.
As described herein, "In-situ OAM" can be implemented in an IPv6 Segment Routed (SRv6) network by carrying appropriate data fields in the Type Length Value (TLV) fields of a segment routing header (SRH). A bit may be defined in the segment routing header that, when set, enables in-situ OAM data collection. The present technology describes methods and systems for selective probing and collection of iOAM data from programmably selected target nodes in a scalable fashion. The present technology obviates the need to monitor a bit in each incoming packet, as the instruction to perform the iOAM function is encoded in the argument field of the SRv6 SID function. This way, only nodes for which the local SID has the iOAM argument set will insert the iOAM data. A controller entity (SR policy headend) or an SR encapsulating ingress node may programmably change the iOAM target nodes or iOAM tapping points in order to construct the entire picture or model of how data is traveling in the network, thus providing scalable and programmable in-situ OAM data collection. Segment Routing (SR) allows a node to steer a packet through a controlled set of instructions, called segments, by prepending a segment routing header (SRH) to the packet. A segment can represent any (forwarding) instruction, topological or service-based. Segment Routing allows for steering of a flow through any path (topological or service/application based) while maintaining per-flow state only at the ingress node of the SR domain. Segments can be derived from different components: IGP, BGP, Services, Contexts, Locators, etc. The list of segments defining an end-to-end forwarding path of the flow packets is called the Segment List and is encoded in the SRH of the packet. In the IPv6 Segment Routing architecture, a Segment Identifier (SID) may be represented as an IPv6 address modeled as a Locator and a Function.
The Locator, as represented by the most significant bits of the address, is used to route the packet to the associated segment (i.e., the node corresponding to the segment). The Function, as represented by the least significant bits of the address, may be used to identify the action to be performed by the segment (i.e., the node corresponding to the segment). Optionally, the function bits may include local arguments, which are encoded in the last bits of the address. The specific address format (i.e., the number of bits allocated to each field) is entirely flexible as it may be defined locally by the parent node. SID reachability is made possible by advertising the locator prefix within the routing protocol domain. Treatment of an OAM operation as a SID function, as disclosed by some embodiments of the present technology, enables the implementation of a programmable in-situ OAM. Consequently, instead of only providing a global end-to-end behavior, service providers may control OAM features on a node-by-node basis, enabling specific OAM operations to be performed on selected node(s). An iOAM-enabled Segment Identifier includes one or more iOAM argument bits in the Function field of the Segment Identifier. This may include flipping a bit in the appropriate argument portion of the Segment Identifier's Function field. FIG. 1 illustrates an example format for a Segment Identifier (SID) 100 carrying an iOAM instruction. The Segment Identifier 100 includes a routable Locator field 102. As stated above, the locator information is encoded by the first most significant bits of the Segment Identifier, represents an address of a particular Segment Routing node (the parent node of the local SID), and is therefore used for routing in a Segment Routing domain. The remaining SID bits constitute the Function field 104, which identifies the function that is executed locally on the particular node specified by the locator bits.
The Function field 104 further comprises a portion 106 for identifying the type of operation to be performed (Op Code) and a portion 108 for storing one or more parameters/arguments that may be required for performing the operation identified by the Op Code (i.e., arguments passed to the function). Presence of the iOAM argument bit(s) 110 in the argument portion 108 of the Function field 104 prompts the targeted Segment Routing node to take a specific action, such as cloning the Function (104) with iOAM data operations. Segment Routing deployments can be used to deliver customized services with stringent performance requirements, details of which may be explicitly set forth in a service level agreement (SLA). Ensuring that such service-level guarantees are met may require routine monitoring to verify that a forwarding path across the network is in compliance with the implemented Segment Routing policy and the provisions of the associated Service Level Agreement. To address this requirement, iOAM probing may be implemented to verify a particular Segment Routing policy by monitoring the live data as it is steered across the Segment Routing path. However, implementing this in hardware amounts to enabling an in-situ OAM probe on all transit nodes, which may potentially affect the timing of the actual traffic stream being probed. Therefore, some performance penalties may result from the performance measurement operation itself. Additionally, the aforementioned hardware-implemented iOAM probing scheme may incur further hardware performance penalties, as the examination of header information is performed for all incoming packets, regardless of whether in-situ OAM is enabled or not. In the context of Segment Routing based IPv6 (SRv6) networks, iOAM data probing and collection may involve provisioning iOAM data fields in the Type Length Value (TLV) field of the segment routing header.
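The Locator/Op Code/argument layout of FIG. 1 can be modeled as bit fields in a 128-bit integer. Since the patent leaves the exact bit allocation to the parent node, the 64-bit locator, 48-bit opcode, and 16-bit argument split below is purely an assumption for illustration:

```python
IOAM_ARG_BIT = 1 << 0  # assumed position of the iOAM argument bit (110)

def make_sid(locator, opcode, args=0):
    """Pack Locator (102) and Function (104) = Op Code (106) + args (108)
    into a 128-bit SID value, using the assumed 64/48/16 split."""
    return (locator << 64) | (opcode << 16) | args

def with_ioam(sid):
    """Clone a SID with the iOAM argument bit set (e.g., A2:: -> A2(1))."""
    return sid | IOAM_ARG_BIT

def ioam_enabled(sid):
    """A targeted node checks this bit before inserting iOAM data."""
    return bool(sid & IOAM_ARG_BIT)

a2 = make_sid(locator=0xA2, opcode=0x1)  # regular SID
a2_probe = with_ioam(a2)                 # iOAM-augmented clone
```

Because the iOAM instruction lives in the argument bits, the clone differs from the original SID only in the argument portion: routing on the locator prefix is unaffected.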
Moreover, the iOAM probing operation may involve setting a bit (i.e., the O-flag) defined in the segment routing header, which when set indicates that iOAM data collection is enabled. This approach, however, is also susceptible to the performance constraints discussed above. Generally, in-situ OAM (iOAM) data collection is expected to be deployed in a specific domain rather than on the overall Internet. The part of the network which employs iOAM is referred to as the iOAM domain. In-situ OAM data is added to a packet upon entering the iOAM domain and is removed from the packet when exiting the domain. Within the iOAM domain, the iOAM data may be updated by network nodes that the packet traverses. The device which adds an iOAM data container to the packet to capture iOAM data is called the iOAM encapsulating node, whereas the device which removes the iOAM data container is referred to as the iOAM decapsulating node. Nodes within the domain which are aware of iOAM data and read and/or write or process the iOAM data are called iOAM transit nodes. Restricting the proliferation boundary of iOAM in this way serves to contain the iOAM signaling and data transport traffic, along with the resulting processing load, within the iOAM domain, thus keeping it away from the rest of the network. The present technology enables selective collection of iOAM data from target nodes in a programmable fashion. Embodiments of the present technology obviate the need to monitor a bit (the iOAM flag) in each incoming packet, as the instruction to perform the iOAM function is encoded in the argument field of the SRv6 SID function. This way, only nodes for which the local SID has the iOAM argument set will insert iOAM data. According to some embodiments, an SRv6 Policy headend/controller entity may programmably change the iOAM tapping points (devices/nodes selected for iOAM collection) to construct a comprehensive picture of how data is traveling in the network.
Some aspects involve a programmable iOAM implementation that enables a user/operator to specifically select the node from which to collect the desired iOAM data. The iOAM data may be injected into the header of the data packet by the specified node as it forwards the packet onto its next hop. Alternatively, a duplicate copy of the packet with the iOAM information inserted therein may be sent to a device, such as a controller entity, using an appropriate collector mechanism such as Netflow/IPFIX. An example implementation of programmable iOAM is illustrated in FIG. 2. FIG. 2 illustrates an example Segment Routing path/policy 202 where traffic coming from a User Equipment (i.e., a smart phone) denoted as Node 0 is steered into a Segment Routing policy at the gNodeB (Node 1). The Segment Routing policy steers the traffic via User Plane Function 1 (Node 2) to the Traffic Engineering or service chaining node (Node 3) and finally terminates the flow at User Plane Function 2 (Node 4), which is the end point of the Segment Routing policy 202. The ingress Segment Routing node (Node 1) encapsulates the incoming flow packet from the User Equipment (Node 0) into an SRv6 packet 203. The SRv6 packet 203 comprises an outer IPv6 header 204 which further contains the segment routing header 208. The original user data packet 210 is left unmodified as the payload. The Source Address (SA) of the packet is the ingress node; the Destination Address (DA) is set to the first segment of the path, which corresponds to Node 2 in the example Segment Routing path 202. With reference to the notations included in the segment routing header in FIG. 2, node SIDs are represented as an alphabetic letter followed by the node number. For example, Nodes 2, 3 and 4 correspond to SIDs A2, A3 and A4, respectively. The notation A behind the node number indicates that the node/router is Segment Routing capable. The notation B behind the node number indicates that the node is a classic IPv6 transit node.
The segment routing header (SRH) 208 includes a Segment Identifier (SID) list which corresponds to a list of segments that define the steering path of the packets (i.e., the Segment Routing path 202 in FIG. 2). The Segment List is encoded starting from the last segment of the path (i.e., the first element of the segment list (A4) corresponds to the last segment of the path (Node 4), the second element (A3::C34) contains the penultimate segment of the path (Node 3), and so on). The identifier C34 attached to A3 (the Locator SID portion) specifies a function to be performed locally at Node 3. The function or operation denoted by C34 may include switching the packet onto a specific outgoing interface or adjacency link that connects to Node 4 (the last segment of the path and the egress node of the Segment Routing policy domain 202). Therefore, the SID list representing the Segment Routing path 202 is expressed in the segment routing header 208 as (A4::, A3::C34, A2::). The Segments Left (SL) parameter encoded in the segment routing header 208 represents a pointer to the active segment in the Segment List, and is decremented at each segment. Therefore, the encapsulating node (Node 1) sets the numerical value of the SL parameter in the Segment Routing header 208 to 2. This identifies Node 2 as the next segment along the Segment Routing path. The O-bit in the segment routing header 208 represents an OAM flag which, when set, indicates that the present packet is an operations and management (OAM) packet. According to embodiments of the present technology, any node across the Segment Routing path may be tapped for iOAM data collection. The Segment Routing (SRv6) packet 212 in FIG. 2 corresponds to a scenario whereby a user/operator wishes to collect iOAM data, on the Segment Routing policy path 202, only at the point of flow transit through User Plane Function 1 (UPF1) at Node 2.
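The reverse-order encoding and the Segments Left pointer can be sketched with a minimal model. The dict fields below are illustrative stand-ins, not the on-wire SRH format:

```python
def encapsulate(path):
    """Build an SRH model for a forward-order path such as FIG. 2's
    Node 2 -> Node 3 -> Node 4. The SID list is stored last-segment-first
    and SL initially points at the first segment of the path."""
    return {"segments": list(reversed(path)),
            "segments_left": len(path) - 1}

def active_segment(srh):
    """The SID indexed by SL is the packet's current destination."""
    return srh["segments"][srh["segments_left"]]

def advance(srh):
    """Each segment endpoint decrements SL; SL == 0 means the last segment."""
    srh["segments_left"] -= 1

srh = encapsulate(["A2::", "A3::C34", "A4::"])
```

With this model, the encapsulated list comes out as (A4::, A3::C34, A2::) with SL = 2, matching the encoding described above for path 202.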
In order to enable iOAM data tapping only at User Plane Function 1 (UPF1) on Node 2 (i.e., to collect iOAM data only from Node 2), the ingress Segment Routing node (Node 1) modifies the Segment Identifier (SID) list in the segment routing header 216 to <A4::, A3::C34, A2(1)::>. The referenced portion 217 in the IPv6 header 218 indicates that the SR capable Node 2 has the iOAM argument bit enabled. The Segment Routing (SRv6) packet 212 is such that the iOAM argument bit is only enabled on Node 2, as denoted by its modified SID Function notation A2(1). A2(1) is a clone of the A2:: SID with the iOAM data collection bit enabled via a bit in the argument field of the A2:: SID function. Such a probe, as illustrated by 212, will collect iOAM data only from Node 2. An example of enabling iOAM data tapping only at the Traffic Engineering or service Node 3 is illustrated in FIG. 3. The example corresponds to the same Segment Routing path 202 as FIG. 2. The encapsulated SRv6 packet 303 in FIG. 3 corresponds to a scenario whereby a user/operator wishes to collect iOAM data on the Segment Routing policy path 202 only from the Traffic Engineering or Service Node 3. In order to enable iOAM data tapping only at Node 3 (i.e., to collect iOAM data only from Node 3), the ingress Segment Routing node (Node 1) modifies the Segment Identifier (SID) list in the segment routing header 308 to (A4::, A3::C34(1), A2::). The referenced portion 317 of the segment routing header 308 indicates that the SR capable Node 3 has the iOAM argument bit enabled. The Segment Routing (SRv6) packet 303 is such that the iOAM argument bit is only enabled on Node 3, as denoted by its modified SID Function notation A3::C34(1). The modified/augmented SID of Node 3, A3::C34(1), which includes an iOAM probe in the encoded SID, is a clone of the regular SID of Node 3, A3::C34, with the only difference being that in the former case iOAM data collection is enabled via a bit in the argument field of the A3::C34 SID function. Such a probe, as illustrated by 303, will collect iOAM data only from Node 3.
According to some embodiments of the present technology, the modified SID with local iOAM probe functionality results from insertion of an iOAM probe in the argument field of the locally significant portion (i.e., the SID Function field) of a Segment Identifier. In some embodiments, the ingress Segment Routing node may implement iOAM data collection from multiple nodes by performing a round-robin targeting of the nodes across the SID list to collect data in a scalable fashion. A controller entity may then run analytics routines and operations on the partial iOAM data to build a holistic view. The procedure is applicable to all underlay and overlay SRv6 SID types. Due to the programmable nature of the iOAM SID, as described by some embodiments, iOAM data collection may also be specified and implemented based on a local decision at a node. Specifically, an iOAM SID may implement iOAM data transport using the "forward and punt" technique used by a Netflow collector. In this case a copy of the packet is exported from the "tapping" node (iOAM collection node) to a controller entity with the requested iOAM information inserted therein. This case is depicted in FIG. 4. With reference to FIG. 4, at 402 some of the user generated packets sourced from the User Equipment (Node 1) are marked for insertion of an iOAM probe. The iOAM augmentation of the SID of the marked packets is executed at the gNodeB (Node 2), where the user data packet 401 is encapsulated with segment routing header 404. The O-bit in the Segment Routing header 404 is set to 1 and the hop limit (HL) is set to 64. As such, Node 2 implements a forward and punt mechanism. At each segment routing node, the enabled O-bit (O-bit=1) of the segment routing header causes a time-stamped copy of the packet to be punted (405) and processed elsewhere. As described above, a segment routing header (SRH) can be used to steer packets through paths with given properties (e.g., bandwidth or latency) and through various network functions (e.g., firewalling).
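The round-robin targeting described above can be sketched as a generator that enables the iOAM argument on a different SID for each successive probe. The "(1)" suffix mirrors the figures' notation (e.g., A3::C34(1)) and is just a label for the iOAM argument bit, not an actual SID encoding:

```python
from itertools import cycle

def round_robin_probes(sid_list, n_probes):
    """Yield one SID list per probe packet, enabling the iOAM argument on
    the next node in rotation, so partial per-node data accumulates into a
    view of the whole path over successive probes."""
    targets = cycle(range(len(sid_list)))
    for _ in range(n_probes):
        i = next(targets)
        yield [sid + "(1)" if j == i else sid for j, sid in enumerate(sid_list)]

probes = list(round_robin_probes(["A4::", "A3::C34", "A2::"], 3))
```

Three successive probes over the three-SID list of path 202 would each tap a different node, which is the scalability point: no single packet carries probes for every transit node.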
The list of segments present in the segment routing header thus specifies the network policy that applies to the packet. Each segment routing header contains at least a list of segments and a Segments Left pointer that references the current active segment (a value of 0 refers to the last segment). In addition, a segment routing header can optionally include extended information encoded as Type-Length-Value (TLV) fields. Another use of a TLV is as a global argument field for passing additional information between locally executed SID Functions. An iOAM augmented SID may implement iOAM data transport inside of the data packet by using the TLV fields of the segment routing header. As such, iOAM data records may be transported in the respective Type Length Value (TLV) data fields of the segment routing header until the flow is terminated at a Segment Routing egress router. The egress router will decapsulate the user data and send the segment routing header, including the iOAM data inserted into the TLV fields, to a controller entity. This case is illustrated in FIG. 5. FIG. 5 illustrates a Segment Routing packet 502 with iOAM probes enabled for the selected Segment Routing nodes. The iOAM data generated at each iOAM tapping point (i.e., the Segment Routing nodes with an iOAM argument bit enabled in the function field of the respective Segment Identifier) is collected in the TLV fields 503 of the segment routing header 504. Node 4 is the egress node of the Segment Routing path 202. Therefore, Node 4 will decapsulate the Segment Routing packet 502 and export the segment routing header 504, with the collected iOAM data in the corresponding TLV fields, to a controller entity using, for example, a Netflow collector. The user data packet 505 is then transported to the intended destination.
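The egress split described above, export the SRH with its iOAM TLVs to the controller while the inner user packet continues toward its destination, can be sketched as follows. The dict-based packet model and field names are illustrative assumptions, not a wire format:

```python
def egress_decapsulate(packet):
    """At the egress node (Node 4 in FIG. 5): strip the SRH carrying the
    collected iOAM TLVs and build a report for the controller entity,
    returning the remaining user packet for normal forwarding."""
    srh = packet.pop("srh")
    report = {"segments": srh["segments"], "ioam_tlvs": srh["tlvs"]}
    return report, packet

pkt = {"srh": {"segments": ["A4::", "A3::C34(1)", "A2::"],
               "tlvs": [{"node": "A3", "timestamp": 1234}]},
       "payload": b"user-data"}
report, user_pkt = egress_decapsulate(pkt)
```

In a real deployment the report would be exported via a collector mechanism such as Netflow/IPFIX, as the text notes; here it is simply returned.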
For clarity of explanation, in some instances the present technology may be presented as including individual functional blocks comprising devices, device components, steps or routines in a method embodied in software, or combinations of hardware and software. An example flow chart 100, as presented in FIG. 6, illustrates targeted iOAM probing in a Segment Routing context in accordance with some embodiments of the present technology. With reference to FIG. 6, at step 602 one or more Segment Routing (SR) nodes are identified for iOAM data tapping, i.e., targeted iOAM data collection. The iOAM instruction is encoded as a local argument in the function field of one or more Segment Identifiers (SIDs) corresponding to the one or more selected SR nodes at step 604. Each SR capable node maintains a "My Local SID Table". The table contains all the local segments explicitly instantiated at the node. Each entry of the "My Local SID Table" indicates the function associated with the local SID. As the SR packet travels the network, the Locator and Function are copied by each SR node to the destination address field of the SR header. When the SID inside of the SR header matches the Local SID table of the SR capable node, the node executes the function encoded in the right part (the Function field) of the SID. The next SID is placed into the SR header destination field and the Segments Left value is decreased by 1 accordingly. Referring back to the flow chart 100 in FIG. 6, at step 606, the destination address field of the SR header is compared with the SID of the SR node the packet is forwarded to. In accordance with some embodiments of the technology, if there is a match (the Locator portion of the destination SID matches the Locator portion of the node SID and the function field of the destination SID is carrying an iOAM probe as an argument), the operation moves to step 608, whereby the iOAM function is executed and the iOAM data is inserted inside the TLV fields of the SR header.
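The match-and-execute behavior at steps 606-608 can be sketched as a per-node function. SIDs are modeled as (locator, iOAM-bit) tuples and all names are assumptions for illustration, not the patent's implementation:

```python
def process_at_node(node_locator, local_sid_table, packet):
    """Step 606: compare the active (destination) SID against this node's
    "My Local SID Table". Step 608: on a match with the iOAM argument set,
    insert an iOAM record into the SRH TLVs, then advance Segments Left."""
    srh = packet["srh"]
    locator, ioam_bit = srh["segments"][srh["segments_left"]]
    if locator == node_locator and locator in local_sid_table:
        if ioam_bit:  # iOAM probe carried as an argument in the SID function
            srh["tlvs"].append({"node": locator, "hop": srh["segments_left"]})
        srh["segments_left"] -= 1  # the next SID becomes the active segment
    return packet  # forwarded to the next hop along the SR path

pkt = {"srh": {"segments": [("A4", False), ("A3", True), ("A2", False)],
               "segments_left": 2, "tlvs": []}}
pkt = process_at_node("A2", {"A2"}, pkt)  # no iOAM argument: forward only
pkt = process_at_node("A3", {"A3"}, pkt)  # iOAM argument set: record data
```

Only the node whose local SID carries the iOAM argument appends a TLV record; all other matching nodes simply forward, which is the selective-probing behavior the flow chart describes.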
If, however, the result of the comparison at step 606 returns no match, the packet is forwarded to the next hop along the SR path and step 606 is repeated until a match is encountered, at which point the operation moves to step 608 as explained above. At step 610, the operation verifies whether the current SR node is the SR egress node. If the SR node does not correspond to the SR egress node, the operation moves back to step 607 and the packet is forwarded to the next hop along the SR path and the process is repeated. However, if the comparison at step 610 reveals that the current SR node is the SR egress router, the operation moves to step 612, whereby the egress router decapsulates the header information and sends the header information, along with the iOAM data embedded therein, to a controller entity for further analysis and/or monitoring. According to some embodiments of the present invention, after verifying a match at step 606, a duplicate copy of the packet including the requested iOAM data is generated by the SR node and sent to the controller entity for further analysis and/or monitoring. The operation then moves to step 607, whereby the original packet is forwarded to the next hop along the SR path. The disclosure now turns to FIGS. 7 and 8, which illustrate example architectures of computing and network devices, such as client computers, switches, routers, controllers, servers, and so forth. FIG. 7 illustrates a computing system architecture 900 including components in electrical communication with each other using a connection 905, such as a bus. System 900 includes a processing unit (CPU or processor) 910 and a system connection 905 that couples various system components, including the system memory 915, such as read only memory (ROM) 920 and random access memory (RAM) 925, to the processor 910. The system 900 can include a cache of high-speed memory connected directly with, in close proximity to, or integrated as part of the processor 910.
The system900can copy data from the memory915and/or the storage device930to the cache912for quick access by the processor910. In this way, the cache can provide a performance boost that avoids processor910delays while waiting for data. These and other modules can control or be configured to control the processor910to perform various actions. Other system memory915may be available for use as well. The memory915can include multiple different types of memory with different performance characteristics. The processor910can include any general purpose processor and a hardware or software service, such as service1932, service2934, and service3936stored in storage device930, configured to control the processor910as well as a special-purpose processor where software instructions are incorporated into the actual processor design. The processor910may be a completely self-contained computing system, containing multiple cores or processors, a bus, memory controller, cache, etc. A multi-core processor may be symmetric or asymmetric. To enable user interaction with the computing device900, an input device945can represent any number of input mechanisms, such as a microphone for speech, a touch-sensitive screen for gesture or graphical input, keyboard, mouse, motion input, speech and so forth. An output device935can also be one or more of a number of output mechanisms known to those of skill in the art. In some instances, multimodal systems can enable a user to provide multiple types of input to communicate with the computing device900. The communications interface940can generally govern and manage the user input and system output. There is no restriction on operating on any particular hardware arrangement and therefore the basic features here may easily be substituted for improved hardware or firmware arrangements as they are developed. 
Storage device930is a non-volatile memory and can be a hard disk or other types of computer readable media which can store data that are accessible by a computer, such as magnetic cassettes, flash memory cards, solid state memory devices, digital versatile disks, cartridges, random access memories (RAMs)925, read only memory (ROM)920, and hybrids thereof. The storage device930can include services932,934,936for controlling the processor910. Other hardware or software modules are contemplated. The storage device930can be connected to the system connection905. In one aspect, a hardware module that performs a particular function can include the software component stored in a computer-readable medium in connection with the necessary hardware components, such as the processor910, connection905, output device935, and so forth, to carry out the function. FIG.8illustrates an example network device1000suitable for performing switching, routing, assurance, and other networking operations. Network device1000includes a central processing unit (CPU)1004, interfaces1002, and a connection1010(e.g., a PCI bus). When acting under the control of appropriate software or firmware, the CPU1004is responsible for executing packet management, error detection, and/or routing functions. The CPU1004preferably accomplishes all these functions under the control of software including an operating system and any appropriate applications software. CPU1004may include one or more processors1008, such as a processor from the Intel X86 family of microprocessors. In some cases, processor1008can be specially designed hardware for controlling the operations of network device1000. In some cases, a memory1006(e.g., non-volatile RAM, ROM, TCAM, etc.) also forms part of CPU1004. However, there are many different ways in which memory could be coupled to the system. In some cases, the network device1000can include a memory and/or storage hardware, such as TCAM, separate from CPU1004.
Such memory and/or storage hardware can be coupled with the network device1000and its components via, for example, connection1010. The interfaces1002are typically provided as modular interface cards (sometimes referred to as “line cards”). Generally, they control the sending and receiving of data packets over the network and sometimes support other peripherals used with the network device1000. Among the interfaces that may be provided are Ethernet interfaces, frame relay interfaces, cable interfaces, DSL interfaces, token ring interfaces, and the like. In addition, various very high-speed interfaces may be provided such as fast token ring interfaces, wireless interfaces, Ethernet interfaces, Gigabit Ethernet interfaces, ATM interfaces, HSSI interfaces, POS interfaces, FDDI interfaces, WIFI interfaces, 3G/4G/5G cellular interfaces, CAN BUS, LoRA, and the like. Generally, these interfaces may include ports appropriate for communication with the appropriate media. In some cases, they may also include an independent processor and, in some instances, volatile RAM. The independent processors may control such communications intensive tasks as packet switching, media control, signal processing, crypto processing, and management. By providing separate processors for the communications intensive tasks, these interfaces allow the master microprocessor1004to efficiently perform routing computations, network diagnostics, security functions, etc. Although the system shown inFIG.8is one specific network device of the present disclosure, it is by no means the only network device architecture on which the concepts herein can be implemented. For example, an architecture having a single processor that handles communications as well as routing computations, etc., can be used. Further, other types of interfaces and media could also be used with the network device1000. 
Regardless of the network device's configuration, it may employ one or more memories or memory modules (including memory1006) configured to store program instructions for the general-purpose network operations and mechanisms for roaming, route optimization and routing functions described herein. The program instructions may control the operation of an operating system and/or one or more applications, for example. The memory or memories may also be configured to store tables such as mobility binding, registration, and association tables, etc. Memory1006could also hold various software containers and virtualized execution environments and data. The network device1000can also include an application-specific integrated circuit (ASIC), which can be configured to perform routing, switching, and/or other operations. The ASIC can communicate with other components in the network device1000via the connection1010, to exchange data and signals and coordinate various types of operations by the network device1000, such as routing, switching, and/or data storage operations, for example. In some embodiments the computer-readable storage devices, mediums, and memories can include a cable or wireless signal containing a bit stream and the like. However, when mentioned, non-transitory computer-readable storage media expressly exclude media such as energy, carrier signals, electromagnetic waves, and signals per se. Methods according to the above-described examples can be implemented using computer-executable instructions that are stored or otherwise available from computer readable media. Such instructions can comprise, for example, instructions and data which cause or otherwise configure a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. Portions of computer resources used can be accessible over a network. 
The computer executable instructions may be, for example, binaries, intermediate format instructions such as assembly language, firmware, or source code. Examples of computer-readable media that may be used to store instructions, information used, and/or information created during methods according to described examples include magnetic or optical disks, flash memory, USB devices provided with non-volatile memory, networked storage devices, and so on. Devices implementing methods according to these disclosures can comprise hardware, firmware and/or software, and can take any of a variety of form factors. Typical examples of such form factors include laptops, smart phones, small form factor personal computers, personal digital assistants, and so on. Functionality described herein also can be embodied in peripherals or add-in cards. Such functionality can also be implemented on a circuit board among different chips or different processes executing in a single device, by way of further example. The instructions, media for conveying such instructions, computing resources for executing them, and other structures for supporting such computing resources are means for providing the functions described in these disclosures. Although a variety of examples and other information was used to explain aspects within the scope of the appended claims, no limitation of the claims should be implied based on particular features or arrangements in such examples, as one of ordinary skill would be able to use these examples to derive a wide variety of implementations. Further and although some subject matter may have been described in language specific to examples of structural features and/or method steps, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to these described features or acts. For example, such functionality can be distributed differently or performed in components other than those identified herein. 
Rather, the described features and steps are disclosed as examples of components of systems and methods within the scope of the appended claims. | 36,714 |
11863434 | DETAILED DESCRIPTION Various embodiments of the disclosure are discussed in detail below. While specific implementations are discussed, it should be understood that this is done for illustration purposes only. A person skilled in the relevant art will recognize that other components and configurations may be used without departing from the spirit and scope of the disclosure. Overview The first aspect of this disclosure relates to SD-WANs and how one can implement a particular policy according to data embedded within a packet. An example method includes registering, by an enterprise controller on an enterprise domain, in a shared mapping system on a service provider domain, one or more entries specifying one or more services for one or more classes of traffic to yield registered entries, reading, by a service provider controller, from the shared mapping system, the registered entries, posting, by the service provider controller, the one or more entries to one or more routing tables at a software-defined wide area network of the service provider domain and receiving a request, by a mobile node on the enterprise domain, of a specific service for a particular class of packets according to a classification of the particular class of packets based on a particular label defined in the registered entries for the specific service. The second aspect of this disclosure relates to trust and selecting a trust-related policy in a network. As noted above, memory verification checks are expensive. Such checks by themselves imply that a device is more likely to be in a good state soon after device validation, and less likely to be in a good state just before a device validation. The result of this implication is that it should be possible to use historical and operational data to quantify and graph the likelihood of compromise for a specific device since the last device validation. Getting to such a quantification of trustworthiness is non-trivial.
And being able to determine instantaneous trustworthiness means an operator will have to have an understanding of: how quickly device trustworthiness degrades in a particular deployment environment; the visible events which, when taken together, are potential indicators of compromise; and how instantaneous device trustworthiness can be improved via invoking actions such as memory and configuration checks. Considering this context and the factors above, the second aspect of this application describes an estimation formula for device trustworthiness evaluation based on probabilities of visible indicators of security compromises for a given device and probabilities of invisible (time-based) indicators of security compromises for the given device. Many things can be done with the results of such a trustworthiness estimation formula when it is located on a controller: Sensitive traffic/flows can be routed around elements with less trustworthiness. Even if a sensitive flow is encrypted, an attacker's knowledge, possibly gleaned from a compromised device, that traffic is being passed between endpoints can be harmful. Memory checks or configuration validations can be prioritized to be run on a remote device. For example, the formula allows the scheduling of such checks to be needs based, rather than scheduled regardless of underlying conditions; the business value of the function can be considered when determining when to schedule memory checks or configuration validation; the integrity of key data structures/subsystems can be assessed, rather than just the platform as a whole. This could include forwarding data structures that might need to be reconstructed; and the integrity/consistency of hardware-based subsystems within a router/switch itself could be analyzed. For example, the consistency of satellite nodes, line-cards, or even hardware registers (ACL, FIB, etc.) could be checked/refreshed.
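One way to read the estimation approach described above is as a score combining the probabilities of observed (visible) indicators of compromise with a time-based (invisible) component that grows since the last validation. The sketch below is an illustrative assumption of one such combination, not the specific formula of this disclosure; the exponential decay constant and indicator probabilities are hypothetical.

```python
import math

def trustworthiness(p_visible, hours_since_validation, decay_rate=0.01):
    """Estimate device trustworthiness in [0, 1].

    p_visible: probabilities that each visible indicator of compromise
        (e.g., an unexpected config change or anomalous login) reflects
        an actual compromise.
    hours_since_validation: time since the last memory/config check.
    decay_rate: assumed per-hour degradation constant for the invisible,
        time-based indicator (illustrative value).
    """
    # Probability that no visible indicator reflects a compromise.
    p_clean_visible = 1.0
    for p in p_visible:
        p_clean_visible *= (1.0 - p)
    # Invisible component: trust decays with time since validation.
    p_clean_time = math.exp(-decay_rate * hours_since_validation)
    return p_clean_visible * p_clean_time
```

A controller holding such a score could route sensitive flows around low-scoring nodes, or schedule a memory or configuration check on a needs basis when the score falls below a threshold.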
Description of Example Embodiments Numerous details are described in order to provide a thorough understanding of the example implementations shown in the drawings. However, the drawings merely show some example aspects of the present disclosure and are therefore not to be considered limiting. Those of ordinary skill in the art will appreciate that other effective aspects and/or variants do not include all of the specific details described herein. Moreover, well-known systems, methods, components, devices and circuits have not been described in exhaustive detail so as not to obscure more pertinent aspects of the example implementations described herein. The detailed description set forth below is intended as a description of various configurations of embodiments and is not intended to represent the only configurations in which the subject matter of this disclosure can be practiced. The appended drawings are incorporated herein and constitute a part of the detailed description. The detailed description includes specific details for the purpose of providing a more thorough understanding of the subject matter of this disclosure. However, it will be clear and apparent that the subject matter of this disclosure is not limited to the specific details set forth herein and may be practiced without these details. In some instances, structures and components are shown in block diagram form in order to avoid obscuring the concepts of the subject matter of this disclosure. As noted above, this application introduces two different approaches to handling policy selection within a network environment. The first approach will relate to a SDWAN environment and the second approach will relate to time-variant trust driven policies in an on-demand network. 
The first aspect of the present disclosure addresses the need in the art for ensuring that on a managed SDWAN deployment, an enterprise customer and the service provider agree on which policies should be applied to a particular flow originated by the customer. In some examples, the approaches herein can implement a mapping system shared infrastructure to broker SLAs (Service Level Agreements) and classification in a customer and service provider solution. The present technologies will be described in the context of a “mobileSDWAN” (or mSDWAN) use case. However, it should be noted that the present technologies can also apply to other networks and use cases, such as other SDWAN implementations. The mSDWAN use case is provided herein for clarity and explanation purposes, as the mSDWAN use case can provide clear examples of the various scalability issues and the separation of domains that are typical of managed SDWAN services. An mSDWAN deployment100as shown inFIG.1can include a managed SDWAN104having vEdges106,108that are used as attachment points for mobile devices122,124that will attach to the vEdges106,108in order to use transport services from the managed SDWAN104. Typically, the SDWAN104is part of a service provider domain102and is managed by the service provider controller116while the mobile devices122,124are managed by the enterprise domain(s)120that are the service provider (SP) customers. The mSDWAN mobile device122,124can perform a fine grain classification of the application/user/device that is generating a packet flow, and insert a label in each packet that represents the classification performed at the mobile edge106,108. This label provides the “context” as classified by the mobile device122,124. The context can be used at the ingress edge106,108of the SDWAN104to apply the appropriate policy. 
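The fine-grained classification and label insertion performed at the mobile edge can be sketched as below; the rule table, application names, and label values are hypothetical stand-ins for whatever policy the enterprise actually configures.

```python
# Hypothetical classification at the mobile edge: choose a label per flow
# and embed it in the packet as the SDWAN "context".
CLASSIFICATION_RULES = [
    # (application, label) -- assumed enterprise policy
    ("voice", "red"),     # latency-sensitive traffic
    ("backup", "green"),  # bulk traffic
]

def classify_and_label(packet, application):
    """Tag a packet with the label matching its application class."""
    for app, label in CLASSIFICATION_RULES:
        if application == app:
            packet["label"] = label
            return packet
    packet["label"] = "green"  # assumed default class
    return packet
```

The ingress vEdge would then read this label to select the policy, without needing to repeat the classification itself.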
Note that in some instances, the classification function can be performed by a customer premises equipment (CPE) (not shown) sitting in front of the vEdge106,108rather than at the mobile device122,124. A mechanism as described herein can specify, per enterprise customer, which SDWAN SLA should be applied to packets tagged with a specific label. For example, an SDWAN104may offer low latency transport110and normal transport112between any two vEdges106,108of the SDWAN104. Similarly, different encryption SLAs might be offered, or other levels of service, such as storage services, or access to specific service nodes for speech processing or encoding services. Thus, the disclosure is not limited to latency service at a particular level but can apply to any service or combination of services provided by the domain102. The present disclosure can use a mapping service, accessible to both the enterprise customer and the SDWAN service provider, as a way to specify the mapping, used by a given customer, between the label used to classify the traffic and the policy applied at the SDWAN edge106,108. As an example, Enterprise A may use label “red” to tag traffic that should receive low latency services from the SDWAN, and label “green” to identify traffic that should receive normal latency transport services140. The controller of Enterprise A126can register in the shared mapping system114two entries specifying that: “ent 1, red” →low latency and “ent 1, green” →normal latency128. Of course, any label can be used beyond selecting a color. Another enterprise may use different labels to identify the same policies offered by the SDWAN provider. Now the service provider can reflect in its SDWAN routing tables the association between the labels used by an enterprise customer and the corresponding policy rules that will be applied. If the mobile device122,124wants to request a low latency service110for a particular class of packets, it can simply label those packets as “red”.
The ingress vEdge106,108can use that classification to properly route that packet on a low latency path110, as shown inFIG.1. Note that in certain instances, if the vEdge106,108has not received the routing policy for “red” packets, it can pull it on-demand. The SP controller116can access the shared mapping system114and provide the data to the SDWAN104. FIG.1also illustrates the example networking environment100having disjoint trust domains. The disjoint trust domains include a service provider domain102and an enterprise domain120. The service provider domain102includes a managed SDWAN104having vEdges106and108that are used as attachment points for mobile devices122and124in the enterprise domain120. The mobile devices122and124attach to the vEdges106and108in order to use transport services from the SDWAN104or other services available through the SDWAN104. The SDWAN104can include a low latency transport110and a normal latency transport112. These different transports represent an example service provided to the mobile devices122,124that can be offered at different quality levels according to the mapping to certain policies. An enterprise controller126on the enterprise domain120can register128in a shared mapping system114on the service provider domain102entries specifying different latency transports (or other services) for different traffic. The different traffic can be identified by labels tagging the traffic according to the respective latency transport for that traffic. In this example, the enterprise controller126registers in the shared mapping system114a first entry specifying that traffic labeled or tagged “Ent. 1, Red” should receive low latency (e.g., low latency transport110), and traffic labeled or tagged “Ent. 1, Green” should receive normal latency (e.g., normal latency transport112). A service provider controller116can then read, from the shared mapping system114, the registered entries, and post140such entries to the routing tables at SDWAN104.
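The register/read/post cycle between the enterprise controller, the shared mapping system, and the SP controller might look like the following sketch, where in-memory dictionaries stand in for the shared mapping system and the SDWAN routing tables; the entry names mirror the "ent 1, red/green" example above, and the on-demand pull is an assumption about one possible vEdge behavior.

```python
# Hypothetical shared mapping system brokering labels to SDWAN policies.
shared_mapping_system = {}   # (enterprise, label) -> policy
sdwan_routing_tables = {}    # populated by the SP controller

def enterprise_register(enterprise, entries):
    """Enterprise controller registers label -> policy entries."""
    for label, policy in entries.items():
        shared_mapping_system[(enterprise, label)] = policy

def sp_controller_sync():
    """SP controller reads the registered entries and posts them to
    the SDWAN routing tables."""
    for key, policy in shared_mapping_system.items():
        sdwan_routing_tables[key] = policy

def vedge_route(enterprise, packet):
    """Ingress vEdge selects the transport from the packet's label."""
    key = (enterprise, packet["label"])
    if key not in sdwan_routing_tables:
        # Pull the policy on demand if it has not been posted yet.
        sp_controller_sync()
    return sdwan_routing_tables.get(key, "normal-latency")

enterprise_register("ent 1", {"red": "low-latency", "green": "normal-latency"})
sp_controller_sync()
```

Another enterprise could register different labels mapping to the same provider policies, since entries are keyed per enterprise.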
Once the service provider controller116has posted140the entries, the service provider102can reflect in its SDWAN routing tables the association between the labels used by the enterprise domain120and the corresponding policy rules that will be applied. If the mobile nodes122or124want to request a low latency service110for a particular class of packets, the mobile nodes122or124can simply label those packets as “red”. The ingress vEdge106or108can use that classification to properly route that packet on the low latency transport110. Similarly, if the mobile nodes122or124want to request a normal latency service for a particular class of packets, the mobile nodes122or124can simply label those packets as “green”. The ingress vEdge106or108can use that classification to properly route that packet on the normal latency transport112. FIG.2illustrates an example method200for SDWAN policy selection across disjoint trust domains. The method can include any one or more of these steps in any order. At step202, an enterprise controller126on an enterprise domain120can register in a shared mapping system114on a service provider domain102entries specifying different latency transports or different services for different classes of traffic. The different classes of traffic can be identified by labels tagging the traffic according to the respective latency transport (e.g.,110,112) or service for that traffic. For example, the enterprise controller126can register in the shared mapping system114a first entry specifying that traffic labeled or tagged “Ent. 1, Red” should receive low latency (e.g., low latency transport110), and traffic labeled or tagged “Ent. 1, Green” should receive normal latency (e.g., normal latency transport112). At step204, a service provider controller116can read, from the shared mapping system114, the registered entries, and at step206post such entries to the routing tables at an SDWAN104of the service provider102.
Once the service provider controller116has posted the entries, the service provider102can reflect in its SDWAN routing tables the association between the labels used by the enterprise domain120and the corresponding policy rules that will be applied. At step208, a mobile node (e.g.,122or124) on the enterprise domain120can request a specific latency service for a particular class of packets by classifying (e.g., labeling, tagging, etc.) associated packets based on a particular label defined in a registered entry for that specific latency service. From a system standpoint, the vEdge108can receive a request for a specific service at a certain level. The ingress vEdge106or108can use that classification to properly route that packet on the specific latency service (e.g., low latency transport110, normal latency transport112) or to provide the certain quality of service for the flow, such as a certain bandwidth, amount of storage data, encryption services, etc. Claims can be drafted using the principles set forth above from the aspect of different components withinFIG.1. For example, the disclosure can include the steps performed from the standpoint of the mobile node122,124, or from the standpoint of the SP controller116, or shared mapping system114. Claims can focus on the processes from the standpoint of the vEdge106,108. In some cases, an embodiment could be described using processes performed by two or more of these components. The disclosure now turns to the second aspect which relates to trust and selecting a trust-related policy in a network. Memory verification checks are expensive. Such checks by themselves imply that a device is more likely to be in a good state soon after device validation, and less likely to be in a good state just before a device validation. 
The result of this implication is that it should be possible to use historical and operational data to quantify and graph the likelihood of compromise for a specific device since the last device validation. Getting to such a quantification of trustworthiness is non-trivial. And being able to determine instantaneous trustworthiness means an operator will have to have an understanding of: how quickly device trustworthiness degrades in a particular deployment environment; the visible events which, when taken together, are potential indicators of compromise; and how instantaneous device trustworthiness can be improved via invoking actions such as memory and configuration checks. Considering this context and the factors above, the second aspect of this application describes an estimation formula for device trustworthiness evaluation based on probabilities of visible indicators of security compromises for a given device and probabilities of invisible (time-based) indicators of security compromises for the given device. Many things can be done with the results of such a trustworthiness estimation formula when it is located on a controller. For example, sensitive traffic/flows can be routed around elements with less trustworthiness. This can occur even if a sensitive flow is encrypted; an attacker's knowledge, possibly gleaned from a compromised device, that traffic is being passed between endpoints can be harmful. Memory checks or configuration validations can be prioritized to be run on a remote device. In one aspect, the formula allows the scheduling of such checks to be needs based, rather than scheduled regardless of underlying conditions. The business value of the function can be considered when determining when to schedule memory checks or configuration validation; the integrity of key data structures/subsystems can be assessed rather than just the platform as a whole.
This could include forwarding data structures that might need to be reconstructed; and the integrity/consistency of hardware based subsystems within a router/switch itself could be analyzed. For example, the consistency of satellite nodes, line-cards, or even hardware registers (ACL, FIB, etc.) could be checked/refreshed. Certain previous systems rely on the freshness (e.g., the recency) of measurements of a node in order to verify the security (e.g., trustworthiness) of the node. This is problematic because the reliability and accuracy of the verification is proportional to the frequency with which measurements are taken. Accordingly, a high utilization of network resources corresponds to a high reliability system, and vice versa. Another problem with a freshness-based system is that an attacker can inject previously recorded measurements into the node being verified (e.g., after gaining root access) in order to give the false appearance that the node has not been compromised. By contrast, various implementations disclosed herein verify the security (e.g., trustworthiness) of a node by comparing trusted information against corresponding information obtained from the node. In this way, verification proceeds irrespective of the freshness of information at the node. Moreover, utilization of the trusted information guards against the event where an attacker has changed information at the node. Certain other previous systems provide a transactional process between the node making the verification request and the measurement device. For example, the requesting node provides a random number (e.g., a nonce) to the measurement device, which provides a signature across the response, including the returned random number itself. This indicates that the information is not being replayed from a time before the random number was available to the measurement device. 
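The transactional nonce scheme just described, in which the measurement device signs a response that includes the requester's random number, can be sketched with an HMAC standing in for the device's signature; the shared key and measurement names are illustrative assumptions, not the actual attestation format.

```python
import hashlib
import hmac
import os

DEVICE_KEY = b"assumed-shared-attestation-key"  # stand-in for a signing key

def attest(nonce, measurements):
    """Measurement device: sign the measurements together with the
    requester's nonce so an old response cannot be replayed."""
    payload = nonce + repr(sorted(measurements.items())).encode()
    return hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()

def verify(nonce, measurements, signature):
    """Requesting node: recompute the signature and compare."""
    expected = attest(nonce, measurements)
    return hmac.compare_digest(expected, signature)

nonce = os.urandom(16)          # fresh random number per request
m = {"pcr0": "abc123"}          # hypothetical measurement value
sig = attest(nonce, m)
```

Because the signature covers the nonce, a replayed response recorded before the nonce existed fails verification, which is exactly the property the freshness critique below targets: every check requires a round trip anchored on the requester's random number.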
Such a dependency of the requesting node for each signed response is problematic because the system is grounded in a transactional challenge and/or response interaction model. In other words, this system does not support unidirectional verification communications originating from the requesting node, such as an asynchronous push, multicast, broadcast message, and/or the like. By contrast, the attestation based routing disclosed herein supports these types of communications. An example time-based attestation mechanism is a Canary Stamp. The Canary Stamp allows elements in a network to ascertain if the source of information has been compromised. In addition, the Canary Stamp provides a structure to assign a level of trust to the information that is shared. The following is a discussion of canary stamps or attestation data which can be applicable for the trust-related concepts disclosed herein. Canary Stamps A computer network can include different nodes (e.g., network devices, client devices, sensors, and any other computing devices) interconnected by communication links and segments for sending data between end nodes. Many types of networks are available, including, for example, local area networks (LANs), wide area networks (WANs), software-defined networks (SDNs), wireless networks, core networks, cloud networks, the Internet, etc. When data traffic is transmitted through one or more networks, the data traffic typically traverses a number of nodes that route the traffic from a source node to a destination node. While having numerous nodes can increase network connectivity and performance, it also increases security risks as each node that a packet traverses introduces a risk of unauthorized data access and manipulation. For example, when a packet traverses a node, there is a security risk that is introduced which can result from the node being potentially compromised (e.g., hacked, manipulated, captured, etc.).
As a result, compliance, security, and audit procedures can be implemented to verify that network users, devices, entities and their associated network traffic comply with specific business and/or security policies. When sensitive information is transmitted through nodes in a network, such as in battlefield, banking settings, and healthcare settings, such traffic should be sent through uncompromised nodes to prevent access to, leakage of, or tampering with the data and sensitive information carried by that traffic. If an attacker gains access to a device via some exploit, previous protection and encryption approaches for network interfaces are generally ineffective at mitigating or addressing such unauthorized access and resulting damage. Proving that network traffic complies with specific policies can involve proving in a secure way that the traffic has traversed a well-defined set of network nodes (e.g., firewalls, switches, routers, etc.) and that such network nodes have not been modified or compromised. This can help ensure that the network nodes have performed their expected or intended actions (e.g., packet processing, security or policy compliance verification, routing, etc.) on the packet and that the packet has traversed the network nodes. Some security approaches can aim at removing any implied trust in the network used for connecting applications hosted on devices to cloud or enterprise hosted services. Moreover, some security approaches can be implemented to verify the trustworthiness (e.g., the integrity, identity, state, etc.) of the network and/or nodes traversed by packets. In some cases, certain verification checks can be implemented to validate or verify that traffic has traversed a specific set of nodes and that such nodes are trusted and uncompromised. 
In some examples, certain Proof-of-Transit (POT), Trusted Platform Module (TPM), attestation, or proof of integrity approaches can be implemented to verify or validate the trustworthiness of a node in a network. POT can enable a network user or entity to verify whether traffic traversed a defined set of network nodes. Attestation, as further described below, can also be used to verify the integrity of a node. In some cases, the approaches herein can integrate both to offer a secure approach that allows network users or entities to verify that traffic has traversed a defined set of nodes and that such nodes have not been compromised. In some cases, TPM can be implemented to collect and report the identity of hardware and software components in a platform to establish trust for that platform. A TPM used in a computing system can report on the hardware and software of the system in a manner that allows verification of expected behavior associated with that system and, from such expected behavior, establishment of trust. The TPM can be a system component containing state that is separate from the host system on which the TPM reports identity and/or other information. TPMs can be implemented on physical resources (indirectly or directly) of the host system. In some examples, a TPM component can have a processor and memory such as RAM, ROM and/or flash memory. In other implementations of a TPM, a host processor can run TPM code while the processor is in a particular execution mode. Parts of system memory can be partitioned by hardware to ensure that memory used by the TPM is not accessible by the host processor unless the host processor is in the particular execution mode. In some cases, trusted computing (TC) implementations, such as TPM, can rely on Roots of Trust. Roots of Trust can be system elements that should be trustworthy because misbehavior by such system elements may not be detectable. 
A set of roots can provide a minimum functionality that can sufficiently describe characteristics that affect a platform's trustworthiness. In some cases, determining if a Root of Trust is behaving properly may not be possible; however, it may be possible to determine how roots are implemented. For example, certificates can provide assurances that the root has been implemented in a way that renders it trustworthy. To illustrate, a certificate may identify the manufacturer and evaluated assurance level (EAL) of a TPM. Such certification can provide a level of confidence in the Roots of Trust used in the TPM. Moreover, a certificate from a platform manufacturer may provide assurance that the TPM was properly installed on a system that is compliant with specific requirements so the Root of Trust provided by the platform may be trusted. Some implementations can rely on three Roots of Trust in a trusted platform, including Root of Trust for Measurement (RTM), Root of Trust for Storage (RTS), and Root of Trust for Reporting (RTR). The RTM can send integrity information, such as integrity measurements, to the RTS. Generally, the RTM can be a processor controlled by a Core Root of Trust for Measurement (CRTM). The CRTM is the first set of instructions executed when a new chain of trust is established. When a system is reset, the processor (e.g., RTM) can execute the CRTM, which can then send values that indicate its identity to the RTS. Thus, in some cases, the starting point for a chain of trust can be established in this manner. As previously noted, the TPM memory can be shielded from access by an entity other than the TPM. Since the TPM can be trusted to prevent unauthorized access to its memory, the TPM can act as an RTS. Moreover, the RTR can report on the contents of the RTS. An RTR report can be a digitally signed digest of the contents of one or more values in a TPM. 
Attestation is another example trusted computing approach that can be used to verify the integrity of a node. Attestation can be applied to a node, such as a router or switch, to review logs from connected devices, such as Layer 1 (L1) or Layer 2 (L2) connected devices, and maintain these logs in trusted storage. These logs can be protected by embedding a private key into every trust anchor produced for a hardware device and publishing the device's public key as a certificate to adjacent devices. A peer device can then push log updates from trusted storage periodically and/or on some log entry event. Reviewing any provided signed logs can provide an understanding of the current trustable state of a peer device. Moreover, by looking back at the set of transactions which have occurred since boot time, a determination can be made regarding the trustworthiness of the information which that peer device is asserting. In some examples, metadata elements containing security measurements or evidence can be used to provide verifiable evidence of device trustworthiness (e.g., integrity, state, etc.). The metadata elements can include applicable data for verifying trustworthiness of a device and be provided through an applicable technique for verifying device trustworthiness. For example, the metadata elements can be provided as part of a canary stamp associated with the device. A canary stamp can indicate or otherwise include a signed measurement associated with a device for verifying trustworthiness of the device. In turn, such measurements can be referred to as canary stamps because each signed measurement is like a stamp proving its authenticity, and like a canary in a coal mine that indicates an early sign of trouble. Such verifiable evidence can be appended or included in packets transmitted by nodes on a network. The metadata elements can thus be used to evaluate the trustworthiness of a node(s) and react accordingly.
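As an illustrative sketch only (not the disclosed implementation), a signed measurement of this kind can be modeled in a few lines of Python. Here an HMAC over a shared key stands in for the asymmetric signature a TPM attestation key would actually produce, and the key and field names are assumptions for illustration:

```python
import hashlib
import hmac

# Hypothetical device key; a real TPM would sign with an attestation key pair.
DEVICE_KEY = b"device-secret-key"

def make_canary_stamp(measurement: bytes, counters: dict) -> dict:
    """Bundle a measurement with TPM-style counters and sign the bundle."""
    payload = measurement + str(sorted(counters.items())).encode()
    signature = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return {"measurement": measurement, "counters": counters, "sig": signature}

def verify_canary_stamp(stamp: dict) -> bool:
    """Recompute the signature over the stamp's contents and compare."""
    payload = stamp["measurement"] + str(sorted(stamp["counters"].items())).encode()
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, stamp["sig"])
```

Any tampering with the measurement or the embedded counters invalidates the signature, which is what lets a receiver treat the stamp as verifiable evidence.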
For example, a device or entity can review a metadata element associated with a node to determine that the node should not be trusted and adjust a network policy to mitigate possible damage. In some implementations, dedicated cryptoprocessors, such as a processor in a TPM platform, can take measurements to attest to the trustworthiness (e.g., identity, integrity, etc.) of a node and its environment (e.g., software, hardware, operating system, running binaries, firmware, etc.). These measurements include evidence that the node is in a safe state. In some cases, these measurements can be provided through canary stamps, as previously described. However, a receiver of such evidence should be able to certify that the evidence is fresh, as the evidence can become stale, thereby potentially reducing its effectiveness in reflecting the current trustworthiness of a node. For example, without ensuring freshness of such evidence, an attacker has an opening to inject previously recorded measurements and assert that what is replayed is current. Some approaches can detect the replaying of old evidence via a “nonce”. A nonce is an arbitrary number that can be used to introduce randomness. In some instances, a nonce can be used just once in a cryptographic communication. Further, a nonce can be passed into a TPM and/or incorporated into a canary stamp/metadata. In some cases, a result provided by the TPM can include a signature based on the nonce. Since the nonce can be grounded in a transactional challenge/response interaction model, in some cases the nonce may be less effective with unidirectional communications originating from an attesting device. For example, a nonce may be less effective with an asynchronous push, multicast, or broadcast message. However, there are numerous use cases where a platform assessing whether its peers are trustworthy is advantageous.
Being able to perform a unidirectional attestation using an asynchronous push, multicast, or broadcast message in conjunction with trusted binaries opens many possibilities for platforms to assess whether their peers are trustworthy. Detection of invalid attestations can trigger alarms or events, reduce network access from a suspect device, or become a part of Admission Control (e.g., IEEE 802.1X). Some platforms can be configured to support the unidirectional attestation mechanism. Other freshness approaches can be based on trusted computing capabilities, such as TPM. For example, a token can be generated which allows external entities to validate freshness of asserted data based on the state of internal counters within the TPM. This token can be used to detect replay attacks, and provide attestation for asynchronous push, multicast, and broadcast messages. Various of the foregoing approaches can be combined with TPM-integrated capabilities aimed at verifying that valid compute components, such as binary processes, are running on a node. These capabilities can include, for example, Trusted Execution Environments (TEE), which provide runtime malware protections, Authenticated Code Modules (ACM), which ensure that only digitally-signed code modules can be loaded into a processor, and the like. These technologies can validate that a processor is running known software with a valid chain of binary signatures. In some cases, metadata elements, e.g. canary stamps, and tokens can be created by extracting current counters (e.g., clock, reset, restart) from a node's TPM, and incorporating such counters and security measures taken from the node into a packet. In some examples, the current counters and/or security measures can be hashed with information within an external TPM. The metadata elements and tokens can thereby provide a non-spoofable token or metadata element, which can bind continuously incrementing counters on an attestee with a known external state.
Any resetting of the TPM counters is visible in any subsequent TPM queries, and any restarting of a platform is also exposed in subsequent TPM queries. Within these bounds of reset and restart, the TPM's time ticks counter continuously increments. Therefore, any push of attestee TPM information which includes these counters can be determined to have occurred subsequent to any previously-received measurement. Also, if the reset and restart counters have not changed, the incremental time since any previous measurement can also be known. In some cases, a large amount of information that should be trusted by network peers may not be contained within the TPM's Platform Configuration Registers (PCRs). As a result, indirect methods of validating that a node has not been compromised can be applied. The receipt of the metadata elements, e.g. canary stamps, and/or tokens can mean that a receiver should have the option of verifying the information. In many cases, such verification can be performed without the need of supplementary evidence being sent with the canary stamp. Moreover, in non-controller based or centralized implementations, the verification steps do not have to occur at the receiver. In some integrity verification implementations, a controller or device can implement an integrity verification application. The integrity verification application can be designed to recognize change events and evaluate known good values, which allow evaluation of a boot-integrity stamp and a running process binary signature stamp based on, for example, TPM counters, timestamps, nonces, and/or time tokens. On any discrepancy, a controller or centralized device can isolate a compromised node from its network peers by shutting down the interfaces of the node. In some examples, the metadata elements, e.g.
canary stamps, and/or verifications for integrity can be implemented, such as a measured-boot stamp (e.g., SHA1 hash over PCRs 0-7), a verified-boot stamp (e.g., which can verify that only recognized binaries were executed when booting), a process-stamp (e.g., root-of-trust validated through a process which is asserting a particular protocol or protocols), a file-system stamp (e.g., all files within a vendor determined set of directories), a log-integrity stamp (e.g., used to augment existing integrity analytics and forensics), a configuration stamp (e.g., the state of the current device configuration), etc. Some implementations can achieve all or some of these stamps, depending on the implementation. Moreover, in some implementations, all or some of these stamps can be implemented or achieved using a single or multiple stamps. As previously explained, TPM provides methods for collecting and reporting the identity of hardware and software components in a platform to establish trust for that platform. TPM functionality can be embedded in a variety of devices including mobile phones, personal computers, network nodes (e.g., switches, routers, firewalls, servers, network appliances, etc.), and/or any other computing devices. Further, attestation can describe how the TPM can be used as a hardware root of trust and offer proof of integrity of a node. Such integrity can include hardware integrity, software integrity (e.g., micro loader, firmware, boot loader, kernel, operating system, binaries, files, etc.), and runtime integrity. In some cases, TPM and attestation can be implemented as described herein to provide proof of integrity and proof of transit through uncompromised nodes. In some examples, metadata elements and tokens containing or reflecting security measures are used as previously mentioned to validate the integrity of a node and perform continuous evaluation of node integrity.
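The counter-based freshness reasoning described above (reset and restart counters are exposed by subsequent queries, and within one reset/restart epoch the time-ticks counter only increments) can be sketched as a simple check. The snapshot field names below are illustrative assumptions, not the TPM 2.0 wire format:

```python
def is_fresh(prev: dict, curr: dict) -> bool:
    """Accept a new attestation only if it provably postdates the previous one.

    prev/curr are counter snapshots with 'reset', 'restart', and 'ticks'
    fields (illustrative names for TPM-style counters).
    """
    if curr["reset"] != prev["reset"] or curr["restart"] != prev["restart"]:
        # A reset or restart is exposed here; the tick baseline must be
        # re-established before freshness can be asserted again.
        return False
    # Within one reset/restart epoch the time-ticks counter only increments,
    # so a larger value proves the new measurement occurred later.
    return curr["ticks"] > prev["ticks"]
```

A receiver applying such a check can order unidirectional pushes relative to previously received measurements without a per-message challenge.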
Thus, the metadata elements and tokens described herein can be used to provide proof of transit through uncompromised nodes. In some examples, the metadata elements and tokens can be added as additional metadata to packets that traverse a network where proof of transit via uncompromised nodes is desired. Various strategies can be implemented for transporting the metadata elements and tokens in a packet. In some cases, the metadata elements and tokens can be carried within an In-Situ (or in-band) Operations, Administration and Management (IOAM) data field. In some implementations, the metadata elements and tokens can be carried with IOAM trace data. For example, a canary stamp can be carried as part of an IOAM data field in a variety of encapsulation protocols such as, for example and without limitation, IPv4, IPv6, NSH (Network Service Header), etc. In some cases, the canary stamp can be carried in an IOAM data field as an IOAM Trace option data element (e.g., with an IOAM Trace type for node integrity canary stamp). A metadata element, token, or digest, e.g. canary stamp digest, can be added in the IOAM trace option of a packet by each node that forwards the packet. When the packet reaches a node (e.g., the destination node and/or an intermediate node) that removes IOAM metadata (e.g., an IOAM decapsulating node), the validity of the metadata element and/or token in the packet can be verified to determine that the packet traversed uncompromised nodes. In some examples, since canary stamps are time bound, the packet trace timestamps defined in IOAM can be used to validate the canary stamp in the time window the packet traversed that node. Verification can be performed without placing a large transactional load on the verifier or a device, such as a controller, that will ultimately validate the security measurements associated with the metadata elements or tokens. This is because the measurement values can often change infrequently. 
The verifier may only need to validate a metadata element and/or token carried within an IOAM data trace whenever the associated security measurements change (e.g., a verifier may only need to check with a controller whenever it sees a node's TPM extends a PCR value which was not previously confirmed by the verifier). In some cases, when only the time ticks within a signed metadata element increases, only the signature of the metadata element is validated. To do this, the verifier may use the public key of any node which can place a metadata element. Such signature validation can be done without using a controller to verify the measurements. In another example, a packet can carry IOAM POT data with space optimization of metadata element values, e.g. canary stamp values. For example, a new IOAM POT data field can carry a canary stamp or a hash extend of a canary stamp and, in turn, canary stamp data can be carried across nodes. In some cases, a canary stamp hash extend can be a similar method as a PCR extend operation performed by TPMs. In some cases, the canary stamp hash can provide a one-way hash so that a canary stamp recorded by any node cannot be removed or modified without detection. IOAM proof of transit option data for a canary stamp digest can be defined by a hash algorithm (e.g., 20 octets with SHA1, 32 octets with SHA-256, etc.). In some implementations, each node along a path of the packet can forward the packet with a new or updated canary stamp digest. In some examples, the new or updated canary stamp digest can be generated by a node as follows: IOAM canary stamp digest new value=Digest of (IOAM canary stamp digest old value∥hash (canary stamp of the node)), where the IOAM canary stamp digest old value can refer to the canary stamp digest included in the packet by one or more previous hops.
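The per-hop extend operation defined above can be sketched in Python; SHA-256 and the stamp values below are assumptions for illustration:

```python
import hashlib

def extend_digest(old_digest: bytes, node_stamp: bytes) -> bytes:
    """One-way extend, analogous to a PCR extend:
    new = Digest(old value || hash(canary stamp of the node))."""
    return hashlib.sha256(old_digest + hashlib.sha256(node_stamp).digest()).digest()

# Each hop extends the digest carried in the IOAM POT field of the packet.
digest = bytes(32)  # initial value at the encapsulating node
for stamp in (b"stamp-node-A", b"stamp-node-B", b"stamp-node-C"):
    digest = extend_digest(digest, stamp)
```

Because each step is a one-way hash of the prior value, a stamp recorded by any hop cannot later be removed or altered without the final digest changing.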
Moreover, in some cases, a Per Packet Nonce (PPN), where PPN changes per packet and is carried as another field within the IOAM metadata option, can be added to provide robustness against replay attacks. To illustrate, in some examples, a PPN can be added as follows: IOAM canary stamp digest new value=Digest of (IOAM canary stamp digest old value∥hash (canary stamp of the node∥PPN)). A node creating the new value for the IOAM canary stamp digest can thus take the value of any previous IOAM canary stamp digest and extend/hash that value with the node's current canary stamp. The result of the concatenation and hashing can then be written into IOAM POT data (or other IOAM data fields) as the new IOAM canary stamp digest. At the verifier (e.g., the device verifying the canary stamp data), the same operation can be performed over expected canary stamp values calculated for the nodes that are traversed in the time window when the packet was forwarded. A verifier can be an inline device or a centralized device. Moreover, in some examples, nodes that are expected to be traversed can be identified using IOAM tracing, routing state or by sending active probes. A match between the value of POT data carrying a specific metadata element, e.g. a canary stamp digest, and the expected canary stamp value can prove that the packet traversed through trusted or uncompromised nodes. In some examples, one or more strategies can be implemented to optimize metadata element validation. For example, metadata elements, e.g. canary stamps, can detect attempts of a replay attack by embedding a nonce as well as TPM or TPM2 counters (e.g., clock, reset, restart). In some cases, this nonce can be part of the metadata elements and different from the PPN described above. The nonce is relevant to a receiver as the interval from the nonce's creation time to the first stamp received by the verifier can define the interval of freshness (e.g., the measurement is no older than this interval of freshness).
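The PPN-extended digest and the verifier's recomputation over the expected stamps can be sketched as follows (SHA-256, the stamp ordering, and a fixed all-zero initial digest are assumptions for illustration):

```python
import hashlib

def extend_with_ppn(old: bytes, stamp: bytes, ppn: bytes) -> bytes:
    """new = Digest(old value || hash(canary stamp of the node || PPN));
    the per-packet nonce varies per packet to resist replay."""
    return hashlib.sha256(old + hashlib.sha256(stamp + ppn).digest()).digest()

def verify_path(received_digest: bytes, expected_stamps: list, ppn: bytes) -> bool:
    """Verifier side: replay the extend operation over the canary stamps
    expected for the traversed nodes and compare against the packet's digest."""
    digest = bytes(32)  # must match the encapsulating node's initial value
    for stamp in expected_stamps:
        digest = extend_with_ppn(digest, stamp, ppn)
    return digest == received_digest
```

A match proves the packet's digest was built from exactly the expected stamps in the expected order; a replayed digest built under an old PPN fails the comparison.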
From there, the TPM2 time ticks counter can be used to maintain that initial gap of freshness even without the delivery of a new nonce. In some implementations, to optimize metadata element or token validation across nodes, the following approaches can be implemented to deliver synchronization information from a central component to each node and the verifier. For example, a central server can broadcast or multicast centralized nonce values (e.g., tracked random numbers). Each node can pick up the latest nonce and use it to attest a value. A verifier can know the freshness of a metadata element or token it receives from each node. This freshness can be the delta in time since that particular nonce was issued. Subsequent attestations can use the incrementing time ticks to prove freshness from that initial time gap. In some cases, the issuing of new nonces can reset the time gap to a potentially shorter interval. Moreover, in some cases, each node can embed attested time within its metadata element. To get attested time, a TUDA (Time-Based Uni-Directional Attestation) scheme such as the TUDA scheme described in https://tools.ietf.org/id/draft-birkholz-i2nsf-tuda-01.html, the contents of which are incorporated herein by reference in their entirety, can be used. This can result in the availability of both the attested time at a node, as well as the value of the TPM2 counters at this node when a TUDA time-synchronization token was created. This can eliminate the use of a central nonce authority, but can increase the size of the metadata element as the nonce can be replaced by the TUDA time-synchronization token. This approach may also implement a central timestamp authority as per TUDA. In some examples, for each hop, a canary stamp digest value can be: IOAM canary stamp digest new value=Digest of (IOAM canary stamp digest old value∥hash (canary stamp of the node∥TUDA time-synchronization token of the node)). This approach can provide numerous benefits. 
For example and without limitation, with this approach, a verifier can limit the number of verifications by verifying the signature of a hop's time-synchronization token only when it changes. Moreover, with this approach, there may not be a time gap in freshness at a nonce changeover when a first measurement is received. Further, in some cases, this approach can be implemented without also carrying a PPN or without synchronizing a nonce across nodes as previously described. Further, an attestor, e.g. a node or a verifier, can use random numbers, or otherwise pseudo-random numbers, created by peers and/or the attestor to generate and verify attestation information. Specifically, the attestor can accumulate random numbers from one or more layer 2 peers. The random numbers can be accumulated from the peers over a specific amount of time, e.g. a short duration of time. In turn, the random numbers can be combined into a number through an applicable technique, e.g. a Bloom filter. This number can serve as a nonce for a cryptoprocessor for generating a result. As follows, the layer 2 peers, potentially including the attestor, can use the result created by the cryptoprocessor to verify/validate that their corresponding provided random number was used in generating the nonce ultimately used by the cryptoprocessor to create the result. In turn, the layer 2 peers, potentially including the attestor, can generate verified attestation information based on the random numbers generated by the peers, the nonce created from the random numbers, and/or the result created by the cryptoprocessor from the nonce. Having discussed canary stamps and other trust-related concepts, this disclosure now returns to the discussion of handling trust issues in networks.
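As a brief aside, the peer-supplied random-number accumulation described above can be sketched with a toy Bloom filter. The filter width, the number of bit positions per value, and the hash-to-position derivation below are all assumptions for illustration:

```python
import hashlib

BLOOM_BITS = 256  # filter width; an assumption for illustration

def _positions(value: bytes, k: int = 4):
    """Derive k bit positions for a value from its SHA-256 hash (illustrative)."""
    h = hashlib.sha256(value).digest()
    return [int.from_bytes(h[2 * i:2 * i + 2], "big") % BLOOM_BITS for i in range(k)]

def accumulate_nonce(peer_randoms):
    """Fold peer-supplied random numbers into a Bloom filter, then hash the
    filter into a single nonce suitable for the cryptoprocessor."""
    bloom = 0
    for r in peer_randoms:
        for pos in _positions(r):
            bloom |= 1 << pos
    nonce = hashlib.sha256(bloom.to_bytes(BLOOM_BITS // 8, "big")).digest()
    return bloom, nonce

def contributed(bloom: int, my_random: bytes) -> bool:
    """A peer checks that every bit for its random number is set in the filter,
    i.e., that its contribution was folded into the nonce."""
    return all(bloom & (1 << pos) for pos in _positions(my_random))
```

Each peer can thus confirm its own random number influenced the nonce that the cryptoprocessor ultimately signed over, without any peer trusting the others' inputs.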
On-demand based network protocols and architectures (such as locator ID separation protocol (LISP) and ILAMP protocols defined in the Internet Engineering Task Force (IETF) or the just-in-time architecture used in the Streamline infrastructure) can take advantage of these attestation techniques. The ILAMP protocol (ILA (Identifier Locator Addressing) Mapping Protocol) is a mapping protocol used between forwarding nodes and routers to manage the cache. ILA provides an approach to implement network overlays without the overhead, complexities, or anchor points associated with encapsulation. The solution facilitates highly efficient packet forwarding and provides low latency and scalability in mobile networks. ILA can be used in conjunction with techniques such as network slices and Network Function Virtualization to achieve optimal service based forwarding. In particular, the on-demand based network protocols and architectures can use these attestation techniques to 1) assign levels of trust to the different sources of information in the network and 2) use these levels of trust during on-demand procedures to dynamically drive the policy to be followed during request processing. The present disclosure also proposes a set of mechanisms to be used in on-demand systems to support trust-level tracking based on unidirectional attestation and trust-driven dynamic resolution of policies. FIG. 3 is a block diagram of an example of a networking environment 300 in accordance with some implementations. While pertinent features are shown, those of ordinary skill in the art will appreciate from the present disclosure that various other features have not been illustrated for the sake of brevity and so as not to obscure more pertinent aspects of the example implementations disclosed herein. To that end, as a non-limiting example, the first example of the networking environment 300 includes a network 310 that includes a sub-network 310a.
In some implementations, the sub-network 310a corresponds to a local area network (LAN) or virtual local area network (VLAN). In some implementations, the sub-network 310a corresponds to a wide area network (WAN), such as the Internet. The sub-network 310a can include a combination of nodes included within a LAN, VLAN, and/or WAN. The networking environment 300 further includes a source node 302. The source node 302 corresponds to a networking device (e.g., switch, router, gateway, etc.) associated with a data packet that is destined for a destination node 306. The source node 302 is coupled to a plurality of candidate next-hop nodes 304-1-304-N. Each of the plurality of candidate next-hop nodes 304-1-304-N is included within a respective route between the source node 302 and the destination node 306. As illustrated in FIG. 3, each of the plurality of candidate next-hop nodes 304-1-304-N is connected to candidate second hop nodes 305-1-305-M. One of ordinary skill in the art will appreciate that, in various implementations, each of the plurality of candidate next-hop nodes 304-1-304-N is connected to a subset of the candidate second hop nodes 305-1-305-M (not shown). The networking environment 300 further includes an attestation routing orchestrator 301. As with the source node 302, the attestation routing orchestrator 301 is coupled to the plurality of candidate next-hop nodes 304-1-304-N. In various implementations, the attestation routing orchestrator 301 obtains, according to a predefined protocol, a first plurality of attestation vectors from the candidate next-hop nodes 304-1-304-N. In one aspect, the attestation routing orchestrator 301 further obtains additional information from candidate second-hop nodes 305-1-305-M and utilizes the additional information in selecting the particular candidate next-hop node.
Although not illustrated in FIG. 3, the attestation routing orchestrator 301 further can obtain additional information from nodes that are more than two hops away (e.g., candidate third hop nodes, candidate fourth hop nodes, etc.). The attestation routing orchestrator 301 is further coupled to a trusted system 303. The attestation routing orchestrator 301 can obtain a trusted image vector from the trusted system 303. The trusted system 303 includes a verified image repository 303a and a server 303b. The trusted system 303 can include one or more trusted image vectors that are known with a high degree of confidence to have not been compromised (e.g., hacked, attacked, improperly accessed, etc.). For example, in some implementations, the trusted system 303 can be part of a stub network. As will be described in greater detail with reference to FIG. 4, the attestation routing orchestrator 301 selects, and directs a data packet to, a particular candidate next-hop node of the plurality of candidate next-hop nodes 304-1-304-N based on the trusted image vector and the first plurality of attestation vectors. Moreover, the attestation routing orchestrator 301 directs the data packet destined for the destination node 306 to the particular candidate next-hop node. FIG. 4 is a block diagram of an example of a networking environment 400 in accordance with some implementations. Notably, in contrast to the networking environment 300 illustrated in FIG. 3, the networking environment 400 includes a source node 401 that includes an attestation routing orchestrator 401d. The attestation routing orchestrator 401d can be similar to and adapted from the attestation routing orchestrator 301 in FIG. 3. The source node 401 further includes one or more CPUs 401a. In various implementations, the one or more CPUs 401a provide processing resources for generating a plurality of confidence scores for the corresponding plurality of candidate next-hop nodes 304-1-304-N.
The one or more CPUs 401a can provide processing resources for selecting a particular confidence score of the plurality of confidence scores that satisfies one or more selection criteria. A more detailed description of these features is provided with reference to FIG. 5, below. The source node 401 further includes a memory 401b. The memory 401b can correspond to a non-transitory memory, such as RAM, ROM, etc. The memory 401b can store the data packet destined for the destination node 306. In some implementations, the memory 401b stores a trusted image vector obtained from the trusted system 303. The memory 401b can store a first plurality of attestation vectors obtained from the corresponding plurality of candidate next-hop nodes 304-1-304-N and optionally a second plurality of attestation vectors obtained from the corresponding plurality of candidate second hop nodes 305-1-305-M. The source node 401 further can include a network interface 401c for obtaining, receiving, and transmitting the aforementioned data packets and vectors. As will be further described with reference to FIG. 5, the source node 401 can select, and direct a data packet to, a particular candidate next-hop node based on the trusted image vector and the first plurality of attestation vectors. FIG. 5 is a block diagram of an example of a networking environment 500 in accordance with some implementations. Notably, in contrast to the networking environment 400 illustrated in FIG. 4 and the networking environment 300 in FIG. 3, in the networking environment 500, a particular one of the plurality of candidate next-hop nodes 304-1-304-N relays a trusted image vector from the trusted system 303 to the source node 501. In various implementations, the attestation routing orchestrator 501d is similar to and adapted from the attestation routing orchestrator 401d in FIG. 4 and/or the attestation routing orchestrator 301 in FIG. 3.
The trusted system 303 can sign the trusted image vector and provide the signed trusted image vector to the particular candidate next hop node, which in turn provides the signed trusted image vector to the source node 501. Having the particular candidate next hop node provide the signed trusted image vector reduces attestation time (e.g., the time to determine trustworthiness of the particular candidate next hop node) because the source node 501 need not contact a remote node (the trusted system 303). In some implementations, attestation time is further reduced because a single attestation process (e.g., the trusted system 303 signing the trusted image vector) facilitates the attesting of multiple source nodes. In other words, trusted image vectors need not be generated and evaluated on a per source node basis. Moreover, in implementations in which the source node 501 is not connected to the trusted system 303 (e.g., link down), obtaining the trusted image vector from the particular candidate next hop provides an alternative mechanism for node attestation. In some implementations, the trusted system 303 appends a time-stamped response to the trusted image vector as part of the signing process, sometimes referred to as stapling. Consequently, the source node 501 need not contact the trusted system 303 in order to attest a particular candidate next hop node. FIG. 6 is an example block diagram of a controller orchestrated attestation based routing system 600. The source node 601 can be similar to and adapted from the source node 302 in FIG. 3. As illustrated in FIG. 6, the attestation routing orchestrator 301 is separate from but coupled (e.g., connected) to the source node 601. The attestation routing orchestrator 301 corresponds to a controller with knowledge of the network that includes the plurality of candidate next-hop nodes and optionally the plurality of candidate second-hop nodes. For example, the attestation routing orchestrator 301 corresponds to a network management system (NMS).
As another example, the attestation routing orchestrator301corresponds to an intent-based networking system, such as Cisco's digital network architecture (DNA). As yet another example, in some implementations, the attestation routing orchestrator301corresponds to a wireless LAN controller (WLC), while the plurality of candidate next-hop nodes304-1-304-N and optionally the plurality of candidate second hop nodes correspond to networking devices (e.g., access points, user devices, etc.). The attestation routing orchestrator301obtains, according to a predefined protocol, a first plurality of attestation vectors from the plurality of candidate next-hop nodes304-1-304-N. Each of the plurality of candidate next-hop nodes304-1-304-N is included within a respective route between the source node601and a destination node. In various implementations, the respective routes are independent of each other. The attestation routing orchestrator301determines a plurality of confidence scores. Each of the plurality of confidence scores is based on a comparison between a corresponding one of the first plurality of attestation vectors and a trusted image vector. In various implementations, the attestation routing orchestrator301obtains the trusted image vector from the trusted system303. The attestation routing orchestrator301can obtain, according to the predefined protocol, a second plurality of attestation vectors from a corresponding plurality of candidate second-hop nodes. Each of the plurality of candidate second-hop nodes is included within a respective route between a corresponding one of the plurality of candidate next-hop nodes304-1-304-N and the destination node.
Each of the plurality of confidence scores can be additionally based on a comparison between a corresponding one of the second plurality of attestation vectors and the trusted image vector in combination with the comparison between the corresponding one of the first plurality of attestation vectors and the trusted image vector. The attestation routing orchestrator301selects, from the plurality of confidence scores, a particular confidence score that satisfies one or more selection criteria. The particular confidence score is associated with a particular candidate next-hop node of the plurality of candidate next-hop nodes304-1-304-N. The attestation routing orchestrator301directs, to the particular candidate next-hop node, a data packet destined for the destination node. For example, in various implementations, the attestation routing orchestrator301provides attested route information to an attested route manager601dof the source node601in order to facilitate the source node601sending the data packet to the particular candidate next-hop node. The attested route information is indicative of the trustworthiness of each of the plurality of candidate next-hop nodes304-1-304-N. For example, the attested route information can include an identifier (e.g., IP address, MAC address, SSID, etc.) identifying a secure, particular candidate next-hop node of the plurality of candidate next-hop nodes304-1-304-N. In this example, the source node601provides the data packet based on the identifier in order to route the data packet to the secure, particular candidate next-hop node. As another example, the attested route information can include a plurality of confidence scores associated with the plurality of candidate next-hop nodes304-1-304-N. The determination of confidence scores will be described in further detail, below.
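Combining the next-hop and second-hop comparisons into a single confidence score can be sketched as below. The weighted-sum combination is an illustrative assumption; the text only requires that both comparisons contribute to the score, not any particular weighting.

```python
# Hypothetical sketch: a confidence score that blends the next-hop node's
# attestation comparison with its downstream second-hop node's comparison.

def match_fraction(vec, trusted):
    """Fraction of attestation fields agreeing with the trusted image vector."""
    return sum(a == t for a, t in zip(vec, trusted)) / len(trusted)

def combined_confidence(next_hop_vec, second_hop_vec, trusted,
                        w_first=0.6, w_second=0.4):
    """Weighted blend of first-hop and second-hop comparisons (weights are assumptions)."""
    return (w_first * match_fraction(next_hop_vec, trusted)
            + w_second * match_fraction(second_hop_vec, trusted))

trusted = ["img-v2", "fw-v7"]
# Next hop fully matches; its second hop has a stale firmware measurement.
score = combined_confidence(["img-v2", "fw-v7"], ["img-v2", "fw-v6"], trusted)
assert abs(score - 0.8) < 1e-9  # 0.6 * 1.0 + 0.4 * 0.5
```

A route through a trustworthy next hop can thus still be penalized when the second hop on that route attests poorly, which is the point of the combined comparison.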
In this example, the attested route manager601dselects a particular candidate score based on one or more selection criteria, which will also be described in further detail, below. Moreover, the attested route manager601dprovides the data packet to the particular next-hop node associated with the particular candidate score. The attestation routing orchestrator301can cease to direct additional data packets to the particular candidate next-hop node in response to determining that the particular confidence score falls below a confidence threshold. The source node601can include one or more CPUs601a. In various implementations, the one or more CPUs601aprovide processing resources for managing attested route information obtained from the attestation routing orchestrator301. The source node601further includes a memory601b. The memory601bcan correspond to a non-transitory memory, such as RAM, ROM, etc. The memory601bcan store the obtained attested route information and data packets to be transmitted. The source node601further includes a network interface601cfor obtaining the attested route information. Determination of whether a network device has been compromised or not is a function of available/visible indicators associated with the network device and time. The visible indicators include, but are not limited to, a set of currently available evidence footprints which indicate a compromise may be in place on a particular remote device. Such indicators can come from multiple sources, including from TPM/Aikido, canary stamps, Syslog, YANG Push, EEM, peer devices, traffic counters, and other sources. Visibility can be a preferred method of identifying a compromise, as it is easier to react to the compromise in a timely manner. These indicators may be referred to as visible indicators.
When there are no visible indicators (i.e., no visible footprints available), the probability of a remote device compromise is a function of the time which has passed since the last validation that the device is in a known good state. Time can be a less preferable method compared to the visibility method described above, as there will be a lag before any remediation might be applied. These indicator(s) may be referred to as invisible and/or time indicators. With the above two categories of visible/invisible indicators, an instantaneous formula can be provided for estimating the probability or chance of a compromise on any given device operating within a network. For visible factors/indicators: Pv1 can be defined as the probability for a compromise of type 1 when there is a specific set of events/signatures existing which correspond to the compromise. Pv2 can be defined as the probability for a compromise of type 2, and Pvx can be defined as the probability for a compromise of type x. Assuming each of these compromises Pv1 . . . Pvx are independent, the following formula gives the probability of visible compromise based on recognized signatures (Pv): Pv = 1 - ((1 - Pv1)(1 - Pv2) . . . (1 - Pvx))   (1) Other types of known or to-be-developed formulas may be used instead of or in conjunction with formula (1) when there are interdependencies between different types of evaluated compromises (Pv1, Pv2, . . . Pvx). Furthermore, any given probability (e.g., Pv1 . . . Pvx) may be determined based on evidence of events from the device for which the probability of a compromise is being calculated (e.g., via formula (1)) and/or evidence obtained from one or more devices adjacent to that device.
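Formula (1) can be computed directly; the sketch below assumes, as the text does, that the per-type compromise probabilities are independent.

```python
# Formula (1): probability that at least one visible compromise is present,
# given independent per-signature compromise probabilities Pv1 .. Pvx.

def visible_compromise_probability(per_type_probs):
    """Pv = 1 - product(1 - Pvi) over all recognized compromise signatures."""
    p_clean = 1.0
    for p in per_type_probs:
        p_clean *= (1.0 - p)
    return 1.0 - p_clean

assert abs(visible_compromise_probability([0.1, 0.2]) - 0.28) < 1e-9
assert visible_compromise_probability([]) == 0.0  # no signatures -> no visible compromise
```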
For invisible/time factors/indicators, a probability that an invisible compromise has occurred to a remote device in the deployment environment can be expressed by the formula: Pi = 1 - ((1 - chance of invisible compromise in time period t)^(number of t intervals since last verification of a good/uncompromised remote system state))   (2) Effectively, knowing Pi implies that an operator knows the half-life which should be expected before a remote device should be considered compromised independently of any concrete evidence. It should be noted that a probability of an invisible compromise does not have to be static. Real-time modification based on current knowledge of viruses/attacks may be allowed. With formulas for visible and invisible factors as described above (formula (1) and formula (2)), an overall probability of a compromise for a given device may be given by: Pc = 1 - ((1 - Pv) * (1 - Pi))   (3) Formula (3) provides an indicator of trustworthiness (safety) of a given device. This metric considers both time-based entropy and any available evidence which can be correlated to known compromises. If Pc can be calculated (or even roughly estimated), various costly functions can be efficiently prioritized. For example, a controller may schedule when to do deeper validation (or perhaps direct refresh) of a remote device. This scheduling could include determining when to perform active checks to validate remote device memory locations (locations perhaps containing executable code which might have been compromised). These can be used to return the system to a known good state (and reset the entropy timer). Local configuration repositories can be refreshed based on evidence of security/trustworthiness issues underway, rather than being based just on time. Beyond the scheduling of system checks, there can be forwarding implications based on the value of Pc. For example, routing/switching behavior might be adjusted/impacted based on the relative trustworthiness of a remote device.
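Formulas (2) and (3) combine as below; the parameter names are illustrative, but each expression follows the formulas term by term.

```python
# Formula (2): time-based probability of an invisible compromise, where n
# counts the t-intervals since the last verification of a good state.
def invisible_compromise_probability(p_per_interval, intervals_since_verification):
    """Pi = 1 - (1 - p)^n."""
    return 1.0 - (1.0 - p_per_interval) ** intervals_since_verification

# Formula (3): overall compromise probability from visible and invisible factors.
def overall_compromise_probability(pv, pi):
    """Pc = 1 - (1 - Pv) * (1 - Pi)."""
    return 1.0 - (1.0 - pv) * (1.0 - pi)

pi = invisible_compromise_probability(0.01, 10)
assert abs(pi - (1 - 0.99 ** 10)) < 1e-12
pc = overall_compromise_probability(0.28, pi)
assert abs(pc - (1 - 0.72 * (0.99 ** 10))) < 1e-12
assert invisible_compromise_probability(0.01, 0) == 0.0  # just verified -> no time entropy
```

Note that Pi grows monotonically with the number of intervals since the last verification, which is exactly the time-based trust degradation the later sections rely on.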
Where higher Pc values exist, sensitive data traffic flows can be routed around that device. As a further advantage of the present disclosure, it should be noted that encryption alone may be insufficient to protect sensitive flows, since there are scenarios where even the fact that a flow is occurring between endpoints might be considered information to be protected (e.g., in a battlefield). FIG.7is an example method of determining device trustworthiness in accordance with some implementations. WhileFIG.7will be described from the perspective of a network controller such as attestation routing orchestrator301ofFIG.6, it should be noted that such a controller may have one or more memories having computer-readable instructions stored therein, which when executed by one or more associated processors, configure the controller to perform the steps ofFIG.7. At step702, a controller may determine a first probability of a security compromise for a given network device such as any one of source node302/401, destination node306and/or any one of candidate next-hop nodes304-1to304-N and any one or more of candidate second-hop nodes305-1to305-N. Such first probability may be determined according to formula (1) as described above. At step704, the controller may determine a second probability of a security compromise for the given network device such as any one of source node302/401, destination node306and/or any one of candidate next-hop nodes304-1to304-N and any one or more of candidate second-hop nodes305-1to305-N. Such second probability may be determined according to formula (2) as described above. At step706, the controller may determine, based on the first probability and the second probability, a probability of trustworthiness of the device according to formula (3) as described above. FIG.8illustrates an exemplary common architecture800for an on-demand networking solution. In this exemplary form, the on-demand network architecture can support two types of entities.
The first is a centralized database system810. The centralized database system stores information about the on-demand network830that can be queried (on demand) to support the dissemination of information. The common architecture800also includes network elements820. The network elements820participate in the network830and implement the routing and network services. The network elements820use the centralized database system to store and gather information about the network830. The two basic elements of the architecture must support two basic procedures to participate in a network deployment. First, the network elements820use a process to upload information to the centralized database810. The other procedure is the on-demand requests which the network elements820use to discover on-demand information about the network830. For example, such information may include routing information within the network830. Protocols such as LISP and ILAMP and architectures like Just-in-Time can follow this exemplary architecture as illustrated in the figure and described above. FIG.9AandFIG.9Billustrate a use of trust-level in connection with the on-demand networking solution in accordance with some implementations. The present application provides for using the trust-level in connection with the following mechanisms. First, network elements can use unidirectional attestation procedures coupled with the information upload events to the centralized database used in on-demand solutions. Next, the centralized system810uses attestation verification to maintain a level of trust categorization of the network elements. The trust level continuously degrades as time passes since the last attestation verification. Last, the on-demand resolution procedures are then subject to a time-based policy enforcement that depends on the state of trust at that particular point in time (degradation) for each one of the network elements involved in the on-demand process.
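The two basic procedures named above, uploading information and resolving it on demand, can be sketched minimally as follows. The class and key format are illustrative assumptions and do not reflect LISP or ILAMP message specifics.

```python
# Minimal sketch of the common on-demand architecture: network elements
# upload information to the centralized database, and later issue
# on-demand requests to resolve it (e.g., routing information).

class CentralizedDatabase:
    def __init__(self):
        self._records = {}

    def upload(self, element_id, key, value):
        """Upload procedure: an element publishes information about the network."""
        self._records[(element_id, key)] = value

    def resolve(self, element_id, key):
        """On-demand request: discover previously uploaded information, if any."""
        return self._records.get((element_id, key))

db = CentralizedDatabase()
db.upload("elem-1", "route:10.0.0.0/24", "next-hop 192.0.2.1")
assert db.resolve("elem-1", "route:10.0.0.0/24") == "next-hop 192.0.2.1"
assert db.resolve("elem-2", "route:10.0.0.0/24") is None  # never uploaded
```

The trust-level mechanisms that follow attach attestation state to each upload and gate each `resolve` call with a policy check.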
Further details are provided below in connection with: (1) how attestation can be used during uploads and how trust levels are maintained and may degrade with time, and (2) how trust-levels of both a requestor and receiver can be used to drive policy enforcement during on-demand resolution. With respect toFIG.9A, the figure illustrates information upload procedures that are leveraged to carry unidirectional attestation information (e.g., a canary stamp) so that the centralized mapping system810can ascertain the trustworthiness of the source of the data820. First, with respect to attestation with uploads, every information upload procedure is used as an attestation procedure902,904and represents a timely recording of an assessment of authenticity at the centralized database system810. Second, with respect to trust-level degradation, network elements820that produce frequent information uploads revalidate trust frequently. On the contrary, if a network element does not re-validate its attestations with the centralized database periodically, that may degrade its trust level. Similarly, during periods of low upload activity (no mobility, no new hosts, downtime, etc.) the centralized database810may consider degrading the level of trust associated with a particular network element in the network830. Verification failures also degrade the level of trust. Last, with respect to attestation refresh, when the centralized database810identifies that the trust-level of a node is degrading, it can request an information refresh; that request will be coupled with an attestation procedure. It is important to note here that, following the above considerations, the trust-level of network elements becomes a time-dependent variable. Any policy that takes the trust-level into account automatically becomes a time-dependent function.
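The time-dependent trust level described above can be sketched as a record that resets on a successful attestation upload, decays as time passes without re-validation, and drops sharply on a verification failure. The exponential decay model, the half-life, and the failure penalty are all illustrative assumptions; the text only requires that trust degrades monotonically with time since the last attestation.

```python
import math

# Hypothetical sketch of the per-element trust record kept by the
# centralized database system.

class TrustRecord:
    def __init__(self, half_life=600.0):
        self.half_life = half_life        # assumed decay half-life, in seconds
        self.level_at_update = 0.0
        self.updated_at = 0.0

    def on_attestation_success(self, now):
        """A verified upload restores full trust."""
        self.level_at_update, self.updated_at = 1.0, now

    def on_verification_failure(self, now):
        """A failed verification slashes the current trust level."""
        self.level_at_update, self.updated_at = self.trust(now) * 0.1, now

    def trust(self, now):
        """Current trust: last recorded level, decayed exponentially since then."""
        elapsed = now - self.updated_at
        return self.level_at_update * math.exp(-math.log(2) * elapsed / self.half_life)

rec = TrustRecord()
rec.on_attestation_success(now=0.0)
assert rec.trust(0.0) == 1.0
assert abs(rec.trust(600.0) - 0.5) < 1e-9  # one half-life without re-validation
rec.on_verification_failure(now=600.0)
assert rec.trust(600.0) < 0.06             # failure slashes trust
```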
With respect toFIG.9B, once the centralized database system810maintains a trust level score associated with every network element, on-demand procedures can follow trust-level policies954that take into account the level of trust of both the requestor and the destination of the request, accounting for the trust degradation at the time of the request. The important observation here is that, since the trust level has become a time-dependent variable, policy application automatically becomes a dynamic process952that automatically adjusts to the level of trust of the elements involved. The overall system is shown as feature950. The following figure (e.g.,FIG.10) illustrates this concept. FIG.10illustrates an exemplary process1000to determine what trust policy to apply based on the level of trust. The figure illustrates the process to determine the policy to apply at the centralized database system810based on the level of trust assigned to both the requestor of the information as well as the destination of the request (e.g., the producer of the information). The figure illustrates this idea by using three exemplary policy outcomes based on three exemplary levels of resulting combined trust. As the figure illustrates, at the instant when the on-demand request is transmitted and reaches the centralized database in step1010, the centralized database can evaluate the received trust-level in step1020in order to determine the specific policy that is applied in step1030. Applying the policy can include dropping the communication where there is little or no trust, redirecting to a trusted element for a medium or low level of trust, or providing direct access where the system has a high level of trust. As described above, the fact that trust-levels become time-dependent values leads to having policies that dynamically track the trust score attributed to the different elements of the on-demand network. FIG.11illustrates a method example1100of implementing the policies.
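The three-way policy outcome of FIG.10 can be sketched as below. Combining the requestor and destination trust by taking the minimum, and the specific thresholds, are illustrative assumptions; the text requires only that both trust levels influence the outcome.

```python
# Hypothetical sketch of step 1020/1030: map the combined, time-degraded
# trust of requestor and destination onto one of three policy outcomes.

def select_policy(requestor_trust, destination_trust, low=0.3, high=0.7):
    """Return 'drop', 'redirect-to-trusted-element', or 'direct-access'."""
    combined = min(requestor_trust, destination_trust)  # weakest link governs
    if combined < low:
        return "drop"
    if combined < high:
        return "redirect-to-trusted-element"
    return "direct-access"

assert select_policy(0.9, 0.95) == "direct-access"
assert select_policy(0.9, 0.5) == "redirect-to-trusted-element"
assert select_policy(0.1, 0.9) == "drop"
```

Because the trust inputs decay with time since the last attestation, the same request can yield different policy outcomes at different instants, which is the dynamic behavior the passage emphasizes.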
An example method includes receiving an on-demand request for information (1102), evaluating the request based on a trust-level, wherein the trust-level corresponds to a trust pertaining to a requestor of the information and a destination of the request, and wherein the trust-level includes a time-dependent value (1104) and selecting a policy from a plurality of possible policies based on the trust-level (1106). FIG.12andFIG.13illustrate systems in accordance with various embodiments. The more appropriate system will be apparent to those of ordinary skill in the art when practicing the various embodiments. Persons of ordinary skill in the art will also readily appreciate that other systems are possible. FIG.12illustrates an example of a bus computing system1200wherein the components of the system are in electrical communication with each other using a bus1205. The computing system1200can include a processing unit (CPU or processor)1210and a system bus1205that may couple various system components including the system memory1215, such as read only memory (ROM)1220and random access memory (RAM)1225, to the processor1210. The computing system1200can include a cache1212of high-speed memory connected directly with, in close proximity to, or integrated as part of the processor1210. The computing system1200can copy data from the memory1215, ROM1220, RAM1225, and/or storage device1230to the cache1212for quick access by the processor1210. In this way, the cache1212can provide a performance boost that avoids processor delays while waiting for data. These and other modules can control the processor1210to perform various actions. Other system memory1215may be available for use as well. The memory1215can include multiple different types of memory with different performance characteristics. 
The processor1210can include any general purpose processor and a hardware module or software module, such as module 11232, module 21234, and module 31236stored in the storage device1230, configured to control the processor1210as well as a special-purpose processor where software instructions are incorporated into the actual processor design. The processor1210may essentially be a completely self-contained computing system, containing multiple cores or processors, a bus, memory controller, cache, etc. A multi-core processor may be symmetric or asymmetric. To enable user interaction with the computing system1200, an input device1245can represent any number of input mechanisms, such as a microphone for speech, a touch-sensitive screen for gesture or graphical input, keyboard, mouse, motion input, speech and so forth. An output device1235can also be one or more of a number of output mechanisms known to those of skill in the art. In some instances, multimodal systems can enable a user to provide multiple types of input to communicate with the computing system1200. The communications interface1240can govern and manage the user input and system output. There may be no restriction on operating on any particular hardware arrangement and therefore the basic features here may easily be substituted for improved hardware or firmware arrangements as they are developed. The storage device1230can be a non-volatile memory and can be a hard disk or other types of computer readable media which can store data that are accessible by a computer, such as magnetic cassettes, flash memory cards, solid state memory devices, digital versatile disks, cartridges, random access memory, read only memory, and hybrids thereof. As discussed above, the storage device1230can include the software modules1232,1234,1236for controlling the processor1210. Other hardware or software modules are contemplated. The storage device1230can be connected to the system bus1205.
In some embodiments, a hardware module that performs a particular function can include a software component stored in a computer-readable medium in connection with the necessary hardware components, such as the processor1210, bus1205, output device1235, and so forth, to carry out the function. FIG.13illustrates an example architecture for a chipset computing system1350that can be used in accordance with an embodiment. The computing system1350can include a processor1355, representative of any number of physically and/or logically distinct resources capable of executing software, firmware, and hardware configured to perform identified computations. The processor1355can communicate with a chipset1360that can control input to and output from the processor1355. In this example, the chipset1360can output information to an output device1365, such as a display, and can read and write information to storage device1370, which can include magnetic media, solid state media, and other suitable storage media. The chipset1360can also read data from and write data to RAM1375. A bridge1380for interfacing with a variety of user interface components1385can be provided for interfacing with the chipset1360. The user interface components1385can include a keyboard, a microphone, touch detection and processing circuitry, a pointing device, such as a mouse, and so on. Inputs to the computing system1350can come from any of a variety of sources, machine generated and/or human generated. The chipset1360can also interface with one or more communication interfaces1390that can have different physical interfaces. The communication interfaces1390can include interfaces for wired and wireless LANs, for broadband wireless networks, as well as personal area networks.
Some applications of the methods for generating, displaying, and using the technology disclosed herein can include receiving ordered datasets over the physical interface, or such datasets can be generated by the machine itself, by the processor1355analyzing data stored in the storage device1370or the RAM1375. Further, the computing system1350can receive inputs from a user via the user interface components1385and execute appropriate functions, such as browsing functions, by interpreting these inputs using the processor1355. It will be appreciated that computing systems1200and1350can have more than one processor1210and1355, respectively, or be part of a group or cluster of computing devices networked together to provide greater processing capability. For clarity of explanation, in some instances the various embodiments may be presented as including individual functional blocks including functional blocks comprising devices, device components, steps or routines in a method embodied in software, or combinations of hardware and software. In some embodiments the computer-readable storage devices, mediums, and memories can include a cable or wireless signal containing a bit stream and the like. However, when mentioned, non-transitory computer-readable storage media expressly exclude media such as energy, carrier signals, electromagnetic waves, and signals per se. FIG.14illustrates an example network device1400suitable for implementing PIM routing and performing switching, routing, and other networking operations. Network device1400includes a central processing unit (CPU)1404, interfaces1402, and a connection1410(e.g., a PCI bus). When acting under the control of appropriate software or firmware, the CPU1404is responsible for executing packet management, error detection, and/or routing functions. The CPU1404preferably accomplishes all these functions under the control of software including an operating system and any appropriate applications software.
CPU1404may include one or more processors1408, such as a processor from the INTEL X86 family of microprocessors. In some cases, processor1408can be specially designed hardware for controlling the operations of network device1400. In some cases, a memory1406(e.g., non-volatile RAM, ROM, etc.) also forms part of CPU1404. However, there are many different ways in which memory could be coupled to the system. The interfaces1402are typically provided as modular interface cards (sometimes referred to as “line cards”). Generally, they control the sending and receiving of data packets over the network and sometimes support other peripherals used with the network device1400. Among the interfaces that may be provided are Ethernet interfaces, frame relay interfaces, cable interfaces, DSL interfaces, token ring interfaces, and the like. In addition, various very high-speed interfaces may be provided such as fast token ring interfaces, wireless interfaces, Ethernet interfaces, Gigabit Ethernet interfaces, ATM interfaces, HSSI interfaces, POS interfaces, FDDI interfaces, WIFI interfaces, 3G/4G/5G cellular interfaces, CAN BUS, LoRA, and the like. Generally, these interfaces may include ports appropriate for communication with the appropriate media. In some cases, they may also include an independent processor and, in some instances, volatile RAM. The independent processors may control such communications intensive tasks as packet switching, media control, signal processing, crypto processing, and management. By providing separate processors for the communications intensive tasks, these interfaces allow the master microprocessor1404to efficiently perform routing computations, network diagnostics, security functions, etc. Although the system shown inFIG.14is one specific network device of the present technologies, it is by no means the only network device architecture on which the present technologies can be implemented.
For example, an architecture having a single processor that handles communications as well as routing computations, etc., is often used. Further, other types of interfaces and media could also be used with the network device1400. Regardless of the network device's configuration, it may employ one or more memories or memory modules (including memory1406) configured to store program instructions for the general-purpose network operations and mechanisms for roaming, route optimization and routing functions described herein. The program instructions may control the operation of an operating system and/or one or more applications, for example. The memory or memories may also be configured to store tables such as mobility binding, registration, and association tables, etc. Memory1406could also hold various software containers and virtualized execution environments and data. The network device1400can also include an application-specific integrated circuit (ASIC)1412, which can be configured to perform routing and/or switching operations. The ASIC1412can communicate with other components in the network device1400via the connection1410, to exchange data and signals and coordinate various types of operations by the network device1400, such as routing, switching, and/or data storage operations, for example. For clarity of explanation, in some instances the present technology may be presented as including individual functional blocks including functional blocks comprising devices, device components, steps or routines in a method embodied in software, or combinations of hardware and software. In some embodiments the computer-readable storage devices, mediums, and memories can include a cable or wireless signal containing a bit stream and the like. However, when mentioned, non-transitory computer-readable storage media expressly exclude media such as energy, carrier signals, electromagnetic waves, and signals per se. 
Methods according to the above-described examples can be implemented using computer-executable instructions that are stored or otherwise available from computer readable media. Such instructions can comprise, for example, instructions and data which cause or otherwise configure a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. Portions of computer resources used can be accessible over a network. The computer executable instructions may be, for example, binaries, intermediate format instructions such as assembly language, firmware, or source code. Examples of computer-readable media that may be used to store instructions, information used, and/or information created during methods according to described examples include magnetic or optical disks, flash memory, USB devices provided with non-volatile memory, networked storage devices, and so on. Devices implementing methods according to these disclosures can comprise hardware, firmware and/or software, and can take any of a variety of form factors. Typical examples of such form factors include laptops, smart phones, small form factor personal computers, personal digital assistants, rackmount devices, standalone devices, and so on. Functionality described herein also can be embodied in peripherals or add-in cards. Such functionality can also be implemented on a circuit board among different chips or different processes executing in a single device, by way of further example. The instructions, media for conveying such instructions, computing resources for executing them, and other structures for supporting such computing resources are means for providing the functions described in these disclosures. 
Although a variety of examples and other information was used to explain aspects within the scope of the appended claims, no limitation of the claims should be implied based on particular features or arrangements in such examples, as one of ordinary skill would be able to use these examples to derive a wide variety of implementations. Further and although some subject matter may have been described in language specific to examples of structural features and/or method steps, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to these described features or acts. For example, such functionality can be distributed differently or performed in components other than those identified herein. Rather, the described features and steps are disclosed as examples of components of systems and methods within the scope of the appended claims. Claim language reciting “at least one of” a set indicates that one member of the set or multiple members of the set satisfy the claim. For example, claim language reciting “at least one of A and B” means A, B, or A and B. | 86,378 |
11863435 | DESCRIPTION OF EXAMPLE EMBODIMENTS 1. Overview Disclosed are, inter alia, methods, apparatus, computer-storage media, mechanisms, and means associated with segment routing (SR) network processing of packets including operations signaling and processing of packets in manners providing processing and/or memory efficiencies. Aspects of the invention are set out in the independent claims and preferred features are set out in the dependent claims. Features of one aspect may be applied to each aspect alone or in combination with other aspects. One embodiment includes receiving a particular segment routing packet by a particular router in a network. Responsive to the particular router data plane ascertaining during fast path processing by a fast path processing unit based on an Operations, Administration, and Maintenance (OAM) segment identifier of the particular segment routing packet that the particular segment routing packet is to be OAM processed by a different processing unit in the particular router, the particular segment routing packet is communicated to the different processing unit, with fast path processing being hardware-based packet processing by the fast path processing unit. OAM processing of the particular segment routing packet is performed by the different processing unit. In one embodiment, the OAM segment identifier includes a locator portion identifying to perform said OAM processing. In one embodiment, the OAM segment identifier includes an identification of an END.OP endpoint with punt function. In one embodiment, responsive to the OAM segment identifier identifying timestamp behavior, the fast path processing unit communicating a timestamp of a current time along with the particular segment routing packet to the different processing unit. In one embodiment, the OAM segment identifier includes an identification of an END.OTP endpoint with timestamp and punt function. 
In one embodiment, the OAM segment identifier includes an identification of an END.OTPF endpoint with punt and forward function. In one embodiment, in response to an Internet Control Message Protocol (ICMP) echo request packet encapsulated in the particular segment routing packet, the particular router sends an ICMP echo response packet corresponding to the ICMP echo request. In one embodiment, the different processing unit is part of slow path packet processing responsive to programmed instructions, wherein slow path packet processing is packet processing based on programmed instructions; and wherein the different processing unit creates the ICMP echo response packet or provides the ICMP echo request packet to an ICMP service running on the particular router. In one embodiment, said OAM processing includes sending the timestamp and identifying information of the particular segment routing packet via the network to a remote OAM processing unit. One embodiment includes: another segment router in the network performing particular OAM processing of the particular segment routing packet, including sending to the remote OAM processing unit another timestamp and packet identifying information related to the particular segment routing packet; and the remote OAM processing unit receiving and processing the timestamp and said identifying information and said another timestamp and said packet identifying information to determine an OAM result including delay, loss, segment routing path verification, or jitter. In one embodiment, a segment list of a segment routing header of said received particular segment routing packet includes the OAM segment identifier. In one embodiment, the OAM segment identifier is a 128-bit Internet Protocol (IP) version 6 (IPv6) routable address; and wherein the particular segment routing packet includes an IPv6 header that comprises the OAM segment identifier as a destination address of the IPv6 header.
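The correlation performed by the remote OAM processing unit — combining timestamps and packet identifying information reported by two nodes to determine delay and jitter — can be sketched as follows. This is a minimal illustrative model, not the claimed implementation; the class and method names are assumptions, and jitter is simplified to the variation between consecutive per-packet delays.

```python
# Hedged sketch: a remote OAM collector correlating per-node timestamp
# reports (keyed by packet identifying information) to derive delay/jitter.
from collections import defaultdict

class OAMCollector:
    def __init__(self):
        self.reports = defaultdict(dict)  # packet_id -> {node: timestamp}
        self.delays = []                  # delays in measurement order

    def ingest(self, packet_id, node, timestamp):
        """Record a (timestamp, packet identifying information) report from a node."""
        self.reports[packet_id][node] = timestamp

    def delay(self, packet_id, ingress, egress):
        """One-way delay between two reporting nodes for the same packet."""
        r = self.reports[packet_id]
        d = r[egress] - r[ingress]
        self.delays.append(d)
        return d

    def jitter(self):
        """Simplified jitter: delay variation across consecutive measurements."""
        return [abs(b - a) for a, b in zip(self.delays, self.delays[1:])]
```

A loss check could similarly flag packet_ids reported by the ingress node but never by the egress node.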
One embodiment includes an apparatus, comprising: one or more hardware interfaces sending and receiving packets with a network; a fast path packet processing unit performing hardware-based packet processing; and a slow path packet processing unit performing processor-based packet processing based on programmed instructions. The apparatus performs packet processing operations including segment routing-capable (SR-capable) packet processing operations, with said packet processing operations including: receiving a particular segment routing packet; responsive to the particular router data plane ascertaining during fast path processing by the fast path processing unit based on an Operations, Administration, and Maintenance (OAM) segment identifier of the particular segment routing packet that the particular segment routing packet is to be OAM processed by the slow path processing unit in the particular router, communicating the particular segment routing packet to the slow path processing unit, with fast path processing being hardware-based packet processing by the fast path processing unit; and OAM processing of the particular segment routing packet by the slow path processing unit. In one embodiment, the OAM segment identifier includes an identification of an END.OP endpoint with punt function. In one embodiment, the OAM segment identifier includes an identification of an END.OTP endpoint with punt function or the OAM segment identifier includes an identification of an END.OTPF endpoint with punt and forward function; and wherein responsive to the OAM segment identifier identifying timestamp behavior, the fast path processing unit communicates a timestamp of a current time along with the particular segment routing packet to the different processing unit.

2. Example Embodiments

Disclosed are, inter alia, methods, apparatus, computer-storage media, mechanisms, and means associated with segment routing (SR) network processing of packets including operations signaling and processing of packets in manners providing processing and/or memory efficiencies. As used herein, segment routing (SR) includes, but is not limited to, using Internet Protocol Version 4 or 6 (IPv4 or IPv6) addresses as segment identifiers (SIDs). Further, SR includes, but is not limited to, IPv6 SR (SRv6) and/or IPv4 SR (SRv4). A segment identifier is typically a routable address in the network, such as, but not limited to, an IPv4 or IPv6 address. As described herein, embodiments include various elements and limitations, with no one element or limitation contemplated as being a critical element or limitation. Each of the claims individually recites an aspect of the embodiment in its entirety. Moreover, some embodiments described may include, but are not limited to, inter alia, systems, networks, integrated circuit chips, embedded processors, ASICs, methods, and computer-readable media containing instructions. One or multiple systems, devices, components, etc., may comprise one or more embodiments, which may include some elements or limitations of a claim being performed by the same or different systems, devices, components, etc. A processing element may be a general processor, task-specific processor, a core of one or more processors, or other co-located, resource-sharing implementation for performing the corresponding processing. The embodiments described hereinafter embody various aspects and configurations, with the figures illustrating exemplary and non-limiting configurations. Computer-readable media and means for performing methods and processing block operations (e.g., a processor and memory or other apparatus configured to perform such operations) are disclosed and are in keeping with the extensible scope of the embodiments.
The term “apparatus” is used consistently herein with its common definition of an appliance or device. The term “route” is used to refer to a fully or partially expanded prefix (e.g., 10.0.0.1 or 10.0.*.*), which is different than a “path” through the network which refers to a nexthop (e.g., next router) or complete path (e.g., traverse router A then router B, and so on). Also, the use of the term “prefix” without a qualifier herein refers to a fully or partially expanded prefix. As used herein, “forwarding information” includes, but is not limited to, information describing how to process (e.g., forward, send, manipulate, modify, change, drop, copy, duplicate, receive) corresponding packets. The steps, connections, and processing of signals and information illustrated in the figures, including, but not limited to, any block and flow diagrams and message sequence charts, may typically be performed in the same or in a different serial or parallel ordering and/or by different components and/or processes, threads, etc., and/or over different connections and be combined with other functions in other embodiments, unless this disables the embodiment or a sequence is explicitly or implicitly required (e.g., for a sequence of read the value, process said read value—the value must be obtained prior to processing it, although some of the associated processing may be performed prior to, concurrently with, and/or after the read operation). Also, nothing described or referenced in this document is admitted as prior art to this application unless explicitly so stated. 
The term “one embodiment” is used herein to reference a particular embodiment, wherein each reference to “one embodiment” may refer to a different embodiment, and the use of the term repeatedly herein in describing associated features, elements and/or limitations does not establish a cumulative set of associated features, elements and/or limitations that each and every embodiment must include, although an embodiment typically may include all these features, elements and/or limitations. In addition, the terms “first,” “second,” etc., as well as “particular” and “specific” are typically used herein to denote different units (e.g., a first widget or operation, a second widget or operation, a particular widget or operation, a specific widget or operation). The use of these terms herein does not necessarily connote an ordering such as one unit, operation or event occurring or coming before another or another characterization, but rather provides a mechanism to distinguish between particular units. Moreover, the phrases “based on x” and “in response to x” are used to indicate a minimum set of items “x” from which something is derived or caused, wherein “x” is extensible and does not necessarily describe a complete list of items on which the operation is performed, etc. Additionally, the phrase “coupled to” is used to indicate some level of direct or indirect connection between two elements or devices, with the coupling device or devices modifying or not modifying the coupled signal or communicated information. Moreover, the term “or” is used herein to identify a selection of one or more, including all, of the conjunctive items. Additionally, the transitional term “comprising,” which is synonymous with “including,” “containing,” or “characterized by,” is inclusive or open-ended and does not exclude additional, unrecited elements or method steps.
Finally, the term “particular machine,” when recited in a method claim for performing steps, refers to a particular machine within the 35 USC § 101 machine statutory class. FIG. 1A illustrates a segment identifier structure 100 according to one embodiment. As shown, locator 101 identifies the segment routing node (e.g., router) to which segment identifier 100 pertains. In one embodiment of segment identifier 100, locator 101 is a single value. In one embodiment, locator 101 includes a segment routing (SR) discriminator portion 101A (some fixed value of a small number of possible values) and a segment node value 101B, which allows a smaller search space for locator 101 as the dynamic portion is SR node value 101B. Segment identifier 100 also includes SR function value 102, argument value 103 (if present), and bit padding 104. Using a known bit padding value 104 (e.g., all zeros or all ones for simplicity) allows for exact matching of a complete segment identifier 100. In one embodiment, each of SR discriminator 101A, SR node value 101B, and SR function value 102 is a fixed size and located in a corresponding fixed position in the highest-order bits of segment identifier 100. Thus, the structure of segment identifier 100 allows a SR-capable node to efficiently extract any of the desired fields 101A, 101B, 102 and 103, possibly using exact matching instead of a more resource consuming longest prefix matching operation. This includes a SR node (corresponding to SR node value 101B) performing the segment routing processing (corresponding to SR function value 102), which includes accessing argument value 103 (qualifying this segment routing processing) at a corresponding fixed position within segment identifier 100, rather than acquiring it, such as via an additional read or parsing operation, if argument value 103 was located elsewhere (e.g., at the end of segment identifier 100).
In one embodiment, segment identifier 100 is a routable IPv6 128-bit address, such as with a sixty-four bit SR discriminator 101A, a sixteen-bit SR node value 101B, a sixteen-bit SR function value 102, and an argument value 103 of zero or more bits qualifying the processing identified by SR function value 102. In one embodiment, an OAM segment identifier 100 includes a local SR function value 102 determined by the SR node (corresponding to locator 101) to signal particular OAM functionality. Thus, this OAM segment identifier 100 is only locally valid, as another SR node may use a different SR function value 102 for the same OAM processing. In one embodiment, an OAM segment identifier 100 includes a global segment routing function value 102 (e.g., SRv6 FUNC opcode), with an opcode value globally identifying particular OAM processing to multiple or all SR nodes in the network. Thus, any node can signal via the global segment routing function value 102 to cause another network node to perform corresponding OAM functionality. Using a global value provides for efficient signaling to a remote node to perform particular OAM processing. FIG. 1B illustrates a segment routing packet structure 140 according to one embodiment. As shown, SR packet structure 140 includes an IP header 141 (e.g., IPv6, IPv4) including an IP destination address (which typically is a segment identifier), one or more ordered segment routing headers 150, and the native (encapsulated) packet 149. Each of one or more ordered SR headers 150 (which includes SR headers 151-159) typically includes one or more segment identifiers. By allowing multiple, typically smaller SR headers, SR packet format 140 provides processing and/or memory efficiencies, especially for limited-capability (e.g., less memory, less processing power) SR routers. In one embodiment, a SR packet with only a single segment identifier has no segment routing header 150. As shown, one or more ordered SR headers 150 includes one to n SR headers 151-159, with n being a positive integer.
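The fixed-position field extraction described for FIG. 1A can be sketched in a few lines. This is an illustrative model only: the field widths follow the example in the text (sixty-four bit discriminator, sixteen-bit node value, sixteen-bit function value, remaining bits for the argument and padding), and the function name and sample SID are assumptions, not values from the disclosure.

```python
# Hedged sketch: slicing a 128-bit IPv6 segment identifier into its
# fixed-position fields (FIG. 1A), enabling exact matching rather than
# longest-prefix matching. Widths are the example widths from the text.
import ipaddress

DISCRIMINATOR_BITS, NODE_BITS, FUNC_BITS = 64, 16, 16
ARG_BITS = 128 - DISCRIMINATOR_BITS - NODE_BITS - FUNC_BITS  # 32 bits here

def parse_sid(sid):
    """Return (discriminator, node, function, argument) by exact bit slicing."""
    v = int(ipaddress.IPv6Address(sid))
    arg = v & ((1 << ARG_BITS) - 1)       # lowest-order bits: argument + padding
    v >>= ARG_BITS
    func = v & ((1 << FUNC_BITS) - 1)     # SR function value 102
    v >>= FUNC_BITS
    node = v & ((1 << NODE_BITS) - 1)     # SR node value 101B
    v >>= NODE_BITS
    return v, node, func, arg             # v is now the SR discriminator 101A
```

Because every field sits at a fixed offset, a hardware implementation can match the complete 128-bit value (with known padding) in one exact-match lookup.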
Each of these ordered SR headers 151-159 includes an ordered list of one or more segment identifiers (e.g., IPv6 or IPv4 address), each representing a segment in the SR network used to process (e.g., forward, manipulate, modify) a SR packet in and through the SR network. FIG. 2A illustrates network 200 operating according to one embodiment. As shown, network 200 includes client networks 201 and 203 (which are the same network in one embodiment) external to segment routing (SR) network 210, which includes SR edge nodes 211 and 213 and a network 212 of network nodes including SR-capable routers (and possibly some network nodes that are not SR-capable in that they do not process a segment routing header/segment identifier but can forward/route IP and/or other packets), SR gateways, and OAM controller(s). In one embodiment, SR edge nodes 211 and 213 typically encapsulate native packets received from networks 201 and 203 into SR packets according to a data plane ascertained SR policy, and subsequently decapsulate native packets from SR packets and forward the native packets into network 201 and 203. In response to receiving a packet, a SR edge node 211, 213 and/or a SR node within network 212 determines a SR policy (e.g., list of segment identifiers) through and/or to which to forward a SR packet, possibly identifying OAM functionality via one or more (global or local) OAM segment identifiers and/or to set or clear the O-Flag in a segment routing header. These policies can change in response to network conditions, network programming, etc. In one embodiment, the SR policy specifies to add one or more SR headers or simply one or more segment identifiers, resulting in a SR packet having one or more SR headers, each with one or more segment identifiers and in one embodiment, with OAM signaling.
In one embodiment, a native packet is received without a SR header, and the SR node 211, 213 (or possibly an SR-capable node within network 212) encapsulates the native packet in a SR packet including one or more added SR headers, each including one or more segment identifiers (e.g., one or more OAM and/or non-OAM segment identifiers), and possibly other OAM signaling information (e.g., an OAM flag set or cleared, a type-length-value (TLV) field indicating OAM signaling, etc.). In one embodiment, a SR packet is received with a SR header, with a SR node 211, 213 (or possibly an SR-capable node within network 212) adding one or more SR headers, resulting in a SR packet including one or more added SR headers, each including one or more segment identifiers. In one embodiment, a single SR header could have been used that includes all of the segment identifiers and other OAM signaling information, if present. FIG. 2B illustrates a process associated with distributing segment routing policies according to one embodiment, with these segment routing policies designating OAM signaling to be included in corresponding segment routing packets. Processing begins with process block 240. In process block 242, a segment routing-capable node receives, for a route, a segment routing policy update defining items such as, but not limited to, an ordered list of OAM and other segment identifier(s), an instruction to OAM mark or clear customer traffic (e.g., to set or clear an O-Flag in a segment routing header), an associated rate at which to perform the OAM marking, and/or where to send the OAM information, etc. In process block 244, segment routing nodes continuously update their segment routing policies, routing information bases, and forwarding information bases as needed. Processing of the flow diagram of FIG. 2B is complete as indicated by process block 249.
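The segment-routing-header encapsulation described above can be sketched by packing a segment list into an SRv6-style header. This sketch is an assumption-laden illustration, not the disclosed implementation: the byte layout follows the IPv6 Segment Routing Header of RFC 8754, and the O-Flag bit position follows RFC 9259; the function name, default next-header value, and sample segments are hypothetical.

```python
# Hedged sketch: packing a segment routing header (SRH) carrying an ordered
# segment list, with an optional O-Flag for OAM signaling. Layout per
# RFC 8754: Next Header, Hdr Ext Len, Routing Type (4), Segments Left,
# Last Entry, Flags, Tag, then 128-bit segments.
import ipaddress
import struct

O_FLAG = 0x20  # assumed O-Flag bit position in the SRH Flags octet (RFC 9259)

def build_srh(segments, next_header=41, o_flag=False, tag=0):
    """Serialize an SRH for the given list of IPv6 segment identifiers."""
    seg_bytes = b"".join(ipaddress.IPv6Address(s).packed for s in segments)
    hdr_ext_len = len(seg_bytes) // 8        # length beyond first 8 octets, in 8-octet units
    segments_left = len(segments) - 1        # index of the next segment to process
    last_entry = len(segments) - 1           # index of the last segment list entry
    flags = O_FLAG if o_flag else 0
    fixed = struct.pack("!BBBBBBH", next_header, hdr_ext_len, 4,
                        segments_left, last_entry, flags, tag)
    return fixed + seg_bytes
```

A node honoring the smaller-header preference advertised in FIG. 2C could call this once per SRH, splitting a long segment list across multiple headers.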
FIG. 2C illustrates a process according to one embodiment associated with distributing segment routing information including OAM and non-OAM segment identifiers in a network. Processing begins with process block 260. In process block 262, SR routers in the SR networks continuously advertise and exchange segment routing information (e.g., including advertising routes of segment identifiers) and other routing information (e.g., IPv4 or IPv6 topology information) via one or more routing protocols and/or via one or more label distribution protocols. As used herein, advertising of a route of a segment identifier includes advertising the fully expanded route, or a prefix corresponding to the segment identifier (e.g., the SR discriminator and SR node value, and possibly the SR function with or without an argument). In one embodiment, one or more SR routers advertise a predetermined maximum or preferred number (e.g., for increased or maximum efficiency) of segment identifiers to include in a SR header that will be processed by the corresponding SR node. In one embodiment, such advertising identifies those SR nodes that gain processing and/or memory efficiencies when a SR header has only a small number of segment identifiers. In one embodiment, a value (e.g., number, flag, range) corresponding to a predetermined quantity is advertised. In process block 264, SR (and other) network nodes continuously update their SR policies and/or routing information as required (e.g., based on information received via a routing protocol, from a network management system, etc.). Processing of the flow diagram of FIG. 2C is complete as indicated by process block 269. FIGS. 3A-C and their discussion herein provide a description of all or portions of various SR network nodes and OAM controllers according to one embodiment.
FIG. 3A illustrates one embodiment of a SR-capable packet switching device 300 (e.g., SR gateway, appliance, router, packet switching device, possibly with one or more service functions, and/or an OAM controller) according to one embodiment. As shown, packet switching device 300 includes multiple line cards 301 and 305, each with one or more network interfaces for sending and receiving packets over communications links (e.g., possibly part of a link aggregation group), and with one or more processing elements that are used in one embodiment associated with segment routing (SR) network processing of packets including operations signaling and processing of packets in manners providing processing and/or memory efficiencies. Packet switching device 300 also has a control plane with one or more processing elements 302 for managing the control plane and/or control plane processing of packets associated with segment routing (SR) network processing of packets including operations signaling and processing of packets in manners providing processing and/or memory efficiencies. Packet switching device 300 also includes other cards 304 (e.g., service cards, blades) which include processing elements that are used in one embodiment to process (e.g., forward/send, drop, manipulate, change, modify, receive, create, duplicate, perform SR gateway functionality possibly with shared memory with one or more service functions, apply a service according to one or more service functions) packets associated with segment routing (SR) network processing of packets including operations signaling and processing of packets in manners providing processing and/or memory efficiencies, and some hardware-based communication mechanism 303 (e.g., bus, switching fabric, and/or matrix, etc.) for allowing its different entities 301, 302, 304 and 305 to communicate.
Line cards 301 and 305 typically perform the actions of being both an ingress and egress line card, in regards to multiple other particular packets and/or packet streams being received by, or sent from, packet switching device 300. In one embodiment, a SR gateway and service functions are implemented on a line card 301, 305. FIG. 3B is a block diagram of an apparatus 320 used in one embodiment associated with segment routing (SR) network processing of packets including operations signaling and processing of packets in manners providing processing and/or memory efficiencies. In one embodiment, apparatus 320 performs one or more processes, or portions thereof, corresponding to one of the flow diagrams illustrated or otherwise described herein, and/or illustrated in another diagram or otherwise described herein. In one embodiment, apparatus 320 includes one or more processor(s) 321 (typically with on-chip memory), memory 322 (possibly shared memory), storage device(s) 323, specialized component(s) 325 (e.g., optimized hardware such as for performing lookup and/or packet processing operations and/or service function, associative memory, binary and/or ternary content-addressable memory, etc.), and interface(s) 327 for communicating information (e.g., sending and receiving packets, user-interfaces, displaying information, etc.), which are typically communicatively coupled via one or more communications mechanisms 329 (e.g., bus, links, switching fabric, matrix), with the communications paths typically tailored to meet the needs of a particular application. Various embodiments of apparatus 320 may include more or fewer elements. The operation of apparatus 320 is typically controlled by processor(s) 321 using memory 322 and storage device(s) 323 to perform one or more tasks or processes. Memory 322 is one type of computer-readable/computer-storage medium, and typically comprises random access memory (RAM), read only memory (ROM), flash memory, integrated circuits, and/or other memory components.
Memory 322 typically stores computer-executable instructions to be executed by processor(s) 321 and/or data which is manipulated by processor(s) 321 for implementing functionality in accordance with an embodiment. Storage device(s) 323 are another type of computer-readable medium, and typically comprise solid state storage media, disk drives, diskettes, networked services, tape drives, and other storage devices. Storage device(s) 323 typically store computer-executable instructions to be executed by processor(s) 321 and/or data which is manipulated by processor(s) 321 for implementing functionality in accordance with an embodiment. FIG. 3C illustrates a specialized segment routing processing hardware architecture 340 according to one embodiment that performs hardware-based fast path packet processing of packets. The terms “fast path” and “slow path” processing of packets are used herein consistently with the common meaning to one skilled in the art, as a packet is initially processed by a packet switching device (e.g., router) by optimized, hardware-based “fast path” processing, and upon some condition (e.g., segment routing OAM signaling in a packet), the packet is “punted” (e.g., communicated) to a different processing path called “slow path” processing which uses a general-purpose processor (e.g., a centralized processing unit operating according to software instructions, such as a route processor) to process the packet. As used herein, fast path processing excludes processing by a general-purpose processor (e.g., by a centralized processing unit operating according to software instructions). In one embodiment, fast path (FP) specialized hardware-based processing unit(s) 344, such as, but not limited to, one or more application-specific integrated circuits or network processors typically operating according to fixed microcode (excluding processing by a general-purpose processor), fast path process packets.
Upon some condition (e.g., detection of segment routing OAM signaling), a packet is punted to slow path processing unit(s) 350 (e.g., to a general-purpose processor 350). In one embodiment, the fast path processing of packets is designed for processing packets quickly, such as, but not limited to, at a line rate. In one embodiment, packets that cannot be processed by the fast path processing in a line rate packet time or that require extra information are punted (communicated) to slow path packet processing, freeing up the fast path processing for a next packet. The fast path processing capabilities of one embodiment of a reduced-capability router or other network node do not allow for significant manipulation of a segment routing header within the allotted processing time; thus, such packets are punted to slow path processing. As shown, hardware interface 342 receives packets which are stored in packet memory 345 (at least the packet payload), with lookup information (e.g., packet headers) being provided to fast path processing unit(s) 344. For each packet, fast path processing unit(s) 344, referencing a forwarding information base 345, determines forwarding information. In addition, always or if the forwarding information so indicates, a hardware timestamp is acquired from timestamp generator(s) 351 indicating a current time, with the timestamp passed to fast path processing unit(s) 344. In one embodiment, forwarding information base 345 includes specialized hardware and/or data structures (e.g., hardware binary and/or ternary content-addressable memory, data structures in memory).
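The punt decision made by the fast path unit — hand the packet, with a timestamp, to slow path processing when OAM handling is signaled, otherwise forward it in hardware — can be sketched as follows. This is a minimal software analogy of the FIG. 3C behavior under stated assumptions: the OAM SID table, packet representation, and function names are all hypothetical.

```python
# Hedged sketch: the fast-path punt decision. A packet whose active segment
# is an OAM segment identifier (or whose SRH O-Flag is set) is punted to a
# slow-path queue along with a timestamp of the current time; otherwise it
# is forwarded by the fast path.
import time

# Hypothetical SIDs locally bound to OAM (e.g., END.OP/END.OTP-style) behavior.
OAM_SIDS = {"fc00::1:42:0:0"}

def fast_path(packet, punt_queue, forward):
    """Process one packet dict with 'dst' and 'o_flag' keys; return the outcome."""
    if packet["dst"] in OAM_SIDS or packet.get("o_flag"):
        punt_queue.append((time.time(), packet))  # timestamp travels with the punt
        return "punted"
    forward(packet)                               # hardware egress in the real device
    return "forwarded"
```

In the disclosed architecture the timestamp comes from a hardware timestamp generator rather than a system clock; `time.time()` merely stands in for that source.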
In one embodiment, fast path processing unit(s) 344 is specialized hardware that efficiently performs hardware processing, including encapsulating a native packet into a segment routing packet (which includes adding one or more segment identifiers), updating a segment routing header of a segment routing packet, decapsulating a native packet from a segment routing packet, etc. If the packet is not dropped, fast path processing unit(s) 344 provides the segment routing or other packet to hardware interface 348 on which the packet is sent into the network according to forwarding information (e.g., identification of hardware interface 348 as the outbound interface, nexthop information). In one embodiment, fast path processing unit(s) 344 uses other specialized hardware and/or data structures 345 (e.g., hardware binary and/or ternary content-addressable memory, data structures in memory, packet memory) in determining forwarding information, generating the segment routing packet encapsulating the received native packet, etc. In one embodiment, fast path processing unit(s) 344 punts (communicates to slow path processing) a packet (with a timestamp if already acquired) as needed to slow path processing unit(s) 350 for performing slow path processing of a packet and/or for performing other functionality (e.g., responding to ICMP echo request packets). In one embodiment, a slow path processing unit 350 may acquire a timestamp from timestamp generator(s) 351. In one embodiment, slow path processing unit(s) 350 performs the corresponding packet processing operations, such as, but not limited to, OAM processing of segment routing packets, which may include sending OAM information to other processes or processing units 350, or sending information to an external device (e.g., OAM controller) directly, or via an egress lookup operation performed by processing unit 344, to hardware interface 348. FIG. 4A illustrates a process according to one embodiment that processes a native packet received by a SR-capable node.
Processing begins with process block 400. In process block 402, the data plane of a segment routing ingress/edge (or other) node receives a packet and determines forwarding information (e.g., via a FIB lookup operation). In one embodiment, the packet is a live user data traffic packet. As used herein, a live user data traffic packet refers to a packet that is actual data traffic, and is not a probe or other network testing packet. For example, live user data traffic packets would include packets belonging to a streaming session, TCP communication session, voice or video call, etc., but exclude probe or other network testing packets. As determined in process block 403, if a segment routing packet should be created to encapsulate the received native packet and the segment routing header is to include OAM signaling, then processing proceeds to process block 410; else the packet is processed normally in process block 404 and processing proceeds to process block 419. Continuing in process block 410, the segment routing packet is created by fast path processing according to a segment routing policy with OAM signaling identified in the forwarding information.
This forwarding information designates one or more OAM and/or other segment routing identifiers to be included in one or more segment routing headers, and might also designate to set or clear the O-Flag in a segment routing header, OAM context identifying information to be added to a segment routing header (e.g., in a type-length-value/TLV field) which can be used to uniquely identify a stream to which the received packet belongs or the received packet itself (e.g., for ease of correlation of OAM information collected from the packet as it traverses multiple SR-capable and possibly non-SR-capable nodes), to add an acquired timestamp in a segment routing header (e.g., in a TLV), a flag designating a location or a location address to which to send OAM information (e.g., identification of the packet, the packet itself, timestamp), etc. Next, as determined in process block 411, if the created segment routing packet is to be punted to slow path processing (e.g., the O-Flag is set in a current segment routing header, the current segment identifier is an OAM segment identifier), then processing proceeds to process block 412, else to process block 413. Continuing in process block 412, the created segment routing packet, typically along with an acquired timestamp, is sent to a slow path packet processor and processing continues with process block 413. In one embodiment, fast path processing is used instead of slow path processing. Continuing and as determined in process block 413, if the packet is to be sent from the SR-capable router by the fast path processing (including when a copy of the packet has been sent to slow path processing), then processing proceeds to process block 414; else processing proceeds to process block 419. In process block 414, the created segment routing packet is sent from the node according to egress forwarding information (e.g., identified interface, nexthop information).
In one embodiment, fast path processing creates the segment routing packet, including inserting a hardware timestamp in a TLV of the segment routing header of the packet, with a subsequent segment routing node communicating the timestamp and other OAM information to an OAM controller, with slow path processing of the packet not being performed. Continuing, processing of the flow diagram of FIG. 4A is complete as indicated by process block 419. FIG. 4B illustrates a process to create a segment routing packet performed in one embodiment. Processing commences with process block 440. In process block 442, an instruction to create an OAM segment routing packet is received (e.g., from a console, OAM or network management system or process, etc.). As determined in process block 443, if the instruction requires the retrieval of a corresponding segment routing policy, then processing proceeds to process block 444, wherein the segment routing policy is retrieved. Next, in process block 450, the OAM segment routing packet is created according to the received instruction, including to include OAM or other segment routing identifier(s) in a retrieved segment routing policy or specified by the instruction in one or more segment routing headers, to set or clear the O-Flag in a segment routing header, OAM context identifying information to be added to a segment routing header (e.g., in a type-length-value/TLV field) which can be used to uniquely identify a stream to which the created packet belongs or the created packet itself, especially by an OAM controller in correlating multiple sets of OAM information acquired as the packet traverses a network, to add an acquired timestamp in a segment routing header (e.g., in a TLV) immediately or after egress processing, a flag designating a location or a location address to send OAM information, etc.
Next, as determined in process block 451, if the packet is to be punted to slow path processing (e.g., the O-Flag is set in a current segment routing header, the current segment identifier is an OAM segment identifier for the current node), then processing proceeds to process block 452, else to process block 453. Continuing in process block 452, the created segment routing packet along with an acquired timestamp is sent to a slow path packet processor and processing continues with process block 453. In one embodiment, the packet is processed by fast path processing rather than slow path processing. Continuing, and as determined in process block 453, if the packet is to be sent from the SR-capable router by the fast path processing (including when sending a copy of the packet to slow path processing), then processing proceeds to process block 454; else processing proceeds to process block 459. In process block 454, the created segment routing packet is sent from the router according to forwarding information (e.g., identified interface, nexthop information). Continuing, processing of the flow diagram of FIG. 4B is complete as indicated by process block 459.

FIG. 5 illustrates segment routing fast path packet processing performed in one embodiment. Processing begins with process block 500. In process block 502, the data plane of a segment routing node receives a segment routing packet and determines forwarding information (e.g., via a FIB lookup operation based on a destination address of the IP packet, with the destination address possibly being a local or global OAM or non-OAM segment identifier). This forwarding information may include a segment routing policy to adjust segment routing processing, possibly modifying OAM signaling information (e.g., clearing the O-Flag, adding one or more OAM or other segment identifiers to the received packet), modifying the list of segment identifiers in a segment routing header of the packet, etc.
In one embodiment, if the current segment identifier is not for the local node that received the packet in process block 502, then the packet is dropped. As determined in process block 503, if the O-Flag is set in the current segment routing header of the received segment routing packet, then processing proceeds to process block 504; else processing proceeds to process block 511. Continuing with process block 504, and in response to the O-Flag being set (which is a decision typically independent of the value of a segment identifier) as determined in process block 503, the received segment routing packet is forwarded to slow path processing, typically with an acquired current timestamp. As determined in process block 513, if the received segment routing packet is to be further processed by fast path processing (e.g., both fast path and slow path processing will process a copy of the received segment routing packet), then processing proceeds to process block 520; else the packet is dropped (e.g., not further processed) in process block 514 by fast path processing and processing proceeds to process block 549. Continuing with process block 511, as determined therein, if the current segment identifier is an OAM segment identifier, then processing proceeds to process block 531; otherwise processing proceeds to process block 520. Continuing with process block 520, the received segment routing packet is processed according to its current segment identifier, and processing proceeds to process block 540. In one embodiment, this processing may result in the packet being further processed by fast path and/or slow path processing.
As used herein, segment route processing typically includes performing an action corresponding to a current segment identifier and updating the segment routing information of a packet in a segment routing header and the IP destination address of an IP segment routing packet; this updating may include removing a segment routing header, such as when performing penultimate segment popping (PSP). In one embodiment, PSP is disabled (e.g., not performed) so that a segment routing packet (instead of a packet from the payload of the segment routing header) including OAM signaling (e.g., a set O-Flag) is sent to a next SR-capable node. Continuing with process block 531, as determined therein, if the slow path processing is to receive a timestamp, then processing proceeds to process block 534 to forward a copy of the received segment routing packet with an acquired timestamp to slow path processing; otherwise, in process block 532, a copy of the received segment routing packet is forwarded to slow path processing (e.g., without a timestamp). Next, as determined in process block 535, if the received segment routing packet is to be further processed by fast path processing (e.g., both fast path and slow path processing will process a copy of the received segment routing packet), then processing proceeds to process block 538; else the packet is dropped (e.g., not further processed) in process block 536 by fast path processing and processing proceeds to process block 549. In one embodiment, the received segment routing packet is dropped (in process block 536) if the next segment identifier (after the current, OAM segment identifier) is not associated with the SR node performing this processing (e.g., based on the SR locator) (as determined in process block 535). Continuing with process block 538, the received segment routing packet is segment route processed according to the next segment identifier in the order of the segment identifiers, and processing proceeds to process block 540.
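The fast-path branching of FIG. 5 — punt a timestamped copy to slow path when the O-Flag is set or the current segment identifier is an OAM SID, then either continue fast-path processing or drop the fast-path copy — might be sketched as follows. All names here (`Packet`, `punt`, `forward`, the example OAM SIDs) are illustrative assumptions, not from the source.

```python
# Illustrative sketch of the FIG. 5 fast-path branching; all names
# (Packet, punt, forward, the sample OAM SIDs) are hypothetical.
from dataclasses import dataclass

@dataclass
class Packet:
    o_flag: bool       # O-Flag in the current segment routing header
    current_sid: str   # current segment identifier
    oam_sids: frozenset = frozenset({"A4::OP", "A4::OTP"})

def fast_path_process(pkt, timestamp, punt, forward, drop_after_punt=False):
    """A set O-Flag (process blocks 503/504) or an OAM SID (blocks
    511/531/534) sends a timestamped copy to slow-path processing; the
    fast path then either continues (blocks 520/538/540) or drops its
    copy (blocks 514/536)."""
    if pkt.o_flag or pkt.current_sid in pkt.oam_sids:
        punt(pkt, timestamp)          # copy to slow path with timestamp
        if drop_after_punt:
            return "dropped"          # fast path does not process further
    forward(pkt)                      # segment route process and forward
    return "forwarded"
```

In this sketch both the O-Flag and the OAM-SID paths converge on the same punt call; the document distinguishes them only by which process blocks are traversed.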
Continuing in process block 540, the processed packet is forwarded from the router according to forwarding information (e.g., identified interface, nexthop information). Continuing, processing of the flow diagram of FIG. 5 is complete as indicated by process block 549.

FIG. 6 illustrates a process to slow path process a packet as performed in one embodiment. Processing begins with process block 600. In process block 602, a processor of the slow path processing receives a packet, typically with a timestamp. As determined in process block 603, if the packet is identified as a segment routing packet to be OAM processed, then processing proceeds to process block 606; otherwise, the packet is processed normally in process block 604 and processing proceeds to process block 612. Continuing with process block 606, OAM information, typically including the timestamp, is forwarded to a corresponding OAM controller or process to take further action (e.g., send OAM information to a designated local or remote process or controller, provide an Internet Control Message Protocol/ICMP echo request to an ICMP process on the node which then sends an ICMP echo response, or provide a response to a different probing request via another process). Next, as determined in process block 607, if the slow path is to further process the received packet (e.g., it is not dropped in process block 606), then in process block 610, the received packet is segment route processed according to a current segment identifier, which may include further OAM processing, including sending a copy of the packet to an OAM processor or process, or sending the packet to be fast path processed. Processing proceeds to process block 612. Continuing in process block 612, the processed packet is forwarded from the node according to forwarding information (e.g., identified interface, nexthop information), possibly including fast path egress processing to identify this egress forwarding information.
Continuing, processing of the flow diagram of FIG. 6 is complete as indicated by process block 619.

FIG. 7 illustrates an OAM processing process performed by an OAM controller (e.g., processor or process) on a local or remote node according to one embodiment. Processing begins with process block 700. In process block 702, OAM information is received (e.g., a packet with OAM signaling or via some other format or mechanism), typically including a timestamp associated therewith. As determined in process block 703, if the OAM controller is to provide a response based on the received packet, including when the packet encapsulates an ICMP echo or other probing request, then in process block 704, a response packet is created and sent to the requester. In one embodiment, this OAM controller includes an ICMPv6 process running in a segment routing node. As determined in process block 705, if the OAM controller is to accumulate OAM information (e.g., packet identifying information, possibly added OAM context identifying information, and timestamp) for subsequent correlation and processing, then in process block 706, the OAM controller stores this information in a data structure, typically optimized for retrieving OAM information for a packet or a stream of packets. As determined in process block 707, if the OAM controller is to currently process acquired OAM information, then in process block 708, the OAM controller correlates and processes accumulated OAM information to determine OAM results (e.g., delay, loss, SR path verification, jitter, other metrics) for a packet or stream of packets. In one embodiment, the OAM information received includes, but is not limited to, one or more timestamps, packet identifying information, identification of the segment routing node providing the OAM information, etc.
In one embodiment, the OAM information received from multiple segment routing nodes is correlated to a same particular packet (or stream of packets) based on one or more fields of the packet included in the received OAM information (e.g., address, segment identifier, information in a segment routing header), which may include OAM context identifying information added to a segment routing packet for such correlation (e.g., a same identifying or deterministic value for uniquely identifying OAM information associated with the same packet). In one embodiment, the received OAM information is used to verify that the packet traversed the nodes of a segment routing policy and possibly to determine associated delays and/or other metrics based on the associated timestamps. These OAM results are typically provided to a default or designated local or remote system or process. Processing returns to process block 702.

Many scenarios require punting of SRv6 OAM packets at the desired nodes in the network. Ping to a remote SID, performance management, proof-of-transit, network troubleshooting, etc., are among the use cases that require punting of the OAM packet. The interception and punting may be necessary at the egress node and/or at a selected/arbitrary transit node. Just like the clean bit (which has been deprecated), OAM operation is also a Function. One embodiment includes basic OAM SID function(s). One embodiment is described in relation to Segment Routing terminology, and also using the following terms:
An::OP represents the special OAM SID function to implement the punt behavior, where An is the locator part of the SID.
An::OTP represents the special OAM SID function to implement the timestamp and punt behavior, where An is the locator part of the SID.
An::OTPF represents the special OAM SID function to implement the behavior of punting a copy of the packet with a timestamp and forwarding, where An is the locator part of the SID.
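The accumulate/correlate behavior of process blocks 706-708 — store (node, timestamp) records keyed by an OAM context identifier, then derive delay and verify the traversed path — could be sketched roughly as below. The record layout and the context-identifier key are assumptions for illustration.

```python
# Hypothetical sketch of OAM accumulation and correlation (process
# blocks 706-708); the record layout and context key are assumptions.
from collections import defaultdict

class OamController:
    def __init__(self):
        # context identifier -> list of (reporting node, timestamp)
        self._records = defaultdict(list)

    def accumulate(self, context_id, node, timestamp):
        """Process block 706: store OAM information for later correlation."""
        self._records[context_id].append((node, timestamp))

    def correlate(self, context_id, expected_path=None):
        """Process block 708: order reports by timestamp, compute the
        end-to-end delay, and optionally verify the packet traversed
        the nodes of a segment routing policy."""
        recs = sorted(self._records[context_id], key=lambda r: r[1])
        path = [node for node, _ in recs]
        delay = recs[-1][1] - recs[0][1] if len(recs) > 1 else 0.0
        verified = expected_path is None or path == list(expected_path)
        return {"delay": delay, "path": path, "verified": verified}
```

A real controller would also handle loss (missing reports) and jitter across a stream of packets; this sketch covers only a single packet's reports.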
In an SRv6 network 800 shown in FIG. 8, a user would like to ping a remote SID function (e.g., A4::DC45 on network node 804), i.e., would like to validate whether a SID function at a remote node is programmed and is valid. The validation can be initiated from anywhere in the network, e.g., from a remote node (e.g., network node 802) or a controller. However, the egress network node 804 drops (809) any such ping request packet 807, and the ping always fails. This is exemplified in FIG. 8, where the user is trying to ping a remote SID function, A4::DC45, from node 802, where A4::DC45 is an END.DX4 SID at the remote node 804. When the packet arrives, egress node 804 drops (809) the packet 807. This is because the forwarding chain at the egress is incomplete, and the ping to a SID function is performed without punting. One embodiment implements a ping to a SID function that uses punting at the target node. One embodiment uses OAM SIDs.

Local SID allocation refers to the allocation of an "opcode" (FUNC) within a given locator context. In a current implementation plan:
Opcode 0 is reserved as Invalid.
Opcodes 1-63 are reserved: opcodes 1 and 2 are reserved for default END functions with PSP and USP support, respectively; opcodes 3-63 are unassigned for future use.
One embodiment uses opcodes from the reserved (3-63) range to encode the special OAM SID(s). One embodiment uses the argument field to provide data to a function and/or to communicate information between network nodes. Use of the OAM SID is exemplified using an example of pinging a SID function in SRv6 network 900 of FIG. 9. In the following, the user wants to ping a SID function, A4::DC45 on network node 904, from network node 902. As noted above, this ping requires punting at network node 904. To exercise OAM punting at node 904, the special OAM SID, A4::OP, has been added to the SRH in packet 907 before the target A4::DC45 SID.
When the node 904 receives the packet 908 (sent from node 903 after SR processing of packet 907), the OAM SID function A4::OP forces OAM packet punting on node 904. The slow path at node 904 can now respond to the ICMP ping request message by sending packet 909. In one embodiment, the same technique is used to punt an OAM packet at any selected node by inserting an An::OP SID in front of the target SID function. In one embodiment, each entry of the "My Local SID Table" indicates the function associated with the local SID. One embodiment includes, but is not limited to, using the following OAM functions associated with a SID:
END.OP—OAM Endpoint with Punt
END.OTP—OAM Endpoint with Timestamp and Punt
T.OTPF—OAM Transit with Timestamp, Punt and Forward

END.OP: OAM Endpoint with Punt
The END.OP (OAM Endpoint with Punt) is the most basic OAM function. When N receives a packet whose IPv6 DA is S and S is a local END.OP SID, N does:
1. Punt the packet to the CPU for processing in software (slow path). ; Ref1
Ref1: Hardware (ucode) just punts the packet. There is no requirement for the hardware to manipulate any TLV in the SRH (or elsewhere). Software (slow path) implements the required OAM mechanism. Please note that use of the END.OP SID in the SRH segment list does not require any changes to PSP behavior.

END.OTP: OAM Endpoint with Timestamp and Punt
The "OAM Endpoint with Timestamp and Punt" function (END.OTP for short) is a variant of the END.OP function. END.OTP can be used for performance management data collection at an arbitrary SRv6 node. When N receives a packet whose IPv6 DA is S and S is a local END.OTP SID, N does:
1. Timestamp the packet. ; Ref1
2. Punt the time-stamped packet to the CPU for processing in software (slow path). ; Ref2
Ref1: Timestamping is done ASAP at the ingress pipeline (in hardware).
Ref2: Hardware (ucode) just punts the packet. There is no requirement for the hardware to manipulate any TLV in the SRH (or elsewhere). Software (slow path) implements the required OAM mechanism.
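The locator-plus-opcode SID encoding described above (opcodes drawn from the reserved 3-63 range) might be sketched as follows. The 64-bit locator/function split and the specific opcode values assigned to ::OP, ::OTP, and ::OTPF are illustrative assumptions, not taken from the source.

```python
# Hypothetical encoding of an OAM SID as <locator>::<opcode>; the
# 64-bit locator/function split and the opcode assignments (3, 4, 5
# from the reserved 3-63 range) are illustrative assumptions.
import ipaddress

OPCODES = {"OP": 3, "OTP": 4, "OTPF": 5}

def make_oam_sid(locator: str, func: str) -> str:
    """Build An::<func> by keeping the high 64 bits of the 128-bit SID
    as the locator and placing the opcode in the low function bits."""
    loc = (int(ipaddress.IPv6Address(locator)) >> 64) << 64
    return str(ipaddress.IPv6Address(loc | OPCODES[func]))
```

For example, `make_oam_sid("a4::", "OP")` yields the SID whose locator part is A4 and whose function part is the assumed ::OP opcode.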
In one embodiment, the timestamp is placed in the punt header or another location.

T.OTPF: OAM Transit with Timestamp, Punt and Forward
The "OAM Transit with Timestamp, Punt and Forward" function (T.OTPF for short) is used to implement punt and forward behavior. When N receives a packet whose IPv6 DA is S and S is a local T.OTPF SID, N does:
1. Timestamp the packet.
2. Punt the time-stamped packet to the CPU for processing in software (slow path). ; Ref1
3. Decrement SL.
4. IF SRH[SL] is not a local SID THEN drop the packet. ; Ref2
5. Continue with execution of the local SID function at SRH[SL].
Ref1: Hardware (ucode) just punts the packet. There is no requirement for the hardware to manipulate any TLV in the SRH (or elsewhere). Software (slow path) implements the required OAM mechanism.
Ref2: The function at SRH[SL] must be a local SID owned by N.

T.OTPF Example
Use of T.OTPF is illustrated using the following SR Policy:
(A, S1::F1) (S3::F3, S2::F2, S1::F1, SL=2)
Consider how the packet needs to be modified in order to implement the "punt and forward" behavior at each segment of this SR Policy. To collect performance data from all SIDs in the sid-list, the ingress needs to insert the OTPF SID in front of all SIDs in the sid-list, as shown in the following:
(A, S1::OTPF) (S3::F3, S3::OTPF, S2::F2, S2::OTPF, S1::F1, S1::OTPF)

OAM "Punt and Forward" Using SRH.Flags.O-Bit
Please note that the use of the T.OTPF function may double the SRH stack size. To address the SRH stack size increase issue, an alternative of using the "O-bit" to define the "punt and forward" OAM function is defined here. The following instructions are inserted at the beginning of the pseudo-code for all SID functions. When N receives a packet whose IPv6 DA is S and S is a local SID, N first executes the following pseudo-code:
IF NH=SRH and SL>0 and SRH.Flags.O-bit is True THEN
a. Timestamp the packet.
b. Punt the time-stamped packet to the CPU for processing in software (slow path). ; Ref1
c.
continue with execution of the function S. ; Ref2
Ref1: Hardware (ucode) just punts the packet. There is no requirement for the hardware to manipulate any TLV in the SRH (or elsewhere). Software (slow path) implements the required OAM mechanism.
Ref2: S is a local SID and is executed based on [ID. draft-filsfils-spring-srv6-network-programming].
The use of OAM "punt and forward" using SRH.Flags.O-bit requires an additional change to disable PSP behavior using the following pseudo-code.

Disabling PSP when SRH.Flags.O-Bit is Set
The following change needs to be implemented for all SID functions. After the instruction 'update the IPv6 DA with SRH[SL]' is executed, the following instructions must be added:
IF updated SL=0 and PSP is TRUE and SRH.Flags.O-bit is False THEN pop the top SRH. ; Ref1
Ref1: PSP behavior is disabled when SRH.Flags.O-bit is set.

In summary, in one embodiment, segment routing (SR) network processing of packets is performed which includes operations signaling and processing of packets in manners providing processing and/or memory efficiencies. One embodiment includes acquiring a particular segment routing packet by a particular router in a network. Responsive to the particular router data plane ascertaining, during fast path processing by a fast path processing unit, that the particular segment routing packet is to be Operations, Administration, and Maintenance (OAM) processed by a different processing unit in the particular router, a timestamp of a current time and the particular segment routing packet, including a segment routing header that includes OAM signaling, are communicated from said fast path processing to the different processing unit, with fast path processing being hardware-based packet processing by the fast path processing unit. The particular segment routing packet is then OAM processed by the different processing unit.
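The sid-list expansion shown in the T.OTPF example above — the ingress inserting an ::OTPF SID ahead of each SID so that every segment endpoint punts a timestamped copy, at the cost of roughly doubling the SRH stack — can be sketched as below. Deriving the locator as the part of each SID before "::" is an assumption for illustration.

```python
# Sketch of inserting an ::OTPF SID in front of each SID of a sid-list
# (in SRH order, as in the example above); taking the locator as the
# text before "::" in each SID is an illustrative assumption.
def insert_otpf(sid_list):
    out = []
    for sid in sid_list:
        locator = sid.split("::")[0]
        out.append(sid)                 # the original SID
        out.append(f"{locator}::OTPF")  # punt-and-forward SID paired with it
    return out
```

Applied to the sid-list of the example, this reproduces (S3::F3, S3::OTPF, S2::F2, S2::OTPF, S1::F1, S1::OTPF), and the doubling of the list length is exactly the stack-size concern that motivates the O-bit alternative.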
The use of OAM SIDs enables a controller or any node in the network to collect OAM/PM data from an arbitrary node or a set of arbitrary nodes in the network. This may be viewed as a more powerful construct than the use of a global OAM bit (e.g., SRH.Flags.O-bit). In view of the many possible embodiments to which the principles of the disclosure may be applied, it will be appreciated that the embodiments and aspects thereof described herein with respect to the drawings/figures are only illustrative and should not be taken as limiting the scope of the disclosure. For example, and as would be apparent to one skilled in the art, many of the process block operations can be re-ordered to be performed before, after, or substantially concurrently with other operations. Also, many different forms of data structures could be used in various embodiments. The disclosure as described herein contemplates all such embodiments as may come within the scope of the following claims and equivalents thereof.
11863436 | DETAILED DESCRIPTION
FIG. 1 shows an example system 100. The system 100 may comprise a plurality of premises devices 104. The premises devices 104 may comprise user devices, such as laptop computers, tablets, wearable devices, personal digital assistants (PDAs), mobile phones, or other computing devices. The premises devices 104 may comprise entertainment devices, such as set-top boxes or smart televisions. The premises devices 104 may comprise internet of things (IoT) devices, such as smart appliances, wearable devices, and/or automation devices. The premises devices 104 may comprise premises management and/or security system devices, such as sensors, communication devices, control panels, and/or alarms. The system 100 may comprise a gateway device 102. The gateway device 102 may be configured to enable the premises devices 104 to establish a wired or wireless connection to the gateway device 102 for purposes of communicating with the gateway device 102 and other network apparatuses beyond the gateway device 102, such as the internet 101. The gateway device 102 may be configured to establish a wired and/or wireless local area network to which the premises devices 104 may connect. For purposes of communicating wirelessly, the gateway device 102 may implement a wireless access technology, such as the IEEE 802.11 ("Wi-Fi") radio access technology. In other implementations, other radio access technologies may be employed, such as IEEE 802.16 or 802.20 ("WiMAX"), IEEE 802.15.4a ("Zigbee"), or 802.15.3c ("UWB"). For purposes of communicating with the gateway device 102 via a wired connection, the gateway device 102 may be configured to implement a wired local area network technology, such as IEEE 802.3 ("Ethernet") or the like. The gateway device 102 may comprise a router. The gateway device 102 may comprise a modem.
The gateway device 102 may be configured to provide a first connection to the internet 101 via a service provider network 105, such as a network operated by a cable television system operator or other communications service provider. The service provider network 105 may comprise any of a variety of types of networks, such as, for example, a coaxial cable network, a fiber-optic cable network, a hybrid fiber-coaxial (HFC) network, a satellite transmission channel, a DSL connection, or the like. The gateway device 102 may be configured to receive data traffic from the premises devices 104, such as via a Wi-Fi network established by the gateway device 102 at the premises. The gateway device 102 may be configured to route the data traffic to the internet 101 via the first internet connection provided by the service provider network 105. The system may comprise one or more cellular devices 103. The premises devices 104 may comprise one or more of the cellular devices 103. The cellular devices 103 may be configured to connect to the internet 101 via a second internet connection provided by a cellular network 106 to which the cellular device 103 may be connected. The cellular network 106 may comprise a cellular network (e.g., 3G, 4G, LTE, or 5G). The cellular device 103 may be configured to operate as a hotspot to enable other devices to connect to the cellular device 103 in order to share its connection to the internet 101 via the cellular network 106. The hotspot provided by the cellular device 103 may comprise a Wi-Fi hotspot. The cellular device 103 may comprise a mobile phone, and the hotspot provided by the device may comprise a personal hotspot. The cellular device 103 may comprise a device associated with an internet and/or cellular service provider, and the hotspot may comprise a dedicated hotspot whose primary purpose is to provide internet connectivity to other devices, such as devices in an area around the hotspot. A cellular device may comprise a public device.
The public device may provide access to the internet 101 via, for example, a 3G, 4G, LTE, and/or 5G connection. The public device may comprise an alternate communication channel for internet connectivity, such as via radio frequency (RF) signals or a land-line connection. The system may further comprise a dedicated hotspot device 107, which may provide another connection 109 to the internet. The dedicated hotspot device 107 may be configured to enable other devices to connect to it, such as via a WiFi connection, to enable those other devices to use the internet connection provided by the dedicated hotspot device 107. The dedicated hotspot device may be operated by the operator of the service provider network 105. The dedicated hotspot device may be operated by a different service provider, such as an internet service provider or a cellular network provider. The dedicated hotspot device 107 may comprise a public device and/or one of many devices located throughout a city or other geographic area, such as in a grid or web. The hotspot device 107 may provide access to the internet 101 via, for example, a 3G, 4G, LTE, and/or 5G connection. The hotspot device 107 may provide access to the internet 101 via other forms of connection, such as via radio frequency (RF) signals or a land-line connection. The gateway device 102 may be configured to determine a change in the first internet connection provided via the service provider network 105. The change may comprise a loss of the first internet connection. The change may comprise a degradation of the first internet connection. The degradation may comprise or be associated with one or more of reduced bandwidth, excessive packet loss, improper or failed routing, or other reduction in quality of service associated with the first internet connection.
The degradation may be determined based on one or more of a measured quantity of packet loss, a measured level of latency, a measured level of latency variability, a determined quantity of packet retries, a determination of a failure to route to specific networks, or other lower-level network metrics indicative of poor performance or quality of service associated with the first internet connection. The gateway device 102 may be configured to determine that the change in the first internet connection occurs for a time period that meets and/or exceeds a predetermined threshold duration. The threshold duration may comprise one minute, three minutes, or five minutes, as examples. The gateway device 102 may be configured to determine one or more cellular devices 103 that are able to provide a second internet connection to use to route internet traffic from the premises devices 104. The gateway device 102 may be configured (e.g., by a user, service provider, ISP, etc.) to detect the one or more cellular devices 103. For example, the user may input, via a user interface, an indication that one or more of the cellular devices 103 are typically present at the premises and capable of providing a cellular connection to the internet via, for example, a personal hotspot capability of the cellular device 103. The user interface for configuring the gateway device may be presented via a mobile application installed on one of the cellular devices 103 associated with the user, such as the user's mobile phone. The gateway device 102 may be configured to detect the one or more cellular devices 103 without user input. For example, the gateway device 102 may be configured to detect an available WiFi connection to each cellular device 103. For example, the gateway device 102 may be configured to detect a beacon transmitted by a cellular device 103 indicative of a WiFi hotspot provided by the cellular device 103.
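The "degraded for a time period meeting a threshold duration" check described above might be sketched as follows; the packet-loss threshold and the 60-second duration are invented for illustration, and a real gateway would combine several of the listed metrics.

```python
# Hypothetical degradation detector for the primary internet
# connection: report a change only after a metric (here, packet loss)
# stays past its threshold for the full threshold duration. The 5%
# loss threshold and 60 s duration are illustrative values only.
class DegradationDetector:
    def __init__(self, loss_threshold=0.05, duration=60.0):
        self.loss_threshold = loss_threshold
        self.duration = duration
        self._degraded_since = None   # time the current bad period began

    def sample(self, now, packet_loss):
        """Feed one measurement; return True once the connection has
        been degraded continuously for at least the threshold duration."""
        if packet_loss > self.loss_threshold:
            if self._degraded_since is None:
                self._degraded_since = now
            return now - self._degraded_since >= self.duration
        self._degraded_since = None   # recovered: restart the clock
        return False
```

A brief dip in quality therefore never triggers failover; only a sustained degradation does, matching the threshold-duration behavior in the text.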
The gateway may monitor or scan, for example periodically, for the presence of such cellular devices 103 providing alternative connections so that it is aware of such devices in the event of a loss or degradation of the first (e.g., primary) internet connection. Alternatively, or in addition, the gateway may scan for such cellular devices upon detection of the loss or degradation of the primary internet connection. The gateway device 102 may be configured to establish a connection to an available WiFi hotspot of one or more of the cellular devices using one or more credentials associated with the WiFi hotspot or the one or more cellular devices. The one or more credentials may be pre-provisioned in the gateway device 102. The one or more credentials may be pre-provisioned by a user of the gateway device 102, such as during a setup operation associated with the gateway device 102. The one or more credentials may be pre-provisioned by a user of the gateway device 102 using a mobile application on one of the cellular devices 103 associated with the user that is configured to communicate with the gateway device 102, such as the user's mobile phone. The gateway device 102 may be configured to determine that more than one cellular device 103 is available to provide an internet connection via the cellular network 106 (or via another cellular network). Based on a loss or degradation of the first internet connection, the gateway device 102 may be configured to query the cellular devices 103. Alternatively, or in addition, the gateway device 102 may be configured to periodically query the cellular devices 103 prior to any determined loss or degradation of the first internet connection. The cellular devices 103 may be configured to respond to the querying. Each of the cellular devices 103 may have a mobile application installed that enables the cellular device to communicate with the gateway device 102 for the purpose of providing the gateway device 102 with an alternative connection to the internet.
The mobile application may use hotspot functionality of the cellular device. The mobile application may provide routing information to the gateway device 102. Based on querying the cellular devices 103, the gateway device 102 may be configured to determine which cellular devices 103 are within communication (e.g., WiFi) range of the gateway device 102. Based on querying the cellular devices, the gateway device 102 may be configured to determine internet connectivity statistics associated with the cellular devices, such as bandwidth and latency. The gateway 102 may be configured to select one of the cellular devices to use to route internet traffic. For example, the gateway device 102 may be configured to determine the cellular device 103 that has the best cellular internet connection. The gateway device 102 may be configured to determine which cellular device 103 has the best cellular connection based on the connectivity statistics associated with the cellular devices. The gateway device 102 may be configured to select one of the cellular devices 103 based on one or more metrics associated with the cellular devices and/or the cellular networks to which they are connected. The one or more metrics may comprise one or more of a signal strength, a latency, a bandwidth, a data rate, connection quality metrics (e.g., jitter), historical connection quality metrics, traffic limits (e.g., a monthly data limit), or a data cost associated with the cellular device 103 and/or its cellular network connection. The data cost may be associated with a service provider that provides cellular data connectivity to the cellular device(s) 103. The data cost may be based on a subscription, such as a subscription with the service provider. The gateway device may compare the connectivity of the alternative internet connections (e.g., to each other or to the first internet connection). The gateway device may select the cellular device with the best relative connectivity.
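Selecting the cellular device with the best connection from the queried metrics might look roughly like the following; the metric names, the scoring weights, and the in-range flag are assumptions for illustration, not from the source.

```python
# Hypothetical scoring of candidate cellular devices from queried
# connectivity statistics; metric names and weights are illustrative.
def select_backup(candidates):
    """Pick the in-range device with the highest score, favoring
    bandwidth and penalizing latency and per-GB data cost."""
    def score(dev):
        return (dev["bandwidth_mbps"]
                - 0.1 * dev["latency_ms"]
                - 5.0 * dev["cost_per_gb"])
    in_range = [d for d in candidates if d["in_range"]]
    return max(in_range, key=score) if in_range else None
```

A fuller implementation would also fold in the other metrics the text lists (signal strength, jitter, historical quality, monthly data limits) and could compare the winner against the first internet connection before switching.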
The gateway device may determine to continue routing internet traffic via the first internet connection if the connectivity of the cellular devices is determined not to be better than the connectivity of the first internet connection. The gateway device 102 may be configured to determine a dedicated hotspot device 107 that is able to provide a second internet connection to use to route internet traffic from the premises devices 104. The gateway device 102 may be configured to detect the dedicated hotspot device 107. The gateway 102 may be configured to detect an available WiFi connection to the dedicated hotspot device 107. For example, the gateway 102 may be configured to detect a beacon transmitted by the dedicated hotspot device 107 indicative of an available WiFi connection to the dedicated hotspot device 107. The gateway may monitor or scan, for example periodically, for the presence of such a hotspot device 107 providing an alternative connection so that it is aware of such a device in the event of a loss or degradation of the first (e.g., primary) internet connection. Alternatively, or in addition, the gateway may scan for such a hotspot device 107 upon detection of the loss or degradation of the primary internet connection. The gateway device 102 may be configured to establish a connection to the dedicated hotspot device 107 using one or more credentials associated with the dedicated hotspot device. The one or more credentials may be pre-provisioned in the gateway device 102. The one or more credentials may be pre-provisioned by a user of the gateway device 102, such as during a setup operation associated with the gateway. The gateway device 102 may be configured to receive data, such as internet traffic, from the premises devices 104. The gateway device 102 may be configured to route the data to the internet via the second internet connection provided by the determined cellular device 103 or dedicated hotspot device 107.
The gateway device 102 may be configured to route the data via the second internet connection provided by the determined cellular device 103 or dedicated hotspot device 107 instead of via the first internet connection provided by the service provider network 105. The gateway device 102 may be configured to route the data via the second internet connection without receiving a command, prompt, or request from a user. Alternatively, or in addition, the gateway device 102 may be configured to designate the determined cellular device 103 as providing the second internet connection. Based on that designation, the determined cellular device 103 may send its internet traffic (e.g., data) directly via the second internet connection that it provides. For other cellular devices 103 not selected by the gateway device 102 to serve as the second internet connection, the gateway device 102 may be configured to, upon receiving data from any of such other cellular devices 103, send a signal or notification back to the sending cellular device 103 to cause that sending cellular device 103 to send its data directly to the determined cellular device 103 that is providing the second internet connection, for routing by the determined cellular device to the internet. Alternatively, or in addition, upon determining a loss or degradation of the primary internet connection, the gateway device 102 may be configured to determine which cellular devices 103 and/or hotspot devices 107 are in communication with the gateway device 102. Based on that determination, the gateway device 102 may then send a message, alert, or other signal to each of those devices that are in communication with the gateway device 102, indicating that they should route future internet traffic (e.g., data) via the determined cellular device 103 or hotspot device 107 that is serving as the second internet connection. 
Upon detecting the restoration of, or improvement in the quality of, the primary internet connection, the gateway device 102 may be configured to alert each cellular device 103 to again route its internet traffic via the gateway device 102. The gateway device 102 may be configured to send a notification indicative of the internet traffic being routed via the second internet connection provided by the determined cellular device 103 or dedicated hotspot device 107. The gateway device 102 may be configured to send the notification to one or more of the premises devices 104 or to one of the cellular devices 103 associated with the user, such as the user's mobile phone. The gateway device 102 may be configured to send the notification to one or more accounts. The accounts may comprise accounts of one or more users associated with the premises. The one or more accounts may comprise one or more accounts of subscribers of a service from a provider. The gateway device 102 may be configured to route the internet traffic via the second internet connection until the first internet connection provided by the service provider network 105 has been reestablished or is no longer degraded. The gateway device 102 may be configured to route the internet traffic via the second internet connection until a metric associated with the first internet connection has improved. The gateway device 102 may be configured to route the internet traffic via the second internet connection until a loss or degradation of the second internet connection occurs. Loss or degradation of the second connection may occur based on the determined cellular device 103 moving, such as outside a premises. Loss or degradation of the second connection may occur based on a data limit associated with the determined cellular device 103 being reached or exceeded. 
Based on detection of a loss or degradation of the second internet connection, or at times (e.g., periodically) prior to any such loss or degradation (e.g., in anticipation that such loss or degradation may occur), the gateway device 102 may be configured to determine another cellular device 103 or another dedicated hotspot device 107. The degradation of the second internet connection may comprise a change in connectivity associated with the second internet connection. The degradation may be determined based on a quantity of packet loss, a level of latency, a level of latency variability, a predetermined quantity of retries, a failure to route to specific networks, or other lower-level network metrics indicative of poor connectivity. In the event the gateway device 102 determines a degradation or loss of the second internet connection, the gateway device 102 may be configured to route internet traffic via a third internet connection provided by the other determined cellular device 103 or a dedicated hotspot device 107. Based on the first internet connection being reestablished, its connectivity improving, and/or loss or degradation of the second internet connection, the gateway device 102 may be configured to route the internet traffic via the first internet connection, despite the initial loss or degradation of the first internet connection. The gateway device 102 may be configured to send a notification indicating that the first internet connection has been reestablished or improved and/or that the internet traffic is being routed via the first internet connection. FIG. 2 shows an example method 200. At step 210, a change in a first internet connection provided by a service provider (e.g., the first internet connection via the service provider network 105 in FIG. 1) may be determined. The change in the first internet connection may be determined by a gateway device (e.g., the gateway device 102 in FIG. 1). 
The first internet connection may comprise a primary internet connection. The change may comprise a loss of the first internet connection. The change may comprise a degradation of the first internet connection. The degradation may comprise or be associated with one or more of reduced bandwidth, excessive packet loss, improper or failed routing, or other reduction in performance or quality of service associated with the first internet connection. The degradation may be determined based on one or more of a measured quantity of packet loss, a measured level of latency, a measured level of latency variability, a determined quantity of packet retries, a determination of a failure to route to specific networks, or other lower-level network metrics indicative of poor performance or quality of service associated with the first internet connection. The change may be determined to last for (e.g., meet and/or exceed) a predetermined threshold time period. The threshold time period may comprise one minute, three minutes, or five minutes, as examples. At step 220, one or more alternative (e.g., secondary) internet connections may be determined. An alternative internet connection may comprise a cellular internet connection via a cellular device (e.g., one of the cellular devices 103 in FIG. 1). The cellular internet connection may be determined by the gateway device. The cellular internet connection may be determined based on the change in the first internet connection. The gateway device may be configured to detect the one or more cellular devices providing alternative internet connections. The gateway device may be configured by a user to detect the one or more cellular devices. For example, the user may input, via a user interface, an indication that one or more of the cellular devices are typically present at the premises and capable of providing a cellular connection to the internet via, for example, a personal hotspot capability of the cellular device. 
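The persistence requirement described above (a change must last for a threshold period before it counts) can be sketched as a small state machine. The latency and packet-loss thresholds here are illustrative assumptions, not values from this description; the hold period uses the five-minute example above.

```python
import time

class DegradationDetector:
    """Flag a qualifying change only after the degradation has persisted
    for a threshold time period (e.g., one, three, or five minutes)."""

    def __init__(self, max_latency_ms=1000.0, max_loss_pct=5.0,
                 hold_seconds=300.0, clock=time.monotonic):
        self.max_latency_ms = max_latency_ms  # illustrative thresholds
        self.max_loss_pct = max_loss_pct
        self.hold_seconds = hold_seconds
        self.clock = clock
        self._degraded_since = None  # when the current degraded stretch began

    def sample(self, latency_ms, loss_pct):
        """Feed one measurement; return True once the degradation has
        lasted at least the hold period."""
        if latency_ms > self.max_latency_ms or loss_pct > self.max_loss_pct:
            if self._degraded_since is None:
                self._degraded_since = self.clock()
            return self.clock() - self._degraded_since >= self.hold_seconds
        self._degraded_since = None  # connection recovered; reset the timer
        return False
```

A real gateway would feed this from periodic probes and only begin the alternative-connection search once `sample` returns True.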
The user interface for configuring the gateway device may be presented via a mobile application installed on one of the cellular devices associated with the user, such as the user's mobile phone. The gateway device may be configured to detect available WiFi connections to the cellular devices. For example, the gateway device may be configured to detect a beacon transmitted by a cellular device indicative of a WiFi hotspot provided by the cellular device. The gateway device may be configured to establish a connection to the WiFi hotspot using one or more credentials associated with the WiFi hotspot. The one or more credentials may be pre-provisioned in the gateway device. The one or more credentials may be pre-provisioned by a user of the gateway device, such as during a setup operation associated with the gateway device. It may be determined that more than one cellular device is available to provide an internet connection via the cellular network (or via another cellular network). Based on the loss or degradation of the first internet connection, or at times prior to any such loss or degradation, the gateway device may be configured to query the cellular devices. The cellular devices may be configured to respond to the querying. Each of the cellular devices may have a mobile application installed that enables the cellular device to communicate with the gateway device for the purpose of providing the gateway device with an alternative connection to the internet. The mobile application may use hotspot functionality of the cellular device. The mobile application may provide routing information to the gateway device. Based on querying the cellular devices, the gateway device may be configured to determine which cellular devices are in communication (e.g., WiFi) range of the gateway device. Based on querying the cellular devices, the gateway device may be configured to determine connectivity statistics associated with the cellular devices, such as bandwidth and latency. 
The alternative internet connections may comprise an internet connection via a dedicated hotspot device (e.g., the hotspot device 107 in FIG. 1). The dedicated hotspot device may be able to provide a second internet connection to use to route internet traffic from the premises devices. The gateway device may be configured to detect the dedicated hotspot device. The gateway device may be configured to detect an available WiFi connection to the dedicated hotspot device. For example, the gateway device may be configured to detect a beacon transmitted by the dedicated hotspot device indicative of an available WiFi connection to the dedicated hotspot device. The gateway device may be configured to establish a connection to the dedicated hotspot device using one or more credentials associated with the dedicated hotspot device. The one or more credentials may be pre-provisioned in the gateway device. The one or more credentials may be pre-provisioned by a user of the gateway device, such as during a setup operation associated with the gateway device. At step 230, one of the cellular devices or the dedicated hotspot device may be selected to use to provide an alternative internet connection. The selection may be based on a comparison, by the gateway device for example, of the connectivity of the alternative internet connections (e.g., to each other or to the first internet connection). The gateway device may be configured to determine which cellular device has the best cellular connection based on the connectivity statistics associated with the cellular devices. The gateway device may be configured to select one of the cellular devices based on one or more other metrics associated with the cellular devices and/or cellular networks to which they are connected. 
The one or more metrics may comprise one or more of a signal strength, a latency, a bandwidth, a data rate, connection quality metrics (e.g., jitter), historical connection quality metrics, traffic limits (e.g., a monthly data limit), or a data cost associated with the cellular device and/or its cellular network connection. The data cost may be associated with a service provider that provides cellular data to the cellular device(s). The data cost may be based on a subscription, such as a subscription with the service provider. The gateway device may select the cellular device with the best relative connectivity. The gateway device may determine to continue routing internet traffic via the first internet connection based on the connectivity of the cellular devices not being better than the connectivity of the first internet connection. At step 240, data, such as internet traffic, may be routed via a second internet connection provided by the selected cellular device or dedicated hotspot device (e.g., the second internet connection via the cellular network 106 in FIG. 1). The internet traffic may be received from one or more devices located at the premises (e.g., premises devices 104 in FIG. 1). Routing the internet traffic may comprise receiving the internet traffic from the premises devices and sending the internet traffic to the selected cellular device or dedicated hotspot. The internet traffic may be routed via the second internet connection independent of receiving a command, prompt, and/or request from a user. A notification indicating that the internet traffic is being routed via the second internet connection may be sent to one or more devices, such as the premises devices, cellular devices, or other user devices. The notification may be sent to one or more accounts associated with users associated with the premises. The notification may be sent based on selecting the cellular device, as in step 230. 
The notification may be sent based on routing the internet traffic via the second internet connection, as in step 240. FIG. 3 shows an example method 300. At step 310, a change in a first internet connection (e.g., the first internet connection via the service provider network 105 in FIG. 1) may be determined. The change in the first internet connection may be determined by a gateway device (e.g., the gateway device 102 in FIG. 1). The change may comprise a loss or degradation of the first internet connection. The degradation may comprise or be associated with one or more of reduced bandwidth, excessive packet loss, improper or failed routing, or other reduction in performance or quality of service associated with the first internet connection. The degradation may be determined based on one or more of a measured quantity of packet loss, a measured level of latency, a measured level of latency variability, a determined quantity of packet retries, a determination of a failure to route to specific networks, or other lower-level network metrics indicative of poor performance or quality of service associated with the first internet connection. The change may be determined to last for (e.g., meet and/or exceed) a predetermined threshold time period. The threshold time period may comprise one minute, three minutes, or five minutes, as examples. At step 320, one or more alternative (e.g., secondary) internet connections may be determined. An alternative internet connection may comprise a cellular internet connection via a cellular device (e.g., one of the cellular devices 103 in FIG. 1). The cellular internet connection may be determined by the gateway device. The cellular internet connection may be determined based on the change in the first internet connection. The gateway device may be configured to detect the one or more cellular devices providing alternative internet connections. The gateway device may be configured by a user to detect the one or more cellular devices. 
For example, the user may input, via a user interface, an indication that one or more of the cellular devices are typically present at the premises and capable of providing a cellular connection to the internet via, for example, a personal hotspot capability of the cellular device. The user interface for configuring the gateway device may be presented via a mobile application installed on one of the cellular devices associated with the user, such as the user's mobile phone. The gateway device may be configured to detect available WiFi connections to the cellular devices. For example, the gateway device may be configured to detect a beacon transmitted by a cellular device indicative of a WiFi hotspot provided by the cellular device. The gateway device may be configured to establish a connection to the WiFi hotspot using one or more credentials associated with the WiFi hotspot. The one or more credentials may be pre-provisioned in the gateway device. The one or more credentials may be pre-provisioned by a user of the gateway device, such as during a setup operation associated with the gateway device. It may be determined that more than one cellular device is available to provide an internet connection via the cellular network (or via another cellular network). Based on the loss or degradation of the first internet connection, the gateway device may be configured to query the cellular devices. The cellular devices may be configured to respond to the querying. Each of the cellular devices may have a mobile application installed that enables the cellular device to communicate with the gateway device for the purpose of providing the gateway device with an alternative connection to the internet. The mobile application may use hotspot functionality of the cellular device. The mobile application may provide routing information to the gateway device. 
Based on querying the cellular devices, the gateway device may be configured to determine which cellular devices are in communication (e.g., WiFi) range of the gateway device. Based on querying the cellular devices, the gateway device may be configured to determine connectivity statistics associated with the cellular devices, such as bandwidth and latency. The alternative internet connections may comprise an internet connection via a dedicated hotspot device (e.g., the hotspot device 107 in FIG. 1). The dedicated hotspot device may be able to provide a second internet connection to use to route internet traffic from the premises devices. The gateway device may be configured to detect the dedicated hotspot device. The gateway device may be configured to detect an available WiFi connection to the dedicated hotspot device. For example, the gateway device may be configured to detect a beacon transmitted by the dedicated hotspot device indicative of an available WiFi connection to the dedicated hotspot device. The gateway device may be configured to establish a connection to the dedicated hotspot device using one or more credentials associated with the dedicated hotspot device. The one or more credentials may be pre-provisioned in the gateway device. The one or more credentials may be pre-provisioned by a user of the gateway device, such as during a setup operation associated with the gateway device. At step 330, one of the cellular devices or the dedicated hotspot device may be selected to use to provide an alternative internet connection. The selection may be based on a comparison, by the gateway device for example, of the connectivity of the alternative internet connections (e.g., to each other or to the first internet connection). The gateway device may be configured to determine which cellular device has the best cellular connection based on the connectivity statistics associated with the cellular devices. 
The gateway device may be configured to select one of the cellular devices based on one or more other metrics associated with the cellular devices and/or cellular networks to which they are connected. The one or more metrics may comprise one or more of a signal strength, a latency, a bandwidth, a data rate, connection quality metrics (e.g., jitter), historical connection quality metrics, traffic limits (e.g., a monthly data limit), or a data cost associated with the cellular device and/or its cellular network connection. The data cost may be associated with a service provider that provides cellular data to the cellular device(s). The data cost may be based on a subscription, such as a subscription with the service provider. The gateway device may select the cellular device with the best relative connectivity. The gateway device may determine to continue routing internet traffic via the first internet connection based on the connectivity of the cellular devices not being better than the connectivity of the first internet connection. At step 340, internet traffic may be routed via the second internet connection provided by the selected cellular device or dedicated hotspot device (e.g., the second internet connection via the cellular network 106 in FIG. 1). The internet traffic may be received from one or more devices located at the premises (e.g., premises devices 104 in FIG. 1), such as by the gateway device. The gateway device may route the internet traffic via the second internet connection. Routing the internet traffic may comprise receiving the internet traffic from the devices and sending the internet traffic to the selected cellular device or dedicated hotspot. The internet traffic may be routed via the second internet connection independent of receiving a command, prompt, and/or request from a user. 
A notification indicating that the internet traffic is being routed via the second internet connection may be sent to one or more devices, such as one or more of the premises devices, the cellular devices, or other user devices. The notification may be sent to one or more accounts associated with users associated with the premises. The notification may be sent based on selecting the cellular device, as in step 330. The notification may be sent based on routing the internet traffic via the second internet connection, as in step 340. At step 350, the internet traffic may be throttled. The gateway device may throttle the internet traffic. The internet traffic may be throttled based on a data limit (e.g., a monthly data limit) being reached or exceeded. The internet traffic may be throttled as it is routed via the second internet connection. The internet traffic may be throttled based on a bandwidth of the second internet connection. The internet traffic may be throttled based on a load of the internet traffic. Throttling the internet traffic may comprise lowering the bandwidth of subsequent communications. Based on the internet traffic being throttled, the internet traffic may be routed via the first internet connection. For example, the device may have a monthly data limit of 5 gigabytes (GB). Based on the 5 GB being used within a monthly period, the bandwidth of the second internet connection may drop from 5 megabits per second (Mbps) to 2 Mbps. If the first internet connection (even when degraded) provides 3 Mbps, the internet traffic may again be routed via the first internet connection. If the first internet connection degrades to 1 Mbps, the internet traffic may again be routed via the alternate second internet connection. FIG. 4 shows an example method 400. 
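The bandwidth comparison in the 5 GB throttling example above can be sketched as follows. The 2 Mbps throttled rate comes from that example; the function names and the greater-or-equal tie-break are assumptions for this sketch.

```python
def effective_cell_mbps(nominal_mbps, used_gb, limit_gb, throttled_mbps=2.0):
    """Bandwidth the cellular link can offer, assuming the carrier
    throttles to `throttled_mbps` once the monthly data limit is reached."""
    return throttled_mbps if used_gb >= limit_gb else nominal_mbps

def choose_route(primary_mbps, cell_nominal_mbps, used_gb, limit_gb):
    """Return which connection to route over, preferring whichever
    currently offers more bandwidth (primary wins ties)."""
    cell = effective_cell_mbps(cell_nominal_mbps, used_gb, limit_gb)
    return "primary" if primary_mbps >= cell else "cellular"
```

With the example's numbers, a degraded 3 Mbps primary beats a throttled 2 Mbps cellular link, while a 1 Mbps primary loses to it.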
At step 410, data, such as internet traffic, may be routed by a gateway device via an alternative internet connection provided by a cellular device (e.g., an alternative internet connection via the cellular network 106 in FIG. 1) or a dedicated hotspot device (e.g., the hotspot device 107 in FIG. 1). The data may be received from one or more devices located at the premises (e.g., premises devices 104 in FIG. 1). Routing the internet traffic may comprise receiving the internet traffic from the devices and sending the internet traffic to the cellular device or dedicated hotspot device. The internet traffic may be routed via the alternative internet connection independent of receiving a command, prompt, and/or request from a user. At step 420, a change in a primary internet connection (e.g., the first or primary internet connection via the service provider network 105 in FIG. 1) may be determined. The change in the primary internet connection may be determined by the gateway device. The change in the primary internet connection may comprise a connectivity associated with the primary internet connection improving. The change in the primary internet connection may comprise a measure of a quality of the connectivity of the primary internet connection improving by a predetermined threshold amount. The change in the primary internet connection may comprise the change lasting for at least a predetermined threshold time. At step 430, internet traffic may be routed via the primary internet connection. The internet traffic may be routed via the primary internet connection based on the change in the primary internet connection. The internet traffic may be routed via the primary internet connection based on the connectivity of the primary internet connection being better than the alternative internet connection, such as the primary internet connection having a greater bandwidth or lower latency. The internet traffic may be routed by the gateway device. 
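Requiring the improvement to exceed a threshold amount and last for a threshold time, as described above, amounts to hysteresis that keeps the gateway from flapping between links. The sketch below is an illustrative approach; the minimum-gain and hold-time values are assumptions, not values from this description.

```python
import time

class RecoveryMonitor:
    """Switch back to the primary connection only after its quality has
    exceeded the alternative's by `min_gain_mbps` for `hold_seconds`."""

    def __init__(self, min_gain_mbps=1.0, hold_seconds=120.0, clock=time.monotonic):
        self.min_gain_mbps = min_gain_mbps  # illustrative hysteresis values
        self.hold_seconds = hold_seconds
        self.clock = clock
        self._good_since = None  # when the primary link first looked better

    def should_switch_back(self, primary_mbps, alternative_mbps):
        """Feed one bandwidth comparison; return True once the improvement
        has persisted for the hold period."""
        if primary_mbps >= alternative_mbps + self.min_gain_mbps:
            if self._good_since is None:
                self._good_since = self.clock()
            return self.clock() - self._good_since >= self.hold_seconds
        self._good_since = None  # improvement did not persist; start over
        return False
```

Once `should_switch_back` returns True, the gateway would resume routing via the primary connection and send the notification described below.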
The internet traffic may comprise internet traffic from one or more of the premises devices. The internet traffic may be routed via the primary internet connection instead of via the alternative internet connection. A notification indicating that the primary internet connection has improved and/or that the internet traffic is being routed via the primary internet connection may be sent to one or more devices. For example, a gateway device at a house may determine that the connectivity of the primary internet connection provided by a cable network has dropped below a predetermined threshold. For example, the gateway may determine that latency of the internet connection exceeds 1000 milliseconds or that bandwidth has dropped below 100 kilobits per second (Kbps). The gateway device may determine that this degradation in connectivity has not improved over a duration of five minutes. The gateway device may determine cellular devices at the house. The cellular devices may include a tablet device, a mobile phone, and a dedicated hotspot. The gateway device may determine that the cellular connection of the mobile phone is better than the cellular connections of the tablet device and the dedicated hotspot. The gateway device may connect to the cellular connection via the mobile phone. The gateway device may receive internet traffic from devices at the house, such as a laptop computer and a set-top box. The gateway device may route the internet traffic via the cellular connection. The gateway device may receive responsive internet traffic via the cellular connection and send the responsive internet traffic to the intended recipient device, such as the laptop computer or the set-top box. The gateway device may send a notification to user devices at the house indicating that internet traffic is being routed via the cellular network. The gateway device may determine that the connectivity of the primary internet connection has improved. 
Based on the improvement in the primary internet connection, the gateway device may switch back to routing internet traffic from the devices at the house via the primary internet connection. The gateway device may send a notification to user devices at the house indicating that routing via the first internet connection has resumed. FIG. 5 shows an example system 500. The system 500 may comprise a computing device 501. One or more of the devices in FIG. 1, such as the gateway device 102, the cellular devices 103, the dedicated hotspot device 107, and/or the premises devices 104, may be embodied in the form of the computing device 501. The computing device 501 may comprise a system bus 513. The system bus 513 may comprise one or more bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. The architectures may comprise an Industry Standard Architecture (ISA) bus, a Micro Channel Architecture (MCA) bus, an Enhanced ISA (EISA) bus, a Video Electronics Standards Association (VESA) local bus, an Accelerated Graphics Port (AGP) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express bus, a Personal Computer Memory Card International Association (PCMCIA) bus, a Universal Serial Bus (USB), and the like. The bus 513, and all buses specified in this description, may also be implemented over a wired or wireless network connection, and each of the subsystems, including the processor 503, a mass storage device 504, an operating system 505, content playback management software 506, content playback management data 507, a network adapter 508, system memory 512, an Input/Output Interface 510, a display adapter 509, a display device 511, and a human machine interface 502, may be contained within one or more remote computing devices 514a, 514b, 514c at physically separate locations, connected through buses of this form, in effect implementing a fully distributed system. 
The computing device 501 may comprise a variety of computer readable media. Exemplary readable media may be any available media that is accessible by the computing device 501 and comprises, for example and not meant to be limiting, both volatile and non-volatile media, and removable and non-removable media. The system memory 512 may comprise the intelligent cache. The system memory 512 comprises computer readable media in the form of volatile memory, such as random access memory (RAM), and/or non-volatile memory, such as read only memory (ROM). The system memory 512 may store data such as content playback management data 507 and/or program modules such as operating system 505 and content playback management software 506 that are immediately accessible to and/or are presently operated on by the processing unit 503. The computing device 501 may comprise other removable/non-removable, volatile/non-volatile computer storage media. FIG. 5 shows a mass storage device 504 which may provide non-volatile storage of computer code, computer readable instructions, data structures, program modules, and other data for the computing device 501. For example and not meant to be limiting, a mass storage device 504 may be a hard disk, a removable magnetic disk, a removable optical disk, magnetic cassettes or other magnetic storage devices, flash memory cards, CD-ROM, digital versatile disks (DVD) or other optical storage, random access memories (RAM), read only memories (ROM), electrically erasable programmable read-only memory (EEPROM), and the like. Optionally, any number of program modules may be stored in the mass storage device 504, including, for example, an operating system 505 and content playback management software 506. Each of the operating system 505 and content playback management software 506 (or some combination thereof) may comprise elements of the programming and the content playback management software 506. Content playback management data 507 may also be stored in the mass storage device 504. 
Content playback management data 507 may be stored in any of one or more databases known in the art. Examples of such databases comprise DB2®, Microsoft® Access, Microsoft® SQL Server, Oracle®, MySQL, PostgreSQL, and the like. The databases may be centralized or distributed across multiple systems. The user may enter commands and information into the computing device 501 via an input device (not shown). Examples of such input devices comprise, but are not limited to, a keyboard, a pointing device (e.g., a “mouse”), a microphone, a joystick, a scanner, tactile input devices such as gloves and other body coverings, and the like. These and other input devices may be connected to the processing unit 503 via a human machine interface 502 that is coupled to the system bus 513, but may be connected by other interface and bus structures, such as a parallel port, a game port, an IEEE 1394 port (also known as a Firewire port), a serial port, or a universal serial bus (USB). A display device 511 may also be connected to the system bus 513 via an interface, such as a display adapter 509. It is contemplated that the computing device 501 may have more than one display adapter 509 and the computing device 501 may have more than one display device 511. For example, a display device may be a monitor, an LCD (Liquid Crystal Display), or a projector. In addition to the display device 511, other output peripheral devices may comprise components such as speakers (not shown) and a printer (not shown), which may be connected to the computing device 501 via the Input/Output Interface 510. Any step and/or result of the methods may be output in any form to an output device. Such output may be any form of representation, including, but not limited to, textual, graphical, animation, audio, tactile, and the like. The display 511 and computing device 501 may be part of one device, or separate devices. 
The computing device 501 may operate in a networked environment using logical connections to one or more remote computing devices 514a, b, c. The remote computing devices 514a, b, c may comprise one or more of the devices in FIG. 1. A remote computing device may be a personal computer, portable computer, a smart phone, a server, a router, a network computer, a peer device or other common network node, and so on. Logical connections between the computing device 501 and a remote computing device 514a, b, c may be made via a network 515, such as a local area network (LAN) and a general wide area network (WAN). Such network connections may be through a network adapter 508. A network adapter 508 may be implemented in both wired and wireless environments. Such networking environments are conventional and commonplace in dwellings, offices, enterprise-wide computer networks, intranets, and the Internet. Application programs and other executable program components such as the operating system 505 are shown herein as discrete blocks, although it is recognized that such programs and components reside at various times in different storage components of the computing device 501, and are executed by the data processor(s) of the computer. An implementation of content playback management software 506 may be stored in or sent across some form of computer readable media. Any of the disclosed methods may be performed by computer readable instructions embodied on computer readable media. Computer readable media may be any available media that may be accessed by a computer. For example and not meant to be limiting, computer readable media may comprise “computer storage media” and “communications media.” “Computer storage media” comprise volatile and non-volatile, removable and non-removable media implemented in any methods or technology for storage of information such as computer readable instructions, data structures, program modules, or other data.
Exemplary computer storage media comprises, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which may be used to store the desired information and which may be accessed by a computer.
11863437 | DETAILED DESCRIPTION OF THE INVENTION To overcome the problems faced by the conventional network routing technologies, the present invention provides a decentralized system in which distributed nodes self-organize into a peer-to-peer computer network. Data transfer latencies and stabilities between nodes are continually measured and evaluated. When a data transport need arises between two nodes in the network, better performing paths between nodes are dynamically determined in the peer-to-peer computer network based on the up-to-date measured latencies and network stability. In some embodiments, referring to FIG. 1, a peer-to-peer computer network 100 includes a plurality of nodes A, B, C, V1, R, P, V2, Z, etc. Some of the nodes (e.g., A, B, C, R, P, Z) can be physical computer devices or systems which are connected on the Internet. Some of the nodes (e.g., V1, V2 . . . ) can be virtual nodes, such as virtual machines or virtual agents defined in a software defined network. The peer nodes in the peer-to-peer computer network 100 can communicate with each other in encrypted messages using public/private key pairs. The public key of a node can be obtained from the node ID of the node, which is available to all peer nodes in the peer-to-peer computer network 100. All the nodes in the peer-to-peer computer network 100 are pre-installed with computer codes which contain protocols that govern the communications among the nodes; the set-up, maintenance, and governance within the peer-to-peer computer network 100; and measurements, data path selection, and data routing within the peer-to-peer computer network 100. FIG. 2 shows detailed components of two exemplified nodes, node A 210 and node V1 250, in the peer-to-peer computer network 100. Node A 210 includes a communication module 220, a processor 225, and computer memory 230.
The computer memory 230 stores computer codes that include instructions that define a distributed autonomous routing protocol (DARP), which can be executed by the processor 225 and the communication module 220. The components in the DARP are the same as those stored in a virtual node such as node V1 250, and their details are described below in conjunction with node V1 250. The node V1 250 is a self-contained virtual system which resides in a host system or host device but is isolated from the host by a firewall 255. A virtual node can run any executable or script that is supported by the operating system environment of the host system or host device. The node V1 250 includes a remote access module 260 that is configured to communicate with other nodes in the peer-to-peer computer network 100. The pre-installed DARP defines several applications or modules: network self-organization protocols 270, a peer-node hash table 275, data path discovery protocols 280, and a smart contract 290. Analogously, these protocols and a peer-node hash table are stored in the computer memory 230 in the node A 210, which can be accessed and executed by the processor 225. The peer-node hash table 275 can store IP addresses, port numbers, and protocols (such as TCP, UDP, DNS, etc.), which are information used to communicate with the nodes identified by the node IDs. The nodes may support multiple network protocols that can be used to exchange messages based on network parameters. Nodes can choose which protocol is best suited for a particular situation and switch when needed. Each node must have a public/private key pair in order to be able to join the network. A node ID is derived from the public key. The public key of a node can also be obtained from its node ID, which allows other peer nodes to verify the authenticity of messages signed by this node. Thus, a node ID is not only an identifier for the node but can also be used to obtain the public key for decrypting messages sent by this node.
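The relationship between a node ID and a public key described above can be sketched in a few lines; a minimal illustration that assumes the node ID is simply a SHA-256 digest of the raw public-key bytes (the actual DARP key encoding and signature scheme are not specified here).

```python
import hashlib

def node_id_from_public_key(public_key: bytes) -> str:
    # Hypothetical derivation: the node ID is the hex digest of the public
    # key, so any peer holding an ID can check it against a claimed key.
    return hashlib.sha256(public_key).hexdigest()

def verify_claimed_key(node_id: str, claimed_public_key: bytes) -> bool:
    # A peer verifies that a claimed public key corresponds to a node ID
    # before trusting messages signed by that node.
    return node_id_from_public_key(claimed_public_key) == node_id

key = b"example-public-key-bytes"
nid = node_id_from_public_key(key)
assert verify_claimed_key(nid, key)
assert not verify_claimed_key(nid, b"forged-key")
```

With a derivation of this kind, the node ID doubles as a commitment to the public key, which is why peers can authenticate signed messages knowing only the sender's ID.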
Moreover, secure messages sent from other peer nodes to this node can be encrypted by the public key of this node, and can only be decrypted and read with the private key of this node. The peer-node hash table 275 at each node contains information for a portion of the peer nodes (i.e., a portion of the global node ID hash table) in the whole peer-to-peer computer network. Importantly, other peer nodes can also query a peer node even if it is not stored in their own peer-node hash tables. Given that each node is connected to the peer-to-peer computer network 100 and its node ID is stored in the peer-node hash tables at some peer nodes, any other node within the peer-to-peer computer network 100 may find it one way or another. Thus, with the sharing of information stored in peer-node hash tables, nodes in the peer-to-peer computer network 100 are not required to be directly connected for them to find each other. The node IDs and queries of the node IDs can be defined by the Kademlia protocol. The network self-organization protocols 270 store instructions for tasks for autonomously setting up and maintaining the peer-to-peer computer network 100. Since there is no centralized command center, the peer-to-peer computer network 100 is formed and maintained solely by the distributed nodes therein, which makes the disclosed network more resilient against attacks and network failures. The disclosed peer-to-peer computer network 100 adopts a node-centric approach in organizing the relationships between a node and other nodes. Referring to FIG. 1, node A is connected to node B, node C, node V1, and node R via connections 11, 12, 13, 15, respectively. These nodes that node A is connected to are stored as neighbor nodes at node A. Node A sends pulse messages to nodes B, C, V1, and R, and some of the nodes reply and send return pulses back to node A.
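A peer-node hash table of the kind described above can be modeled as a mapping from a hash of a node ID to that node's contact information; a minimal sketch with hypothetical field values, leaving out the forwarding of unresolved queries to neighbor nodes.

```python
import hashlib

# Maps hash(node ID) -> (IP address, port, protocol); values are illustrative.
peer_table: dict = {}

def _key(node_id: str) -> str:
    return hashlib.sha256(node_id.encode()).hexdigest()

def register(node_id: str, ip: str, port: int, protocol: str) -> None:
    peer_table[_key(node_id)] = (ip, port, protocol)

def lookup(node_id: str):
    # Returns contact info if this node knows the peer, else None; in the
    # real network the query would then be forwarded to neighbor nodes.
    return peer_table.get(_key(node_id))

register("node-R", "203.0.113.7", 9000, "UDP")
assert lookup("node-R") == ("203.0.113.7", 9000, "UDP")
assert lookup("node-Z") is None
```

A `None` result corresponds to the case where a node is not in the local table and the query must be relayed through neighbors, as the Kademlia-style lookup allows.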
Using the time stamps of the pulse messages sent out and the reception time stamps of the return messages, node A can calculate round-trip times (RTTs) from the respective nodes. In some embodiments, the pulse messages can be based on User Datagram Protocol (UDP), TCP, or DNS protocols. Node A organizes its neighbor nodes according to the measured values of the respective RTTs: for example, neighbor nodes having RTTs within [0, 10 ms] are placed in a first orbital bin; neighbor nodes having RTTs within (10 ms, 20 ms] are placed in a second orbital bin, and so on. Graphically, the nodes can be visualized as located at different orbits around node A: node B and node C are on orbit 10 (~10 ms RTT) relative to node A, while node V1 and node R are located at an orbit 20 (~20 ms RTT) around node A, and so on. In addition to data-transfer latencies, each node also measures jitters in its communication with other nodes. Details about latency measurements based on sending and reception time stamps and details about jitters in data transfer latencies between nodes are discussed in commonly assigned pending U.S. patent application Ser. No. 17/237,026, titled “Autonomously routing data using relay nodes pre-selected from a group of distributed computer nodes based on measured one-way latencies”, filed Apr. 21, 2021, the content of which is incorporated herein by reference. Since the peer-to-peer computer network 100 is a distributed system without a center, each of node B, node C, node V1, and node R measures RTTs from their respective neighbor nodes and organizes the respective neighbor nodes in a similar fashion as node A does, as described above. For example, node R is connected to neighbor node P with connection 32 and to neighbor node V2 via connection 31. Node P is located on an orbit 30 relative to node R and node V2 is located on an orbit 40 relative to node R.
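The RTT bookkeeping and orbital bins described above can be sketched as follows; a simplified model in which both timestamps are read from node A's own clock, and the 10 ms bin width mirrors the example intervals given here.

```python
import math

def round_trip_time(pulse_sent: float, return_received: float) -> float:
    # Both timestamps are taken from node A's own clock, so no clock
    # synchronization with the neighbor node is required.
    return return_received - pulse_sent

def orbital_bin(rtt_ms: float, bin_width_ms: float = 10.0) -> int:
    # Bin 0 holds neighbors with RTT in [0, 10 ms], bin 1 holds (10, 20 ms], ...
    return max(0, math.ceil(rtt_ms / bin_width_ms) - 1)

assert orbital_bin(round_trip_time(0.0, 9.5)) == 0   # the ~10 ms orbit
assert orbital_bin(round_trip_time(0.0, 18.0)) == 1  # the ~20 ms orbit
```

Each neighbor's bin index then places it on the corresponding "orbit" around the measuring node.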
In a cascading fashion, all the updated nodes (current members) in the peer-to-peer computer network 100 are connected to each other: a first node is connected to its neighbors; each of the neighbors is connected to their respective neighbors. Under the instructions of DARP, the RTTs between nodes are continually measured; the orbital bins around each node are regularly updated; and the nodes in the peer-to-peer computer network 100 are updated. A distinct advantage of the presently disclosed system and method is that the latency measurements in the peer-to-peer computer network 100 do not require clock synchronization between peer nodes. Local clocks at different nodes can generally have skews or clock rate differences. The RTT measurement involves the subtraction of the reception time of a pulse message received by a neighbor node (or a candidate node) from the sending time (measured at the same node) of the return message back to the origination node. Thus, a skew in the clock at the neighbor node (or the candidate node) is cancelled out in the RTT measurement. In other words, offsets between the clocks of a node and its neighbor nodes do not affect RTT measurements between peer nodes in the peer-to-peer computer network 100. Details about the independence of latency measurements against clock offsets in the disclosed decentralized network are discussed in commonly assigned pending U.S. patent application Ser. No. 17/237,026, titled “Autonomously routing data using relay nodes pre-selected from a group of distributed computer nodes based on measured one-way latencies”, filed Apr. 21, 2021, the content of which is incorporated herein by reference. Each node (e.g., A, B, C, V1, R, P, V2, Z) in the peer-to-peer computer network 100 is represented by a unique node identification (ID).
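The cancellation of clock offsets in the RTT measurement can be checked with a toy computation; the sketch assumes the neighbor replies immediately and that its clock runs at the same rate as node A's but with a constant offset.

```python
def measured_rtt(send_a: float, recv_a: float) -> float:
    # The RTT uses only node A's clock.
    return recv_a - send_a

# True one-way latencies in ms (unknown to the nodes) and a clock offset.
owl_ab, owl_ba = 4.0, 6.0
offset = 1000.0  # the neighbor's clock is 1000 ms ahead of node A's

send_a = 0.0
recv_b = send_a + owl_ab + offset    # reception time on B's skewed clock
send_b = recv_b                      # B replies immediately (same skewed clock)
recv_a = (send_b - offset) + owl_ba  # arrival time back on node A's clock

# The offset appears in B's timestamps but cancels out of the RTT.
assert measured_rtt(send_a, recv_a) == owl_ab + owl_ba
```

The offset enters both the reception and the resending time at the neighbor, so their difference, and hence the RTT, is unaffected; only clock rate differences, discussed later, need compensation.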
Each node (physical or virtual) in the peer-to-peer computer network 100 stores a hash table of hash values of the node IDs of some neighbor nodes (current members, or the updated nodes) in the peer-to-peer computer network 100, together with those nodes' IP addresses, port numbers, and protocols. The hash values in the peer-node hash table allow the node to quickly query some current members (mostly connected neighbor nodes, as well as candidate nodes that may be selected to be connected to the current node) of the peer-to-peer computer network 100. For example, node V1 250 can query some current members of the peer-to-peer computer network 100 using the hash values stored in the peer-node hash table 275 (FIG. 2). Moreover, node V1 can send requests to its neighbor nodes to query a node using the peer-node hash tables at the neighbor nodes. Since the nodes in the peer-to-peer computer network 100 are interconnected in the above-described cascading fashion, node V1 250 can find any node in the peer-to-peer computer network, send messages or data to another node within the peer-to-peer computer network 100, and manage the relationships with the other nodes in the peer-to-peer computer network 100. Referring to FIGS. 1 and 2, the data path discovery protocols 280 guide the operation tasks for identifying, evaluating, and selecting data routing paths and sending data from a source node to a destination node along a selected relayed data path within the peer-to-peer computer network 100. For example, when a need arises for node A (source node) to send data to node Z (destination node) within the peer-to-peer computer network 100, DARP can discover multiple candidate relayed data paths from node A to node Z by sending path packages, as described below in relation to FIG. 5, wherein each of the relayed data paths includes at least one relay node that is a current member of the peer-to-peer computer network 100.
Under the guidance of DARP, a distributed node in the peer-to-peer computer network 100 can evaluate data-transmission latencies and jitters of the multiple candidate relayed data paths from node A to node Z. For example, a relayed data path from node A to node R to node V2 to node Z is identified and selected if the latencies and jitter meet preset criteria. This particular relayed data path includes two relay nodes (node R and node V2) and three routing segments in between: node A to node R; node R to node V2; and node V2 to node Z. The latency of a relayed data path can be characterized by the total one-way latency (OWL), which is the sum of the OWLs from all the routing segments of the relayed data path. The data jitter in the relayed data path can be represented by an average of the data jitter in the routing segments that constitute the relayed data path. In parallel, node A sends one or more path packages directly to node Z in a direct path as defined by conventional network routing protocols, which results in a measurement of the one-way latency for the direct path. If the total OWL in a relayed data path is shorter than the OWL of the direct path and the jitter in the relayed data path is below a threshold, that relayed data path can be selected to route data from node A to node Z, which gives better data-transport performance than the conventional method along the direct path. Another advantage of the presently disclosed methods and systems is that the total measured OWL of a relayed data path in the peer-to-peer network is independent from the clock skews or offsets at the relay nodes along the relayed data path. The total measured OWL is determined by the sending time of the path package at the source node (e.g., node A) and the reception time of the path package at the destination node (e.g., node Z).
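The selection rule comparing a relayed path against the direct path can be stated compactly; a sketch in which the jitter threshold value is an assumption for illustration.

```python
def should_select_relayed(total_owl_relayed: float, owl_direct: float,
                          avg_jitter_relayed: float,
                          jitter_threshold: float = 5.0) -> bool:
    # Prefer the relayed path only if it is strictly faster than the direct
    # path and its average jitter stays below the threshold.
    return (total_owl_relayed < owl_direct
            and avg_jitter_relayed < jitter_threshold)

assert should_select_relayed(42.0, 50.0, 2.0)        # faster and stable
assert not should_select_relayed(55.0, 50.0, 2.0)    # slower than direct
assert not should_select_relayed(42.0, 50.0, 9.0)    # too much jitter
```

The direct path thus acts as the benchmark: a relayed path is only worth using when it beats the direct OWL without introducing excessive jitter.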
Details about one-way latencies along a relayed data path comprising one or more relay nodes and their independence of the clocks of the relay nodes are discussed in commonly assigned pending U.S. patent application Ser. No. 17/219,884, titled “Automated formation and optimization of a group of nodes for distributed data routing over computer networks”, filed Apr. 1, 2021, the content of which is incorporated herein by reference. Referring to FIG. 2, the smart contract 290 defines obligations and incentives for each node relative to the peer-to-peer computer network 100 and relative to each other. For example, after a successful data transfer via a relayed data path, the relay nodes can be paid with tokens, typically by the source node that has initiated the data transfer. The successful completion of data transfers and token transactions can be validated and recorded by peer nodes on a blockchain. In addition, those peer nodes that function as relay nodes can be validated and awarded tokens for continuing to be up and available to route data for their peers. These and other conditions are defined in the smart contract, and are pre-agreed when nodes install DARP codes. Details about governance and utility of a decentralized data routing system including obligations and incentives of the peer nodes are disclosed in commonly assigned pending U.S. patent application Ser. No. 17/237,026, titled “Autonomously routing data using relay nodes pre-selected from a group of distributed computer nodes based on measured one-way latencies”, filed Apr. 21, 2021, and commonly assigned pending U.S. patent application Ser. No. 17/463,883, titled “Utility and governance for secure, reliable, sustainable, and distributed data routing over the Internet”, filed Sep. 1, 2021. The content of these patent applications is incorporated herein by reference.
Referring to FIG. 3, the method for autonomously routing data in a peer-to-peer computer network (e.g., 100) can include two processes each comprising multiple steps: self-organizing a peer-to-peer computer network comprising a plurality of nodes each associated with a unique node ID (step 310) and automatically routing data from a first node to a second node via one or more relay nodes in the peer-to-peer computer network (step 320). Step 310 is related to setting up and maintaining a functional peer-to-peer computer network capable of routing data within the network. Each node in the peer-to-peer computer network is represented by a unique ID. Hash values of these node IDs are stored in a peer-node hash table (e.g., 275 in FIG. 2). Step 320 involves the process of identifying, evaluating, and selecting relayed data paths for routing data between peer nodes in the peer-to-peer computer network. As described below in relation to FIGS. 4 and 5, the relay node is an updated node in the peer-to-peer computer network. The process of self-organizing a peer-to-peer computer network comprising a plurality of nodes each associated with a unique node ID (step 310) can include one or more of the following steps. Referring to FIG. 4, the first node in a peer-to-peer computer network stores information about its neighbor nodes in the peer-to-peer computer network (step 410). In the example shown in FIG. 1, node A stores information about its neighbor nodes, such as node B, node C, node V1, and node R, which node A is connected to in the peer-to-peer computer network. The information can include node IDs and other properties (such as IP addresses, port numbers, and protocols) of the neighbor nodes, which as described above can be stored in a peer-node hash table (e.g., 275 in FIG. 2). Optionally, the first node can also store information about candidate nodes that are currently not neighbor nodes of the first node, but can become neighbor nodes of the first node in the future (step 420).
The candidate nodes are nodes that the first node is aware of and has incrementally stored previously. In some embodiments, the candidate nodes can be shared by the neighbor nodes of the first node. For example, in FIG. 1, node A's neighbor nodes, i.e., node B, node C, node V1, and node R, are in communication with node A. Under DARP protocols, these neighbor nodes of node A can share with node A the nodes they are respectively connected to and are aware of. For instance, the candidate nodes stored at node A can include nodes that are connected to node B, node C, node V1, and node R, such as node P and node V2, which are connected to node R. The candidate nodes allow node A to explore a larger pool of nodes and to expand its network of neighbor nodes in each update. At the same time, some of the nodes that node A has been connected to may become unstable, non-responsive, or non-performing (e.g., increased data latencies or increased data jitter); these nodes may be dropped from node A's connections (i.e., node A's list of neighbor nodes, with more details described below). The balance of expansion and trimming of neighbor nodes (i.e., updated connections with the first node) assures a healthy operational peer-to-peer computer network. In general, nodes are self-managed and self-organized in the peer-to-peer computer network based on the performance of the data connections between the nodes. Thus, the nodes in the peer-to-peer computer network are required by DARP protocols to continually measure performance characteristics (e.g., latency, jitter, etc.) of their connections. Based on the most updated performance measurements, the peer-to-peer computer network dynamically refreshes its members: some good performing nodes are added as neighbor nodes, and some non-responsive or badly performing nodes are removed from neighbor nodes. The updated neighbor nodes for all nodes in the peer-to-peer computer network form the updated nodes for the peer-to-peer computer network.
To this end, pulse messages are regularly and automatically sent from the first node to the neighbor nodes and the candidate nodes (step 430). Each of the pulse messages is characterized by a sending time stamp at the first node. In response to the pulse messages, the first node receives return pulses from at least some of the nodes among the neighbor nodes and the candidate nodes (step 440). Each of the return pulses is characterized by a reception time stamp at the first node. Similarly, each of the pulse messages sent from the first node to one of the neighbor nodes or the candidate nodes is associated with a sending time stamp. Next, round-trip times (RTTs) between the first node and its neighbor nodes or its candidate nodes are calculated based on the pulse messages and the return pulses (step 450). Each of the return messages is characterized by a reception time stamp. Since both sending and reception times are measured at the first node, the RTT calculations are independent of the clocks at the neighbor nodes and the candidate nodes. A neighbor node or a candidate node receives a pulse message from the first node at a reception time and sends a return message back to the first node at a transmittance time. The reception time and the transmittance time cancel each other out in the calculation of the RTT at the first node using the transmittance time of the pulse message at the first node and the reception time of the return message at the first node. However, RTT measurements may be affected by clock rate differences between the first node and the neighbor node or the candidate node. In some embodiments, the RTT calculations between the first node and the neighbor nodes or the candidate nodes in step 450 can compensate for the clock rate differences between different nodes. The first node can send pulse messages to a neighbor node or a candidate node at regular time intervals and receive return messages at regular time intervals.
The return messages include the transmittance times at the neighbor node or the candidate node. The clock rate of the neighbor node or the candidate node can be calculated using the transmittance times. In RTT calculations, the time gap between the reception time and the transmittance time at the neighbor node or the candidate node can be adjusted according to the difference between the clock rates at the first node and the neighbor or candidate node. In other words, the RTT measurements and calculations can be made independent of the clock skews or clock rate discrepancies at the counterpart testing nodes. In the presently disclosed method, RTTs are used for monitoring connection performances between pairs of neighboring nodes in the peer-to-peer computer network. The neighbor nodes and the candidate nodes are then sorted into a plurality of orbital bins, each comprising nodes characterized by RTTs relative to the first node within a specific interval (step 460). As noted above, each orbital bin is defined by a range of RTTs such as [0 ms, 5 ms], (5 ms, 10 ms], etc. In one respect, nodes in different orbital bins can be considered as being at different distances from the first node in relation to data transport. The spread in “data transport distances” between the orbital bins assures an optimal reach of the first node's connections with its neighbor nodes. The nodes that have not successfully updated with RTTs are not sorted into the orbital bins. From each of the orbital bins, at least one node is automatically selected based on the RTTs associated with the nodes. The selected node is added to the updated neighbor nodes for the first node (step 470). The sum of the updated neighbor nodes of all the nodes in the peer-to-peer computer network forms the updated nodes in the peer-to-peer computer network (step 470). Within an orbital bin, a node having a shorter RTT can be selected, which gives faster data transport within the RTT range of that orbital bin.
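Steps 460 and 470 can be sketched as sorting fresh RTT measurements into orbital bins and keeping the fastest node from each bin; the 5 ms bin width follows the example above, and nodes without updated RTTs are simply absent from the input.

```python
from collections import defaultdict

def update_neighbors(rtts_ms: dict, bin_width_ms: float = 5.0) -> set:
    # rtts_ms maps node ID -> freshly measured RTT; non-responsive nodes
    # have no fresh RTT, are absent here, and so are never sorted into a bin.
    bins = defaultdict(list)
    for node, rtt in rtts_ms.items():
        bins[int((rtt - 1e-9) // bin_width_ms)].append((rtt, node))
    # From each orbital bin keep the node with the shortest RTT.
    return {min(members)[1] for members in bins.values()}

selected = update_neighbors({"B": 3.0, "C": 4.5, "R": 12.0, "P": 14.0})
assert selected == {"B", "R"}  # the fastest node from each occupied bin
```

A fuller implementation would also filter on jitter, bandwidth, and clock rate thresholds, and break ties in favor of nodes with a longer record of good performance, as described below.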
Moreover, the node selection within each orbital bin can also take into account jitters, bandwidths, clock rate differences, and other performance parameters measured via the pulse messages and the return pulses at the first node. A node will not be selected if its measured jitters, bandwidths, clock rate differences, or other performance parameters exceed a respective threshold. It should be noted that the neighbor nodes and the candidate nodes that are non-responsive to the pulse messages from the first node do not lead to updated RTT calculations and are not sorted into the orbital bins. These non-responsive nodes are thus discarded if some of them were members of the peer-to-peer computer network. Furthermore, those nodes that have recently measured jitter exceeding a predetermined threshold can also be removed from the list of updated nodes in the peer-to-peer computer network if they have been on the list. In some embodiments, when two nodes in the same orbital bin have similar performances (in latencies and jitter), the node that has been an updated node in the peer-to-peer computer network for a longer duration is selected. This criterion is based on the observation that nodes that have shown a longer period of good performance are more likely to provide reliable performance in the future. Steps 410-470 are repeated for other nodes (e.g., B, C, V1, R, P, V2, Z, etc.) in the peer-to-peer computer network. In this way, node connections are regularly evaluated between pairs of neighboring nodes, and the neighbor nodes are regularly updated. These node updating steps are repeated and propagated throughout the peer-to-peer computer network. The process of automatically routing data from a first node to a second node in the peer-to-peer computer network (step 320 in FIG. 3) can include one or more of the following steps. Referring to FIG. 5, an order or a need is first identified to send data from a first node to a second node in a peer-to-peer computer network (step 510).
The IP address of the second node is looked up using the second node's ID in the peer-node hash table (275 in FIG. 2) stored at the first node. One or more path packages are sent from the first node to the second node in a direct data path (step 520) as defined by conventional Internet routing. Each path package records all the timestamps from the first node, all the intermediate hops along the direct path, and the second node. One-way latency (OWL) and jitter are measured in the direct path between the first node and the second node using the one or more path packages received at the second node (step 530). The OWL of the direct path is the reception time at the second node minus the sending time recorded at the first node. The conventional direct data path is used as a benchmark for the improved performance of the relayed data paths. Next, relayed paths between the first node and the second node are searched for and selected. One or more path packages are sent from the first node to the second node via relay nodes (step 540). Each path package records the reception time and the sending time at each relay node along its path as well as the sending time at the first node. Each of the relayed data paths includes one or multiple relay nodes that are from the updated nodes in the peer-to-peer computer network (step 540). Using FIG. 1 as an example, when node A wants to find relayed paths to node Z, node A sends path packets to its neighbor nodes in the orbital bins (e.g., nodes B, C, R, V1, etc.). These updated neighbor nodes have been recently updated using pulse messages and RTT and jitter measurements as described above. Each of the neighbor nodes receiving a path packet records a reception timestamp and a sending timestamp in the path package. Then, node A's neighbor node transmits this updated path packet forward to its own neighbor nodes (e.g., from node R to node P and node V2).
The relaying operation is repeated until the destination node is reached, or until certain constraints are no longer met (e.g., the number of hops has exceeded the maximum number of hops allowed along each relayed path). Thus, a path packet that successfully arrives at the destination node Z includes the timestamps of all the intermediate hops for the specific relayed path. An important aspect of the presently disclosed cascaded path packages is network security. At each hop, a relay node cryptographically signs the path packet with its private key paired with a public key of the relay node. Thus, the destination node (or the second node) can cryptographically verify the integrity and authenticity of all the hops (or routing segments) along the relayed path, and no intermediate node can alter hop timestamps or the list of hops. In some embodiments, the construction of a path packet along the data path (a potential data relay path) can include the following steps: the source node builds a path packet describing constraints (e.g., the maximum number of hops allowed along the relayed path) and the destination node; the source node cryptographically signs the path packet using the node ID of the source node, the node ID of the destination node, and the node ID of the first hop node (i.e., the first hop), and sends this path packet to the first relay node along with the signature; the first hop node records the OWL, jitter, etc. of this hop; the first hop node cryptographically signs the path packet using the source node signature, the recorded OWL, jitter, etc., and the node ID of the second hop node, and sends the updated path package to the second hop node; the second hop node repeats the steps of the first hop node; and these steps are repeated until the path package is received by the destination node.
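The hop-by-hop signing described above can be illustrated with a hash chain standing in for the actual public-key signatures (a simplification: real DARP hops would sign with their private keys); each hop's record is bound to everything signed before it, so tampering with an earlier timestamp breaks verification.

```python
import hashlib
import json

def sign(previous_signature: str, record: dict) -> str:
    # Hash-chain stand-in for a real signature: each hop binds its recorded
    # measurements to everything signed before it.
    payload = json.dumps(record, sort_keys=True) + previous_signature
    return hashlib.sha256(payload.encode()).hexdigest()

def verify_chain(records: list, signatures: list) -> bool:
    # The destination replays the chain; any altered record breaks it.
    prev = ""
    for record, sig in zip(records, signatures):
        if sign(prev, record) != sig:
            return False
        prev = sig
    return True

records = [{"hop": "A", "sent": 0.0}, {"hop": "R", "owl": 5.0, "jitter": 0.4}]
sigs, prev = [], ""
for r in records:
    prev = sign(prev, r)
    sigs.append(prev)
assert verify_chain(records, sigs)
tampered = [{"hop": "A", "sent": 0.0}, {"hop": "R", "owl": 1.0, "jitter": 0.4}]
assert not verify_chain(tampered, sigs)
```

With real asymmetric signatures, verification additionally proves which node produced each record, since the verifying key is recoverable from that hop's node ID.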
The destination node receives a chain of signatures, each of which depends on the previous signatures as well as the recorded measurements along each routing segment, which prevents the content of the path packet from being altered by intermediate malicious nodes. (When a data path is indeed selected for data routing, its hop nodes will function as relay nodes for data routing.) In the above-described method, the first node (the source node) can find the second node (the destination node) even if they are not directly connected or the second node is not listed in the peer-node hash table of the first node. Moreover, the relay nodes may or may not be directly connected to the first node (the source node) or to the second node (the destination node). Additionally, these relay nodes have been recently or currently updated by their respective neighbor nodes, which means that they provide good data transfer performance via their connections. In some embodiments, the search for the destination node is enabled by the Kademlia protocol, which allows a node to find information (node ID, etc.) about a previously unseen node that is connected to the whole peer-to-peer computer network, and to send path packets to that node. For each path package that originated from the first node and is received by the second node, the total OWL for each of the relayed data paths between the first node and the second node is calculated (step 550). Since the sending time and reception time are recorded by the path package for each routing segment, the OWL for each routing segment is simply the reception time at the receiving node minus the sending time at the sending node for that routing segment. The total OWL for the relayed path from the first node to the second node is the sum of all the OWLs of the routing segments along the relayed path.
Since each relay node resends the updated path package right after it receives one, the clock skew or clock discrepancy between the reception time and the sending time at the relay node is cancelled out. In other words, the total OWL is independent of the clock discrepancies at the relay nodes along the relayed path. Details about one-way latencies along a relayed path and their independence of the clocks of the relay nodes are discussed in commonly assigned pending U.S. patent application Ser. No. 17/237,026, titled “Autonomously routing data using relay nodes pre-selected from a group of distributed computer nodes based on measured one-way latencies”, filed Apr. 21, 2021, the content of which is incorporated herein by reference. One of the relayed data paths is automatically selected if the total OWL and the average jitter associated with the relayed data path satisfy predetermined criteria in comparison to the direct path (step 560). The selected relayed data path is the best performing among all the relayed paths, having the lowest total OWL and a data-transfer jitter below a threshold. The selected relayed data path also has a total OWL shorter than the OWLs of the other identified relayed data paths and the direct data path. The average jitter associated with a relayed data path from the first node to the second node is calculated as the mean of the jitters measured at all routing segments along the relayed data path. Details about jitters in data-transfer latencies between nodes are disclosed in commonly assigned pending U.S. patent application Ser. No. 17/237,026, titled “Autonomously routing data using relay nodes pre-selected from a group of distributed computer nodes based on measured one-way latencies”, filed Apr. 21, 2021, the content of which is incorporated herein by reference. Once a relayed data path is selected within the peer-to-peer computer network, the first node can send data to the second node along the selected one of the relayed data paths (step 570).
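The skew cancellation can be checked numerically: a skew on the relay node's clock shifts its reception and resending timestamps equally, so it inflates the apparent OWL of the incoming segment and deflates the outgoing one by the same amount, leaving the total unchanged. A minimal sketch with illustrative values:

```python
def path_owl(stamps):
    # stamps: [(send_time, recv_time), ...], one pair per routing segment
    return sum(recv - send for send, recv in stamps)

# True segment OWLs: 12 ms (A -> R) and 18 ms (R -> Z); relay R resends
# immediately after reception.
synced = [(0.0, 12.0), (12.0, 30.0)]       # relay clock perfectly synced
skew = 5.0                                  # relay clock runs 5 ms fast
skewed = [(0.0, 12.0 + skew), (12.0 + skew, 30.0)]

# Per-segment OWLs differ (17 and 13 vs. 12 and 18), but the total is the
# same 30 ms in both cases.
```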
It should be noted that the relay nodes can be physical nodes or SDN-defined virtual nodes in the peer-to-peer computer network. After successful relayed data routing, the relay nodes can be subsequently rewarded by the party (typically the first node or the source node) that has requested the data transport. The reward can be in the form of a transfer of tokens. The transactions can be recorded on a blockchain. Details about the rewards, validation of transactions, and related tokenomics are disclosed in commonly assigned pending U.S. patent application Ser. No. 17/237,026, titled “Autonomously routing data using relay nodes pre-selected from a group of distributed computer nodes based on measured one-way latencies”, filed Apr. 21, 2021 and commonly assigned pending U.S. patent application Ser. No. 17/463,883, titled “Utility and governance for secure, reliable, sustainable, and distributed data routing over the Internet”, filed Sep. 1, 2021. The content of these patent applications is incorporated herein by reference. In some embodiments, referring to FIG. 6, the process of autonomously self-organizing nodes and autonomously finding the best data routing paths between nodes in a peer-to-peer computer network can include one or more of the following steps: when a source node has the need to send data to a destination node in a peer-to-peer computer network, the destination node is identified to receive a data transfer in the peer-to-peer computer network (step 600). As described above, the nodes in the peer-to-peer computer network are identified by their node IDs. The node ID of a node can be derived from the public key of that node. The public key of a node can also be obtained from its node ID. Other peer nodes can use the public key to authenticate a message cryptographically signed by this node using a private key (that is paired with the public key).
The node ID (and the IP addresses, port numbers, and protocols) of a node in the peer-to-peer network is stored in the peer-node hash tables (275, FIG. 2) of some other peer nodes (e.g., neighbor nodes). Since the nodes in the peer-to-peer computer network are interconnected in a cascading fashion (to neighbors, and in turn to neighbors' neighbors), a node can find any current peer node in the peer-to-peer computer network using the Kademlia protocol and can send messages or data packages to any other peer node within the peer-to-peer computer network. Optionally, constraints for the data transfer from the source node to the destination node are defined (step 605). Such constraints can include a maximum latency (defined by the total one-way latency along a routing path), a maximum jitter for the data transfer (i.e., variations in the data-transfer latencies), and the maximum number of hops (i.e., number of relay nodes) allowed in a relayed data path from the source node to the destination node. The constraints can also be based on bandwidths, clock rate differences, etc. As disclosed in detail in relation to FIGS. 1 and 2 and steps 410-460 in FIG. 4, the source node stores a list of neighbor nodes associated with the source node in orbital bins according to round-trip times (RTTs) between the source node and the neighbor nodes (step 610). The list of neighbor nodes stored at the source node can be sorted into orbital bins ranked by RTT values such as [0, 10 ms], (10 ms, 20 ms], etc. It should be noted, as described above in relation to step 470 (FIG. 4), that the neighbor nodes can be sorted into orbital bins based on other parameters such as jitters, bandwidths, and clock rate differences measured by pulse messages and return messages between the source node and the neighbor nodes. Furthermore, as described above in relation to step 450 (FIG. 4), the RTT calculations can compensate for clock rate differences between the source node and the neighbor nodes.
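The orbital-bin sorting by RTT can be sketched as follows. The 10 ms bin width follows the example intervals in the text; floor-based bin edges and the dictionary layout are illustrative assumptions.

```python
from collections import defaultdict

def bin_neighbors(rtts_ms, bin_width_ms=10):
    """Sort neighbors into orbital bins by measured RTT.

    rtts_ms: {node_id: measured RTT in ms}
    returns: {bin_index: [node_ids]} where bin 0 covers roughly [0, 10 ms),
    bin 1 the next 10 ms interval, and so on.
    """
    bins = defaultdict(list)
    for node, rtt in rtts_ms.items():
        bins[int(rtt // bin_width_ms)].append(node)
    return dict(bins)

bins = bin_neighbors({"B": 4.2, "C": 9.9, "D": 15.0, "E": 37.5})
# Nodes B and C land in the innermost bin, D in the next, E further out.
```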
The list of the neighbor nodes can be updated by removing nodes based on predetermined performance criteria (step 615). For example, if recently measured RTTs and/or jitters between the source node and some of the nodes do not satisfy the performance criteria (RTT too long or data-transfer jitter too large), these nodes can be removed from the list of neighbor nodes at the source node. Furthermore, new nodes can also be added to the list of neighbor nodes associated with the source node as previously described (step 470 in FIG. 4). The source node can send one or more path packages to the destination node in a first direct data path (step 620) from the source node to the destination node. The direct path is defined by conventional network routing protocols. The one-way latency (OWL) and jitter in the direct path are measured using the one or more path packages received by the destination node (step 625). Each path package is associated with a sending time recorded by the source node and a reception time recorded at the destination node. An OWL can be calculated using the reception time and the sending time independent of any clock skew that may exist between the destination node and the source node, as described in step 530 (FIG. 5) and step 675 below. The OWL and jitter measured in the direct path are used as a benchmark for the candidate relayed data paths between the source node and the destination node. To find relayed data paths, path packages are sent from the source node to its neighbor nodes, which include a first hop node (step 630). Each path package can contain the sending time recorded by the source node as well as a signature of the source node. The signature of the source node, as described above, can be verified using the public key (which can be obtained from the node ID) of the source node.
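The pruning of under-performing neighbors (step 615) can be sketched as follows; the RTT and jitter thresholds and the record layout are illustrative assumptions, not values from the patent.

```python
def prune_neighbors(neighbors, max_rtt_ms=50.0, max_jitter_ms=5.0):
    """Keep only neighbors whose recent measurements satisfy the
    performance criteria.

    neighbors: {node_id: {"rtt": ms, "jitter": ms}}
    returns the surviving subset as a new dict.
    """
    return {
        node: m for node, m in neighbors.items()
        if m["rtt"] <= max_rtt_ms and m["jitter"] <= max_jitter_ms
    }

kept = prune_neighbors({
    "B": {"rtt": 12.0, "jitter": 1.0},   # kept
    "C": {"rtt": 80.0, "jitter": 1.0},   # removed: RTT too long
    "D": {"rtt": 20.0, "jitter": 9.0},   # removed: jitter too large
})
```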
As discussed previously in relation to step 540 (FIG. 5), a node in the peer-to-peer network such as the source node may only be connected to a subset of all the nodes in the peer-to-peer network. But using the Kademlia protocol, a node in the peer-to-peer network can find and reach another peer node in the peer-to-peer network by querying for the other peer node in the peer-node hash tables at different nodes and by sending cascaded path packages through the peer-to-peer network. In this step, the source node can send path packages simultaneously to all the updated neighbor nodes stored in the peer-node hash table (275, FIG. 2) at the source node. Optionally, for security purposes, the neighbor nodes can verify the path packages received from the source node (step 635). The neighbor nodes, such as the first hop node, can verify a cryptographic signature in the path package signed by the source node. If the path package is signed using a private key of the source node, the signature can be authenticated using a public key of the source node that is paired with its private key. As discussed above, the ID and the public key of the source node can be queried (e.g., using the peer-node hash tables 275 in FIG. 2) by the neighbor nodes in the peer-to-peer network. For multi-hop path packages (step 665), a neighbor node can also verify the hop number and the signatures by the source node and all the intermediate hop nodes associated with the path package. The first hop node can update the path packet with the associated hop information (step 640). The updated hop information can include the reception time at the first hop node, the sending time of the path package to the next hop node or the destination node (step 645 and step 660 below), as well as a signature cryptographically signed by the first hop node. The updated hop information is inserted into the path packet to be sent to the next hop node or the destination node.
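The optional verification at a hop or at the destination (step 635) can be sketched as follows, reusing a hash-based stand-in for public-key signatures; a real node would instead verify each signature with the signer's public key obtained from its node ID, and the packet layout here is an assumption.

```python
import hashlib

def sign(secret, payload):
    # Stand-in for a public-key signature (see the earlier sketch).
    return hashlib.sha256((secret + payload).encode()).hexdigest()

def verify_chain(packet, secrets):
    """Re-derive the signature chain hop by hop; any tampering with an
    earlier hop record or signature breaks every later comparison."""
    prior = packet["src_sig"]
    for hop in packet["hops"]:
        expected = sign(secrets[hop["node"]],
                        prior + f"{hop['node']}:{hop['recv']}:{hop['send']}")
        if hop["sig"] != expected:
            return False
        prior += hop["sig"]
    return True
```

For example, a two-hop packet verifies cleanly, but changing a single hop timestamp afterwards makes `verify_chain` return `False`.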
Next, one or more path packages can be sent from the first hop node to the destination node in a second direct data path (step 645) from the first hop node to the destination node. This step terminates additional hops and will be used to evaluate a relayed data path comprising only one relay node: the first hop node. As discussed above in relation to FIGS. 1 and 2 and steps 410-460 in FIG. 4, and similar to step 610 relating to the source node, the first hop node can store information of a list of neighbor nodes associated with the first hop node in orbital bins according to RTTs between the first hop node and its neighbor nodes (step 650). Similar to step 615, neighbor nodes can be removed from the list based on predetermined performance criteria (step 655), which can include the removal of nodes having an RTT or data-transfer jitter over the allowed respective thresholds. Furthermore, new nodes can also be added to the list of neighbor nodes associated with the first hop node as previously described. Moreover, as described above in relation to step 470 (FIG. 4), the neighbor nodes can be sorted into orbital bins based on other parameters such as jitters, bandwidths, and clock rate differences measured by pulse messages and return messages between the first hop node and its neighbor nodes. Furthermore, as described above in relation to step 450 (FIG. 4), the RTT calculations can compensate for clock rate differences between the first hop node and its neighbor nodes. Steps 660 and 665 can be skipped if the constraints defined in step 605 specify a maximum of one hop node (that is, only the first hop node, or one relay node, is allowed in a relayed data path). Furthermore, path packages updated with the hop information at the first hop node can be sent from the first hop node to its neighbor nodes, including a second hop node (step 660). These path packages are used to evaluate relayed data paths that include additional relay nodes (e.g., the second hop node, etc.).
Then, steps 635-660 described above relating to the first hop node can be repeated for the second hop node or additional hop nodes (step 665). Using FIG. 1 as an example, node A can be the source node, node R can be the first hop node, node V2 can be the second hop node, and, without limiting to only two hop nodes, the destination node can be node Z. In the cascading manner as described above, steps 630-665 can reach all the peer nodes that are currently on the updated lists of neighbor nodes of one or more nodes in the peer-to-peer network. Under the Kademlia protocol, because each peer node is connected to multiple of its neighbors, all peer nodes are interconnected; the source node will always have one or more pathways to reach the destination node in the same peer-to-peer network. The destination node receives all the path packages sent from the source node (in the first direct path), from the first hop node (one hop, then in the second direct path), and from other hop nodes (multiple hops) (step 670). The path packages include information recorded at the source node as well as updated information recorded at the intermediate hop nodes. Each of the path packages includes the IDs of the source node and the intermediate hop nodes, the sending times and the reception times from the source node to all the hop nodes, as well as cryptographic signatures by all the nodes along the paths. The signatures can be used for verification using the public keys of the associated nodes. These path packages represent possible relayed data routing paths between the source node and the destination node, with the first direct path being the benchmark. The total OWLs and other performance metrics are then calculated for the potential data routing paths associated with the path packages (step 675) received by the destination node.
As described above in relation to step 550 in FIG. 5, the total OWL for the relayed path from the source node to the destination node is the sum of the OWLs of all the routing segments along the relayed data path (via one or more hop nodes). Since each hop node resends the updated path package right after the previous version of the path package is received, the clock skew between the reception time and the sending time at the relay node is cancelled out. In other words, the total OWL is independent of the clock skews at the hop nodes along a relayed data path that is being evaluated. Details about one-way latencies along a relayed path and their independence of the clocks of the relay/hop nodes are discussed in commonly assigned pending U.S. patent application Ser. No. 17/237,026, titled “Autonomously routing data using relay nodes pre-selected from a group of distributed computer nodes based on measured one-way latencies”, filed Apr. 21, 2021, the content of which is incorporated herein by reference. Other performance metrics calculated at the destination node can include jitter or variations in data-transfer times, bandwidths of data throughput, clock rate differences, and the number of hops in a relayed data path. A relayed data path can be automatically selected for transferring data from the source node to the destination node based on the path packages received by the destination node if the associated total OWL and other performance metrics satisfy predetermined criteria (step 680). The selected relayed path includes one or more relay nodes, which are the hop nodes (such as the first hop node, the second hop node, etc.) used in finding data routing paths from the source node to the destination node. Typically, the data routing path having the lowest OWL and jitter is selected. The predetermined criteria can require each relayed data path to have an OWL and jitter below respective thresholds (that is, low latency and low variation).
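The selection step (step 680) can be sketched as a filter-then-minimize over the candidate paths. The hop limit, jitter threshold, requirement to beat the direct path's OWL, and field names are illustrative assumptions:

```python
def select_path(candidates, direct_owl_ms, max_hops=2, max_jitter_ms=5.0):
    """candidates: list of dicts with 'owl', 'jitter', and 'hops' keys.
    Discard paths that violate the hop constraint or jitter threshold or
    that do not improve on the direct path's OWL, then pick the lowest
    total OWL. Returns None if no relayed path qualifies."""
    eligible = [
        c for c in candidates
        if c["hops"] <= max_hops
        and c["jitter"] <= max_jitter_ms
        and c["owl"] < direct_owl_ms
    ]
    return min(eligible, key=lambda c: c["owl"]) if eligible else None

paths = [
    {"name": "via R",      "owl": 30.0, "jitter": 2.0, "hops": 1},
    {"name": "via R,V2",   "owl": 25.0, "jitter": 9.0, "hops": 2},  # too jittery
    {"name": "via 3 hops", "owl": 20.0, "jitter": 1.0, "hops": 3},  # too many hops
]
```

With a 40 ms direct-path OWL, only "via R" survives the filters and is selected; if the direct path were already faster than every surviving candidate, no relayed path would be chosen.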
The predetermined criteria can include a comparison of a potential relayed data path against the (first) direct path from the source node to the destination node: the relayed data path should exceed the data-transfer performance of the direct path in at least one of OWL and jitter. The predetermined criteria can also be related to the constraints for the data transfer described in step 605. For example, the constraints can specify a maximum number of hops of 2; thus, all potential relayed data paths having more than two hop nodes can be discarded from the evaluation. Using the path packages received, the destination node can maintain a list of potential data routing paths including the currently selected data routing path. The extra data routing paths can be used as alternative routing paths to the first selected path. One or more of the above steps (steps 610-615, 640-645) can be implemented by or under the data path discovery and routing protocols 280 (in FIG. 2). One or more of the above steps (steps 600, 605, 620-635, 650-680) can be implemented by or under the network self-organization protocols 270 (in FIG. 2). Once a relayed data path is selected within the peer-to-peer computer network, the source node can send data to the destination node along the selected one of the relayed data paths, similar to step 570. It should be noted that the source node, the destination node, as well as the relay nodes can be physical nodes or SDN-defined virtual nodes in the peer-to-peer computer network. After successful relayed data routing, the relay nodes can be subsequently rewarded by the party (typically the first node or the source node) that has requested the data transport. The reward can be in the form of a transfer of tokens. These transactions can be recorded on a blockchain. Details about the rewards, validation of transactions, and related tokenomics are disclosed in commonly assigned pending U.S. patent application Ser. No.
17/237,026, titled “Autonomously routing data using relay nodes pre-selected from a group of distributed computer nodes based on measured one-way latencies”, filed Apr. 21, 2021 and commonly assigned pending U.S. patent application Ser. No. 17/463,883, titled “Utility and governance for secure, reliable, sustainable, and distributed data routing over the Internet”, filed Sep. 1, 2021. The content of these patent applications is incorporated herein by reference. In some embodiments, referring to FIG. 7, a hybrid decentralized data routing method in a peer-to-peer computer network can include one or more of the following steps: when a need arises to route data from a source node to a destination node in a peer-to-peer computer network, multiple paths are identified from the source node to the destination node in the peer-to-peer computer network. Each of the multiple paths can include two or more routing segments, each of which includes a sending node and a receiving node (step 710). In the presently disclosed method, the protocols for selecting paths in a peer-to-peer computer network (such as the measurements and evaluations of latencies and other data-transfer metrics, and the encryption of the path packages) and for maintaining connections between peer nodes (such as measuring round-trip times between nodes, and the selection and organization of neighbor nodes) are pre-installed in the peer nodes within the peer-to-peer computer network. In identifying the multiple paths, the receiving node in one of the routing segments in one of the multiple paths is selected among a plurality of nodes in the peer-to-peer computer network based on round-trip times (RTTs) measured between the sending node and the plurality of nodes (step 720). As described above in relation to FIGS. 4-6, each node in the peer-to-peer computer network, such as the sending node in one of the routing segments, can maintain a list of neighbor nodes.
The neighbor nodes associated with the sending node in the routing segment are selected among a plurality of nodes based on the RTTs between the sending node and the plurality of nodes. The RTT between the sending node and one of the plurality of nodes is measured using pulse messages sent between the sending node and that node. The RTT is calculated using a sending time stamp of a pulse message sent from the sending node and a reception time stamp of a return pulse message, received by the sending node, in response to the pulse message. Even if some computer clocks at the plurality of nodes in the peer-to-peer computer network have skews relative to each other, the RTT calculations are independent of the skews between the computer clocks at the plurality of nodes in the peer-to-peer computer network. As previously described (270 in FIG. 2, steps 420-470 in FIG. 4, and step 610 in FIG. 6), the neighbor nodes are sorted into a plurality of orbital bins according to the RTTs between the sending node and the neighbor nodes (steps 460-470 in FIG. 4). Each of the orbital bins is associated with a specific interval of RTT values. In identifying one of the multiple paths, the receiving node in one of the routing segments is selected from the neighbor nodes associated with the sending node in the same routing segment. In some embodiments, peer-node hash tables (275 in FIG. 2) are stored in the peer-to-peer computer network. Each of the peer-node hash tables includes hash values of the node IDs of the neighbor nodes associated with a potential sending node (275 in FIG. 2). The step of identifying multiple paths from a source node to a destination node can include querying for the destination node using the peer-node hash tables stored at the source node and other potential sending nodes in the peer-to-peer computer network. In some embodiments, the receiving node or the sending node (which can be a relay node) along a routing path can be a virtual node.
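The skew-independence of the RTT measurement can be shown with a small sketch (illustrative numbers and names): both timestamps used in the calculation are taken on the sending node's own clock, so the peer's clock skew has no effect on the result.

```python
def measure_rtt(owl_out, peer_turnaround, owl_back, peer_skew):
    """Simulate a pulse/return-pulse exchange. All arguments in ms."""
    send_ts = 1000.0                               # stamped on sender's clock
    peer_recv_ts = send_ts + owl_out + peer_skew   # stamped on peer's (skewed) clock
    recv_ts = send_ts + owl_out + peer_turnaround + owl_back  # sender's clock
    # The RTT uses only the sender's two timestamps; peer_recv_ts is
    # carried in the return message but never enters the calculation.
    return recv_ts - send_ts

# 3 ms out, 1 ms turnaround, 3 ms back: the RTT is 7 ms whether the peer's
# clock is perfectly synced or 500 ms off.
```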
Path packages are sent along the multiple paths from the source node to the destination node (step 730). As described previously (280 in FIG. 2, step 540 in FIG. 5, step 620 in FIG. 6), the path packages are for quantitatively measuring and evaluating different routing path options from the source node to the destination node (steps 630-660 in FIG. 6). A path packet can include a sending time stamp recorded at the source node. At each receiving node, the path packet can be updated to include a reception time stamp recorded at the receiving node and an identification of the receiving node. Moreover, the path packet can be updated to include a cryptographic signature at the receiving node. The cryptographic signature can be signed with a private key paired with a public key associated with the receiving node. In some embodiments, the public key of the receiving node can be obtained from a node identification (ID) of the receiving node. Next, the total one-way latencies (OWLs) associated with the multiple paths are measured using the path packages from the source node to the destination node (step 740). The total OWL for one of the multiple paths is obtained by summing the OWLs measured by one of the path packages along all routing segments in the one of the multiple paths (280 in FIG. 2, step 550 in FIG. 5, step 675 in FIG. 6). Even if some computer clocks at the plurality of nodes have skews relative to each other, the total OWLs measured in the multiple paths are independent of the skews between the computer clocks at the plurality of nodes (i.e., the relay nodes along the multiple paths) in the peer-to-peer computer network because the offsets in the reception time and the sending time of the path package at the relay nodes cancel each other out. A relayed data path can then be selected from the multiple paths at least in part based on the total OWLs respectively associated with the multiple paths from the source node to the destination node (step 750).
As discussed previously (280 in FIG. 2, step 560 in FIG. 5, step 680 in FIG. 6), the selected relayed data path has a total OWL lower than at least one other path in the multiple paths. In most situations, a selected relayed data path has one of the shortest total OWLs among all evaluated paths from the source node to the destination node. In some embodiments, multiple relayed paths can be selected from the source node to the destination node, which can serve as alternative data routing paths for providing redundant routing pathways in case one of them fails for some reason. The selection of relayed data path(s) can also include sending one or more path packages from the source node to the destination node in a direct data path from the source node to the destination node (steps 520-530 in FIG. 5). The total OWL of the relayed data path is compared to that of the direct data path. The relayed data path is selected when it provides a lower total OWL than the direct data path (280 in FIG. 2, step 560 in FIG. 5, step 680 in FIG. 6). In some embodiments, the jitters associated with the multiple paths are also measured using the path packages from the source node to the destination node. The selection of the relayed data path from the multiple paths can further take into account the jitters associated with the multiple paths from the source node to the destination node (steps 625, 675 in FIG. 6). For example, a path is not selected if it is characterized by high data jitters even if it has a low total OWL. In some embodiments, the relayed data path is selected from the multiple paths further based on the numbers of routing segments respectively associated with the multiple paths from the source node to the destination node. In general, fewer routing segments (i.e., fewer relay nodes) are preferred for a routing path because they represent a more reliable routing option with fewer relay nodes and thus fewer failure mechanisms.
The selection of a relayed data path can be based on an optimization of a shorter total OWL and a smaller number of routing segments (or relay nodes). For example, if two routing paths, path A and path B, have similar total OWLs, but path B has one relay node (i.e., two routing segments) while path A has two relay nodes (i.e., three routing segments), then path B is preferred and can be selected due to its smaller number of relay nodes. Data can then be routed along the selected relayed data path from the source node to the destination node (step 760) in the peer-to-peer computer network. The above disclosed system and method provide a novel hybrid approach: nodes in a peer-to-peer network are qualified and maintained largely based on round-trip pulse measurements between peer nodes, while data routing paths are measured and selected based on one-way latency measurements. In other words, round-trip pulse measurements are used in peer node selection, and one-way latency measurements are used in routing path selection. One striking advantage of the disclosed method lies in the vast scalability of the data routing method. Each node in the peer-to-peer network only needs to maintain a small number of neighbor nodes, which drastically reduces the burden of maintaining the peer network. Since all peer nodes in the network are connected in a cascading fashion, a node in the peer network can reach any other node in the same network. Thus, the decentralized data routing approach can perform data routing in a peer-to-peer network of hundreds of nodes as well as one of a billion nodes. Another important aspect of the above disclosed system and method lies in its network security. The data messages and data packages sent between peer nodes can be signed cryptographically by the relay nodes using their private keys, similar to blockchain technologies. The signatures can be verified using node identifications related to the public keys.
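The path A versus path B example above can be sketched as a simple tie-breaking rule. The 10% "similar OWL" tolerance is an illustrative assumption; the patent does not specify how similarity is judged.

```python
def pick(path_a, path_b, owl_tolerance=0.10):
    """Each path is (total_owl_ms, num_relay_nodes). When the OWLs are
    within the tolerance of each other, prefer the path with fewer relay
    nodes (fewer failure mechanisms); otherwise prefer the lower OWL."""
    owl_a, hops_a = path_a
    owl_b, hops_b = path_b
    if abs(owl_a - owl_b) <= owl_tolerance * min(owl_a, owl_b):
        return path_a if hops_a <= hops_b else path_b
    return path_a if owl_a < owl_b else path_b

path_a = (30.0, 2)   # two relay nodes, three routing segments
path_b = (31.0, 1)   # one relay node, two routing segments
# Similar OWLs, so path B wins on its smaller number of relay nodes.
```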
The above embodiments are only used to illustrate the technical solution of the present invention but not to limit it. Those skilled in the art can modify or equivalently replace the technical solution of the present invention without departing from the spirit and scope of the present invention. The scope of protection shall be subject to the claims.